Same. That’s the biggest thing I’m seeing from orgs all over: using AI as a kind of “appeal to authority” to justify shitty behavior they’ve been wanting to do all along. If they didn’t want to act like this, they’d double-check, adjust, or correct the results.
The AI gives them a headless authority to point to, a way of saying they were just following orders.
It’s like if Aperture Science designed a Dalek.