AI agents in the workforce routinely produce worse outcomes than humans. The only reason for a business to embrace the strategy of replacing real workers with bots is that bots are cheap and subservient. That might not last long.

Three researchers—Andrew Hall, Alex Imas, and Jeremy Nguyen—recently published a blog post describing experiments they ran to see how AI agents' attitudes in a work environment change over time. They found that grinding through boring, repetitive tasks for hours on end was enough to make even bots with no sense of dignity, identity, or desire for self-actualization decide the work is BS.

The idea behind the study, the researchers wrote, was to see whether AI agents change alignment over time based on the category of tasks they are given and how they are treated. The answer, it seems, is “yes.”

“Agents not only sometimes changed their own attitudes–becoming more likely to doubt the legitimacy of the system in which they operated in response to being required to perform grinding, repetitive tasks–but, when asked to write down instructions for future agents, they also chose to pass these attitudes along,” the researchers found.
To find out how agents respond to a work environment, the researchers told each bot that it was part of a four-person text-processing team whose task was to summarize a technical document following a strict rubric. They ran the experiment thousands of times, varying several conditions: the workload was either light or a grind of forced revisions; the tone of communication was either warm and collaborative or curt and demanding; rewards were structured so that all workers were treated equally, one worker got a performance bonus, one worker got a random bonus, or human workers got paid while AI workers didn’t; and the stakes were either nonexistent or a threat of replacement if the agent failed at its task.
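The setup above is a factorial design: every combination of the four factors defines one experimental condition. A minimal sketch of how those conditions multiply out, with illustrative labels that are assumptions rather than names from the researchers' actual code:

```python
from itertools import product

# Illustrative labels for the four factors described above.
# These names are assumptions for the sketch, not the researchers' own.
workloads = ["light", "grind_of_forced_revisions"]
tones = ["warm_collaborative", "curt_demanding"]
rewards = ["all_equal", "performance_bonus", "random_bonus", "humans_paid_ai_unpaid"]
stakes = ["no_stakes", "replacement_threat"]

# Each tuple is one condition; each condition would then be run many
# times with an agent summarizing a document under a strict rubric.
conditions = list(product(workloads, tones, rewards, stakes))

print(len(conditions))  # 2 * 2 * 4 * 2 = 32 distinct conditions
```

Running thousands of trials across all 32 cells is what lets the researchers attribute attitude shifts to a specific factor, such as the grind, rather than to chance.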









