DARIO AMODEI’s AI safety contingent was growing disquieted by some of Sam Altman’s behaviors. Shortly after OpenAI’s Microsoft deal was inked in 2019, several of them were stunned to discover the extent of the promises that Altman had made to Microsoft about which technologies it would get access to in return for its investment. The terms of the deal didn’t align with what they had understood from Altman. If AI safety issues actually arose in OpenAI’s models, they worried, those commitments would make it far more difficult, if not impossible, to prevent the models’ deployment. Amodei’s contingent began to have serious doubts about Altman’s honesty.
“We’re all pragmatic people,” a person in the group says. “We’re obviously raising money; we’re going to do commercial stuff. It might look very reasonable if you’re someone who makes loads of deals like Sam, to be like, ‘All right, let’s make a deal, let’s trade a thing, we’re going to trade the next thing.’ And then if you are someone like me, you’re like, ‘We’re trading a thing we don’t fully understand.’ It feels like it commits us to an uncomfortable place.”
This unfolded against a backdrop of growing paranoia across the company, though over different issues for different groups. Within the AI safety contingent, it centered on what they saw as strengthening evidence that powerful misaligned systems could lead to disastrous outcomes. One bizarre experience in particular had left several of them somewhat nervous. In 2019, on a model trained after GPT‑2 with roughly twice the number of parameters, a group of researchers had begun advancing the AI safety work that Amodei had wanted: testing reinforcement learning from human feedback (RLHF) as a way to guide the model toward generating cheerful and positive content and away from anything offensive.