His job was safety research then. He is now the “chief futurist” at OpenAI, where he thinks about the side effects of AI, such as its social and economic impacts and its consequences for national and international security. “It is my best attempt to have us fulfill the mission of OpenAI,” he says. The idea, he says, is to ensure AGI benefits everyone. It’s “one of the highest and noblest callings we could possibly have.”

Kolter laid out OpenAI’s different safety groups: the safety systems team, which works on guardrails and evaluations; the preparedness team, which deals with OpenAI’s preparedness…

He said when he joined, OpenAI was a team of about 50 people, and that it essentially felt like “an extension of a graduate student lab in a university” — a “collegiate, academic,…

He said Brockman and Sutskever were the “main leaders,” and that Brockman was the “engineering workhorse that pushed to build scaled-up systems that would train the AI and make it…

That’s according to Josh Achiam, currently the company’s chief futurist, who joined in 2017. He said Sutskever’s impassioned speeches would typically be about the…