Tech companies are hurtling toward a goal of artificial general intelligence, or AGI—technology that matches or exceeds human cognitive abilities. Anthropic CEO Dario Amodei and Tesla CEO Elon Musk have predicted that human-level artificial intelligence could arrive as early as this year. Despite optimism about the technology among business leaders, AI experts say it could have catastrophic impacts if left uncontrolled.
Ex-Google insider and AI expert Tristan Harris joined The Diary of a CEO podcast with host Steven Bartlett last November to discuss the pursuit of AGI, which he acknowledges most industry leaders believe could arrive as soon as 2027. Harris said the mad dash to achieve human-level AI could create harmful incentives for unchecked growth, ultimately eroding safety, security, and economic well-being.
“It’s a kind of competitive logic that self-reinforces itself,” Harris said. “It forces everyone to be incentivized to take the most shortcuts, to care the least about safety or security, to not care about how many jobs get disrupted, to not care about the well-being of regular people.”
Today, AI companies are operating with minimal regulation. On the first day of his second term, President Donald Trump rolled back Biden-era AI regulations aimed at ensuring safe and secure implementation and at supporting workers facing job disruptions. And in December, Trump signed an executive order preempting state regulation of the technology, heading off a patchwork of state laws that the president said could “stymie innovation.” Harris argued unfettered AI growth is not in the average American’s best interest.