Cash-hungry Silicon Valley firms are scrambling for revenue. Regulate them now before the tech becomes too big to fail
Hardly a month passes without an AI grandee cautioning that the technology poses an existential threat to humanity. Many of these warnings might be hazy or naive. Others may be self-interested. Calm, level-headed scrutiny is needed. Some warnings, though, are worth taking seriously.
Last week, several notable AI safety researchers quit, warning that firms chasing profits are sidelining safety and pushing risky products. In the near term, this suggests a rapid “enshittification” in pursuit of short-term revenue. Without regulation, public purpose gives way to profit. Surely AI’s expanding role in government and daily life – as well as billionaire owners’ desire for profits – demands accountability.
The choice to use agents – chatbots – as the main consumer interface for AI was primarily commercial. The appearance of conversation and reciprocity promotes deeper user interaction than a Google search bar. The OpenAI researcher Zoë Hitzig has warned that introducing ads into that dynamic risks manipulation. OpenAI says ads do not influence ChatGPT’s answers. But, as with social media, they may become less visible and more psychologically targeted – drawing on extensive private exchanges.