Mental health concerns linked to the use of AI chatbots have been dominating the headlines. One person who’s taken careful note is Joe Braidwood, a tech executive who last year launched an AI therapy platform called Yara AI. Yara was pitched as a “clinically-inspired platform designed to provide genuine, responsible support when you need it most,” trained by mental health experts to offer “empathetic, evidence-based guidance tailored to your unique needs.” But the startup is no more: earlier this month, Braidwood and his co-founder, clinical psychologist Richard Stott, shuttered the company, discontinuing its free-to-use product and canceling the launch of its upcoming subscription service, citing safety concerns.
“We stopped Yara because we realized we were building in an impossible space. AI can be wonderful for everyday stress, sleep troubles, or processing a difficult conversation,” he wrote on LinkedIn. “But the moment someone truly vulnerable reaches out—someone in crisis, someone with deep trauma, someone contemplating ending their life—AI becomes dangerous. Not just inadequate. Dangerous.” In a reply to one commenter, he added, “the risks kept me up all night.”
The use of AI for therapy and mental health support is only just starting to be researched, and early results are mixed. But users aren’t waiting for an official go-ahead: therapy and companionship now rank as the top way people engage with AI chatbots, according to an analysis by Harvard Business Review.







