The clash between the two leading AI labs shows how much AI safety rests on the personalities shaping the industry.

OpenAI says its AI won’t be used for mass domestic surveillance or autonomous weapons, but legal experts warn gray areas in U.S. law could leave loopholes.

Anthropic—valued at $380 billion—faces what my colleague Jeremy Kahn calls “the biggest crisis in its five-year existence.”

The lesson here isn’t that one AI company is more ethical than another. It’s that we must renovate our democratic structures.

The dispute between the AI company and the Department of War raises key questions about who should control AI, and how.

AI has made battlefield decision-making faster. But it also raises questions about risk and ethics.

Anthropic’s AI is rapidly gaining adoption among companies, but with the Trump administration declaring economic war on the firm, the existential risk is real.


A tech policy professor who served in the US Air Force explains how a feud between an AI startup and the US military illuminates ethical fault lines.