AI coding tools proliferated widely across technical teams in 2025, shifting how developers work and how companies across industries develop and launch products and services. According to Stack Overflow’s 2025 survey of 49,000 developers, 84% said they’re using the tools, with 51% doing so daily.

AI coding tools have also caught the interest of another group: malicious actors. While a breach of the tools hasn’t so far caused a wide-scale attack, there have been a few exploits and near-misses, and cyberthreat researchers have discovered critical vulnerabilities in several popular tools that make clear what could go horribly wrong.

Any emerging technology creates a new opening for cyberattacks, and in a way, AI coding tools are just another door. At the same time, the agentic nature of many AI-assisted coding capabilities makes it crucial for developers to check every aspect of the AI’s work, making it easy for small oversights to warp into critical security issues. Security experts also say the nature of how AI coding tools function makes them susceptible to prompt injection and supply-chain attacks, the latter of which are especially damaging as they affect companies downstream that use the tool.