While the software industry has made genuine strides over the past few decades to deliver products securely, the furious pace of AI adoption is putting that progress at risk. Businesses are moving fast to self-host LLM infrastructure, drawn by the promise of AI as a force multiplier and the pressure to deliver more value faster. But speed is coming at the expense of security.

In the wake of the ClawdBot fiasco — the viral self-hosted AI assistant that’s averaging an eye-watering 2.6 CVEs per day — the Intruder team wanted to investigate how bad the security of AI infrastructure actually is.

To scope the attack surface, we used certificate transparency logs to enumerate just over 2 million hosts, which between them exposed around 1 million services. What we found wasn’t pretty. In fact, the AI infrastructure we scanned was more vulnerable, exposed, and misconfigured than any other software we’ve ever investigated.
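The enumeration step can be sketched roughly as follows. This is a toy parser over crt.sh-style certificate transparency JSON, not our actual pipeline; the `name_value` field layout and the sample entries are assumptions for illustration only:

```python
import json

def extract_hosts(ct_json: str) -> set[str]:
    """Collect unique hostnames from crt.sh-style certificate
    transparency output. Each entry's name_value may hold several
    newline-separated names, including wildcard entries."""
    hosts = set()
    for entry in json.loads(ct_json):
        for name in entry["name_value"].splitlines():
            name = name.strip().lstrip("*.")  # drop wildcard prefixes
            if name:
                hosts.add(name.lower())
    return hosts

# Tiny inline sample standing in for a real crt.sh response.
sample = json.dumps([
    {"name_value": "ollama.example.com\n*.ollama.example.com"},
    {"name_value": "chat.example.org"},
])
print(sorted(extract_hosts(sample)))
# → ['chat.example.org', 'ollama.example.com']
```

In practice, the hostnames recovered this way still have to be resolved and port-scanned before you know which of them actually expose a live service.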

No authentication by default

It didn’t take long to spot an alarming pattern: a significant number of hosts had been deployed straight out of the box, with no authentication in place. A look at the source code revealed why: in many of these projects, authentication simply isn’t enabled by default.
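As a rough illustration of what spotting that pattern involves, here is a hypothetical heuristic for classifying an HTTP response as unauthenticated. The function name and rules are our own sketch, not part of any particular scanner:

```python
def looks_unauthenticated(status: int, headers: dict) -> bool:
    """Heuristic: a 2xx response with no auth challenge suggests the
    service answered without requiring credentials; a 401/403 status
    or a WWW-Authenticate header suggests some auth is in place."""
    if status in (401, 403):
        return False
    if "www-authenticate" in {k.lower() for k in headers}:
        return False
    return 200 <= status < 300

# An API answering 200 with no challenge would be flagged:
print(looks_unauthenticated(200, {"Content-Type": "application/json"}))  # → True
print(looks_unauthenticated(401, {"WWW-Authenticate": "Basic"}))         # → False
```

A real check would also need to confirm the response contains actual application data, since some services return 200 with a login page rather than a proper challenge.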