There’s a pattern playing out inside almost every engineering organization right now. A developer installs GitHub Copilot to ship code faster. A data analyst starts querying a new LLM tool for reporting. A product team quietly embeds a third-party model into a feature branch. By the time the security team hears about any of it, the AI is already running in production — processing real data, touching real systems, making real decisions.
That gap between how fast AI enters an organization and how slowly governance catches up is exactly where risk lives. According to 'AI Security Governance: A Practical Framework for Security and Development Teams,' a new practical guide from Mend, most organizations still aren't equipped to close it. The guide doesn't assume you already have a mature security program built around AI. It assumes you're an AppSec lead, an engineering manager, or a data scientist trying to figure out where to start, and it builds the playbook from there.
The framework begins with a critical premise: governance is impossible without visibility, or as the guide puts it, 'you cannot govern what you cannot see.' To ensure that visibility, it defines 'AI assets' broadly, covering everything from AI development tools (like Copilot and Codeium) and third-party APIs (like OpenAI and Google Gemini) to open-source models, AI features in SaaS tools (like Notion AI), internal models, and autonomous AI agents. To address 'shadow AI' (tools in use that security hasn't approved or catalogued), the framework stresses that discovery must be a non-punitive process, so developers feel safe disclosing what they're already using.
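To make that inventory idea concrete, here is a minimal sketch of what a single record in an AI asset inventory might capture. It is not taken from Mend's guide; the `AIAsset` class, the category names, and every field are illustrative assumptions, showing only that the asset categories above and a shadow-AI flag can live in one lightweight structure.

```python
from dataclasses import dataclass, field
from enum import Enum


class AssetType(Enum):
    """Illustrative categories mirroring the broad definition of 'AI assets'."""
    DEV_TOOL = "ai_development_tool"       # e.g. Copilot, Codeium
    THIRD_PARTY_API = "third_party_api"    # e.g. OpenAI, Google Gemini
    OPEN_SOURCE_MODEL = "open_source_model"
    SAAS_FEATURE = "ai_feature_in_saas"    # e.g. Notion AI
    INTERNAL_MODEL = "internal_model"
    AI_AGENT = "autonomous_agent"


@dataclass
class AIAsset:
    """One row in a hypothetical AI asset inventory."""
    name: str
    asset_type: AssetType
    owner: str                     # team or person accountable for the asset
    data_touched: list[str] = field(default_factory=list)  # e.g. ["source code"]
    approved: bool = False         # False means it surfaced as shadow AI
    notes: str = ""                # how it was discovered, who disclosed it


# Example: a tool disclosed by a developer and recorded without penalty.
inventory = [
    AIAsset(
        name="GitHub Copilot",
        asset_type=AssetType.DEV_TOOL,
        owner="platform-engineering",
        data_touched=["source code"],
        approved=False,
        notes="Disclosed during a team AI discovery survey.",
    ),
]

for asset in inventory:
    status = "approved" if asset.approved else "pending review (shadow AI)"
    print(f"{asset.name} [{asset.asset_type.value}] owned by {asset.owner}: {status}")
```

Keeping the record this small is deliberate under these assumptions: the point of the discovery step is to get tools on the list quickly and without friction, not to front-load a full risk assessment.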







