Good morning. Many boards are approving AI strategies without clear visibility into whether the underlying controls actually work, leaving CFOs exposed when regulators, auditors, or investors ask for proof, according to new research. In the private sector, health care appears to face the steepest challenge.
Kiteworks, a technology security company, has released its “Data Security and Compliance Risk: 2026 Forecast Report,” based on a survey of 225 security, IT, compliance, and risk leaders across 10 industries and eight regions.
One of the key findings is that 53% of organizations cannot remove personal data from AI models once that data has been used, creating long-term exposure under GDPR, CPRA, and emerging AI regulations.
All respondents said agentic AI is on their roadmap, but the controls to govern those systems are lagging. Overall, 63% cannot enforce purpose limitations on AI agents, 60% lack kill-switch capabilities, and 72% have no software bill of materials (SBOM) for AI models in their environment. The result: AI systems are accessing, processing, and learning from sensitive data while organizations cannot fully track where that data goes or prove how it is being used, according to the report.
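To make those terms concrete, the minimal Python sketch below illustrates what a purpose-limitation check and a kill switch for an AI agent could look like. It is a hypothetical example under assumed names and policies, not Kiteworks' methodology or any vendor's actual product.

```python
"""Illustrative sketch of two controls the report says most organizations lack:
a purpose-limitation check and a kill switch gating an AI agent's data access.
All class names, purposes, and rules here are assumptions for illustration."""

from dataclasses import dataclass, field


@dataclass
class AgentGovernor:
    # Purposes this agent is allowed to use data for (purpose limitation).
    allowed_purposes: set[str] = field(default_factory=set)
    # Global kill switch: when True, every data request is refused.
    killed: bool = False

    def kill(self) -> None:
        """Immediately halt all further data access by the agent."""
        self.killed = True

    def authorize(self, purpose: str, data_classification: str) -> bool:
        """Return True only if the agent may touch data for this purpose."""
        if self.killed:
            return False
        if purpose not in self.allowed_purposes:
            return False  # purpose limitation: deny anything off-policy
        # Illustrative rule: never let an agent read restricted data.
        return data_classification != "restricted"


if __name__ == "__main__":
    governor = AgentGovernor(allowed_purposes={"invoice_matching"})
    print(governor.authorize("invoice_matching", "internal"))    # True
    print(governor.authorize("marketing_outreach", "internal"))  # False: purpose not allowed
    governor.kill()
    print(governor.authorize("invoice_matching", "internal"))    # False: kill switch engaged
```

The point of the sketch is simply that these controls are small, auditable pieces of logic; the survey's finding is that most organizations have not put anything equivalent in front of their AI agents.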