As the business world comes to grips with artificial intelligence, the biggest risk may be one that those running the economy can't possibly stay ahead of. As AI systems grow more complex, humans can no longer fully understand, predict, or control them. That inability to grasp, at a fundamental level, where AI models are headed in the coming years makes it harder for organizations deploying AI to anticipate risks and apply guardrails.
“We’re fundamentally aiming at a moving target,” said Alfredo Hickman, chief information security officer at Obsidian Security.
A recent conversation with the founder of a company building core AI models left Hickman shocked, he says, "when they told me that they don't understand where this tech is going to be in the next year, two years, three years. ... The technology developers themselves don't understand and don't know where this technology is going to be."
As organizations connect AI systems to real-world business operations, using them to approve transactions, write code, interact with customers, and move data between platforms, they are encountering a growing gap between how they expect these systems to behave and how they actually perform once deployed. They are quickly discovering that AI isn't dangerous because it's autonomous but because it increases system complexity beyond human comprehension.