by Jonathan Anthony

Open source AI trust has become a central concern for enterprises moving agentic AI into production, where governance, security and reliability matter as much as model performance.

That pressure is landing squarely on platform companies to provide standardized, shared foundations that absorb the complexity so enterprises don’t have to. Notably, the industry has been here before — with Linux and Kubernetes — but the velocity of AI hardware and model cycles is forcing a new kind of co-engineering discipline, according to Chris Wright (pictured), chief technology officer and senior vice president of global engineering at Red Hat Inc.

“As you’re building agents that can write code and do things — take real actions within your real business — how do you trust that?” Wright said. “You’ve got to give it the right sandboxing. You’ve got to put protections around the agent, give it least privileges so it doesn’t have to think about read versus read-write — very big difference. How do you manage that at scale with potentially hundreds or thousands of agents? I think building trust is critical.”

Wright spoke with theCUBE’s Rob Strechay and Rebecca Knight at Red Hat Summit 2026, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed open source AI trust, inference economics, agent identity governance and Red Hat’s push to establish a standard execution layer for the AI era. (* Disclosure below.)