Google’s recent decision to hide the raw reasoning tokens of its flagship model, Gemini 2.5 Pro, has sparked a fierce backlash from developers who have been relying on that transparency to build and debug applications.
The change, which echoes a similar move by OpenAI, replaces the model’s step-by-step reasoning with a simplified summary. The decision highlights a critical tension between creating a polished user experience and providing the observable, trustworthy tools that enterprises need.
As businesses integrate large language models (LLMs) into more complex and mission-critical systems, the debate over how much of the model’s internal workings should be exposed is becoming a defining issue for the industry.
A ‘fundamental downgrade’ in AI transparency
