Healthcare's Program.md

Urvish Parikh is the co-founder and CTO of Nirvana.

The eligibility check runs in under two seconds. It returns either a code that unlocks care or one that doesn't. The payer sends the response. The system logs it. The provider acts on it. Somewhere in that chain is a person who doesn't know any of this is happening.

We spent years building that infrastructure. APIs, clearinghouse integrations, denial pattern analysis. We got fast. We got accurate. We hit the SLAs. And somewhere in all of that, I started noticing something uncomfortable: The system was incredibly precise about the question it was built to answer, and completely silent about every question it wasn't.

Does this patient qualify? Yes. No. Maybe with authorization.

What does this patient need? The field doesn't exist.

There's a concept in linguistics called the Sapir-Whorf hypothesis: the idea that language doesn't just describe reality; it constrains what reality you can perceive in the first place. I think about this a lot when I look at healthcare data. Every schema is a theory of the world. It decides what can be named, counted, transmitted and acted on. Healthcare's schemas were built by people solving real problems: billing accuracy, fraud reduction, throughput. But those schemas carried a theory inside them that nobody quite said out loud: that health is a collection of discrete, encodable events. Diagnoses. Procedures. Authorizations.

The body doesn't experience itself that way. It generates continuous signal. The system captures fragments of that signal and then acts on the fragments as though they were the whole.

We didn't build healthcare infrastructure. We built a particular theory of what health is, and then we forgot we'd built it.

Now AI is moving into that space. And I think, if we're paying attention, this is actually the most interesting moment healthcare technology has ever been in.

Andrej Karpathy recently released a project called AutoResearch. It's an AI agent that autonomously runs hundreds of machine learning experiments overnight while you sleep, relentlessly optimizing toward a single metric. The insight at its center is deceptively simple: The agent will saturate whatever space you define for it. It doesn't skip corners. It fills completely.

But here's what Karpathy made explicit in the design: The most important file in the whole system isn't the code. It's program.md, the markdown file a human writes that tells the agent what to optimize for and what constraints must never change. Get that file right, and the agent is extraordinary. Get it wrong, and the agent will optimize a proxy rather than the actual outcome, with relentless efficiency, at a scale no human could match.

Healthcare's schemas are its program.md. And most of the industry hasn't looked at that file in decades.

The same property that makes AI dangerous inside a bad ontology makes it powerful inside a good one: it elaborates whatever space you give it with a confidence and speed no prior system could match. Which means the schema matters more now than it ever did before. Not less.

But here's what's genuinely new: For the first time, we have tools that can map the edges of our own assumptions. AI models are constantly encountering the limits of the ontologies they operate inside. Confidence drops in particular places. Exceptions cluster in ways that aren't random. The model strains at the boundaries. That's not noise; that's the schema telling you something it couldn't tell you before. We've never had that signal at this resolution.
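What would capturing that signal even look like? Here is a rough sketch, nothing more: log every low-confidence output a model produces and look at where those outputs cluster. Every name, field and threshold below is hypothetical, invented for illustration rather than drawn from any real claims system; the shape of the idea is the point.

```python
# Hypothetical sketch: treat low-confidence model outputs as "schema strain"
# and surface where they cluster, rather than discarding them as noise.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Prediction:
    claim_id: str
    payer: str
    field: str          # the schema field the model was asked to populate
    value: str
    confidence: float   # the model's own confidence in that value


def schema_strain_report(predictions, threshold=0.6, min_cluster=25):
    """Group low-confidence predictions by (payer, field).

    Clusters larger than `min_cluster` are candidates for a missing or
    mis-specified field in the data model, not just model error.
    """
    strained = [p for p in predictions if p.confidence < threshold]
    clusters = Counter((p.payer, p.field) for p in strained)
    return [
        {"payer": payer, "field": field, "low_confidence_count": count}
        for (payer, field), count in clusters.most_common()
        if count >= min_cluster
    ]

# Usage: route the report to the team that owns the schema, not the team
# that owns the model. The goal is to evolve the data model, not the weights.
# report = schema_strain_report(last_night_predictions)
```

None of this is sophisticated. It's a routing decision: the strain signal goes to whoever owns the data model, instead of being absorbed as one-off workflow patches.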
The question is whether we build systems that listen to it. That's the opportunity inside the risk. The same system filling the space can also show you where the space runs out.

So, what actually helps? Two things seem directionally right to me.

One is feedback loops: building systems designed to surface what the schema is missing, not just optimize what it captures. Most organizations absorb the signal of ontological strain as operational friction and patch it at the workflow layer. Instead, it should flow back into how the data model itself evolves. That's a different kind of infrastructure, but it's buildable.

The other is owning schema decisions as decisions. Not defaults inherited from a system nobody remembers designing. The theory of health embedded in your data model should be legible, documented, versioned and revisable. This ensures that teams building on top of it know what they're actually standing on and can change it when they learn something new. Right now, most can't. The assumptions are load-bearing and invisible simultaneously.

Neither of these is technically complicated. Both require treating schema design as an ongoing epistemic responsibility rather than a problem you solve once and move past. But organizations that treat their ontology as a living thing build something with a very different ceiling than everyone else.
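To make the second point concrete, here is a minimal, hypothetical sketch of a schema decision written down as a decision. None of these names or fields come from a real system; what matters is only that the theory behind a field is somewhere a team can read, question and version.

```python
# Hypothetical sketch: the data model's assumptions recorded as data,
# so they are legible, versioned and revisable rather than implicit.
from dataclasses import dataclass, field


@dataclass
class SchemaDecision:
    field_name: str
    version: str
    theory: str                      # what this field assumes about health
    known_gaps: list = field(default_factory=list)
    superseded_by: str | None = None


ELIGIBILITY_V1 = SchemaDecision(
    field_name="eligibility_status",
    version="1.0",
    theory="Care access is a yes/no/needs-authorization decision per encounter.",
    known_gaps=[
        "No representation of what the patient actually needs.",
        "No way to record that a response was technically correct but clinically wrong.",
    ],
)

# Revising the theory is an explicit, reviewable act, not a silent migration.
ELIGIBILITY_V2 = SchemaDecision(
    field_name="eligibility_status",
    version="2.0",
    theory="Eligibility is one signal among several about what care is appropriate.",
)
ELIGIBILITY_V1.superseded_by = ELIGIBILITY_V2.version
```

Whether that record lives in code, in a data catalog or in a markdown file next to the pipeline matters less than that it exists, and that changing it is an explicit act someone can review.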
The theory of health embedded in our schemas was never final. It was always provisional, a best approximation built under constraints that no longer fully apply. What AI gives us, if we use it well, is the ability to see that provisionality clearly for the first time. To surface the assumptions, examine them and build something closer to what health actually is.

That's not a small thing. That's the whole game.















