Stephen D Turner of the University of Virginia explores the importance of governance and oversight around AI in the design and execution of lab experiments.
A version of this article was originally published by The Conversation (CC BY-ND 4.0)
Artificial intelligence is rapidly learning to autonomously design and run biological experiments, but the systems intended to govern those capabilities are struggling to keep pace.
AI company OpenAI and biotech company Ginkgo Bioworks announced in February 2026 that OpenAI’s flagship model GPT-5 had autonomously designed and run 36,000 biological experiments. It did so through a robotic cloud laboratory: a facility where automated equipment, controlled remotely by computers, carries out experiments. The AI model proposed study designs; robots executed them and fed the data back to the model for the next round. Humans set the goal, the machines did much of the work in the lab, and the cost of producing a desired protein fell by 40pc.
This is programmable biology: designing biological components on a computer and building them in the physical world, with AI closing the loop.
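The closed loop described above can be sketched in code. The following is a toy illustration only, not OpenAI's or Ginkgo's actual system: the `propose` and `run_experiment` functions are hypothetical stand-ins for the AI model and the robotic lab, and the "yield" objective is invented for the example.

```python
import random

random.seed(0)  # deterministic for illustration

def propose(history):
    """Stand-in for the AI model: propose the next experimental design.
    Here, a single parameter (e.g. a reagent concentration) nudged
    around the best result seen so far."""
    if not history:
        return random.uniform(0.0, 10.0)
    best_design, _ = max(history, key=lambda h: h[1])
    return min(10.0, max(0.0, best_design + random.uniform(-1.0, 1.0)))

def run_experiment(design):
    """Stand-in for the robotic cloud lab: run the design and report
    a measured outcome. A toy objective peaking at design = 7.0."""
    return -(design - 7.0) ** 2

# Closed loop: the model proposes, the "lab" runs it,
# and the result feeds back into the next proposal.
history = []
for _ in range(50):
    design = propose(history)
    result = run_experiment(design)
    history.append((design, result))

best_design, best_result = max(history, key=lambda h: h[1])
```

In a real deployment the proposal step would be a large model reasoning over prior assay data, and the experiment step would be a physical protocol on automated equipment; the structure of the loop, however, is the same.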