The New Yorker’s recently published investigation into OpenAI chief executive Sam Altman posed a loaded question: can the people building this powerful technology actually be trusted?
The report described a system where commercial incentives drive behaviour and oversight is treated as a nuisance. In Aotearoa New Zealand, there is a similarly urgent question: can the governance frameworks we are building to manage that technology be trusted to work?
This is particularly relevant to the public service and government agencies, now being encouraged to embrace AI. At a recent International Research Society for Public Management conference, the global research community grappled with how AI can align with the public interest.
A clear divergence is emerging. Some jurisdictions are building surveillance-heavy data systems, while others are constructing robust, binding systems to protect citizen consent.
Aotearoa New Zealand occupies a precarious middle ground. The Public Service AI Framework names the right principles — transparency, fairness and human oversight — but it is explicitly non-binding.