For the past several years, Yoshua Bengio, a professor at the Université de Montréal whose work helped lay the foundations of modern deep learning, has been one of the AI industry’s most alarmed voices, warning that superintelligent systems could pose an existential threat to humanity—particularly because of their potential for self-preservation and deception.

In a new interview with Fortune, however, the deep-learning pioneer said his latest research points to a technical solution for AI’s biggest safety risks, and that as a result his optimism has risen “by a big margin” over the past year.

Bengio’s nonprofit, LawZero, which launched in June, was created to develop new technical approaches to AI safety based on research led by Bengio. Today, the organization—backed by the Gates Foundation and existential-risk funders such as Coefficient Giving (formerly Open Philanthropy) and the Future of Life Institute—announced that it has appointed a high-profile board and global advisory council to guide Bengio’s research, and advance what he calls a “moral mission” to develop AI as a global public good.

The board includes Nike Foundation founder Maria Eitel as chair, along with Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace, and historian Yuval Noah Harari. Bengio himself will also serve on the board.