DANIEL RAUSCH, AMAZON’S vice president of Alexa and Echo, is in the midst of a major transition. More than a decade after the launch of Amazon’s Alexa, he’s been tasked with creating a new version of the marquee voice assistant, one powered by large language models. As he put it in our interview, this new assistant, dubbed Alexa+, is “a complete rebuild of the architecture.”

How did his team approach Amazon’s largest-ever revamp of its voice assistant? They used AI to build AI, of course.

“The rate with which we're using AI tooling across the build process is pretty staggering,” Rausch says. While creating the new Alexa, Amazon used AI during every step of the build. And yes, that includes generating parts of the code.

The Alexa team also brought generative AI into the testing process. The engineers used “a large language model as a judge on answers” during reinforcement learning, in which the AI compared two Alexa+ outputs and selected the one it considered better.
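Amazon hasn’t published the details of its setup, but the “LLM as a judge” pattern it describes generally works like this: two candidate answers are formatted into a single prompt, and a separate judge model is asked which one is better, producing the preference labels used in reinforcement learning. The sketch below is illustrative only; every name in it is an assumption, and a toy stand-in function replaces the real judge model.

```python
# Minimal sketch of the "LLM as a judge" pattern for preference selection.
# All names here are illustrative assumptions; Amazon has not published
# its actual implementation.

def build_judge_prompt(question: str, answer_a: str, answer_b: str) -> str:
    """Format the two candidate outputs into one prompt for the judge model."""
    return (
        f"Question: {question}\n\n"
        f"Answer A: {answer_a}\n\n"
        f"Answer B: {answer_b}\n\n"
        "Which answer is better? Reply with exactly 'A' or 'B'."
    )

def judge_pair(question, answer_a, answer_b, call_llm) -> str:
    """Ask the judge LLM to pick the better of two candidate answers.

    `call_llm` is any function that takes a prompt string and returns the
    model's text reply (e.g. a thin wrapper around a chat-completion API).
    The winning answer becomes the "preferred" example for training.
    """
    verdict = call_llm(build_judge_prompt(question, answer_a, answer_b))
    return answer_a if verdict.strip().upper().startswith("A") else answer_b

def stub_judge(prompt: str) -> str:
    """Toy stand-in for a real judge LLM: prefers the longer answer."""
    a = prompt.split("Answer A: ")[1].split("\n\nAnswer B:")[0]
    b = prompt.split("Answer B: ")[1].split("\n\nWhich")[0]
    return "A" if len(a) >= len(b) else "B"

if __name__ == "__main__":
    winner = judge_pair(
        "What's the weather like?",
        "It is sunny and 72 degrees with light winds from the west.",
        "Sunny.",
        stub_judge,  # in practice, a call to an actual judge model
    )
    print(winner)
```

In a real pipeline, the winning answer from each comparison becomes the preferred example in a pairwise preference dataset, replacing or supplementing human raters.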