Meta CEO Mark Zuckerberg took to his social network Threads two weeks ago to announce Meta Compute, a new “top-level initiative” led by the company’s most senior executives. In doing so, he reemphasized Meta’s commitment to being an AI infrastructure behemoth—and signaled that Meta has no intention of being an also-ran in the data center build-out race.

The new organization is designed to secure the massive amounts of computing power—measured in gigawatts, each of which could power hundreds of thousands of homes—needed for Meta’s drive to build AI models that lead to “superintelligence.”

“Meta is planning to build tens of gigawatts this decade, and hundreds of gigawatts or more over time,” Zuckerberg wrote. “How we engineer, invest, and partner to build this infrastructure will become a strategic advantage.”

Under the new structure, Zuckerberg said longtime Meta executive Santosh Janardhan will continue to run the company’s technical architecture, software, custom chips, and the day-to-day building and operation of Meta’s vast data center network. Meanwhile, Daniel Gross—one of Zuckerberg’s high-profile AI hires from last summer, and previously a cofounder of Safe Superintelligence alongside former OpenAI chief scientist Ilya Sutskever—will lead a new group focused on the long game: how much computing power Meta will need years from now, where it should be built, how to secure scarce chips and energy, and how to model the business impact of those bets.