Meta Compute: Meta's big bet on AI superintelligence

Last update: 14/01/2026

  • Meta launches Meta Compute to build gigawatt-scale AI infrastructure.
  • The project will be co-led by Santosh Janardhan and Daniel Gross, under the supervision of Dina Powell McCormick.
  • The company plans to invest hundreds of billions of dollars and reach hundreds of gigawatts of computing capacity.
  • Meta relies on long-term energy agreements, including nuclear power, to power its data centers.

Meta has decided to take a leap forward in its bet on next-generation artificial intelligence with the launch of Meta Compute, a high-level initiative with which Mark Zuckerberg's company aims to position itself at the forefront of the race for superintelligence. The proposal aspires to redesign the company's technological infrastructure from top to bottom to support increasingly powerful and ubiquitous AI models.

As the CEO himself has explained in several public communications, the goal is to deploy unprecedented computing power based on new data centers, specialized hardware and large-scale energy agreements. The plan is not limited to adding servers: it seeks to create a platform capable of offering what Meta calls "personal superintelligence" to billions of users worldwide.

What exactly does the Meta Compute initiative consist of?

Meta Compute is presented as the strategic umbrella under which all of the company's AI computing infrastructure will be organized over the next few years. The company had already been hinting at its intention to aggressively increase the available power in its data centers, but it is now articulating that ambition as a formal program, with defined leadership and very clear capacity goals.

Mark Zuckerberg has detailed that Meta plans to build "tens of gigawatts" of computing power over the course of this decade, with the intention of eventually reaching "hundreds of gigawatts or more." We are talking about an energy scale similar to that consumed by entire cities or even small countries, primarily intended for training and running advanced AI models.
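
To make that scale concrete, a quick back-of-envelope conversion helps: a fleet drawing power continuously turns gigawatts of sustained load into terawatt-hours per year. The following Python sketch uses illustrative figures of our own (not numbers published by Meta), and the reference value for Spain's annual electricity demand is approximate.

    # Back-of-envelope conversion from sustained power (GW) to annual energy (TWh).
    # Illustrative figures, not numbers published by Meta.
    HOURS_PER_YEAR = 24 * 365  # 8,760 hours

    def annual_energy_twh(gigawatts: float) -> float:
        """Energy used in a year by a load running continuously at `gigawatts`."""
        return gigawatts * HOURS_PER_YEAR / 1_000  # GWh -> TWh

    for gw in (1, 10, 100):
        print(f"{gw:>3} GW sustained ~ {annual_energy_twh(gw):,.0f} TWh/year")

    # Output: 1 GW ~ 9 TWh/yr, 10 GW ~ 88 TWh/yr, 100 GW ~ 876 TWh/yr.
    # For reference, Spain's total annual electricity demand is roughly 230 TWh.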

This infrastructure will rely on a global network of next-generation data centers designed to house high-performance chips and architectures optimized for massive AI workloads. Meta had already announced facilities with capacities exceeding one gigawatt that will begin operating this year, and Meta Compute will be the framework coordinating their construction, operation and evolution.

The project is closely linked to the plans Meta announced in July of last year, when it launched Superintelligence Labs, a team specializing in the development of more sophisticated AI models under the guidance of industry figures such as Alexandr Wang and Nat Friedman. In this context, Meta Compute becomes the infrastructure component that must support those aspirations of superintelligence.

At the same time, the company has indicated that it expects to invest "hundreds of billions of dollars" in computing in the coming years. These figures cover the construction of data centers, the design of its own chips, improvements in the software layer, and tools for development teams to better leverage that computing power.

A very energy-intensive plan

Scaling up the infrastructure to this level means fully confronting the problem of energy consumption associated with AI. Meta itself acknowledges that the amount of energy needed to power Meta Compute's future server fleet will be comparable to that of several medium-sized cities, which comes amid growing concern about the environmental impact of data centers.

To guarantee long-term supply, the company is closing large-scale deals with energy suppliers. In the United States, for example, it has signed multi-year contracts to purchase electricity from nuclear power plants and advanced projects, including small modular reactors that could become operational in the next decade. These initiatives aim to secure a relatively stable source of energy free from direct carbon emissions.

Meta's energy strategy aligns with that of other major technology companies which, faced with increased electricity demand from AI and data centers, are trying to secure their access to reliable power sources. The shift towards nuclear energy, which until a few years ago seemed improbable in the digital sector, is gaining traction as an option to sustain the growth of computing without triggering a surge in CO₂ emissions.

At the same time, Meta is aware of the criticism surrounding the intensive use of resources such as water for data center cooling, as well as the impact on regional power grids. Within Meta Compute, the company states that it will work on more efficient designs and on cooling and thermal management technologies that reduce the ecological footprint of its facilities.

This point is not insignificant for Europe and, in particular, for countries like Spain, where the debate on the sustainability of digital infrastructure is increasingly present and investments in data centers are being closely scrutinized in terms of energy and water consumption.

Who's in charge at Meta Compute: the new organizational chart

To understand the importance of Meta Compute within the company, one need only look at the caliber of the management team taking charge. The initiative will be co-led by Santosh Janardhan and Daniel Gross, two profiles with strong technical and strategic weight, and will operate under the political and financial supervision of Dina Powell McCormick.

Janardhan, current Director of Global Infrastructure at Meta, will continue to oversee the technical architecture of the systems, the silicon program (i.e., chip development and selection), the data center software stack, and developer productivity. He will also retain responsibility for the construction and operation of the global fleet of data centers and the network that interconnects them.

For his part, Daniel Gross, former CEO of Safe Superintelligence, takes the helm of a new group within Meta Compute in charge of long-term capacity strategy. Its responsibilities will include industry analysis, infrastructure expansion planning, forming alliances with key suppliers, and modeling the business case associated with all this investment.

The third pillar of leadership is Dina Powell McCormick, recently appointed president and vice chair of Meta. Her role will focus, as Zuckerberg explained, on the relationship with governments and sovereign entities. In practice, this means negotiating regulatory frameworks, facilitating permits for new facilities and structuring public-private financing mechanisms to build the infrastructure.

This management structure places Meta Compute very close to the company's highest decision-making level. Zuckerberg has indicated that the way Meta designs, invests and partners to build this infrastructure will become a decisive element of its strategic advantage over other industry players.

Personal superintelligence for billions of users

Beyond the gigawatt figures or the names leading the initiative, Meta Compute's ultimate goal is to support a new generation of services based on what the company calls "personal superintelligence". The idea is that users will have access to AI assistants and systems far more advanced than today's, integrated into Meta platforms such as Facebook, Instagram, WhatsApp and other products.

This vision aligns with the creation of Superintelligence Labs, the team dedicated to exploring AI models with more sophisticated cognitive capabilities that approach the theoretical concept of superintelligence: systems that could outperform humans in multiple reasoning and decision-making tasks. To ensure these capabilities do not remain confined to the laboratory, Meta Compute must provide the physical and logical foundation that makes them usable at scale.

Zuckerberg has insisted that the company's ambition is for this personal superintelligence to be accessible to "billions of people". This involves not only training gigantic models, but also deploying them efficiently and safely, so that they work in real time for users around the world, across different devices and connectivity conditions.

In Europe, this approach poses additional challenges, as the deployment of advanced AI services must comply with a stricter regulatory framework regarding data protection, algorithmic transparency and security. The future application of the European Union's AI Act will force Meta to adapt the design and use of its models to comply with EU regulations.

The company is aware that, if it wants its personal superintelligence to have a strong presence in markets like Europe, it will have to combine Meta Compute's raw power with strict compliance with legal obligations and clear communication about how those systems work.

Massive investment and a global race for AI infrastructure

The launch of Meta Compute comes at a time when major technology companies are competing to secure computing resources, energy and talent for their AI projects. After a lukewarm reception to some versions of its Llama models, Meta has intensified its focus on infrastructure as a way to regain ground against other players in the sector.

The company has committed around $72 billion in capital expenditures for 2025, focused on data centers and AI systems, and has floated investment forecasts that could reach $600 billion in infrastructure and jobs related to artificial intelligence through 2028. Meta Compute thus becomes the organizational vehicle for this enormous investment effort.
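
As a rough illustration of the implied pace of spending, the following back-of-envelope exercise of our own divides the remaining announced amount over three years. The two figures are not strictly comparable, since the $600 billion covers infrastructure and jobs rather than capital expenditure alone, so treat the result as an order-of-magnitude sketch.

    # Rough implied annual run rate for 2026-2028.
    # Assumption: the announced cumulative $600B and the ~$72B 2025 capex
    # are treated as comparable; this is illustration, not Meta's guidance.
    capex_2025 = 72                # billions of USD
    cumulative_through_2028 = 600  # billions of USD
    remaining_years = 3            # 2026, 2027, 2028

    run_rate = (cumulative_through_2028 - capex_2025) / remaining_years
    print(f"Implied spend: ~${run_rate:.0f}B per year in 2026-2028")  # ~$176B/year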

In parallel, the company has signed 20-year electricity supply contracts with power generation plants, especially in the United States, to ensure that the energy needed to power its data centers will be available at relatively predictable prices. This strategy is also being adopted by other tech giants, given the realization that AI is reshaping electricity demand, which is rising again after decades of relative stability.

For Europe, these dynamics could translate into an increase in data center projects in countries with stable regulatory frameworks and good energy availability, such as Spain, Ireland or the Nordic countries. Although Meta has not yet specified the full location of the new Meta Compute facilities, the European market is among the priorities due to its size and the sophistication of its telecommunications infrastructure.

Meta's strategy is also under intense scrutiny from financial analysts. International investment firms are closely monitoring the evolution of capital costs, the expected return on investment in AI, and the impact on the company's stock market value. Currently, the majority consensus continues to see upside potential, but also highlights the risks associated with such concentrated and long-term investments.

Meta Compute is shaping up to be one of the most ambitious movements in the current technology sector: a project that combines physical infrastructure, energy, regulation and product vision to try to position Meta at the center of the next wave of artificial intelligence. Its success or failure will largely determine the balance of power in the industry for the next decade.
