
Inside Amazon’s Race to Build the AI Industry’s Biggest Datacenters

By Unknown Author|Source: Time|Read Time: 4 mins

Billions of dollars are pouring into custom chips for AI development, an infrastructure battle whose outcome will likely shape the direction of AI research for years to come. Cloud giants are positioning themselves now to gain a competitive edge, and the decisions made in this high-stakes race could have long-lasting implications for the AI industry as a whole.


Rami Sinno is crouched beside a filing cabinet, wrestling a beach-ball sized disc out of a box, when a dull thump echoes around his laboratory. “I just dropped tens of thousands of dollars’ worth of material,” he says with a laugh.

Straightening up, Sinno reveals the goods: a golden silicon wafer, which glitters in the fluorescent light of the lab. This circular platter is divided into some 100 rectangular tiles, each of which contains billions of microscopic electrical switches. These are the brains of Amazon’s most advanced chip yet: the Trainium 2, announced in December.

The AI Chip Race

For years, artificial intelligence firms have depended on one company, Nvidia, to design the cutting-edge chips required to train the world’s most powerful AI models. But as the AI race heats up, cloud giants like Amazon and Google have accelerated their in-house chip-design efforts, in pursuit of market share in the rapidly growing cloud computing industry, which was valued at $900 billion at the beginning of 2025.

This unassuming Austin, Texas, laboratory is where Amazon is mounting its bid for semiconductor supremacy. Sinno is a key player. He’s the director of engineering at Annapurna Labs, the chip design subsidiary of Amazon’s cloud computing arm, Amazon Web Services (AWS).

Project Rainier

Large as each of these fridge-sized supercomputers may be, a single unit is only a miniaturized simulacrum of the chips’ natural habitat. Soon thousands of them will be wheeled into several undisclosed locations in the U.S. and connected to form “Project Rainier,” one of the largest datacenter clusters ever built anywhere in the world, named after the giant mountain that looms over Amazon’s Seattle headquarters.

The precise number of chips involved in Project Rainier, the total cost of its datacenters, and their locations are all closely held secrets. Amazon claims the finished Project Rainier will be “the world’s largest AI compute cluster”: bigger, the implication is, than even OpenAI’s Stargate.

Amazon's Strategy

Amazon is building Project Rainier specifically for one client: the AI company Anthropic, which has agreed to a long lease on the massive datacenters. There, on hundreds of thousands of Trainium 2 chips, Anthropic plans to train the successors to its popular Claude family of AI models. The chips inside Rainier will collectively be five times more powerful than the systems used to train the best of those models.

Anthropic isn’t just a customer of Amazon; it’s also partially owned by the tech giant. Amazon has invested $8 billion in Anthropic for a minority stake in the company. Much of that money, in a weirdly circular way, will end up being spent on AWS datacenter rental costs.

The Future of AI

To be sure, Amazon is still heavily reliant on Nvidia chips. Meanwhile, Google’s custom chips, known as TPUs, are considered by many in the industry to be superior to Amazon’s. However, Project Rainier and the Trainium 2 chips that will fill its datacenters are the culmination of Amazon’s effort to accelerate its flywheel and pull into pole position.

The more sophisticated Amazon’s in-house chips become, the less it will need to rely on industry leader Nvidia, demand for whose chips far outstrips supply. Amazon’s strategy of not selling its Trainium chips, but instead providing access to them in AWS-operated datacenters, creates efficiencies that Nvidia would find difficult to replicate.

Back in the lab, Sinno gestures at the various stages of the design process for chips that might help summon powerful new AIs into existence. He excitedly reels off statistics about the Trainium 3, expected later this year, which he says will be twice as fast and 40% more energy-efficient than its predecessor. Neural networks running on Trainium 2 chips assisted the team in designing the upcoming chip, a sign of how AI is accelerating its own development.
