Amazon has officially launched Project Rainier, a massive compute infrastructure built to handle the demands of next-generation AI models.
The company said that AI startup Anthropic plans to use more than one million of Amazon's custom-built Trainium2 chips within the cluster by the end of 2025.
Project Rainier, announced last year, is a distributed AI compute cluster spanning several data centers across the United States.
At its core are close to 500,000 of Amazon's proprietary Trainium2 processors, engineered specifically for high-performance AI training and inference.
With AI technologies evolving rapidly, cloud providers like Amazon Web Services (AWS) are ramping up their infrastructure investments to keep pace with the surging requirements for computational power.
Anthropic, which counts Amazon as a key investor, is already tapping into Project Rainier’s resources to develop and roll out its flagship AI model, Claude.
The firm intends to scale its usage beyond one million Trainium2 chips on AWS by the end of 2025, and the cluster is also set to support future versions of Claude.