
CoreWeave signs multi-year Anthropic deal as nine of ten top AI model providers join its platform

Apr 12, 2026  Twila Rosenbaum

In short: CoreWeave announced a multi-year agreement with Anthropic on April 10, 2026, granting the Claude maker access to Nvidia GPU capacity across U.S. data centers for production-scale AI workloads. Financial terms were not disclosed. The deal comes just a day after CoreWeave revealed a $21 billion expansion of its partnership with Meta, and adds Anthropic to a customer list that now includes nine of the ten leading AI model providers. CoreWeave generated $5.13 billion in revenue in 2025 and is projecting over $12 billion in 2026, supported by a contracted backlog exceeding $66 billion.

Ex-Crypto Miner Becomes AI's Landlord

Founded in 2017 as Atlantic Crypto, CoreWeave began as an Ethereum mining operation, purchasing Nvidia graphics processing units in bulk to mine cryptocurrency and renting surplus GPU capacity to other miners. As crypto margins shrank in 2019, the company rebranded to CoreWeave and shifted focus to GPU-on-demand cloud services for general computing. The pivot coincided with the AI model training boom that surged in 2023, transforming CoreWeave's inventory of Nvidia hardware into one of the technology industry's most valuable infrastructure assets. CoreWeave went public on Nasdaq under the ticker CRWV on March 28, 2025, with shares priced at $40, raising $1.5 billion and valuing the company at approximately $23 billion. It now operates 32 data centers housing over 250,000 GPUs with 1.3 gigawatts of contracted power capacity. Its 2025 revenue of $5.13 billion represented a 168% year-on-year increase, with management forecasting revenue exceeding $12 billion in 2026.
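For readers who want to sanity-check the growth trajectory, the figures above are internally consistent. A quick back-of-the-envelope sketch, using only the numbers quoted in this article (not CoreWeave's filings), implies roughly $1.9 billion in 2024 revenue and a forecast 2026 growth rate of about 134%:

```python
# Back-of-the-envelope check of the growth figures cited above.
# All inputs are the article's own numbers, not audited financials.

rev_2025 = 5.13   # USD billions, reported 2025 revenue
yoy_2025 = 1.68   # 168% year-on-year increase in 2025

# Implied 2024 revenue: 2025 revenue / (1 + growth rate)
rev_2024 = rev_2025 / (1 + yoy_2025)

rev_2026_forecast = 12.0  # USD billions, management's 2026 projection
yoy_2026 = rev_2026_forecast / rev_2025 - 1

print(f"Implied 2024 revenue: ${rev_2024:.2f}B")   # ≈ $1.91B
print(f"Implied 2026 growth:  {yoy_2026:.0%}")     # ≈ 134%
```

In other words, management's $12 billion forecast implies a deceleration from 168% to roughly 134% growth, which is still more than a doubling of revenue in a single year.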

The rapid growth of CoreWeave comes with significant concentration risk, as Microsoft accounted for about 67% of its 2025 revenue, raising concerns among investors and analysts ahead of the IPO. Microsoft's initiative to develop its own AI models adds further complexity, prompting questions about how much of Microsoft's compute demand may shift to in-house solutions rather than third-party GPU rental. The Anthropic deal, following a $21 billion expansion with Meta, marks CoreWeave's effort to diversify its customer base and reduce reliance on any single hyperscaler.

What Anthropic is Paying For

Anthropic’s compute strategy has evolved in tandem with its revenue growth. As of early April 2026, the company's annualized revenue run rate exceeded $30 billion, significantly up from $9 billion at the end of 2025. This acceleration, fueled by enterprise adoption of Claude and the rapid growth of Claude Code, necessitated expanded infrastructure commitments across diverse chip architectures. Anthropic’s primary training workloads operate on Amazon Web Services Trainium hardware via Project Rainier, which involves a large-scale cluster of hundreds of thousands of AI chips distributed across multiple U.S. data centers. Just three days before the CoreWeave announcement, Anthropic secured a deal with Google and Broadcom for multi-gigawatt TPU capacity, providing access to approximately 3.5 gigawatts of next-generation tensor processing unit compute expected to be operational by 2027. The CoreWeave deal adds another layer, providing Nvidia GPU capacity for production inference workloads, essential for the scale and latency performance required by enterprise Claude deployments. Earlier this year, Anthropic committed $100 million to its Claude partner network, indicating its aim to broaden the ecosystem of developers and enterprises leveraging Claude, a move that directly influences compute procurement decisions like this one.

CoreWeave CEO Michael Intrator emphasized that the deal extends beyond mere infrastructure capacity, stating, “AI is no longer just about infrastructure; it’s about the platforms that turn models into real-world impact.” He expressed enthusiasm about collaborating with Anthropic to enhance model deployment and performance in production, work he framed as aligned with CoreWeave's founding mission. Anthropic plans to deploy compute under a phased infrastructure rollout, with options for future expansion. While the specific Nvidia chip architectures involved remain undisclosed, CoreWeave's infrastructure spans both current and next-generation Nvidia GPUs, with Nvidia's Vera Rubin GPUs, revealed at GTC 2026, anticipated to ship in volume in the latter half of 2026.

Nine of Ten, Two Deals in 48 Hours

The agreement with Anthropic signifies that nine of the ten leading AI model providers now utilize CoreWeave’s platform, a market penetration statistic highlighted in CoreWeave’s press release. Alongside Microsoft, CoreWeave has built a customer roster that includes Meta, OpenAI, Mistral, Cohere, IBM, and Nvidia, along with a subleasing arrangement where Microsoft supplies some of CoreWeave’s capacity to third-party clients. The Meta partnership deepened significantly on April 9, 2026, just before the Anthropic announcement, when Meta committed an additional $21 billion to CoreWeave for dedicated AI cloud capacity from 2027 to December 2032, bringing the total value of their infrastructure collaboration to approximately $35 billion. Earlier in 2026, CoreWeave also expanded its agreement with OpenAI by up to $6.5 billion. These two announcements within 48 hours illustrate CoreWeave’s strategy of converting its infrastructure position into long-duration contracted revenue as opposed to relying on spot-market GPU rentals. Additionally, CoreWeave secured an $8.5 billion GPU-backed debt facility in March 2026, using the Meta relationship as collateral. Although the specifics of the Anthropic deal’s value remain undisclosed, it will contribute to a backlog that analysts are closely monitoring as a key indicator of the company’s long-term revenue predictability.

The Infrastructure Tells a Story About Dependence

On the same day that CoreWeave announced the Anthropic partnership, reports surfaced regarding Anthropic's exploration of custom AI chip designs—a move that could potentially lessen its dependence on the Nvidia-powered infrastructure provided by CoreWeave. This irony speaks volumes: Anthropic's current infrastructure commitments across AWS, Google Cloud, and now CoreWeave illustrate a company balancing increased compute dependency in the short term while also seeking pathways toward architectural independence in the long term. This tension is not unique to Anthropic. Meta, OpenAI, and Google have all heavily invested in custom silicon initiatives while still relying on third-party Nvidia capacity, as the timelines for custom chip readiness do not align sufficiently with the demand curve for AI compute. Thus, CoreWeave's position as the premier GPU landlord for the AI industry not only reflects the immediate market conditions but also represents a strategic bet that Nvidia-native cloud capacity will remain a critical necessity for the duration of the contracts currently being signed. With AI infrastructure spending accelerating through 2025, the GPU cloud market is evolving from a temporary solution to a permanent layer within the AI stack, and CoreWeave's two significant deals in just two days serve as compelling evidence of this ongoing shift.


Source: TNW | Anthropic News

