
However, Anthropic isn’t the only model provider racing to add compute capacity to train and run its models.
Earlier in February, rival OpenAI signed a deal with Amazon, Nvidia, and SoftBank to raise around $110 billion for additional compute infrastructure.
As part of the arrangement, OpenAI has committed to consuming at least 2 GW of AWS Trainium-based compute tied to Amazon’s $50 billion investment, along with 3 GW of dedicated inference capacity from Nvidia under its separate $30 billion commitment.
From funding to supply chain financing
Deals such as these, analysts say, reflect a broader shift in how AI infrastructure is now being financed.
“Rather than simple cash-for-equity, these deals bundle equity investment with massive cloud-spend, or GPU spend commitments by locking in customers, securing capex returns, and validating infrastructure buildouts in a single transaction. This isn’t venture capital anymore, it’s supply chain financing,” Jain said.
Jain noted that the pattern in these deals is consistent across the ecosystem, citing Microsoft, Oracle, and Nvidia as examples.

