The AI race keeps shifting from models to infrastructure

Google plans to invest up to $40 billion in Anthropic and deepen its role as the startup’s infrastructure supplier, according to a report cited by TechCrunch. The package includes an immediate $10 billion investment at a reported $350 billion valuation, with another $30 billion to follow if Anthropic reaches certain performance targets.

The scale of the commitment is remarkable on its own, but the larger significance is what it says about the current AI market. Competition among top labs is no longer centered only on model quality or product reach. It is increasingly defined by access to the chips, power, and data-center capacity required to train and run frontier systems.

A competitor, investor, and supplier at once

Google’s relationship with Anthropic is unusual because the companies are both collaborators and rivals. Anthropic competes in the market for foundation models, while also relying heavily on Google Cloud for infrastructure and access to Google’s tensor processing units, or TPUs. The new investment appears to expand that arrangement rather than replace it.

According to the report, Google Cloud is now providing a fresh 5 gigawatts of capacity over the next five years, with room to scale further. Earlier in the month, Anthropic had also announced a partnership with Google and Broadcom to access multiple gigawatts of TPU-based compute beginning in 2027, with a subsequent Broadcom securities filing putting that figure at 3.5 gigawatts.

Those numbers show how capital and compute are becoming inseparable in frontier AI. Funding is no longer just a balance-sheet tool. It is a way to secure the hardware backbone required to stay competitive.

Anthropic’s escalating demand for compute

The report lands at a time when Anthropic has been under visible pressure to expand capacity. The company has faced widespread complaints about Claude usage limits in recent weeks, suggesting infrastructure constraints are already affecting users.

Anthropic has responded by signing multiple deals. Earlier in April, it reached an agreement with CoreWeave for data-center capacity. It also secured an additional $5 billion investment from Amazon as part of a broader arrangement under which Anthropic is expected to spend up to $100 billion over time for around 5 gigawatts of compute capacity.

Put together, those figures make the present moment look less like a typical venture-financing cycle and more like an industrial buildout. Top AI companies are not merely raising money for hiring and research. They are assembling supply chains.

Why the timing matters

The funding news comes shortly after Anthropic released Mythos to a limited group of partners. Anthropic describes it as its most powerful model so far and says it has significant cybersecurity applications. The company has restricted broader access because of potential misuse risks, though the report also says the model has already fallen into unsanctioned hands.

That context matters because more capable models generally require more expensive deployment. If Mythos is both powerful and costly to run, Anthropic’s appetite for large-scale compute is not a side story; it is part of the product strategy.

The deal also shows how major cloud providers are using capital to lock in long-term AI demand. For Google, backing Anthropic may help ensure years of cloud and TPU consumption even as the two companies compete directly in models and enterprise AI.