A prominent AI executive is making the case against the slowdown thesis

Mustafa Suleyman argues that artificial intelligence is not close to exhausting its growth path. In a new essay published by MIT Technology Review, the Microsoft AI chief says repeated predictions that AI development will soon hit a wall misunderstand the scale and structure of the compute expansion now driving the field. His central claim is straightforward: the compute explosion behind frontier AI remains the defining technology story of the era, and the underlying drivers still have room to compound.

The essay is explicitly an argument rather than a neutral industry report, but it is notable because it comes from a senior figure at one of the companies most directly invested in AI infrastructure. Suleyman’s position is that skeptics keep looking for a single chokepoint, such as slower Moore’s Law, limited data, or energy constraints, while missing how multiple technical advances are converging at once.

The scale of the compute claim

Suleyman says the amount of training computation used in frontier AI models has increased by roughly one trillion times from early systems to today's largest models. He describes the jump from around 10^14 floating-point operations for the systems he worked on around 2010 to more than 10^26 for current frontier models, a factor of 10^12. Whether one accepts every implication of that comparison, his point is clear: AI progress has been powered by a historic increase in the amount of computation brought to bear on training.
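Taken at face value, those two endpoints imply an extraordinary growth rate. A minimal sketch of the arithmetic, assuming a window of roughly 2010 to 2025 (the window length is an assumption for illustration, not a figure from the essay):

```python
import math

# The two endpoints reported in the essay.
early_flops = 10**14      # early systems, ~2010
frontier_flops = 10**26   # today's largest training runs

growth = frontier_flops // early_flops           # 10**12, "a trillion times"
doublings = math.log2(growth)                    # ~39.9 doublings
years = 2025 - 2010                              # assumed ~15-year window
doubling_time_months = 12 * years / doublings    # ~4.5 months per doubling

print(growth, round(doublings, 1), round(doubling_time_months, 1))
```

Under these assumptions, training compute would have doubled roughly every four to five months for fifteen years, which is the kind of sustained curve the essay argues can continue.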

That framing matters because it shifts the discussion away from abstract notions of intelligence and toward industrial capacity. AI progress, in this view, is not mainly a story of isolated algorithmic breakthroughs. It is a story of increasingly vast systems that keep more processors busy, move more data through them more efficiently, and run for longer periods of time.

Three technical pillars in Suleyman’s argument

The essay identifies three advances that he says are now working together. First is faster raw chip performance. He points to Nvidia hardware improving from 312 teraflops in 2020 to 2,500 teraflops today, an eightfold increase in six years. He also cites Microsoft’s Maia 200 chip, launched in January, which he says delivers 30% better performance per dollar than any other hardware in the company’s fleet.
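Those chip figures can be sanity-checked with simple arithmetic. A quick sketch, treating the 312-teraflop, 2,500-teraflop, and six-year figures purely as the essay's claims rather than independently verified numbers:

```python
# Sanity check of the "eightfold in six years" chip-performance claim.
# All figures are taken from the essay as reported.
start_tflops = 312     # 2020-era accelerator
end_tflops = 2500      # current-generation accelerator
years = 6

factor = end_tflops / start_tflops   # overall speedup, ~8.0x
annual = factor ** (1 / years)       # implied yearly multiplier, ~1.41x

print(f"{factor:.1f}x overall, ~{(annual - 1):.0%} per year")
```

An eightfold gain over six years works out to roughly a 41% per-year improvement, slower than the overall training-compute curve, which is part of why the essay leans on the other two pillars as well.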

Second is memory bandwidth. Suleyman highlights high-bandwidth memory, or HBM, and says the latest generation, HBM3, triples the bandwidth of its predecessor. In practical terms, his argument is that training systems are becoming better not only at doing calculations, but at feeding processors quickly enough to prevent expensive accelerators from sitting idle waiting for data.
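The idle-accelerator point can be made concrete with a standard roofline-style estimate: a step's time is bounded below by whichever is slower, the compute or the memory traffic. The numbers below are purely illustrative assumptions, not figures from the essay:

```python
# Roofline-style lower bound on step time: the slower of compute
# and memory traffic sets the pace. Illustrative numbers only.
def step_time(flop, bytes_moved, peak_flops, bandwidth):
    return max(flop / peak_flops, bytes_moved / bandwidth)

PEAK = 1e15   # hypothetical 1 PFLOP/s accelerator
FLOP = 1e12   # floating-point work per step
DATA = 4e9    # bytes moved per step

base = step_time(FLOP, DATA, PEAK, 1.0e12)     # 1 TB/s: memory-bound, 4 ms
tripled = step_time(FLOP, DATA, PEAK, 3.0e12)  # 3x bandwidth: ~1.33 ms
print(base * 1e3, tripled * 1e3)               # milliseconds
```

In this toy case, tripling bandwidth cuts step time by 3x because the step is memory-bound; once memory traffic drops below compute time, further bandwidth gains stop helping, which is why bandwidth and raw flops have to scale together.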

Third is large-scale interconnection. Technologies such as NVLink and InfiniBand, he writes, now connect hundreds of thousands of GPUs into warehouse-scale supercomputers that act as a single system. This is a key part of the essay's thesis. The story is not just "better chips." It is also the engineering of ever-larger compute fabrics that reduce wasted time and coordinate enormous numbers of processors.
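One way to see why the interconnect layer is its own engineering problem is the standard ring all-reduce cost model for gradient synchronization, where each GPU's communication load is nearly independent of cluster size. A hedged sketch with hypothetical numbers (none of these figures come from the essay):

```python
# Ring all-reduce communication time per GPU: each GPU sends and
# receives roughly 2*(N-1)/N times the gradient payload.
def allreduce_seconds(num_gpus, grad_bytes, per_gpu_bw):
    return 2 * (num_gpus - 1) / num_gpus * grad_bytes / per_gpu_bw

# Hypothetical: 100k GPUs, 200 GB of gradients, 900 GB/s effective links.
t = allreduce_seconds(100_000, 200e9, 900e9)
print(f"{t:.2f} s per synchronization")
```

The per-GPU cost barely changes between 1,000 and 100,000 GPUs, which is the appeal of the topology; the hard part is building a fabric that actually sustains that per-link bandwidth across a warehouse, which is the systems engineering the essay highlights.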

Why the essay matters even as an opinion piece

Suleyman’s argument lands in the middle of an active debate. AI critics and some researchers have questioned whether current scaling trends can continue economically or physically. Concerns usually focus on power demand, capital intensity, data scarcity, and diminishing returns from simply making models larger. Suleyman does not dismiss those concerns individually so much as argue that they do not yet outweigh the combined force of chip improvements, memory advances, and systems integration.

That position is important because major infrastructure planning decisions are being made now. If the field believes the compute curve is still steep, then hyperscalers, chip designers, and governments are more likely to keep investing on extraordinary scale. If they believe a hard wall is near, capital allocation changes. The essay is therefore not just descriptive. It is part of the competition to define what sort of future AI companies should build for.

It also reflects how the AI industry increasingly frames progress in systems terms. Performance per chip matters, but so do bandwidth, networking, and software coordination. The practical outcome is that AI leadership is becoming inseparable from supply chains, data-center design, and the ability to integrate hardware into coherent training infrastructure.

The strengths and limits of the case

The strength of Suleyman’s argument is that it does not rely on one magic breakthrough. It emphasizes compounding engineering gains across several layers of the stack. That is often how major technology shifts sustain momentum longer than expected. Bottlenecks in one area can be partially offset when multiple neighboring layers improve together.

The limit is that an opinion essay is not the same thing as proof that exponential improvement will continue indefinitely. The article argues that the trend “seems quite predictable” when the full technical picture is considered, but long-term trajectories remain contingent on economics, energy availability, supply constraints, and the value customers ultimately derive from larger systems. Suleyman is making a forceful case for continued scaling, not closing the debate.

A useful signal about industry confidence

Still, the essay is useful as a signal. It shows that one of the sector’s leading executives is publicly arguing not for moderation, but for continued belief in the infrastructure buildout behind frontier AI. The confidence is not framed in mystical terms. It is grounded in teraflops, bandwidth, and interconnects. That alone says something important about the current phase of the industry.

For all the public fascination with chatbots and agents, the center of gravity in AI remains compute. Suleyman’s essay is a reminder that the strategic battle is still being fought deep in the hardware and systems stack. If he is right, the industry is still early in a much larger expansion. If he is wrong, the next few years will expose the limits. Either way, the piece captures the mindset of the companies building the AI era: they do not believe the wall is here yet.

This article is based on reporting by MIT Technology Review.
