The Intelligence Infrastructure
From Platform Capitalism to Infrastructural AI
In 2023, I published a book, The Quickest Revolution, that attempted to provide a historical and conceptual map of the computing revolution that had been unfolding over the previous decades, and that, by then, was converging toward the emergence of the first wave of large-scale artificial intelligence systems.
Although AI took up a good half of the book, its true focus was the deeper technological substrate that made AI possible in the first place: it traced the structural, socioeconomic dynamics that drove past and recent advances in computation and automation, and projected where those trends were likely to lead.
A central argument of that analysis was that the computing revolution could not be understood without getting deeply familiar with its exponential character. Not as the kind of mysticism that leads futurists to speculate about technological singularities and grand utopias, but as a clear result of the forces that regulate the market in which computing technology gets produced.
Within that framework, I argued that “Moore’s Law”, the best-known historical observation of this exponential character — stating that transistor density doubles roughly every two years at equal cost — was fundamentally misleading, and in need of a serious update.
In fact, while Moore’s Law had countless times been declared about to hit a wall due to physical manufacturing limits (even by knowledgeable figures like NVIDIA’s Jensen Huang), the density of transistors on individual chips is only a small component in determining compute performance — and treating it as the decisive variable obscured what was actually happening.
The truly relevant quantity that I proposed to measure in a revised law was not transistor count, but compute throughput: the amount of usable computation that could be deployed per unit of time and cost across real systems. Overall throughput in fact depends not only on chip-level hardware, but increasingly also on architecture, software, interconnects, memory hierarchies, parallelization strategies, and energy density.
And crucially, its acceleration compounds whenever improvements across these layers reinforce one another. Faster chips matter, but so do better model architectures, more efficient training pipelines, optimized compilers, high-bandwidth interconnects, and data centers designed as integrated systems rather than collections of machines.
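To make the compounding concrete, here is a back-of-the-envelope sketch in Python. The per-layer gain figures are hypothetical assumptions chosen purely for illustration: modest annual improvements at each layer multiply into an effective doubling time far shorter than any single layer would suggest.

```python
import math

# Hypothetical annual improvement factors per layer.
# These numbers are illustrative assumptions, not measured values.
layer_gains = {
    "chip (transistor density)": 1.3,
    "model/architecture efficiency": 1.5,
    "compiler/software stack": 1.2,
    "interconnect bandwidth": 1.25,
    "datacenter-scale parallelism": 1.4,
}

# Gains across layers multiply rather than add.
combined = 1.0
for layer, gain in layer_gains.items():
    combined *= gain

# Effective doubling time implied by the combined annual gain.
doubling_time_months = 12 * math.log(2) / math.log(combined)

print(f"combined annual gain: {combined:.2f}x")
print(f"effective doubling time: {doubling_time_months:.1f} months")
```

Swapping any single factor changes the result only modestly; it is the multiplication across layers that drives the acceleration the text describes.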
This is precisely why, already in 2021 when I began writing, I was predicting Moore’s Law was actually about to accelerate — not slow down.
And in fact, over the past two years, this is precisely what happened. Scaling has shifted away from the chip as an isolated object and toward the cluster, the network, and the data center as a whole.
As a consequence, we now see cluster-level scaling replacing single-chip gains, increasingly specialized interconnects, highly optimized training stacks, distributed inference, and architectural designs built explicitly for hyper-scale data center parallelism. These developments did not slow the exponential; they amplified it.
In this context the repeated announcement of the “end of Moore’s Law” functioned less as a technical assessment than as the legacy of an observation focused on the wrong variable. The exponential trajectory of computation did not disappear. It simply migrated upward, from the semiconductor to the system, and from the system to the infrastructure.
What emerged was not the end of exponential growth, but its consolidation at a much higher, faster and — crucially — far more consequential level.
Moore’s Law’s Phase Change
Between 2023 and 2025, the scale at which computation is deployed shifted by another order of magnitude. At the system level, NVIDIA’s DGX-class machines moved from the A100-based DGX systems — delivering on the order of a few hundred petaflops of mixed-precision performance — to H100 and now Blackwell-based configurations that push aggregate system throughput into the multi-exaflop range when deployed at cluster scale. At the data-center level, training runs that in 2023 required tens of thousands of GPUs are now increasingly planned around clusters of one hundred thousand accelerators or more.
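The order-of-magnitude shift can be sanity-checked with rough arithmetic. In the sketch below, the per-accelerator throughput figures are approximate public ballpark numbers for dense mixed-precision compute, used only to illustrate the scale, not as precise specifications:

```python
# Back-of-the-envelope cluster throughput.
# Per-GPU figures are rough ballpark values, not exact datasheet specs.
PFLOPS = 1e15

per_gpu_flops = {
    "A100": 0.3 * PFLOPS,  # ~300 TFLOP/s (BF16 tensor, dense)
    "H100": 1.0 * PFLOPS,  # ~1 PFLOP/s (FP16/BF16 tensor, dense)
}

def cluster_exaflops(gpu: str, count: int) -> float:
    """Aggregate cluster throughput in exaFLOP/s (ignoring utilization)."""
    return per_gpu_flops[gpu] * count / 1e18

print(f"10k A100s : {cluster_exaflops('A100', 10_000):.1f} EFLOP/s")
print(f"100k H100s: {cluster_exaflops('H100', 100_000):.1f} EFLOP/s")
```

Real training runs achieve only a fraction of this peak due to utilization and communication overhead, but even the crude numbers show the jump from a 2023-scale cluster to a planned 100,000-accelerator deployment spanning well over an order of magnitude.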
Capital expenditures followed the same unprecedented trajectory. A recent analysis of major U.S. tech firms forecast over $360 billion in AI-related infrastructure and R&D spending for 2025 across firms like Amazon, Microsoft, Alphabet, and Meta — with Amazon alone projected to spend on the order of $100 billion.[1] Meanwhile, global data-center investment is expected to climb past $650 billion, more than doubling from 2023 levels.
These staggering figures highlight how the exponential scaling of Moore’s Law is materializing both as performance improvements and exceptional market growth. But even more significantly, they also mark a deeper qualitative transition.
The magnitude and distribution of these figures show that sustaining exponential increases in compute throughput now requires investments so large, coordinated, and long-horizon that they are effectively accessible only to a handful of supersized private actors.
In other words, in this next stage of Moore’s Law — in which the unit of progress is no longer just the chip but the industrial-scale system as a whole — far from becoming more competitive, the market has concentrated to extreme levels, consolidating a pre-existing configuration that has now expanded to new heights.
Platform capitalism had in fact already exhibited a tendency to form near-monopolies of unprecedented size, driven by the new dynamics of accelerating returns deriving from the use of free raw material (users’ information), the commerce of intangibles, network effects, and the extremely low-cost scalability offered by computing equipment.
But the AI transition intensifies these phenomena even further. The availability of extreme levels of capital, compute capacity, data access, know-how, deployment channels — and increasingly capable AI that can in turn be applied to system design and optimization itself — are now reinforcing one another, allowing the largest companies to make even larger strides.
These positive feedback loops favor incumbents and raise barriers that late entrants cannot realistically overcome — making the new regime of exponential scaling depend on the ownership and control of the very systems that companies like NVIDIA, Microsoft, Amazon and Meta have essentially transformed into a (partially joint) near-vertical stack.
Furthermore, in the AI age, what was once dominance over interfaces, marketplaces, and user attention is now also becoming dominance over the conditions under which intelligence itself is extracted, produced, distributed, and applied.
Therefore, it becomes increasingly misleading to describe these actors simply as technology companies.
As their ownership and control extends to worldwide critical infrastructure, a dominant share of core research, information networks, knowledge, and the new means of production, their competitive horizon is no longer confined to markets: it extends far into the geopolitical domain.
Practically speaking, what we are witnessing is the emergence of a new intelligence infrastructure that is increasingly acquiring near-sovereign capabilities.
The New Economic Engine: Reflexive Scaling
The joint acceleration of compute throughput and concentration of infrastructural power are not independent developments. Each reinforces the other. Faster scaling demands greater capital concentration and coordination; greater concentration enables further acceleration.
But while this type of feedback loop had already appeared in earlier platform capitalism, the old engine is now being turbocharged.
What distinguishes the present phase is the emergence of novel circular economic structures, layered on top of the standard market expansion that pushes the technological trajectory forward.
These structures do not merely respond to demand; they actively manufacture it, using financial commitments, strategic partnerships, and narrative coordination to sustain the appearance of continuous exponential growth.
At the core of this mechanism is a tightly coupled loop connecting hardware suppliers, model developers, cloud providers, capital markets, and, increasingly, governments. Large volumes of specialized hardware are sold to a small set of customers who, in turn, must demonstrate ever-growing compute needs in order to justify their valuations, strategic relevance, and access to capital. These commitments then become the basis for further infrastructure investment, which is interpreted by markets as evidence of expanding external demand.
A recent example documented by Bloomberg, in which NVIDIA, OpenAI, Oracle, and major cloud intermediaries have entered a web of interdependent commitments, illustrates how these loops operate in practice:[2]
Nvidia pledges an investment of up to $100 billion in OpenAI;
OpenAI pledges hundreds of billions to buy servers from Oracle;
Oracle uses the pledges to justify buying tens of billions in Nvidia hardware;
Wall Street interprets the resulting revenue spike as proof of limitless AI demand;
Nvidia’s valuation explodes.
This kind of loop is self-reinforcing. Hardware sales justify model scaling; model scaling justifies cloud expansion; cloud expansion justifies capital expenditure; capital expenditure validates market expectations; and those expectations, once priced in, compel all participants to behave as if the growth they anticipate were already guaranteed.
This process does not require any conspiracy. Each participant simply acts rationally within its own constraints. Yet the aggregate outcome is a reflexive system in which belief, investment, and infrastructure recursively validate one another.
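The loop described above can be caricatured as a toy dynamical system. In the sketch below every parameter is a made-up illustration — `feedback` stands for the share of revenue that is circular (booked between partners inside the loop) and `external_demand` for genuinely external sales; nothing is calibrated to any real company:

```python
# Toy model of a reflexive investment loop.
# All parameters are illustrative assumptions, not real figures.

def step(belief, feedback=0.6, external_demand=1.0):
    """One round of the loop: returns (new_belief, investment)."""
    investment = belief                                 # capex scales with expectations
    revenue = external_demand + feedback * investment   # partly circular revenue
    new_belief = revenue                                # markets price revenue as demand
    return new_belief, investment

belief = 1.0
for round_ in range(10):
    belief, investment = step(belief)
    print(f"round {round_}: belief={belief:.2f}, investment={investment:.2f}")
```

In this toy model, as long as the circular share stays below 1, expectations settle at a finite level; once it crosses 1, beliefs and investment grow without bound regardless of external demand — a crude picture of how each participant's locally rational behavior can aggregate into runaway reflexivity.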
Layered on top of this financial circuit is a narrative mechanism that further stabilizes the loop. Public claims about imminent AGI breakthroughs play a functional role: they align investor expectations and frame continued infrastructure buildout as both necessary and absolutely urgent. In this context, narratives no longer serve only to push demand; they also serve to legitimize the next round of exorbitant capital allocation.
A third reinforcing layer is geopolitical. Competition between major powers—particularly between the United States and China—provides an additional justification for sustained national investment, independent of near-term commercial returns. Each side’s acceleration is framed as a defensive necessity, turning compute expansion into a strategic imperative rather than a market choice. Once embedded in national security discourse, the growth trajectory becomes difficult to slow or reverse.
The result is a convergence of three dynamics: physical scaling, financial reflexivity, and geopolitical competition. Together, they produce an exponential curve that appears to have detached from any single underlying driver. What began as an engineering and market phenomenon increasingly behaves like an infrastructural commitment that must be maintained to preserve economic, strategic, and political coherence.
This marks a departure from earlier phases of platform capitalism. User growth, advertising markets, and network effects once served as the primary engines of expansion. Today, the infrastructure itself—its scale, sunk costs, and strategic centrality—has become both the means and the justification for continued growth.
Understanding this shift is essential. It explains why the current expansion persists even amid uncertainty about applications, profitability, or social impact. And more importantly, it clarifies how the acceleration of computation has become inseparable from the concentration of capital and power that sustains it.
If this trajectory holds, the question then is no longer how fast compute and AI improve, but who controls the key infrastructure and the conditions under which advances, and intelligence itself, are produced — and where the costs are ultimately borne.
1. Big Tech’s A.I. Spending Is Accelerating (Again), New York Times, Oct. 2025.
2. OpenAI, Nvidia Fuel $1 Trillion AI Market With Web of Circular Deals, Bloomberg, Oct. 2025.