17 February 2026
The AI Bubble That Isn't
Why the last six months have made an AI bust vanishingly unlikely, and what the bubble hawks got wrong.
Six months ago, the AI bubble narrative was everywhere. Sequoia Capital’s David Cahn had set the intellectual framework with his “$600 Billion Question” in 2024, laying out the staggering gap between AI infrastructure spending and the revenue needed to justify it. By late 2025, others were piling in. Julien Garran at MacroStrategy Partnership called it “the biggest and most dangerous bubble the world has ever seen,” seventeen times larger than the dotcom crash. Ray Dalio at Bridgewater said the investment levels were “very similar” to the dotcom era. CNN, Bloomberg, and CNBC ran the story on repeat.
The parallels looked compelling. In Q1 2025, 58% of all global venture capital funding went to AI startups. The Magnificent Seven accounted for over a third of the S&P 500’s value, a level of concentration not seen since the dotcom era. Nvidia’s market cap had quadrupled since 2023. The Shiller cyclically adjusted price-to-earnings (CAPE) ratio for the US market exceeded 40 for the first time since the dotcom crash. If you squinted hard enough, it looked like 1999 all over again.
I would have argued even then that the comparison was flawed. Unlike Pets.com and Webvan, the companies driving AI investment are among the most profitable in history, funding expansion primarily from operating cash flow rather than speculative financing. But that is not the argument I want to make today. Something more fundamental has changed in the last six months. The entire economic basis of the bubble thesis has collapsed, and it has nothing to do with valuation multiples or balance sheet strength.
The bubble narrative was built on a specific assumption: that all this spending was going toward infrastructure to train and deliver large language models for chatbot conversations. Under that framing, the maths was hard to make work. You could count every ChatGPT subscription, every Claude Pro account, every Gemini Advanced user, and the revenue was a rounding error against the hundreds of billions being invested. Cahn’s analysis was methodologically sound given that premise. The problem is that the premise is now obsolete.
The Rise of the Agents
The last six months have seen the emergence of agentic AI as a practical, deployed technology rather than a research curiosity. Agents are not chatbots with extra steps. They are autonomous systems that take goals, decompose them into tasks, use tools, make decisions, and execute workflows across multiple systems without human intervention at each step.
This is not theoretical. OpenClaw went viral in January 2026 because it demonstrated what people actually want from AI, not a system that answers questions, but one that gets things done. It booked restaurants by placing phone calls. It navigated across apps, APIs, and workflows to close loops that no single piece of software could handle alone. One hundred thousand GitHub stars in a week. Two million visitors. The demand signal was unmistakable.
But the consumer story is just the visible tip. The real shift is happening in the enterprise, and it is restructuring the entire economics of the software industry.
The SaaS-pocalypse
In the first week of February 2026, over $1 trillion in market capitalisation was erased from software stocks. Atlassian dropped 35% in a single week. Intuit fell 34% over the quarter. Salesforce, already battered, continued its decline. The trigger was not a recession or a regulatory crackdown. It was the market’s sudden recognition that agentic AI threatens to replace the traditional software-as-a-service model entirely.
Microsoft CEO Satya Nadella said it plainly on the BG2 podcast: “The notion that business applications exist, that’s probably where they’ll all collapse in the agent era, because if you think about it, they are essentially CRUD databases with a bunch of business logic. The business logic is all going to these agents.”
Forrester declared that “SaaS as we know it is dead,” using the term “SaaS-pocalypse” to describe the shift. IDC predicts that by 2028, pure seat-based pricing will be obsolete, with 70% of software vendors refactoring their pricing models around consumption and outcomes. Gartner projects that 40% of enterprise applications will integrate task-specific AI agents by the end of 2026, up from less than 5% in early 2025. These are not fringe predictions. These are the industry’s most conservative analysts telling their enterprise clients to prepare for structural disruption.
The logic is straightforward. A SaaS application like Jira or Salesforce or Intuit is, at its core, a database with a user interface wrapped around business logic. Humans interact with the interface to create, read, update, and delete records. AI agents do not need the interface. They interact directly with APIs and databases, orchestrating actions across multiple systems simultaneously. Where a human logs into three different applications to complete a workflow, an agent accesses all three concurrently. If a single AI agent can manage the workload of ten human sales representatives, the traditional model of charging for ten seats becomes unsustainable.
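To make the contrast concrete, here is a minimal sketch of the difference, with three SaaS backends reduced to the in-memory CRUD stores they fundamentally are. The system names, records, and workflow below are all hypothetical, chosen only to illustrate the shape of the argument:

```python
# Three "SaaS backends" stripped to what the paragraph above says they are:
# CRUD databases. All names and records here are hypothetical.
crm = {"deal-42": {"stage": "negotiation", "amount": 50_000}}
billing = {"deal-42": {"invoiced": False}}
ticketing = {"deal-42": {"onboarding_ticket": None}}

def close_deal(deal_id: str) -> None:
    """An agent closing a deal end-to-end: where a human would log into
    three applications in sequence, the agent touches all three records
    directly, with no user interface in the loop."""
    crm[deal_id]["stage"] = "closed-won"                  # CRM update
    billing[deal_id]["invoiced"] = True                   # raise the invoice
    ticketing[deal_id]["onboarding_ticket"] = "OB-1001"   # open onboarding

close_deal("deal-42")
print(crm["deal-42"]["stage"])  # closed-won
```

The point is not the code itself but its shape: nothing in the workflow requires a login screen, a dashboard, or a seat, which is exactly what makes per-seat pricing fragile.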
This is not a prediction about the distant future. Klarna shut down Salesforce and Workday, consolidating onto a mix of lighter SaaS tools and in-house AI, a move that sent shockwaves through the enterprise software market even as the details proved more nuanced than the headlines. Developers are using Claude Code to build internal coordination systems that bypass Jira and Confluence entirely. The per-seat model, the foundation of a global SaaS market projected to reach $576 billion by 2029, is under existential pressure.
Why This Changes the Maths on AI Spending
Here is where the bubble narrative falls apart completely. The hawks calculated the revenue gap based on chatbot subscriptions. A human user might interact with a chatbot a few times a day, generating perhaps a few thousand tokens per session. The total compute required to serve a chatbot conversation is modest. Under that model, yes, $700 billion in annual infrastructure spending looks absurdly out of proportion to the revenue it could generate.
Agentic AI operates on a completely different scale. A coding agent working on a single task can consume millions of tokens, with context lengths routinely reaching 300,000 tokens and sometimes exceeding a million. Recent research shows that agentic workloads, including coding, web browsing, and computer use, require fundamentally different compute profiles from chatbot interactions. The “snowballing effect,” where context length rapidly increases through multiple environment-agent exchanges, means a single agentic task can consume vastly more compute than a chatbot conversation. Janus Henderson’s analysis estimates that agentic AI requires 30 to 100 times more compute than generative AI alone. Nvidia’s own analysis of reasoning tokens confirms the scale, noting that extended inference can require over 100 times more compute compared with a single pass on a traditional LLM.
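A toy calculation makes the snowballing effect concrete. The per-step and per-session figures below are illustrative assumptions, not measurements from any deployed system:

```python
# Back-of-envelope sketch of the "snowballing" compute profile.
# All numbers are illustrative assumptions.

CHAT_TOKENS_PER_SESSION = 3_000      # a few thousand tokens, as above

# An agentic task: each agent-environment exchange appends tool output
# (diffs, page contents, command results) to the context, so every
# later step re-processes everything accumulated so far.
TOKENS_ADDED_PER_STEP = 8_000
STEPS = 60                           # a long multi-step task

context = 0
tokens_processed = 0
for _ in range(STEPS):
    context += TOKENS_ADDED_PER_STEP   # context grows linearly...
    tokens_processed += context        # ...so compute grows quadratically

print(f"final context:       {context:,} tokens")           # 480,000
print(f"total processed:     {tokens_processed:,} tokens")  # 14,640,000
print(f"vs one chat session: {tokens_processed // CHAT_TOKENS_PER_SESSION:,}x")
```

Because each step re-processes the whole accumulated context, total tokens processed grows quadratically with the number of steps, which is why a single long-running task can dwarf thousands of chat sessions.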
Now multiply that by every enterprise workflow that currently runs through SaaS applications. Every expense report. Every sales pipeline update. Every customer service ticket. Every code review. Every compliance check. When agents replace human interaction with software, the token consumption is not additive. It is multiplicative. Chatbot usage becomes a rounding error against the inference demand generated by autonomous agents operating continuously across enterprise systems.
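A rough aggregation, again with purely illustrative figures, shows how quickly agent-driven demand swamps chatbot demand once enterprise workflows are in scope:

```python
# Rough aggregation sketch: inference demand when agents take over
# workflows that humans currently push through SaaS user interfaces.
# Every figure here is an illustrative assumption.

chat_users = 10_000_000
chat_tokens_per_user_per_day = 3 * 3_000     # a few sessions a day

agent_workflows_per_day = 2_000_000          # tickets, reviews, reports...
tokens_per_agentic_workflow = 5_000_000      # millions of tokens per task

chat_demand = chat_users * chat_tokens_per_user_per_day
agent_demand = agent_workflows_per_day * tokens_per_agentic_workflow

print(f"chatbot demand: {chat_demand:.2e} tokens/day")   # 9.00e+10
print(f"agent demand:   {agent_demand:.2e} tokens/day")  # 1.00e+13
print(f"ratio: {agent_demand / chat_demand:.0f}x")       # 111x
```

Even with conservative assumptions, a modest fleet of always-on agents generates orders of magnitude more inference demand than a large chatbot user base, which is the arithmetic the subscription-based bubble models never captured.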
This is why the hyperscalers are supply-constrained rather than demand-constrained, and why they keep revising their capital expenditure upward. Amazon, Alphabet, Microsoft, Meta, and Oracle have collectively committed to spending between $660 billion and $690 billion on capital expenditure in 2026, nearly doubling 2025 levels. Goldman Sachs notes that consensus capex estimates have proven too low for two years running, with actual spending exceeding 50% growth in both 2024 and 2025. JP Morgan estimates that the build-out of AI data centres will require $1.5 trillion in investment-grade bonds over the next five years. Every quarter, the numbers go up, and every quarter, the analysts revise their models because demand continues to outstrip supply.
Janus Henderson’s analysis puts specific numbers on the compute escalation: generative AI uses more than 1,000 times the compute that perception AI needed, and agentic AI requires a further 30- to 100-fold increase beyond that. We are not talking about incremental growth in demand. We are talking about orders-of-magnitude increases in the compute required to run the systems that will replace today’s enterprise software.
This Is Not the Dotcom Boom
The dotcom bubble burst because the companies driving it were loss-making startups with no revenue, funded by speculative venture capital, building products for a market that did not yet exist. The infrastructure they built, the fibre optic cables and data centres, turned out to be valuable, but the companies that funded the build-out went bankrupt before the market caught up.
The AI infrastructure build-out is structurally different in almost every respect.
The companies doing the spending are among the most profitable in history. Alphabet, Amazon, Microsoft, and Meta had a combined $420 billion in cash and equivalents at the end of their latest quarter. Their liabilities-to-asset ratios are near 2015 levels. Even as capex approaches $700 billion, CreditSights data shows their leverage remains well below the S&P 500 average. These are not Pets.com. These are companies that have managed enormous compute logistics for hundreds of thousands of customers over the past decade.
The demand is real and measurable. Every hyperscaler reports that their AI infrastructure is pre-sold before it is even built. Google’s Gemini saw a 130-fold rise in AI token usage over 18 months. OpenAI ended 2025 with approximately $20 billion in annual recurring revenue, a threefold increase from the prior year. The market is supply-constrained, not demand-constrained.
Most importantly, what they are building is not single-purpose infrastructure. They are building intelligence as a utility. A fibre optic cable can only carry data. A GPU cluster running inference can power any application, any workflow, any agent, in any industry. The same infrastructure that runs a coding agent today can run a legal review agent, a financial analysis agent, or a supply chain optimisation agent tomorrow. The value is not locked into one use case. It compounds across every domain where intelligence can be applied.
Federal Reserve Chair Jerome Powell has distinguished the current landscape from the dotcom bubble, stating that the AI sector is underpinned by substantial realised revenue and that the massive capital expenditure directed toward AI data centres is functioning as an engine of broader economic growth rather than a sink for speculative capital. JPMorgan’s December 2025 analysis applied a five-factor diagnostic framework to the AI rally and concluded that the sector exhibits genuine structural utility rather than pure speculation.
The Real Question
The bubble hawks asked the right question six months ago: where is the revenue going to come from to justify hundreds of billions in infrastructure spending? It was a fair question when the visible market was chatbot subscriptions.
The answer is now clear. The revenue will come from agents restructuring a global SaaS market that Forrester projects at $576 billion by 2029, from agents automating workflows that currently require human labour, from agents generating demand for inference compute at 100 times the rate of chatbot conversations, and from intelligence becoming embedded in every business process across every industry.
The infrastructure being built is not speculative excess. It is the foundation of the next generation of enterprise technology. And if the last six months have shown us anything, it is that the current spending plans, as extraordinary as they are, may not be enough.