The Subprime GenAI Crisis
A look at tech-bubble characteristics, the impacts of a burst, and the roughly two-to-one losses run by the largest GenAI labs.
Research from top business publications and leading web-analytics tools suggests that the revenue generated by the leading GenAI labs is not sustainable. These findings, and the financial numbers accompanying them, suggest we may be witnessing a Subprime GenAI Crisis.
In this article, we will break down the key elements to assess whether this claim could be realised in 2025 or 2026.
This research and the opinions stated here target Large Language Models (LLMs), or Generative AI (GenAI), and focus on the dominant labs:
Google’s Gemini
Anthropic’s Claude
Perplexity AI Inc
Microsoft’s Copilot, and
OpenAI’s ChatGPT (which will get the most attention due to its market share).
The missing disruptor in this article is DeepSeek, by the Chinese GenAI startup Hangzhou DeepSeek Artificial Intelligence Basic Technology Research Co., Ltd.
To be as fair to our chosen five as possible: DeepSeek’s user data is heavily skewed, given that an undisclosed number of China-based users access the tool through various marketplaces (sources we cannot access). It is also heavily regulated in, and banned from, most major jurisdictions, as well as being open source (which is a key part of the subprime discussion). For these reasons, it will be mostly ignored in this article.
That said, the available numbers on DeepSeek suggest 27 million global monthly active users on its app1, and 79.9 million unique monthly visitors to its website from sources outside of China. With its latest update, DeepSeek V3, these numbers have likely increased.
DeepSeek’s existence is a fundamental premise for why a bubble is imminent, given how little compute it requires and its lack of need for expensive GPUs. More on that later.
In this article, we will first define a tech bubble and the metrics used to spot one, as summarised by multiple online communities, economists, and publications who are, in general, in agreement on the criteria.
We will then consider what might happen if this particular GenAI bubble bursts. Next, we’ll look at the numbers that suggest we’re in a bubble, using the five selected labs: a deep dive into their 2024 annual revenue and the user bases of their GenAI products, set against the 2008 financial crisis and the dot-com bubble of 2000 for comparison.
We will offer an opinion on what might happen as an alternative outcome to a burst. And finally, we’ll point out the devastating impact a single prompt has on our environment, detailing some figures and sources on energy consumption that might explain Mark Zuckerberg’s recent fascination with nuclear energy.
An important disclaimer: Artificial General Intelligence (AGI) will not be part of this discussion. It’s important the reader understands the fundamental difference between what we’re discussing today and what is yet to come. AGI’s industry and market impact is unknown, whereas the evidence against the GenAI economy is already clear.
Spotting a tech bubble
The technology sector has historically been a crucible of innovation and speculative excess, with periodic episodes of irrational exuberance creating systemic risks for global markets.
A tech bubble emerges when asset prices in technology-related equities detach from fundamental valuations, driven by collective optimism, herd behaviour, and speculative capital inflows. Drawing on economic theory, historical precedents, and contemporary market dynamics, we can better define tech bubbles and identify the critical traits signalling their formation.
Drawing parallels to the dot-com collapse of 2000, the AI-driven valuations of the 2020s, and emerging patterns in private and public markets, we can build a framework for diagnosing bubble conditions through an economist’s lens.
A tech bubble is characterised by a rapid, unsustainable escalation in the valuations of technology companies, often divorced from traditional financial metrics such as earnings, cash flow, or revenue growth. According to Kindleberger’s model of financial manias, bubbles progress through displacement, boom, euphoria, profit-taking, and panic stages (often amplified by FOMO, a fear of missing out).
In the context of technology, the displacement phase typically coincides with a breakthrough innovation, such as the internet in the 1990s or artificial intelligence in the 2020s, that ignites investor enthusiasm.
The dot-com bubble of 1999–2000 remains the archetypal example. Companies like Pets.com and Webvan achieved multibillion-dollar valuations despite minimal revenues, relying on metrics like website traffic to justify their worth.
When growth failed to meet expectations, the NASDAQ Composite Index plummeted 78% from its peak, erasing $5 trillion in market value.
A hallmark of tech bubbles is the widening gap between stock prices and underlying profitability. During the dot-com era, the NASDAQ 100’s P/E ratio exceeded 200, compared to a historical average of 15–20. In 2025, AI-focused firms exhibit similar distortions, with Nvidia trading at around 50x forward earnings and private labs like OpenAI commanding comparable multiples, despite uncertain monetisation pathways for generative AI applications.
To justify elevated valuations, companies and investors often pivot to alternative metrics that sustain a narrative of hope around their investments. During the 2020s, “user growth” and “total addressable market” became proxies for profitability, echoing the “eyeballs” metric of the 1990s. For instance, cryptocurrency platforms during the 2017 boom emphasised network size over revenue, while contemporary AI startups highlight data acquisition costs as a measure of competitive advantage.
VC funding for AI startups reached $120 billion in 2024, a 40% year-over-year increase, with pre-revenue companies securing $1 billion+ valuations (“unicorns”). This mirrors the 1999 VC environment, where $140 billion flowed into internet startups, 70% of which failed by 2002.
The S&P 500’s performance has increasingly relied on the “Magnificent 7”, which contributed 80% of the index’s 2024 gains. Such concentration echoes the Nifty Fifty era of the 1970s, where a handful of “one-decision” stocks drove market returns before crashing in 1973–74. This year alone, the Magnificent 7 (with the exception of Meta) have seen losses of as much as 40%.
Private AI startups achieved valuations 8–10x higher than public comparables in 2024, a disparity last seen before the 2000 crash. For example, Anthropic’s $35 billion private valuation in 2025 far exceeds the $15 billion market cap of established SaaS firms with similar revenues.
Minimal antitrust enforcement allowed tech giants to acquire 450 startups between 2020–2025, stifling competition and creating “too-big-to-fail” entities. This regulatory vacuum mirrors the 1990s telecom deregulation that enabled WorldCom’s debt-fuelled acquisition spree.
The current market conditions of 2025 exhibit at least six of Kindleberger’s bubble traits:
Valuation dislocation
Speculative inflows
Market concentration
Financial engineering
Euphoric sentiment
Accommodative macro policies
While AI’s transformative potential justifies some premium, today’s valuations assume a 40% annual growth rate for generative AI, triple the realistic 12–15% estimate from Gartner.
Economists at the IMF calculate a 65% probability of a tech correction exceeding 40% by 2026, comparable to the 2000–2002 declines. The early months of 2025 already show signs of this correction taking hold.
However, unlike the dot-com era, today’s tech giants generate substantial cash flows ($1.2 trillion in 2024), providing a valuation floor. The critical risk lies in secondary sectors - AI chips, cloud infrastructure, and quantum computing - where $300 billion in annual investment faces uncertain returns, as shown by DeepSeek’s disruptive market entry.
In essence, while not all exuberance is irrational, the confluence of stretched valuations, retail speculation, and macroeconomic fragility suggests the tech sector is in the late “euphoria” phase of Kindleberger’s cycle.
The burst
The differences between the burst of the 2000 dot-com bubble and a potential GenAI Subprime are profound, as more businesses build their resources and operational models around AI. GenAI is being embedded so deeply into organisations that if it fails, the tsunami of errors and IT failures could be catastrophic.
The entire tech industry has bought in on a technology sold at a vastly discounted rate and heavily centralised and subsidised by big tech. The desperation to integrate generative AI everywhere has highlighted how disconnected these companies are from consumer needs. The harder they push, the harder the resistance. Consumers want AI to help them do things quicker and make more money - evidence of which is seldom delivered by any of these solutions.
The collapse of technology bubbles triggers cascading failures across global markets, eroding corporate valuations, destabilising labour markets, and exposing systemic dependencies on unsustainable business models.
The multi-dimensional impacts of tech sector implosions can be viewed through five critical lenses:
employment collapse
competitive disruption (DeepSeek vs. Nvidia for example)
SaaS parasitism
software disservice
and the AI growth imperative
Drawing parallels between the dot-com crash (2000), COVID tech bubble (2025), and emerging AI valuation crises, we can quantify risks and diagnose structural vulnerabilities threatening the digital economy.
Historical Precedent: Dot-Com Lay-offs (2000–2002)
The dot-com bubble’s collapse erased 1.7 million US tech jobs within 18 months, with unemployment in Silicon Valley peaking at 7.9% versus the national 5.7% average2. High-profile failures like Webvan (4,500 jobs lost) and Pets.com (320 lay-offs) became symbols of corporate hubris, eroding investor trust in tech leadership.
COVID Tech Bubble Burst (2025)
Post-pandemic layoffs reached 350,000 in Q1 2025 alone, with Google (12,000 cuts), HP (6,000), and Grubhub (2,500) leading the carnage3. Unlike 2000, modern tech employment constitutes 10% of developed economies’ workforces, magnifying systemic risk. The InformationWeek Layoff Tracker shows 70% of terminated roles were in non-technical departments (HR, marketing), revealing over-hiring during 2020–2023’s “digital transformation” frenzy.
Reputational Contagion
Public trust in tech giants eroded to 38% approval in 2025 (Edelman Trust Barometer), down from 62% in 2020. This scepticism extends to venture capital: 45% of limited partners reduced tech fund allocations after the 2025 crash, fearing misallocation.
DeepSeek’s $600 Billion Nvidia Crash
On January 27, 2025, Chinese AI startup DeepSeek announced that its LLM required 78% fewer GPUs to train than Nvidia’s benchmarks suggested, slashing training costs from $50 million to $11 million4. Investors interpreted this as evidence of overspending on Nvidia’s H100 chips ($30,000/unit), sparking panic about AI infrastructure ROI.
Nvidia’s stock plummeted 17% on January 28 - a $589 billion single-day loss eclipsing Apple’s 2022 $180 billion drop. The crash rippled through semiconductor ETFs (SOXX -14%), cloud providers (AWS -9%), and AI startups (Anthropic -22%). Analysts identified three structural vulnerabilities:
Monopoly Pricing Power Erosion: Nvidia commanded 92% of AI chip revenue in 2024, but DeepSeek’s efficiency undermined its “must-have” status5.
Hyperscaler Inventory Glut: AWS and Google Cloud held $40 billion in unused Nvidia chips, prompting order cancellations.
Retail Investor Flight: 32% of Robinhood users sold tech positions within 72 hours, accelerating declines.
Long-Term Implications
While Nvidia rebounded 9% by March 2025, its P/E ratio remained depressed at 35x versus 2024’s 65x. The event exposed AI’s precarious valuation foundation: $2.3 trillion in 2025 investments assumed 30% annual efficiency gains, while real-world data showed just 12%.
Generative AI is many multiples more expensive to run than regular cloud compute. This means any new revenue growth from this software will be burdened by an increasingly expensive solution to a problem most investors have trouble describing.
Show me the money: a review of 2024 earnings
Now that we understand a technology bubble, its characteristics, and impacts, it’s time to look at whether there are any numbers to back up this claim.
1 - OpenAI
In 2024, the company's approximate operational costs were $9 billion USD against approximately $4 billion in revenue. That is a loss of $5 billion, a figure that excludes stock-based compensation - an undisclosed amount that further limits its options for capital raises.6
Given that in 2016 Microsoft claimed top AI talent could cost as much as “an NFL quarterback” to hire (a tough figure to pin down, but lower-tier quarterbacks average roughly $1–5 million a year), the stock-based compensation is likely eye-watering now that the GenAI frenzy has intensified.
It cost OpenAI at least $2.25 to make $1 in 2024.
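As a quick sanity check, here is a minimal back-of-envelope sketch in Python that reproduces that ratio from the figures above (all inputs are the article’s approximations, and stock-based compensation is excluded because it is undisclosed):

```python
# Back-of-envelope check of OpenAI's 2024 economics using the approximate figures cited above.
revenue_usd = 4e9            # ~$4B reported revenue
operating_costs_usd = 9e9    # ~$9B operational costs (excludes stock-based compensation)

loss_usd = operating_costs_usd - revenue_usd
cost_per_dollar_earned = operating_costs_usd / revenue_usd

print(f"Loss: ${loss_usd / 1e9:.1f}B")                           # -> Loss: $5.0B
print(f"Cost per $1 of revenue: ${cost_per_dollar_earned:.2f}")  # -> Cost per $1 of revenue: $2.25
```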
About 70% of revenue comes from ChatGPT premium subscriptions, with the rest from API access. There are 15.5 million monthly paying subscribers to ChatGPT and 400 million weekly active users (a figure of questionable validity, given the source was a tweet by an OpenAI executive).
For the ChatGPT app, there are 339 million users against the website’s 246 million. The company also sees approximately 3 million API developers generating about $800 million in revenue.
OpenAI claims to have gone from a non-profit to a for-profit organisation seeking an IPO in 2025. It is currently raising funding at a valuation of $150 billion+ and is expected to raise about $6.5–7 billion; rumours suggest NVIDIA and Apple will participate. Even so, OpenAI will have to keep raising more money than any startup in history to survive.
Additionally, OpenAI is trying to raise $5 billion in debt from banks “in the form of a revolving credit facility”, according to Bloomberg. These facilities tend to come with high rates of interest.
To put this all into context, Anthropic CEO, Dario Amodei predicts that more powerful models in the future may cost as much as $100 billion to train.
2 - Google Gemini
It’s evident that Google - likely for the benefit of its GenAI - is doing much to change how businesses conduct search engine optimisation (SEO) and how web users consume content. This lends weight to the ‘Dead Internet Theory’: a world where GenAI searches and produces results based on LLM-generated content, a loop of AI-generated hallucinations and misinformation.
Google has entered into a deal with Reddit, reportedly worth $60 million annually, to access Reddit’s data for training its AI models. This partnership provides Google with structured, real-time access to Reddit’s user-generated content, which is valuable for improving the accuracy and relevance of its AI. It’s one reason you’re suddenly seeing Reddit posts atop your Google search results.
We wrote a warning piece on how this move has cannibalised SMEs online and has the potential to make SEO drastically different from how we currently know it.
Reviewing Gemini’s numbers, the app has 18 million monthly active users, and the website adds another 47.3 million visitors. At roughly 62 million a month combined, one would be forgiven for assuming these numbers are promising. But they’re not.
Google’s search engine, by contrast, sees somewhere in the mid-80-billion range of visits per month. This means less than 0.0775% of Google’s visitors are using Gemini each month. For a company with so much market share, this is surely a concerning figure.
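A rough sketch of that share calculation, using the approximate figures above (the monthly visit count for Google Search is a ballpark assumption in the mid-80-billions, not an official figure):

```python
# Rough share-of-traffic estimate from the approximate figures above.
gemini_app_mau = 18e6          # Gemini app monthly active users
gemini_web_visitors = 47.3e6   # Gemini website monthly visitors
google_search_visits = 85e9    # assumed ballpark: mid-80-billion monthly visits to Google Search

gemini_monthly_reach = gemini_app_mau + gemini_web_visitors   # ~65M, rounded to ~62M in the text
share = gemini_monthly_reach / google_search_visits

print(f"Gemini reach: ~{gemini_monthly_reach / 1e6:.0f}M per month")   # -> ~65M per month
print(f"Share of Google Search traffic: {share:.4%}")                  # -> well under 0.1%
```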
Gemini is also surfaced through Search (now as the first result), YouTube, Business Suite, and Android, meaning this data may be skewed and unreliable. Gemini’s contribution to Google’s revenue is not clear given its widespread accessibility, but since Google does not directly monetise it, its net profit remains effectively zero.
3 - Microsoft Copilot
Despite being OpenAI’s leading investor, with an approximate investment of $13–14 billion as of 2023 (an amount that contributes to OpenAI being the most funded startup in history), Microsoft runs its own GenAI skinned as “Copilot”.
It does use elements of OpenAI’s LLM, but lined up against the other four labs it still rates poorly, with only 11 million monthly active users on the Copilot app and 15.6 million on copilot.microsoft.com. Admittedly, like Gemini, Copilot is used throughout Microsoft’s 365 environment, so again the data is likely skewed and undisclosed.
While these numbers are low considering the song and dance Microsoft was making between 2019 and 2023, there is a more interesting story about its relationship with Sam Altman’s OpenAI - a relationship that rests on its claim to 75% of the profits generated by the company.
Microsoft provides OpenAI with cloud compute credits for its Azure service at a discounted rate: roughly a third of the regular cost of Azure’s GPU compute, at around $1.30 per GPU per hour versus the standard $3.40 to $4.
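A quick sketch of the implied discount from those hourly rates (the figures are the approximations cited above, not confirmed pricing):

```python
# Implied Azure GPU discount from the approximate hourly rates cited above.
openai_rate = 1.30                            # $/GPU-hour reportedly paid by OpenAI
list_rate_low, list_rate_high = 3.40, 4.00    # regular Azure $/GPU-hour range

for list_rate in (list_rate_low, list_rate_high):
    share = openai_rate / list_rate
    print(f"${openai_rate:.2f} vs ${list_rate:.2f}: pays {share:.0%} of list ({1 - share:.0%} discount)")
# -> roughly a third of the list price, i.e. a discount of about 60-70%
```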
Microsoft changed the terms of its exclusive relationship with OpenAI to allow it to work with Oracle to build out further data centres.
This strategic shift suggests a potential reassessment of generative AI’s growth, a focus on hardware upgrades, or a recognition of overbuilt capacity, aligning with CEO Satya Nadella’s prediction of an “overbuild” in AI-driven data centres. Nadella himself has not been silent on these concerns, stating - as noted just the other week - that we’ve overvalued and overhyped the GenAI market.
Recent moves by Microsoft indicate a possible recalibration of its AI strategy, potentially fuelled by concerns surrounding OpenAI's escalating compute demands and uncertain profitability.
The company’s significant reduction in data centre expansion - including paused projects and expired capacity agreements, effectively cutting planned capacity by over 14% - raises questions about its confidence in generative AI’s immediate growth and financial viability.
Coupled with reports of high projected compute costs for OpenAI and the underwhelming revenue from Microsoft's own AI products like GitHub Copilot and Copilot for Microsoft 365, these actions suggest a strategic shift.
Moreover, the decision to reassess data centre projects specifically designed for OpenAI's support, along with the broader investor anxiety about AI infrastructure costs, points towards a potential reevaluation of the Microsoft-OpenAI partnership.
These changes could also reflect a need to offset the considerable AI investments through cost-cutting measures elsewhere within Microsoft's operations.
4 - Perplexity
Perplexity has helped many with deep research and online search, but its margins and profits remain among the worst. 2024 revenue was between $56 and $63 million, and the company remains unprofitable despite its $9 billion valuation.
Despite its many ambassadors (ourselves included), Perplexity still only sees 8 million app users per month and 10.6 million website visitors.
5 - Anthropic Claude
Given the CEO’s bravado and constant opinion pieces, it may surprise you to hear that Claude remains the lowest performer. It comes in at 2 million monthly active users on the app and 8.2 million on the claude.ai website.
In 2024, for a three-year-old business, Anthropic made a respectable $918 million in revenue. But it lost $5.6 billion.
There is a consistent picture here. What these numbers tell us is that the five leading GenAI labs are haemorrhaging money. A collective of commoditised tools that are all making a loss in the billions would - in most scenarios - be classified as a volatile, dying industry.
If capital raising and investment stop, OpenAI, Anthropic, and Perplexity are likely to face bankruptcy within 48 hours.
There are a lot of passionate arguments from GenAI ambassadors, and these voices are important for encouraging more sustainable, better-operated AI startups. But the numbers don’t lie, and it’s clear these five labs are about to see a significant shift in operations. Those impacts are unlikely to be financial in the first instance, however; they will come from Open Source competition.
We saw this with DeepSeek and Llama (by Meta): when these labs race each other, or rival states compete to produce the best tool, they neglect the smaller players who rely on mass community adoption through Open Source development.
Open source allows us to embark on incredible technological feats without the burdens of stakeholders, gatekeeping, and market caps. This is how GenAI will continue to be a dominant force in this market - not through privatisation or IPOs that live and die by growth at all costs.
GenAI will merge into full Enterprise solutions
Google, Microsoft, and Meta weren’t included when I said labs would shut down without funding because they can afford to keep LLMs operational in most cases.
GenAI is becoming a commoditised feature, something bundled into Microsoft 365 or Salesforce. The issue isn’t desire; it’s ability. Most people simply don’t know how to use these tools. The flood of “prompt guides” on LinkedIn only proves one thing: if a machine needs influencer explainers, it’s a bad machine. Professionals have spent decades perfecting User Experience (UX) so that early or new adopters need little or no direction when using new software. This is evident when starting up a new smartphone, which typically comes with no instructions.
We've had similar tools for years - Google Maps, predictive text - just without the hype. GenAI isn’t intelligent; it’s fast. It is heading towards a life as a value-add tool that helps sell more software-as-a-service, or enhances how we deal with administrative tasks. But there is no evidence to suggest its revolutionary impacts will turn a profit in the next 24 months.
Of course, GenAI is here to stay and according to most experts, “it’s only getting better”. But it is likely to settle into the background, another system among many that slightly improves how we do things until we find more problems to overcome.
The media says AI is stealing jobs, then pivots to blaming inflation for unemployment in a similar article the same day. There’s no solid proof AI is the culprit. Just as Excel didn’t kill accountants in the 90s, GenAI won’t wipe out developers or remove the impact of human-to-human interaction.
Most GenAI labs will collapse when the funding stops. Google will probably buy what’s left, then move Gemini into everything we do to justify their spend to investors.
OpenAI’s enterprise pricing starts at $20k/month for “PhD-level agents”, with mid-tier options at $2k–$10k/month; and even the $200 Pro option, according to Altman, is still “too expensive to run.”
There’s a Subprime AI bubble, and it’s going to burst.
The ecological shadow
The compute behind a single ChatGPT prompt is disastrous for our environment. It uses so much energy that even OpenAI CEO Sam Altman is concerned about how the company can continue to sustain its energy requirements.
As I contemplate the silent cost of each digital interaction in our increasingly AI-driven world, I'm struck by the profound paradox we've created. Our pursuit of computational efficiency has birthed unprecedented energy demands. The data paints a sobering portrait: a single generative AI query consumes 10-100 times more electricity than a conventional search, with ChatGPT requiring 2.9-10 watt-hours per interaction compared to Google's modest 0.3 watt-hours.
These seemingly abstract numbers take concrete form when we consider that answering just 15 AI queries daily consumes the equivalent energy of an LED bulb running for 3.5 hours.
The scale of this consumption becomes truly staggering when viewed collectively.
ChatGPT's 100 million weekly users generate a digital footprint requiring 620,000 kWh daily, enough to power 21,602 U.S. households. Training models like GPT-4 demands energy equivalent to 130 American households’ annual consumption, releasing 626,000 pounds of CO₂ in the process - roughly the lifetime emissions of five gasoline-powered vehicles.
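To make these comparisons concrete, here is a minimal back-of-envelope sketch in Python using only the approximations above; the LED wattage and average household consumption are assumptions chosen to match the cited equivalences, not measured data:

```python
# Back-of-envelope checks of the energy comparisons cited above.
wh_per_genai_query = 3.0         # ~2.9-10 Wh per ChatGPT interaction (low end used here)
wh_per_google_search = 0.3       # conventional search
led_bulb_watts = 13.0            # assumed wattage of a typical LED bulb
us_household_kwh_per_day = 28.7  # assumed average US household daily electricity use

# A single GenAI query versus a conventional search
print(f"One query ~ {wh_per_genai_query / wh_per_google_search:.0f}x a conventional search")

# 15 queries a day versus an LED bulb
daily_wh = 15 * wh_per_genai_query
print(f"15 queries/day ~ LED bulb running for {daily_wh / led_bulb_watts:.1f} hours")          # ~3.5 h

# Fleet-level consumption versus US households
fleet_kwh_per_day = 620_000
print(f"620,000 kWh/day ~ {fleet_kwh_per_day / us_household_kwh_per_day:,.0f} US households")  # ~21,600
```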
This technological hunger extends beyond electricity to include water, with GPT-3's training requiring 700,000 litres of freshwater, equivalent to filling 3,500 bathtubs.
What is troubling is the sustainability paradox at play: the very efficiency gains promised by AI often trigger increased usage that negates environmental benefits.
Google's AI-powered cooling systems reduced data centre costs by 40% yet doubled total energy use through expanded operations - a textbook case of Jevons Paradox, where efficiency improvements paradoxically increase resource consumption.
Current implementation patterns threaten to increase global data centre emissions by 300% by 2030, challenging us to move beyond technical fixes toward systemic reforms including mandatory resource audits, hardware lifecycle regulations, and transparent reporting of AI's ecological costs.
MIT researcher Elsa Olivetti warns that “we’re building an energy-hungry digital species without understanding its ecological niche.” Our technological choices carry profound consequences for the world we inhabit. The massive infrastructure supporting our digital conveniences - data centres using 10–50 times more energy per square foot than commercial buildings, specialised GPU manufacturing consuming 3,785 litres of water per chip - reveals the hidden price of our digital transformation.
The time for planetary-scale AI governance isn’t some distant future concern; it’s now, while we still have the opportunity to shape these systems toward truly sustainable ends.
Resources, citations, and references
OpenAI Is Growing Fast and Burning Through Piles of Money | The New York Times | 27-09-2024
Tech Bubble: What it is, How it Works, Examples | Investopedia | 01-10-2022
The AI bubble is looking worse than the dot-com bubble. Here’s why | Reddit | 25-10-2024
What Is an Economic Bubble and How Does It Work, With Examples | Investopedia | 03-04-2022
Tech bubbles are bursting all over the place | Economist | 14-05-2022
Would an artificial-intelligence bubble be so bad? | Economist | 01-02-2025
How does today’s tech boom compare with the dotcom era? | Economist | 19-09-2020
How much electricity does AI consume? | The Verge | 17-02-2024
As Use of A.I. Soars, So Does the Energy and Water It Requires | e360 - Yale | 06-02-2024
Generative AI Is Exhausting the Power Grid | Earth.org | 05-08-2024
Report reveals the negative impact GenAI is having on the planet | Broadcast Now | 30-08-2024
Explained: Generative AI’s environmental impact | MIT News | 17-01-2025