The Coming AI Credit Crunch: Datacenters, Debt, and the Signals Wall Street Is Starting to Price In

Introduction

Artificial intelligence may be the most powerful technology of the century—but behind the demos, the breakthroughs, and the trillion-dollar valuations, a very different story is unfolding in the credit markets. CDS traders, structured finance desks, and risk analysts have quietly begun hedging against a scenario the broader industry refuses to contemplate: that the AI boom may be running ahead of its cash flows, its customers, and its capacity to sustain the massive debt fueling its datacenter expansion. The Oracle–OpenAI megadeals, trillion-dollar infrastructure plans, and unprecedented borrowing across the sector may represent the future—or the early architecture of a credit bubble that will only be obvious in hindsight. As equity markets celebrate the AI revolution, the people paid to price risk are asking a far more sobering question: What if the AI boom is not underpriced opportunity, but overleveraged optimism?

Over the last few months, we’ve seen a sharp rise in credit default swap (CDS) activity tied to large tech names funding massive AI data center expansions. Trading volume in CDS linked to some hyperscalers has surged, and the cost of protection on Oracle’s debt has more than doubled since early fall, as banks and asset managers hedge their exposure to AI-linked credit risk. (Bloomberg)

At the same time, deals like Oracle’s reported $300B+ cloud contract with OpenAI and OpenAI’s broader trillion-dollar infrastructure commitments have become emblematic of the question hanging over the entire sector:

Are we watching the early signs of an AI credit bubble, or just the normal stress of funding a once-in-a-generation infrastructure build-out?

This post takes a hard, finance-literate look at that question—through the lens of datacenter debt, CDS pricing, and the gap between AI revenue stories and today’s cash flows.


1. Credit Default Swaps: The Market’s Geiger Counter for Risk

A quick refresher: CDS function like insurance contracts on debt. The buyer pays a premium; the seller pays out if the underlying borrower defaults or restructures. In 2008, CDS became infamous as synthetic ways to bet on mortgage credit collapsing.

In a normal environment:

  • Tight CDS spreads ≈ markets view default risk as low
  • Widening CDS spreads ≈ rising concern about leverage, cash flow, or concentration risk
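To make the spread-to-risk mapping concrete, here is a minimal sketch of the standard "credit triangle" approximation, in which a CDS spread roughly equals the annual default intensity times the loss given default. The 40% recovery rate is a common market convention, and the 80/160 bps spreads are hypothetical illustrations, not figures from this post:

```python
import math

def implied_default_probability(spread_bps: float, recovery: float = 0.40) -> float:
    """One-year default probability implied by a CDS spread (flat-hazard approximation)."""
    spread = spread_bps / 10_000            # basis points -> decimal
    hazard = spread / (1.0 - recovery)      # credit triangle: spread = (1 - R) * hazard
    return 1.0 - math.exp(-hazard)          # probability of default within one year

# Hypothetical spreads: a doubling from 80 bps to 160 bps roughly doubles
# the implied annual default probability (~1.3% -> ~2.6%).
print(implied_default_probability(80), implied_default_probability(160))
```

Note that even the doubled figure is small in absolute terms, which is consistent with how the spike should be read: widening spreads signal a non-trivial downside, not an expected default.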

The recent spike in CDS pricing and volume around certain AI-exposed firms—especially Oracle—is telling:

  • The cost of CDS protection on Oracle has more than doubled since September.
  • Trading volume in Oracle CDS reached roughly $4.2B over a six-week period, driven largely by banks hedging their loan and bond exposure. (Bloomberg)

This doesn’t mean markets are predicting imminent default. It does mean AI-related leverage has become large enough that sophisticated players are no longer comfortable being naked long.

In other words: the credit market is now pricing an AI downside scenario as non-trivial.


2. The Oracle–OpenAI Megadeal: Transformational or Overextended?

The flashpoint is Oracle’s partnership with OpenAI.

Public reporting suggests a multi-hundred-billion-dollar cloud infrastructure deal, often cited around $300B over several years, positioning Oracle Cloud Infrastructure (OCI) as a key pillar of OpenAI’s long-term compute strategy. (CIO)

In parallel, OpenAI, Oracle and partners like SoftBank and MGX have rolled the “Stargate” concept into a massive U.S. data-center platform:

  • OpenAI, Oracle, and SoftBank have collectively announced five new U.S. data center sites within the Stargate program.
  • Together with Abilene and other projects, Stargate is targeting ~7 GW of capacity and over $400B in investment over three years. (OpenAI)
  • Separate analyses estimate OpenAI has committed to $1.15T in hardware and cloud infrastructure spend from 2025 to 2035 across Oracle, Microsoft, Broadcom, Nvidia, AMD, AWS, and CoreWeave. (Tomasz Tunguz)

These numbers are staggering even by hyperscaler standards.

From Oracle’s perspective, the deal is a once-in-a-lifetime chance to leapfrog from “ERP/database incumbent” into the top tier of cloud and AI infrastructure providers. (CIO)

From a credit perspective, it’s something else: a highly concentrated, multi-hundred-billion-dollar bet on a small number of counterparties and a still-forming market.

Moody’s has already flagged Oracle’s AI contracts—especially with OpenAI—as a material source of counterparty risk and leverage pressure, warning that Oracle’s debt could grow faster than EBITDA, potentially pushing leverage to ~4x and keeping free cash flow negative for an extended period. (Reuters)

That’s exactly the kind of language that makes CDS desks sharpen their pencils.


3. How the AI Datacenter Boom Is Being Funded: Debt, Everywhere

This isn’t just about Oracle. Across the ecosystem, AI infrastructure is increasingly funded with debt:

  • Data center debt issuance has reportedly more than doubled, with roughly $25B in AI-related data center bonds in a recent period and projections of $2.9T in cumulative AI-related data center capex between 2025 and 2028, about half of it reliant on external financing. (The Economic Times)
  • Oracle is estimated by some analysts to need ~$100B in new borrowing over four years to support AI-driven datacenter build-outs. (Channel Futures)
  • Oracle has also tapped banks for a mix of $38B in loans and $18B in bond issuance in recent financing waves. (Yahoo Finance)
  • Meta reportedly issued around $30B in financing for a single Louisiana AI data center campus. (Yahoo Finance)

Simultaneously, OpenAI’s infrastructure ambitions are escalating:

  • The Stargate program alone is described as a $500B+ project consuming up to 10 GW of power, more than the current power draw of New York City. (Business Insider)
  • OpenAI has been reported as needing around $400B in financing in the near term to keep these plans on track and has already signed contracts that sum to roughly $1T in 2025 alone, including with Oracle. (Ed Zitron’s Where’s Your Ed At)

Layer on top of that the broader AI capex curve: annual AI data center spending is forecast to rise from $315B in 2024 to nearly $1.1T by 2028. (The Economic Times)

This is not an incremental technology refresh. It’s a credit-driven, multi-trillion-dollar restructuring of global compute and power infrastructure.

The core concern: are the corresponding revenue streams being projected with commensurate realism?


4. CDS as a Real-Time Referendum on AI Revenue Assumptions

CDS traders don’t care about AI narrative—they care about cash-flow coverage and downside scenarios.

Recent signals:

  • The cost of CDS on Oracle’s bonds has surged, effectively doubling since September, as banks and money managers buy protection. (Bloomberg)
  • Trading volumes in Oracle CDS have climbed into multi-billion-dollar territory over short windows, unusual for a company historically viewed as a relatively stable, investment-grade software vendor. (Bloomberg)

What are they worried about?

  1. Concentration Risk
    Oracle’s AI cloud future is heavily tied to a small number of mega contracts—notably OpenAI. If even one of those counterparties slows consumption, renegotiates, or fails to ramp as expected, the revenue side of Oracle’s AI capex story can wobble quickly.
  2. Timing Mismatch
    Debt service is fixed; AI demand is not.
    Datacenters must be financed and built years before they are fully utilized. A delay in AI monetization—either at OpenAI or among Oracle’s broader enterprise AI customer base—still leaves Oracle servicing large, inflexible liabilities.
  3. Macro Sensitivity
    If economic growth slows, enterprises might pull back on AI experimentation and cloud migration, potentially flattening the growth curve Oracle and others are currently underwriting.
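The timing mismatch in point 2 can be sketched numerically. All figures below are hypothetical (a fixed $10B/year debt service against a five-year revenue ramp, in $B), chosen only to show how the cumulative funding gap peaks mid-build even when demand eventually catches up:

```python
def cumulative_funding_gap(annual_debt_service: float,
                           revenue_ramp: list[float]) -> list[float]:
    """Cumulative shortfall when fixed debt service meets a ramping revenue stream."""
    gap = 0.0
    path = []
    for revenue in revenue_ramp:
        gap += annual_debt_service - revenue  # fixed outflow vs. variable inflow
        path.append(round(gap, 1))
    return path

# Hypothetical: $10B/yr debt service, revenue ramping 2 -> 20 over five years.
print(cumulative_funding_gap(10.0, [2.0, 5.0, 9.0, 14.0, 20.0]))
# -> [8.0, 13.0, 14.0, 10.0, 0.0]  (the gap peaks in year 3 before demand catches up)
```

Shift that same ramp out by even a year or two and the peak gap deepens substantially; that interim shortfall, which someone must finance regardless of the end-state demand story, is precisely what debt holders are hedging.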

CDS spreads are telling us: credit markets see non-zero probability that AI revenue ramps will fall short of the most optimistic scenarios.


5. Are AI Revenue Projections Outrunning Reality?

The bull case says:
These are long-dated, capacity-style deals. AI demand will eventually fill every rack; cloud AI revenue will justify today’s capex.

The skeptic’s view surfaces several friction points:

  1. OpenAI’s Monetization vs. Burn Rate
    • OpenAI reportedly spent $6.7B on R&D in the first half of 2025, with the majority historically going to experimental training runs rather than production models. Parallel commentary suggests OpenAI needs hundreds of billions in additional funding in short order to sustain its infrastructure strategy. (Ed Zitron’s Where’s Your Ed At)
    While product revenue is growing, it’s not yet obvious that it can service trillion-scale hardware commitments without continued external capital.
  2. Enterprise AI Adoption Is Still Shallow
    Most enterprises remain stuck in pilot purgatory: small proofs of concept, modest copilots, limited workflow redesign. The gap between “we’re experimenting with AI” and “AI drives 20–30% of our margin expansion” is still wide.
  3. Model Efficiency Is Improving Fast
    If smaller, more efficient models close the performance gap with frontier models, demand for maximal compute may underperform expectations. That would pressure utilization assumptions baked into multi-gigawatt campuses and decade-long hardware contracts.
  4. Regulation & Trust
    Safety, privacy, and sector-specific regulation (especially in finance, healthcare, public sector) may slow high-margin, high-scale AI deployments, further delaying returns.

Taken together, this looks familiar: optimistic top-line projections backed by debt-financed capacity, with adoption and unit economics still in flux.

That’s exactly the kind of mismatch that fuels bubble narratives.


6. Theory: Is This a Classic Minsky Moment in the Making?

Hyman Minsky’s Financial Instability Hypothesis, as popularized in Charles Kindleberger’s bubble taxonomy, outlines a familiar pattern:

  1. Displacement – A new technology or regime shift (the Internet; now AI).
  2. Boom – Rising investment, easy credit, and growing optimism.
  3. Euphoria – Leverage increases; investors extrapolate high growth far into the future.
  4. Profit Taking – Smart money starts hedging or exiting.
  5. Panic – A shock (macro, regulatory, technological) reveals fragility; credit tightens rapidly.

Where are we in that cycle?

  • Displacement and Boom are clearly behind us.
  • The euphoria phase looks concentrated in:
    • trillion-dollar AI infrastructure narratives
    • multi-hundred-billion datacenter plans
    • funding forecasts that assume near-frictionless adoption
  • The profit-taking phase may be starting—not via equity selling, but via:
    • CDS buying
    • spread widening
    • stricter credit underwriting for AI-exposed borrowers

From a Minsky lens, the CDS market’s behavior looks exactly like sophisticated participants quietly de-risking while the public narrative stays bullish.

That doesn’t guarantee panic. But it does raise a question:
If AI infrastructure build-outs stumble, where does the stress show up first—equity, debt, or both?


7. Counterpoint: This Might Be Railroads, Not Subprime

There is a credible argument that today’s AI debt binge, while risky, is fundamentally different from 2008-style toxic leverage:

  • These projects fund real, productive assets—datacenters, power infrastructure, chips—rather than synthetic mortgage instruments.
  • Even if AI demand underperforms, much of this capacity can be repurposed for:
    • traditional cloud workloads
    • high-performance computing
    • scientific simulation
    • media and gaming workloads

Historically, large infrastructure bubbles (e.g., railroads, telecom fiber) left behind valuable physical networks, even after investors in specific securities were wiped out.

Similarly, AI infrastructure may outlast the most aggressive revenue assumptions:

  • Oracle’s OCI investments improve its position in non-AI cloud as well. (The Motley Fool)
  • Power grid upgrades and new energy contracts have value far beyond AI alone. (Bloomberg)

In this framing, the “AI bubble” might hurt capital providers, but still accelerate broader digital and energy infrastructure for decades.


8. So Is the AI Bubble Real—or Rooted in Uncertainty?

A mature, evidence-based view has to hold two ideas at once:

  1. Yes, there are clear bubble dynamics in parts of the AI stack.
    • Datacenter capex and debt are growing at extraordinary rates. (The Economic Times)
    • Oracle’s CDS and Moody’s commentary show real concern around concentration risk and leverage. (Bloomberg)
    • OpenAI’s hardware commitments and funding needs are unprecedented for a private company with a still-evolving business model. (Tomasz Tunguz)
  2. No, this is not a pure replay of 2008 or 2000.
    • Infrastructure assets are real and broadly useful.
    • AI is already delivering tangible value in many production settings, even if not yet at economy-wide scale.
    • The biggest risks look concentrated (Oracle, key AI labs, certain data center REITs and lenders), not systemic across the entire financial system—at least for now.

A Practical Decision Framework for the Reader

To form your own view on the AI bubble question, ask:

  1. Revenue vs. Debt:
    Does the company’s contracted and realistic revenue support its AI-related debt load under conservative utilization and pricing assumptions?
  2. Concentration Risk:
    How dependent is the business on one or two AI counterparties or a single class of model?
  3. Reusability of Assets:
    If AI demand flattens, can its datacenters, power agreements, and hardware be repurposed for other workloads?
  4. Market Signals:
    Are CDS spreads widening? Are ratings agencies flagging leverage? Are banks increasingly hedging exposure?
  5. Adoption Reality vs. Narrative:
    Do enterprise customers show real, scaled AI adoption, or still mostly pilots, experimentation, and “AI tourism”?
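The five questions above can be turned into a crude screening score. Everything in this sketch is an assumption for illustration: the field names, the 0–2 scale, the equal weighting, and the verdict thresholds are not a calibrated model.

```python
from dataclasses import dataclass, fields

@dataclass
class AICreditScreen:
    """Score each of the five questions 0 (reassuring) to 2 (worrying)."""
    revenue_vs_debt: int
    concentration_risk: int
    asset_reusability: int
    market_signals: int
    adoption_reality: int

    def risk_score(self) -> int:
        # Equal weights for simplicity; a real screen would weight these.
        return sum(getattr(self, f.name) for f in fields(self))

    def verdict(self) -> str:
        score = self.risk_score()
        if score <= 3:
            return "credit risk looks contained"
        if score <= 6:
            return "watch list: hedging is rational"
        return "bubble dynamics: price the downside"

# A hypothetical profile: strained coverage, heavy concentration,
# reusable assets, widening CDS, shallow-but-real adoption.
screen = AICreditScreen(revenue_vs_debt=2, concentration_risk=2,
                        asset_reusability=0, market_signals=2, adoption_reality=1)
print(screen.risk_score(), "->", screen.verdict())
# prints: 7 -> bubble dynamics: price the downside
```

The value of even a toy screen like this is that it forces the reader to score each layer of exposure separately rather than treating “AI” as one bet.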

9. Closing Thought: Bubble or Not, Credit Is Now the Real Story

Equity markets tell you what investors hope will happen.
The CDS market tells you what they’re afraid might happen.

Right now, credit markets are signaling that AI’s infrastructure bets are big enough, and leveraged enough, that the downside can’t be ignored.

Whether you conclude that we’re in an AI bubble—or just at the messy financing stage of a transformational technology—depends on how you weigh:

  • Trillion-dollar infrastructure commitments vs. real adoption
  • Physical asset durability vs. concentration risk
  • Long-term productivity gains vs. short-term overbuild

But one thing is increasingly clear:
If the AI era does end in a crisis, it won’t start with a model failure.
It will start with a credit event.


We discuss this topic in more detail on Spotify.

Further reading on AI credit risk and data center financing

  • Reuters: “Moody’s flags risk in Oracle’s $300 billion of recently signed AI contracts” (Sep 17, 2025)
  • The Verge: “Sam Altman’s Stargate is science fiction” (Jan 31, 2025)
  • Business Insider: “OpenAI’s Stargate project will cost $500 billion and will require enough energy to power a whole city”

AI at an Inflection Point: Are We Living Through the Dot-Com Bubble 2.0 – or Something Entirely Different?

Introduction

For months now, a quiet tension has been building in boardrooms, engineering labs, and investor circles. On one side are the evangelists—those who see AI as the most transformative platform shift since electrification. On the other side sit the skeptics—analysts, CFOs, and surprisingly, even many technologists themselves—who argue that returns have yet to materialize at the scale the hype suggests.

Under this tension lies a critical question: Is today’s AI boom structurally similar to the dot-com bubble of 2000 or the credit-fueled collapse of 2008? Or are we projecting old crises onto a frontier technology whose economics simply operate by different rules?

This question matters deeply. If we are indeed replaying history, capital will dry up, valuations will deflate, and entire markets will reset. But if the skeptics are misreading the signals, then we may be at the base of a multi-decade innovation curve—one that rewards contrarian believers.

Let’s unpack both possibilities with clarity, data, and context.


1. The Dot-Com Parallel: Exponential Valuations, Minimal Cash Flow, and Over-Narrated Futures

The comparison to the dot-com era is the most popular narrative among skeptics. It’s not hard to see why.

1.1. Startups With Valuations Outrunning Their Revenue

During the dot-com boom, revenue-light companies—eToys, Pets.com, Webvan—reached massive valuations with little proven demand. Today, many AI model-centric startups are experiencing a similar phenomenon:

  • Enormous valuations built primarily on “strategic potential,” not realized revenue
  • Extremely high compute burn rates
  • Reliance on outside capital to fund model training cycles
  • No defensible moat beyond temporary performance advantages

This is the classic pattern of a bubble: cheap capital + narrative dominance + no proven path to sustainable margins.

1.2. Infrastructure Outpacing Real Adoption

In the late 90s, telecom and datacenter expansion outpaced actual Internet usage.
Today, hyperscalers and AI-focused cloud providers are pouring billions into:

  • GPU clusters
  • Data center expansion
  • Power procurement deals
  • Water-cooled rack infrastructure
  • Hydrogen and nuclear plans

Yet enterprise adoption remains shallow. Few companies have operationalized AI beyond experimentation. CFOs are cutting budgets. CIOs are tightening governance. Many “enterprise AI transformation” programs have delivered underwhelming impact.

1.3. The Hype Premium

Just as the 1999 investor decks promised digital utopia, 2024–2025 decks promise:

  • Fully autonomous enterprises
  • Real-time copilots everywhere
  • Self-optimizing supply chains
  • AI replacing entire departments

The irony? Most enterprises today can’t even get their data pipelines, governance, or taxonomy stable enough for AI to work reliably.

The parallels are real—and unsettling.


2. The 2008 Parallel: Systemic Concentration Risk and Capital Misallocation

The 2008 financial crisis was not just about bad mortgages; it was about structural fragility, over-leveraged bets, and market concentration hiding systemic vulnerabilities.

The AI ecosystem shows similar warning signs.

2.1. Extreme Concentration in a Few Companies

  • Three companies provide the majority of the world’s AI computational capacity.
  • A handful of frontier labs control model innovation.
  • A small cluster of chip providers (NVIDIA, TSMC, ASML) underpins global AI scaling.

This resembles the 2008 concentration of risk among a small number of banks and insurers.

2.2. High Leverage, Just Not in the Traditional Sense

In 2008, leverage came from debt.
In 2025, leverage comes from infrastructure obligations:

  • Multi-billion-dollar GPU pre-orders
  • 10–20-year datacenter power commitments
  • Long-term cloud contracts
  • Vast sunk costs in training pipelines

If demand for frontier-scale AI slows—or simply grows at a more “normal” rate than predicted—this leverage becomes a liability.

2.3. Derivative Markets for AI Compute

There are early signs of compute futures markets, GPU leasing entities, and synthetic capacity pools. While innovative, they introduce financial abstraction that rhymes with the derivative cascades of 2008.

If core demand falters, the secondary financial structures collapse first—potentially dragging the core ecosystem down with them.


3. The Skeptic’s Argument: ROI Has Not Materialized

Every downturn begins with unmet expectations.

Across industries, the story is consistent:

  • POCs never scaled
  • Data was ungoverned
  • Model performance degraded in the real world
  • Accuracy thresholds were not reached
  • Cost of inference exploded unexpectedly
  • GenAI copilots produced hallucinations
  • The “skills gap” became larger than the technology gap

For many early adopters, the hard truth is this: AI delivered interesting prototypes, not transformational outcomes.

The skepticism is justified.


4. The Optimist’s Counterargument: Unlike 2000 or 2008, AI Has Real Utility Today

This is the key difference.

The dot-com bubble burst because the infrastructure was not ready.
The 2008 crisis collapsed because the underlying assets were toxic.

But with AI:

  • The technology works
  • The usage is real
  • Productivity gains exist (though uneven)
  • Infrastructure is scaling in predictable ways
  • Fundamental demand for automation is increasing
  • The cost curve for compute is slowly (but steadily) compressing
  • New classes of models (small, multimodal, agentic) are lowering barriers

If the dot-com era had delivered search, cloud, mobile apps, or digital payments in its first 24 months, the bubble might not have burst as severely.

AI is already delivering these equivalents.


5. The Key Question: Is the Value Accruing to the Wrong Layer?

Most failed adoption stems from a structural misalignment:
Value is accruing at the infrastructure and model layers—not the enterprise implementation layer.

In other words:

  • Chipmakers profit
  • Hyperscalers profit
  • Frontier labs attract capital
  • Model inferencing platforms grow

But enterprises—those expected to realize the gains—are stuck in slow, expensive adoption cycles.

This creates the illusion that AI isn’t working, even though the economics are functioning perfectly for the suppliers.

This misalignment is the root of the skepticism.


6. So, Is This a Bubble? The Most Honest Answer Is “It Depends on the Layer You’re Looking At.”

The AI economy is not monolithic. It is a stacked ecosystem, and each layer has entirely different economics, maturity levels, and risk profiles. Unlike the dot-com era—where nearly all companies were overvalued—or the 2008 crisis—where systemic fragility sat beneath every asset class—the AI landscape contains asymmetric risk pockets.

Below is a deeper, more granular breakdown of where the real exposure lies.


6.1. High-Risk Areas: Where Speculation Has Outrun Fundamentals

Frontier-Model Startups

Large-scale model development resembles the burn patterns of failed dot-com startups: high cost, unclear moat.

Examples:

  • Startups claiming they will “rival OpenAI or Anthropic” while spending $200M/year on GPUs with no distribution channel.
  • Companies raising at $2B–$5B valuations based solely on benchmark performance—not paying customers.
  • “Foundation model challengers” whose only moat is temporary model quality, a rapidly decaying advantage.

Why High Risk:
Training costs scale faster than revenue. The winner-take-most dynamics favor incumbents with established data, compute, and brand trust.


GPU Leasing and Compute Arbitrage Markets

A growing field of companies buy GPUs, lease them out at premium pricing, and arbitrage compute scarcity.

Examples:

  • Firms raising hundreds of millions to buy A100/H100 inventory and rent it to AI labs.
  • Secondary GPU futures markets where investors speculate on H200 availability.
  • Brokers offering “synthetic compute capacity” based on future hardware reservations.

Why High Risk:
If model efficiency improves (e.g., SSMs, low-rank adaptation, pruning), demand for brute-force compute shrinks.
Much like mortgage-backed securities in 2008, these players rely on sustained upstream demand. Any slowdown collapses margins instantly.


Thin-Moat Copilot Startups

Dozens of companies offer AI copilots for finance, HR, legal, marketing, or CRM tasks, all using similar APIs and LLMs.

Examples:

  • A GenAI sales assistant with no proprietary data advantage.
  • AI email-writing platforms that replicate features inside Microsoft 365 or Google Workspace.
  • Meeting transcription tools that face commoditization from Zoom, Teams, and Meet.

Why High Risk:
Every hyperscaler and SaaS platform is integrating basic GenAI natively. The standalone apps risk the same fate as 1999 “shopping portals” crushed by Amazon and eBay.


AI-First Consulting Firms Without Deep Engineering Capability

These firms promise to deliver operationalized AI outcomes but rely on subcontracted talent or low-code wrappers.

Examples:

  • Consultancies selling multimillion-dollar “AI Roadmaps” without offering real ML engineering.
  • Strategy firms building prototypes that cannot scale to production.
  • Boutique shops that lock clients into expensive retainer contracts but produce only slideware.

Why High Risk:
Once AI budgets tighten, these firms will be the first to lose contracts. We are already seeing this as enterprises cut experimental GenAI spend.


6.2. Moderate-Risk Areas: Real Value, but Timing and Execution Matter

Hyperscaler AI Services

Azure, AWS, and GCP are pouring billions into GPU clusters, frontier model partnerships, and vertical AI services.

Examples:

  • Azure’s $10B compute deal to power OpenAI.
  • Google’s massive TPU v5 investments.
  • AWS’s partnership with Anthropic and its Bedrock ecosystem.

Why Moderate Risk:
Demand is real—but currently inflated by POCs, “AI tourism,” and corporate FOMO.
As 2025–2027 budgets normalize, utilization rates will determine whether these investments remain accretive or become stranded capacity.


Agentic Workflow Platforms

Companies offering autonomous agents that execute multi-step processes—procurement workflows, customer support actions, claims handling, etc.

Examples:

  • Platforms like Adept, Mesh, or Parabola that orchestrate multi-step tasks.
  • Autonomous code refactoring assistants.
  • Agent frameworks that run long-lived processes with minimal human supervision.

Why Moderate Risk:
High upside, but adoption depends on organizations redesigning workflows—not just plugging in AI.
The technology is promising, but enterprises must evolve operating models to avoid compliance, auditability, and reliability risks.


AI Middleware and Integration Platforms

Businesses betting on becoming the “plumbing” layer between enterprise systems and LLMs.

Examples:

  • Data orchestration layers for grounding LLMs in ERP/CRM systems.
  • Tools like LangChain, LlamaIndex, or enterprise RAG frameworks.
  • Vector database ecosystems.

Why Moderate Risk:
Middleware markets historically become winner-take-few.
There will be consolidation, and many players at today’s valuations will not survive the culling.


Data Labeling, Curation, and Synthetic Data Providers

Essential today, but cost structures will evolve.

Examples:

  • Large annotation farms like Scale AI or Sama.
  • Synthetic data generators for vision or robotics.
  • Rater-as-a-service providers for safety tuning.

Why Moderate Risk:
If self-supervision, synthetic scaling, or weak-to-strong generalization trends hold, demand for human labeling will shrink.


6.3. Low-Risk Areas: Where the Value Is Durable and Non-Speculative

Semiconductors and Chip Supply Chain

Regardless of hype cycles, demand for accelerated compute is structurally increasing across robotics, simulation, ASR, RL, and multimodal applications.

Examples:

  • NVIDIA’s dominance in training and inference.
  • TSMC’s critical role in advanced node manufacturing.
  • ASML’s EUV monopoly.

Why Low Risk:
These layers supply the entire computation economy—not just AI. Even if the AI bubble deflates, GPU demand remains supported by scientific computing, gaming, simulation, and defense.


Datacenter Infrastructure and Energy Providers

The AI boom is fundamentally a power and cooling problem, not just a model problem.

Examples:

  • Utility-scale datacenter expansions in Iowa, Oregon, and Sweden.
  • Liquid-cooled rack deployments.
  • Multibillion-dollar energy agreements with nuclear and hydro providers.

Why Low Risk:
AI workloads are power-intensive, and even with efficiency improvements, energy demand continues rising.
This resembles investing in railroads or highways rather than betting on any single car company.


Developer Productivity Tools and MLOps Platforms

Tools that streamline model deployment, monitoring, safety, versioning, evaluation, and inference optimization.

Examples:

  • Platforms like Weights & Biases, Mosaic, or OctoML.
  • Code generation assistants embedded in IDEs.
  • Compiler-level optimizers for inference efficiency.

Why Low Risk:
Demand is stable and expanding. Every model builder and enterprise team needs these tools, regardless of who wins the frontier model race.


Enterprise Data Modernization and Taxonomy / Grounding Infrastructure

Organizations with trustworthy data environments consistently outperform in AI deployment.

Examples:

  • Data mesh architectures.
  • Structured metadata frameworks.
  • RAG pipelines grounded in canonical ERP/CRM data.
  • Master data governance platforms.

Why Low Risk:
Even if AI adoption slows, these investments create value.
If AI adoption accelerates, these investments become prerequisites.


6.4. The Core Insight: We Are Experiencing a Layered Bubble, Not a Systemic One

Unlike 2000, not everything is overpriced.
Unlike 2008, the fragility is not systemic.

High-risk layers will deflate.
Low-risk layers will remain foundational.
Moderate-risk layers will consolidate.

This asymmetry is what makes the current AI landscape so complex—and so intellectually interesting. Investors must analyze each layer independently, not treat “AI” as a uniform asset class.


7. The Insight Most People Miss: AI Fails Slowly, Then Succeeds All at Once

Most emerging technologies follow an adoption curve. AI’s curve is different because it carries a unique duality: it is simultaneously underperforming and overperforming expectations.
This paradox is confusing to executives and investors—but essential to understand if you want to avoid incorrect conclusions about a bubble.

The pattern that best explains what’s happening today comes from complex systems:
AI failure happens gradually and for predictable reasons. AI success happens abruptly and only after those reasons are removed.

Let’s break that down with real examples.


7.1. Why Early AI Initiatives Fail Slowly (and Predictably)

AI doesn’t fail because the models don’t work.
AI fails because the surrounding environment isn’t ready.

Failure Mode #1: Organizational Readiness Lags Behind Technical Capability

Early adopters typically discover that AI performance is not the limiting factor — their operating model is.

Examples:

  • A Fortune 100 retailer deploys a customer-service copilot but cannot use it because its knowledge base is 18 months out of date.
  • A large insurer automates claim intake but still routes cases through approval committees designed for pre-AI workflows, doubling the cycle time.
  • A manufacturing firm deploys predictive maintenance models but has no spare parts logistics framework to act on the predictions.

Insight:
These failures are not technical—they’re organizational design failures.
They happen slowly because the organization tries to “bolt on AI” without changing the system underneath.


Failure Mode #2: Data Architecture Is Inadequate for Real-World AI

Early pilots often work brilliantly in controlled environments and fail spectacularly in production.

Examples:

  • A bank’s fraud detection model performs well in testing but collapses in production because customer metadata schemas differ across regions.
  • A pharmaceutical company’s RAG system references staging data and gives perfect answers—but goes wildly off-script when pointed at messy real-world datasets.
  • A telecom provider’s churn model fails because the CRM timestamps are inconsistent by timezone, causing silent degradation.

Insight:
The majority of “AI doesn’t work” claims stem from data inconsistencies, not model limitations.
These failures accumulate over months until the program is quietly paused.


Failure Mode #3: Economic Assumptions Are Misaligned

Many early-version AI deployments were too expensive to scale.

Examples:

  • A customer-support bot costs $0.38 per interaction to run—higher than a human agent using legacy CRM tools.
  • A legal AI summarization system consumes 80% of its cloud budget just parsing PDFs.
  • An internal code assistant saves developers time but increases inference charges by a factor of 20.

Insight:
AI’s ROI often looks negative early not because the value is small—but because the first wave of implementation is structurally inefficient.
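To make the cost argument concrete, here is a minimal unit-economics sketch. The $0.38/interaction figure comes from the example above; every other number (token counts, per-token prices, overhead) is a hypothetical illustration of how the same workload gets cheaper as models are distilled and prompts are trimmed.

```python
# Illustrative unit-economics check for an AI support bot.
# All figures are hypothetical except the $0.38/interaction cited above.

def cost_per_interaction(tokens_per_interaction, price_per_1k_tokens, overhead):
    """Marginal cost of one AI-handled interaction, in dollars."""
    return tokens_per_interaction / 1000 * price_per_1k_tokens + overhead

# Early deployment: verbose prompts, no caching, expensive frontier model.
early = cost_per_interaction(tokens_per_interaction=12_000,
                             price_per_1k_tokens=0.03,
                             overhead=0.02)  # retrieval + logging, assumed

# After optimization: smaller model, trimmed context, cached prefixes.
later = cost_per_interaction(tokens_per_interaction=4_000,
                             price_per_1k_tokens=0.003,
                             overhead=0.01)

print(f"early: ${early:.2f}/interaction")   # ~$0.38
print(f"later: ${later:.2f}/interaction")   # ~$0.02
```

The point of the sketch: the negative early ROI is a property of the first implementation, not of the use case.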


7.2. Why Late-Stage AI Success Happens Abruptly (and Often Quietly)

Here’s the counterintuitive part: once the underlying constraints are removed, AI value does not improve gradually. It jumps.

This is the core insight:
AI returns follow a step-function pattern, not a gradual curve.
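A toy gating model makes the step-function claim concrete: realized value stays near zero while any one readiness constraint is unmet, then snaps to full potential once all constraints clear. The thresholds, readiness scores, and the 5% "pilot sliver" are all illustrative assumptions, not empirical measurements.

```python
# Toy model of step-function AI returns: realized value stays near zero
# until every readiness constraint clears its threshold, then jumps.

def realized_value(potential_value, readiness, thresholds):
    """Value is gated: any single unmet constraint throttles the whole system."""
    if all(readiness[k] >= thresholds[k] for k in thresholds):
        return potential_value            # step change: full value unlocked
    return 0.05 * potential_value         # pilots capture only a sliver

thresholds = {"data_quality": 0.80, "workflow_redesign": 0.70, "unit_economics": 0.75}

month_6  = {"data_quality": 0.55, "workflow_redesign": 0.30, "unit_economics": 0.40}
month_30 = {"data_quality": 0.85, "workflow_redesign": 0.75, "unit_economics": 0.80}

print(realized_value(100, month_6, thresholds))   # 5.0  -> "AI isn't working"
print(realized_value(100, month_30, thresholds))  # 100  -> abrupt success
```

Note the design choice: value is the minimum of the system, not the average. That is why fixing the last constraint looks like an overnight breakthrough.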

Below are examples from organizations that achieved this transition.


Success Mode #1: When Data Quality Hits a Threshold, AI Value Explodes

Once a company reaches critical data readiness, the same models that previously looked inadequate suddenly generate outsized results.

Examples:

  • A logistics provider reduces routing complexity from 29 variables to 11 canonical features. Their route-optimization AI—previously unreliable—now saves $48M annually in fuel costs.
  • A healthcare payer consolidates 14 data warehouses into a unified claims store. Their fraud model accuracy jumps from 62% to 91% without retraining.
  • A consumer goods company builds a metadata governance layer for product descriptions. Their search engine produces a 22% lift in conversions using the same embedding model.

Insight:
The value was always there. The pipes were not.
Once the pipes are fixed, value accelerates faster than organizations expect.


Success Mode #2: When AI Becomes Embedded, Not Added On, ROI Becomes Structural

AI only becomes transformative when it is built into workflows—not layered on top of them.

Examples:

  • A call center doesn’t deploy an “agent copilot.” Instead, it rebuilds the entire workflow so the copilot becomes the first reader of every case. Average handle time drops 30%.
  • A bank redesigns underwriting from scratch using probabilistic scoring + agentic verification. Loan processing time goes from 15 days to 4 hours.
  • A global engineering firm reorganizes R&D around AI-driven simulation loops. Their product iteration cycle compresses from 18 months to 10 weeks.

Insight:
These are not incremental improvements—they are order-of-magnitude reductions in time, cost, or complexity.

This is why success appears sudden:
Organizations go from “AI isn’t working” to “we can’t operate without AI” very quickly.


Success Mode #3: When Costs Normalize, Entire Use Cases Become Economically Viable Overnight

Just like Moore’s Law enabled new hardware categories, AI cost curves unlock entirely new use cases once they cross economic thresholds.

Examples:

  • Code generation becomes viable when inference cost falls below $1 per developer per day.
  • Automated video analysis becomes scalable when multimodal inference drops under $0.10/minute.
  • Autonomous agents become attractive only when long-context models can run persistent sessions for less than $0.01/token.

Insight:
Small improvements in cost + efficiency create massive new addressable markets.

That is why success feels instantaneous—entire categories cross feasibility thresholds at once.
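The threshold dynamic above can be sketched in a few lines: when many use cases share one cost driver (inference price), a single price drop flips several categories from infeasible to viable simultaneously. The value-per-token figures below are hypothetical, chosen only to show the mechanism.

```python
# Sketch: as a shared cost driver (inference price) falls, whole categories
# cross their feasibility thresholds at once. All figures are hypothetical.

# Assumed value generated per 1M tokens of inference, by use case.
value_per_m_tokens = {
    "code_generation":   15.0,
    "video_analysis":     4.0,
    "autonomous_agents":  1.5,
}

def viable_use_cases(price_per_m_tokens):
    """A use case is viable when its value exceeds its inference cost."""
    return sorted(u for u, v in value_per_m_tokens.items()
                  if v > price_per_m_tokens)

for price in [20.0, 10.0, 3.0, 1.0]:
    print(f"${price:>5}/M tokens -> {viable_use_cases(price)}")
```

At $20/M tokens nothing clears the bar; at $1/M tokens everything does. The market does not see gradual adoption, it sees entire categories appear at once.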


7.3. The Core Insight: Early Failures Are Not Evidence AI Won’t Work—They Are Evidence of Unrealistic Expectations

Executives often misinterpret early failure as proof that AI is overhyped.

In reality, it signals that:

  • The organization treated AI as a feature, not a process redesign
  • The data estate was not production-grade
  • The economics were modeled on today’s costs instead of future costs
  • Teams were structured around old workflows
  • KPIs measured activity, not transformation
  • Governance frameworks were legacy-first, not AI-first

This is the equivalent of judging the automobile by how well it performs without roads.


7.4. The Decision-Driving Question: Are You Judging AI on Its Current State or Its Trajectory?

Technologists tend to overestimate short-term capability but underestimate long-term convergence.
Financial leaders tend to anchor decisions to early ROI data, ignoring the compounding nature of system improvements.

The real dividing line between winners and losers in this era will be determined by one question:

Do you interpret early AI failures as a ceiling—or as the ground floor of a system still under construction?

If you believe AI’s early failures represent the ceiling:

You’ll delay or reduce investments and minimize exposure, potentially avoiding overhyped initiatives but risking structural disadvantage later.

If you believe AI’s early failures represent the floor:

You’ll invest in foundational capabilities—data quality, taxonomy, workflows, governance—knowing the step-change returns come later.


7.5. The Pattern Is Clear: AI Transformation Is Nonlinear, Not Incremental

  • Phase 1 (0–18 months): Costly. Chaotic. Overhyped. Low ROI.
  • Phase 2 (18–36 months): Data and processes stabilize. Costs normalize. Models mature.
  • Phase 3 (36–60 months): Returns compound. Transformation becomes structural. Competitors fall behind.

Most organizations are stuck in Phase 1.
A few are transitioning to Phase 2.
Almost none are in Phase 3 yet.

That’s why the market looks confused.


8. The Mature Investor’s View: AI Is Overpriced in Some Layers, Underestimated in Others

Most conversations about an “AI bubble” focus on valuations or hype cycles—but mature investors think in structural patterns, not headlines. The nuanced view is that AI contains pockets of overvaluation, pockets of undervaluation, and pockets of durable long-term value, all coexisting within the same ecosystem.

This section expands on how sophisticated investors separate noise from signal—and why this perspective is grounded in history, not optimism.


8.1. The Dot-Com Analogy: Understanding Overvaluation in Context

In 1999, investors were not wrong about the Internet’s long-term impact.
They were only wrong about:

  • Where value would accrue
  • How fast returns would materialize
  • Which companies were positioned to survive

This distinction is essential.

Historical Pattern: Frontier Technologies Overprice the Application Layer First

During the dot-com era:

  • Hundreds of consumer “Internet portals” were funded
  • E-commerce concepts attracted billions without supply-chain capability
  • Vertical marketplaces (e.g., online groceries, pet supplies) captured attention despite weak unit economics

But value didn’t disappear. Instead, it concentrated:

  • Amazon survived and became the sector winner
  • Google emerged from the ashes of search-engine overfunding
  • Salesforce built an entirely new business model on top of web infrastructure
  • Most of the failed players were replaced by better-capitalized, better-timed entrants

Parallel to AI today:
The majority of model-centric startups and thin-moat copilots mirror the “Pets.com phase” of the Internet—early, obvious use cases with the wrong economic foundation.

Investors with historical perspective know this pattern well.


8.2. The 2008 Analogy: Concentration Risk and System Fragility

The financial crisis was not about bad business models—many of the banks were profitable—it was about systemic fragility and hidden leverage.

Sophisticated investors look at AI today and see similar concentration risk:

  • Training capacity is concentrated in a handful of hyperscalers
  • GPU supply is dependent on one dominant chip architecture
  • Advanced node manufacturing is effectively a single point of failure (TSMC)
  • Frontier model research is consolidated among a few labs
  • Energy demand rests on long-term commitments with limited flexibility

This doesn’t mean collapse is imminent.
But it does mean that the risk is structural, not superficial, mirroring the conditions of 2008.

Historical Pattern: Crises Arise When Everyone Makes the Same Bet

In 2008:

  • Everyone bet on perpetual housing appreciation
  • Everyone bought securitized mortgage instruments
  • Everyone assumed liquidity was infinite
  • Everyone concentrated their risk without diversification

In 2025 AI:

  • Everyone is buying GPUs
  • Everyone is funding LLM-based copilots
  • Everyone is training models with the same architectures
  • Everyone is racing to produce the same “agentic workflows”

Mature investors look at this and conclude:
The risk is not in AI; the risk is in the homogeneity of strategy.


8.3. Where Mature Investors See Real, Defensible Value

Sophisticated investors don’t chase narratives; they chase structural inevitabilities.
They look for value that persists even if the hype collapses.

They ask:
If AI growth slowed dramatically, which layers of the ecosystem would still be indispensable?

Inevitable Value Layer #1: Energy and Power Infrastructure

Even if AI adoption stagnated:

  • Datacenters still need massive amounts of power
  • Grid upgrades are still required
  • Cooling and heat-recovery systems remain critical
  • Energy-efficient hardware remains in demand

Historical parallel: 1840s railway boom
Even after the rail bubble burst, the railroads that had been built enabled decades of economic growth. The investors who backed infrastructure, not railway speculators, won.


Inevitable Value Layer #2: Semiconductor and Hardware Supply Chains

In every technological boom:

  • The application layer cycles
  • The infrastructure layer compounds

Inbound demand for compute is growing across:

  • Robotics
  • Simulation
  • Scientific modeling
  • Autonomous vehicles
  • Voice interfaces
  • Smart manufacturing
  • National defense

Historical parallel: The post–World War II electronics boom
Companies providing foundational components—transistors, integrated circuits, microprocessors—captured durable value even while dozens of electronics brands collapsed.

NVIDIA, TSMC, and ASML now sit in the same structural position that Intel, Fairchild, and Texas Instruments occupied in the 1960s.


Inevitable Value Layer #3: Developer Productivity Infrastructure

This includes:

  • MLOps
  • Orchestration tools
  • Evaluation and monitoring frameworks
  • Embedding engines
  • Data governance systems
  • Experimentation platforms

Why is this layer lower risk?
Because technology complexity increases over time, and tools that tame complexity tend to compound in value.

Historical parallel: DevOps tooling post-2008
Even as enterprise IT budgets shrank, tools like GitHub, Jenkins, Docker, and Kubernetes grew because developers needed leverage, not headcount expansion.


8.4. The Underestimated Layer: Enterprise Operational Transformation

Mature investors understand technology S-curves.
They know that productivity improvements from major technologies often arrive years after the initial breakthrough.

This is historically proven:

  • Electrification (1880s) → productivity gains lagged by ~30 years
  • Computers (1960s) → productivity gains lagged by ~20 years
  • Broadband Internet (1990s) → productivity gains lagged by ~10 years
  • Cloud computing (2000s) → real enterprise impact peaked a decade later

Why the lag?
Because business processes change slower than technology.

AI is no different.

Sophisticated investors look at the organizational changes required—taxonomy, systems, governance, workflow redesign—and see that enterprise adoption is behind, not because the technology is failing, but because industries move incrementally.

This means enterprise AI is underpriced, not overpriced, in the long run.


8.5. Why This Perspective Is Rational, Not Optimistic

Theory 1: Amara’s Law

We overestimate the impact of technology in the short term and underestimate the impact in the long term.
This principle has been validated for:

  • Industrial automation
  • Robotics
  • Renewable energy
  • Mobile computing
  • The Internet
  • Machine learning itself

AI fits this pattern precisely.


Theory 2: The Solow Paradox (and Its Resolution)

In the 1980s, Robert Solow famously said:

“You can see the computer age everywhere but in the productivity statistics.”

The same narrative exists for AI today.
Yet when cloud computing, enterprise software, and supply-chain optimization matured, productivity soared.

AI is at the pre-surge stage of the same curve.


Theory 3: General Purpose Technology Lag

Economists classify AI as a General Purpose Technology (GPT, in the economic sense of the term), joining:

  • Electricity
  • The steam engine
  • The microprocessor
  • The Internet

GPTs always produce delayed returns because entire economic sectors must reorganize around them before full value is realized.

Mature investors understand this deeply.
They don’t measure ROI on a 12-month cycle.
They measure GPT curves in decades.


8.6. The Mature Investor’s Playbook: How They Allocate Capital in AI Today

Sophisticated investors don’t ask, “Is AI a bubble?”
They ask:

Question 1: Is the company sitting on a durable layer of the ecosystem?

Examples of “durable” layers:

  • chips
  • energy
  • data gateways
  • developer platforms
  • infrastructure software
  • enterprise system redesign

These have the lowest downside risk.


Question 2: Does the business have a defensible moat that compounds over time?

Example red flags:

  • Products built purely on frontier models
  • No proprietary datasets
  • High inference burn rate
  • Thin user adoption
  • Features easily replicated by hyperscalers

Example positive signals:

  • Proprietary operational data
  • Grounding pipelines tied to core systems
  • Embedded workflow integration
  • Strong enterprise stickiness
  • Long-term contracts with hyperscalers

Question 3: Is AI a feature of the business, or is it the business?

“AI-as-a-feature” companies almost always get commoditized.
“AI-as-infrastructure” companies capture value.

This is the same pattern observed in:

  • cloud computing
  • cybersecurity
  • mobile OS ecosystems
  • GPUs and game engines
  • industrial automation

Infrastructure captures profit.
Applications churn.


8.7. The Core Conclusion: AI Is Not a Bubble—But Parts of AI Are

The mature investor stance is not about optimism or pessimism.
It is about probability-weighted outcomes across different layers of a rapidly evolving stack.

Their guiding logic is based on:

  • historical evidence
  • economic theory
  • defensible market structure
  • infrastructure dynamics
  • innovation S-curves
  • risk concentration patterns
  • and real, measurable adoption signals

The result?

AI is overpriced at the top, underpriced in the middle, and indispensable at the bottom.
The winners will be those who understand where value actually settles—not where hype makes it appear.
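The "probability-weighted outcomes across layers" logic can be made explicit with a toy expected-value calculation. Every probability and payoff multiple below is a hypothetical placeholder; the point is the shape of the comparison, not the numbers.

```python
# Toy expected-value comparison across stack layers. Probabilities and
# payoff multiples are hypothetical, chosen only to illustrate the logic.

layers = {
    # layer: list of (probability, payoff multiple on capital invested)
    "application (thin-moat copilots)":   [(0.10, 10.0), (0.90, 0.2)],
    "middle (enterprise transformation)": [(0.50,  4.0), (0.50, 0.8)],
    "infrastructure (chips, energy)":     [(0.85,  2.0), (0.15, 0.7)],
}

for layer, outcomes in layers.items():
    ev = sum(p * payoff for p, payoff in outcomes)
    print(f"{layer}: expected multiple = {ev:.2f}x")
```

Under these assumed numbers the application layer has the lottery-ticket profile (big headline payoff, poor expected value), infrastructure is the steady compounder, and the under-adopted middle layer carries the best risk-adjusted return. That is the quantitative shape of "overpriced at the top, underpriced in the middle."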


9. The Final Thought: We’re Not Repeating 2000 or 2008—We’re Living Through a Hybrid Scenario

The dot-com era teaches us what happens when narratives outpace capability.
The 2008 era teaches us what happens when structural fragility is ignored.

The AI era is teaching us something new:

When a technology is both overhyped and under-adopted, over-capitalized and under-realized, the winners are not the loudest pioneers—but the disciplined builders who understand timing, infrastructure economics, and operational readiness.

We are early in the story, not late.

The smartest investors and operators today aren’t asking, “Is this a bubble?”
They’re asking:
“Where is the bubble forming, and where is the long-term value hiding?”

We discuss this topic and more in detail on Spotify.