The speaker argues that the current AI cycle is fundamentally different from the dot-com era because distribution infrastructure already exists and big tech is subsidizing the CapEx build-out. While companies are staying private significantly longer, trapping value outside public markets, the collapse in AI input costs and the speed of consumer adoption point to a massive forthcoming shift in value capture.
Overview
This briefing analyzes the current state of the technology growth market, emphasizing the divergence between public and private market opportunities. The speaker, a growth-stage investor, posits that technology has "swallowed the market," with US-based tech companies dominating the top market cap positions. A central theme is the structural shift in capital markets: companies now stay private for over 14 years, expanding the private market capitalization from $500 billion to $3.5 trillion in a decade.
The analysis delves deeply into the AI infrastructure build-out, characterized as a massive, front-loaded investment by hyperscalers (spending at a roughly $400 billion annual run rate) that effectively subsidizes the ecosystem for application-layer startups. Unlike the dot-com bubble, this cycle benefits from immediate global distribution via existing cloud and mobile infrastructure. The discussion concludes with investment strategy, advocating a barbell approach: balancing high-momentum growth companies with "high variance" early-stage research teams, while predicting that energy and cooling will be the critical physical constraints of the next decade.
Key Points
The Private Market Liquidity Shift: High-growth technology companies are staying private significantly longer than in previous cycles, often delaying IPOs for 14+ years. This has caused the aggregate market cap of private unicorns to swell sevenfold over the last decade, moving the epicenter of high-growth returns away from public exchanges. Why it matters: Retail investors and public market funds are increasingly excluded from the high-growth phase of the technology lifecycle, necessitating access to private markets for alpha. Evidence: If you go back 10 years ago, that whole private market cap of three and a half trillion was like 500 billion. So over the last 10 years, the market cap of these private companies is like seven X.
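As a rough sanity check on those figures, a quick back-of-the-envelope sketch (my own arithmetic, not from the talk) of what a $500 billion to $3.5 trillion move over ten years implies:

```python
# Quick check of the private-market growth claim: $0.5T -> $3.5T over ten years.
# The start/end figures are from the talk; the implied growth rate is derived arithmetic.

start_cap = 0.5e12     # aggregate private unicorn market cap ~10 years ago ($)
end_cap = 3.5e12       # aggregate private unicorn market cap today ($)
years = 10

multiple = end_cap / start_cap                            # 7.0x, matching the "seven X" quote
implied_cagr = (end_cap / start_cap) ** (1 / years) - 1   # ~21% compounded annually

print(f"Multiple over the decade: {multiple:.1f}x")
print(f"Implied CAGR: {implied_cagr:.1%}")
```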
The Big Tech Infrastructure Subsidy: Unlike the telecom boom where fragile startups bore the cost of infrastructure, the current AI build-out is funded by the strongest balance sheets in history (Google, Microsoft, Meta). These incumbents are absorbing the capital risk of a potential overbuild, creating a robust layer for startups to build upon. Why it matters: This reduces the capital intensity risk for application-layer startups and ensures infrastructure stability regardless of short-term market volatility. Evidence: It turns out they're the best companies, you know, probably ever created... and they can bear potential capacity overbuild... The best part about this is it's mostly the large tech companies that are bearing the burden of the build out.
Hyper-Deflation of Input Costs: The cost of accessing frontier AI models has dropped by over 99% in two years, a rate exceeding Moore's Law, while capabilities double roughly every seven months. This creates a powerful economic tailwind for developers, as raw material costs collapse while product quality surges. Why it matters: Startups can anticipate margin expansion over time not through price hikes, but through the rapid commoditization of their primary cost driver (compute/tokens). Evidence: Just trust me when I tell you the cost of the inputs, of accessing these models has declined 99% or a little more than 99% over the last two years.
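To put that decline in context, a minimal sketch of the implied annual rate, assuming a smooth exponential decay (the 99% figure is the speaker's; the Moore's Law benchmark of cost halving every two years is the standard rule of thumb):

```python
# Back-of-the-envelope comparison of the claimed token-price decline vs. Moore's Law.
# Assumes a constant exponential decay rate over the period.

total_decline = 0.99          # price drops 99% over the period (speaker's figure)
period_years = 2.0

# Implied constant annual decay: remaining_fraction = (1 - annual_decay) ** years
remaining = 1.0 - total_decline                          # 1% of the original price remains
annual_decay = 1.0 - remaining ** (1.0 / period_years)   # ~90% cheaper each year

# Moore's Law benchmark: cost per unit of compute roughly halves every two years,
# i.e. ~29% cheaper per year.
moore_annual_decay = 1.0 - 0.5 ** (1.0 / 2.0)

print(f"Implied annual price decline: {annual_decay:.0%}")        # ~90%
print(f"Moore's Law annual decline:   {moore_annual_decay:.0%}")  # ~29%
```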
Pre-Existing Distribution Rails Accelerate Adoption: The comparison to the dot-com crash is flawed because the distribution network (internet/mobile) is already complete. ChatGPT reached massive scale 5.5x faster than Google Search because it didn't need to wait for broadband or hardware adoption; it simply rode the existing cloud infrastructure. Why it matters: Demand signals are immediate and verifiable, de-risking investments compared to previous cycles where demand was theoretical pending infrastructure rollout. Evidence: The time to get to 365 billion searches on ChatGPT was two years. The time for Google to get to 365 billion searches was 11 years. So it's five and a half times longer.
Consumer Surplus and Monetization Lag: There is a massive gap between the value users derive from AI (surplus) and what they currently pay. Most value is currently accruing to the end user, but the 'stickiness' of the product suggests companies have significant pricing power they have not yet exercised. Why it matters: Current revenue figures likely underestimate the long-term earnings potential of AI leaders once they optimize pricing models and segmentation. Evidence: If you had known that users were willing to pay $100, $200 like they are with ChatGPT... God only knows what the market of Google would be today.
Sections
Market & Structural Insights
Meta-level observations regarding the disconnect between public and private market dynamics and business model evolution.
The 'Dot-Com' comparison is structurally flawed due to distribution readiness: The 2000s crash was driven by a lag between infrastructure build-out and user adoption. Today, the 5 billion+ user distribution network (smartphones/cloud) exists before the new technology (AI) is deployed, allowing demand to materialize instantly rather than speculatively.
Public markets are becoming 'growth deserts': With only ~5% of public software companies forecasting >25% growth, the traditional avenue for wealth generation via high-growth tech has almost entirely shifted to private markets, structurally locking retail investors out of that growth and making private-market allocation a necessity.
AI Business Models will mimic Utilities: The long-term view is that AI intelligence becomes ubiquitous and metered like electricity or Wi-Fi—essential, constantly running, and eventually invisible in the cost structure, rather than a discrete 'add-on' purchase.
Future Forecasts
Forward-looking statements regarding infrastructure bottlenecks and economic shifts.
Energy bottlenecks will be solved by a nuclear renaissance within the next 5 years, driven by big tech co-locating data centers with reactors.
Once energy is solved, 'Cooling' will emerge as the primary hard-tech bottleneck for data centers, sparking a wave of innovation in thermal management.
AI pricing models will evolve to capture consumer surplus through price discrimination—offering low-cost/ad-supported tiers for the masses and high-cost ($200-$300/month) subscriptions for power users.
Risks & Pitfalls
Potential downsides and areas of caution for investors and operators.
Low switching costs for B2B API wrappers: Businesses built solely on accessing a model (e.g., developers hitting an API) have low stickiness compared to consumer apps or integrated enterprise workflows.
Surplus leakage to end customers: Due to intense competition and difficulty in measuring 'task completion' value, AI companies may fail to capture the economic value they create, passing the savings entirely to the customer (similar to the steam engine).
The Gross Margin Tolerance Thesis: Investors are currently lenient on lower gross margins for AI application companies. The thesis is that competitive pressure among model providers (OpenAI, Anthropic, Google) will continue to drive inference costs down, naturally repairing application margins over time without needing to raise prices. Why it matters: Valuation frameworks for AI SaaS are adjusting to prioritize growth and retention over immediate gross margin perfection, contingent on model commoditization. Evidence: I'd say relative to like mature SaaS apps, we probably are a little bit more lenient on assessing a company's gross margin today because we strongly believe that their input costs are going to go down over time.
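A minimal sketch of the margin-repair argument, holding the application's price flat while inference costs decay; the price, starting cost, and rate of decline are hypothetical assumptions, not figures from the talk:

```python
# Illustrative sketch: gross margin expands as inference (token) input costs fall,
# with no price increase. All numbers are hypothetical.

price_per_seat = 100.0        # $/month charged to the customer (assumed)
inference_cost = 60.0         # $/month of model/token costs at launch (assumed)
annual_cost_decline = 0.50    # assume input costs halve each year

for year in range(4):
    cost = inference_cost * (1 - annual_cost_decline) ** year
    gross_margin = (price_per_seat - cost) / price_per_seat
    print(f"Year {year}: inference cost ${cost:5.2f} -> gross margin {gross_margin:.0%}")
# Year 0: 40% margin; Year 3: ~92% margin, without raising prices.
```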
Workflow Integration as the True Moat: Raw model access is not sticky; switching model providers is just an API call. True durability comes from 'boring' software principles: deep workflow integration, rules engines, and proprietary data moats (e.g., medical scribing, customer support rules). Why it matters: Investments based solely on model performance are fragile; durability requires embedding the AI into complex, hard-to-rip-out business processes. Evidence: If there's a new coding model that comes along that's better... our coding companies will just switch and it's pretty easy to do because it's an API call... The stickiness comes in the form like in software... integrations, rules engines, workflows.
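To illustrate the switching-cost point, a minimal sketch (all class, function, and provider names are hypothetical) of why the model layer is easy to swap while the workflow layer is where the stickiness lives:

```python
# Minimal sketch of the moat argument: the model provider sits behind one interface,
# so swapping it is a one-line change; the rules and workflow around it are the sticky part.

from dataclasses import dataclass
from typing import Callable

# The "model" layer: each provider is just a function from prompt -> completion.
ModelFn = Callable[[str], str]

def provider_a(prompt: str) -> str:   # stand-in for any hosted model API
    return f"[provider A] {prompt}"

def provider_b(prompt: str) -> str:   # a newer, better model
    return f"[provider B] {prompt}"

@dataclass
class SupportAgent:
    """The sticky part: rules, workflow, and proprietary context around the model."""
    model: ModelFn
    escalation_rules: dict[str, str]   # e.g. route "refund" tickets to a human team

    def handle(self, ticket: str) -> str:
        for keyword, action in self.escalation_rules.items():
            if keyword in ticket.lower():
                return f"escalate: {action}"   # workflow logic, independent of the model
        return self.model(f"Draft a reply to: {ticket}")

agent = SupportAgent(model=provider_a, escalation_rules={"refund": "billing team"})
agent.model = provider_b   # swapping the model is trivial
print(agent.handle("My invoice is wrong"))
```

The escalation rules, integrations, and accumulated context are what a customer would have to rebuild elsewhere; the model binding itself is a one-line change.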