Investor Gavin Baker analyzes the critical transition in AI infrastructure, arguing that 'reasoning' models bridged a hardware stagnation gap and predicting that xAI will lead the Blackwell era. He details why legacy SaaS companies face an existential margin crisis and proposes orbital data centers as the ultimate first-principles solution to power and cooling constraints.
Overview
In this deep dive into the mechanics of the AI revolution, Gavin Baker joins Patrick O'Shaughnessy to dissect the 'Great Game' being played between hyperscalers and chip manufacturers. Baker argues that the industry recently, and narrowly, avoided a stagnation period; while waiting for Nvidia's complex Blackwell chips, the emergence of 'reasoning' models (like o1) introduced new scaling laws based on verified rewards, effectively saving the progress curve. He posits that AI has fundamentally shifted the tech valuation paradigm: for the first time, being the low-cost producer matters, a dynamic currently favoring Google's TPUs but likely to shift toward xAI and Nvidia's merchant silicon in 2026.
The conversation extends beyond silicon into the existential risks for software companies. Baker warns that high-margin SaaS incumbents are repeating the mistakes of brick-and-mortar retailers by refusing to accept the lower gross margin structure of AI agents, leaving them vulnerable to disruption. The dialogue concludes with a futuristic yet first-principles case for space-based data centers and a personal reflection on investing as a competitive search for hidden truths through history and current events.
Key Points
Reasoning Models Bridged the Hardware Gap: Baker explains that without the advent of reasoning models (System 2 thinking), AI progress would have stalled between mid-2024 and the arrival of Gemini 3. The industry faced a gap where pre-training scaling required Nvidia's delayed Blackwell chips. Reasoning introduced two new scaling laws: reinforcement learning with verified rewards and test-time compute, allowing intelligence to jump from 8% to 95% on benchmarks without better base hardware. Why it matters: This suggests AI progress is no longer solely dependent on massive pre-training clusters, creating a new avenue for efficiency and capability before next-gen hardware comes online. Evidence: Had reasoning not come along, there would have been no AI progress from mid-2024 through essentially Gemini 3... Reasoning kind of bridged this like 18-month gap.
The Low-Cost Producer Paradigm Shift: Unlike the smartphone or PC eras, where differentiation drove value (Apple/Microsoft), AI is becoming a commodity production game where cost per token dictates strategy. Google is currently using its TPU infrastructure to act as the low-cost producer, suppressing competitors' margins. However, Baker predicts this advantage is temporary; as Blackwell chips scale, merchant silicon users will achieve parity, forcing Google to abandon its predatory pricing strategy. Why it matters: This economic shift fundamentally alters the competitive moat of tech giants, moving advantage from software lock-in to raw infrastructure efficiency. Evidence: AI is the first time in my career as a tech investor that being the low-cost producer has ever mattered... Google has been... sucking the economic oxygen out of the AI ecosystem.
The SaaS Gross Margin Trap: Application SaaS companies are facing an 'Innovator's Dilemma' moment comparable to retailers ignoring e-commerce. Incumbents with 80%+ gross margins are refusing to deploy AI agents that run at ~40% margins. Baker argues this reluctance creates a fatal opening for AI-native startups that are comfortable with lower margins, or for 'activist' pivots in which companies sacrifice margins now for dominance later. Why it matters: Investors should view adherence to high gross margins in SaaS as a red flag for obsolescence rather than a sign of health. Evidence: Application SaaS companies are making the exact same mistake that brick-and-mortar retailers did with e-commerce... If you are trying to preserve an 80% gross margin structure, you are guaranteeing that you will not succeed at AI.
The First-Principles Case for Space Data Centers: Addressing energy constraints, Baker outlines a radical but logical solution: orbital compute. Space offers 30% higher solar intensity (24/7 power without storage), free cooling via radiation to the dark side (eliminating massive HVAC costs), and faster data transmission via lasers in a vacuum compared to fiber optics on Earth. He links this to the convergence of Musk's empire: Starship provides the lift, Starlink the comms, and Tesla the batteries. Why it matters: This represents the ultimate supply-side unlock for compute, potentially bypassing terrestrial power grid bottlenecks and regulatory hurdles. Evidence: In every way, data centers in space from a first principles perspective are superior to data centers on earth.
xAI's Infrastructure Advantage: Baker predicts xAI will deploy the first Blackwell-trained model, beating Google and OpenAI. This is attributed to Elon Musk's ability to build data centers faster than anyone else (Colossus), allowing xAI to debug and optimize the complex new Nvidia clusters first. This speed confers a compounding advantage in model quality and cost efficiency. Why it matters: Speed of infrastructure deployment is becoming the primary proxy for model release cadence and leadership. Evidence: I think the first Blackwell model will come from xAI... according to Jensen, no one builds data centers faster than Elon.
The Return on Investment (ROI) Tipping Point: Moving beyond hype, Baker points to non-tech Fortune 500 companies like C.H. Robinson showing concrete ROI from AI. C.H. Robinson improved quote turnaround from 45 minutes to seconds and coverage from 60% to 100%, driving a 20% stock jump. This signals the transition of AI from an R&D expense to a deflationary productivity driver affecting P&L statements. Why it matters: Tangible earnings impacts in 'boring' industries validate the capex spend and mitigate fears of an AI bubble burst. Evidence: This is the first quarter where we had Fortune 500 companies outside of the tech industry give specific quantitative examples of AI-driven uplift.
Sections
Meta-Level Observations
Synthesized patterns regarding industry dynamics and geopolitical implications.
The 'Verified Rewards' Flywheel: Reasoning models allow labs to verify outcomes (did the code run? did the math balance?), creating a data flywheel that didn't exist with creative writing models. This reintroduces 'increasing returns to scale' for labs that can harness this feedback loop.
Geopolitics of Chip Lag: The gap between Western frontier models and Chinese open source is set to widen significantly because Chinese labs cannot access Blackwell-class compute. While they could emulate older chips, the specific complexity of Blackwell makes the 'compute gap' insurmountable for the next generation.
The 'Usefulness' Handoff: The industry is nearing a point of diminishing returns on raw intelligence for consumer applications. The value curve must shift from 'smarter models' to 'longer context/reliable agents' (usefulness) to bridge the gap before AI can achieve scientific breakthroughs (curing cancer).
Future Forecasts
Specific predictions made by Gavin Baker regarding technology and market movements.
xAI will release the first model trained on Nvidia Blackwell chips, likely in early 2026.
Google will eventually bring its silicon design entirely in-house, moving away from its partnership with Broadcom to capture the ~50% gross margins currently paid to them.
A 'Bear Case' for AI Compute: Edge AI on smartphones (Apple) becomes 'good enough' (115 IQ at 30 tokens/sec), significantly reducing the demand for massive cloud-based inference.
Lessons for Investors & Builders
Takeaways derived from Baker's experience and observations.
Investing is the search for hidden truths found at the intersection of history and current events. True alpha comes from identifying these truths before the pari-mutuel system of the market prices them in.
Empathy and humility are critical professional traits. Baker cites his time cleaning toilets as a housekeeper as pivotal in shaping how he treats founders and service workers, contrasting it with the arrogance often found in finance.
Don't judge the capability of a technology based on its free tier. Most skeptics judge AI based on '10-year-old' free models rather than the 'adult' paid versions.
Memorable Quotes
Verbatim excerpts capturing the essence of the conversation.
To a large degree OpenAI runs on Twitter vibes... I just think AI happens on X.
With software, anything you can specify, you can automate. With AI, anything you can verify, you can automate.
I'm amazed at how many famous and august investors are reaching really definitive conclusions about AI... based on the free tier.
Whatever AI needs to keep growing and advancing, it gets. Have you ever seen public opinion change so fast in the United States on any issue as nuclear power?