Skip to content
← Back to Home

January 22, 2026 · Podcast · 2h 25min

Nathan Labenz AMA: Fine-Tuning's Fate, Preparing for AGI, and the UBI Question

#AGI Timeline#Fine-Tuning#Future of Work#AI Safety#UBI

Fine-tuning isn’t dead yet, but it’s on hospice care. That’s the through-line of Nathan Labenz’s second AMA on Cognitive Revolution, where the host drops his interviewer role to field listener questions head-on. What makes this episode worth your time isn’t the individual answers but the coherence of Nathan’s worldview: someone who genuinely believes AGI is 2-4 years away and is arranging his life accordingly, from investment portfolio to how he talks to his kids about AI.

The Episode

This is a solo AMA format. Nathan works through listener-submitted questions (which he notes were more interesting and technical than the AI-generated ones from ChatGPT and Claude, which he found “sycophantic” and too focused on personal reflections). The conversation spans technical AI questions, personal finance, parenting, and societal preparation for AGI. He opens with an emotional update on his son Ernie’s cancer treatment (halfway through chemotherapy, responding well) before diving into the questions.

Fine-Tuning: Dying But Not Dead

Nathan’s take on fine-tuning is nuanced. The trend is clear: fine-tuning is becoming less necessary as frontier models improve. But it’s not dead yet, and may not fully die for a while.

Where fine-tuning still matters:

  • Formatting and structured output. When you need a model to consistently output a specific JSON schema or follow a rigid format, fine-tuning still beats prompting. Nathan has firsthand experience with this from his work at Waymark, where they fine-tuned models for video script generation in a specific structure.
  • Classification tasks. Simple yes/no or category classification on domain-specific data still benefits from fine-tuning, especially when cost matters. A fine-tuned smaller model can handle what would otherwise require an expensive frontier model.
  • Cost optimization. Fine-tuning a smaller model to replicate a larger model’s behavior on a specific task remains a valid cost-reduction strategy, even if the ceiling keeps rising.

Where fine-tuning is dying:

  • Knowledge injection. This was never a great use case and it’s getting worse. Models’ base knowledge keeps expanding, and RAG handles domain-specific knowledge better.
  • General capability improvement. You simply can’t fine-tune your way to better reasoning. The frontier models are pulling away too fast.
  • Behavioral alignment. Prompt engineering and system prompts have gotten good enough that most behavioral tuning can happen at inference time.

Nathan references the emergent misalignment paper, where researchers found that fine-tuning models on seemingly benign tasks (like writing insecure code) could cause unexpected behavioral changes in unrelated domains. This suggests fine-tuning is a blunter instrument than people realize, and the risk-reward calculus is shifting toward prompting and RAG.

“Fine-tuning is not dead, but it is on a clear downward trajectory in terms of the percentage of use cases where it’s the right answer.”

Continual Learning: Promise and Peril

The discussion of fine-tuning naturally flows into continual learning, where Nathan is cautiously optimistic but flags serious concerns.

The core idea is appealing: models that learn from their deployment experience, adapting to user preferences and domain knowledge over time. Nathan sees this as potentially the most transformative capability, the thing that could make AI feel truly personal rather than generic.

But the risks are substantial. Nathan draws an analogy to the emergent misalignment findings: if even targeted fine-tuning can cause unexpected behavioral shifts, continual learning at scale could produce unpredictable cascading effects. He’s particularly worried about:

  • Value drift. A model that continuously adapts might gradually shift away from its safety training without anyone noticing until it’s too late.
  • Adversarial exploitation. If users can influence a model’s ongoing training through their interactions, that creates an attack surface for intentional manipulation.
  • Evaluation difficulty. How do you benchmark a model that’s different for every user? Traditional eval frameworks break down.

Nathan’s bottom line: continual learning will happen and will be enormously valuable, but the safety community needs to get ahead of it now rather than after deployment.

Talking to “Normal People” About AI

One of the more practical segments addresses how to discuss AI with people who aren’t in the tech bubble. Nathan’s approach has evolved significantly.

What doesn’t work:

  • Leading with timelines (“AGI in 3 years!”) immediately puts people in a defensive posture
  • Technical demonstrations that seem impressive to you but confusing to them
  • Doom-and-gloom framing that makes people tune out

What works:

  • Starting with their specific problems and showing how AI can help right now
  • Being honest about what you don’t know
  • Acknowledging that being uncertain about the future is perfectly rational
  • Letting them discover capabilities on their own with gentle guidance

Nathan’s key insight: most people aren’t resistant to AI out of ignorance. They’re resistant because the implications are frightening and they don’t have a framework for processing them. Meeting people where they are emotionally matters more than getting them technically up to speed.

“People don’t need to understand transformers. They need to understand that the thing they spend 40 hours a week doing might not exist in a few years, and they need help processing that.”

Personal Risk Preparation

This section is where Nathan gets most concrete and personal. He’s asked how he’s preparing for an AGI-disrupted world, and he answers with surprising specificity.

Financial preparation:

  • Diversifying away from purely human-capital-dependent income streams
  • Investing in physical assets (real estate) that retain value regardless of AI progress
  • Keeping a longer financial runway than he normally would, because traditional career fallbacks may not exist
  • Being cautious about long-term commitments (mortgages, career paths) that assume continuity

Skills preparation:

  • Staying as close to the frontier as possible, not to compete with AI but to be useful in directing it
  • Learning to be an effective AI manager/orchestrator rather than a direct producer
  • Building relationships and reputation now, while they still compound normally

Mental preparation:

  • Accepting radical uncertainty as a permanent state rather than a temporary discomfort
  • Finding identity and meaning outside of professional accomplishment
  • Practicing “scenario planning” for dramatically different futures

Nathan is candid that his preparation looks different from what he’d recommend to others because he has unusual information advantages. But his meta-advice is universal: don’t assume the next 10 years will look like the last 10 years.

Investing Around AI Safety

Nathan shares his investment philosophy, which centers on a specific thesis: if AI goes well, most assets will appreciate enormously; if it goes badly, few assets will matter. This creates an asymmetric investment case.

His approach:

  • Overweight AI-adjacent equities. Not because he’s trying to pick winners, but because in the “AI goes well” scenario, these benefit most.
  • Maintain real asset exposure. In disruption scenarios, physical assets (land, housing) retain relative value when digital/financial assets might not.
  • Avoid pure human-capital plays. Businesses whose value depends entirely on human labor are the worst positioned for either scenario.
  • Keep cash reserves higher than conventional wisdom suggests. Optionality has extreme value in uncertain times.

He specifically mentions being interested in companies working on AI safety and alignment, not just as an ethical priority but as an investment thesis: if alignment becomes the bottleneck (which he thinks is likely), companies solving it will be enormously valuable.

Job Displacement Timelines

Nathan offers his most specific predictions on job disruption, breaking it down by sector:

Near-term displacement (1-3 years):

  • Customer service and basic support roles
  • Data entry and routine analysis
  • Content generation for marketing/SEO
  • Basic legal research and document review

Medium-term displacement (3-5 years):

  • Junior software engineering (but with a major caveat: software demand may expand 10-100x, potentially sustaining employment even as per-person productivity skyrockets)
  • Accounting and bookkeeping
  • Medical diagnostics (AI-assisted, not AI-replaced)
  • Translation and localization

Longer-term or more complex (5-10 years):

  • Senior engineering and architecture roles
  • Creative work requiring genuine novelty
  • Physical trades (plumbing, electrical) which require robotics advancement
  • Roles with heavy relationship/trust components

Nathan pushes back on Dwarkesh Patel’s more aggressive displacement timelines, particularly the claim that most knowledge work will be automated within 2-3 years. His argument: the demand elasticity for different services varies enormously. He uses dentistry as a vivid example, nobody wants more dental services, so if AI makes dentistry cheaper, the market contracts. But software is the opposite: there’s near-infinite latent demand, so cheaper production could massively expand the market.

“Dentistry is my example of the most extreme version. I want zero dental services. But software? We could easily do 100x as much software production and there’d still be unmet demand.”

Early Childhood AI Literacy

Nathan discusses introducing his kids (ages approximately 4-6) to AI, which he approaches with surprising restraint given his personal immersion.

His principles:

  • Let kids lead with curiosity rather than pushing AI on them
  • Use AI as a creative collaborator, not a replacement for thinking
  • Be honest about what AI is and isn’t (“it’s a very smart pattern matcher, not a friend”)
  • Monitor for over-reliance or emotional attachment
  • Focus on the skills that will compound regardless of AI: critical thinking, emotional intelligence, creativity

He shares a specific anecdote about his kids using ChatGPT to generate story ideas, then being genuinely delighted when the AI suggested something they hadn’t thought of. Nathan’s takeaway: kids are naturally good at treating AI as a tool because they don’t have the baggage of professional identity wrapped up in the tasks AI can do.

UBI: Not If, But When and How

Nathan stakes a clear position: UBI or something like it becomes inevitable if AGI arrives on his expected timeline (2-4 years to AGI-level systems, longer for full economic displacement).

His reasoning:

  • The transition period between “AI can do most jobs” and “new equilibrium emerges” will be turbulent enough to require direct support
  • Traditional retraining programs won’t work when the target keeps moving
  • The political pressure from mass displacement will override ideological objections to UBI
  • The economic surplus from AI productivity should make UBI affordable

But he’s skeptical of UBI as a permanent solution. His longer-term view is closer to “universal high income” funded by AI-generated abundance, where the question shifts from “can we afford it?” to “how do we distribute it without destroying social cohesion?”

A Few Observations

This AMA works because Nathan isn’t performing expertise. He’s thinking out loud, and he’s willing to be specific where most commentators hedge. A few things that stay with you:

  • The fine-tuning analysis is genuinely useful because it’s grounded in practical experience, not theoretical positioning. Nathan has actually shipped fine-tuned models and knows where the technique helps and where it’s theater.
  • His personal AGI preparation is refreshingly concrete. Most people in AI either refuse to think about implications or catastrophize. Nathan is calmly restructuring his life around his beliefs, which is a rare form of intellectual honesty.
  • The dentistry vs. software demand elasticity framework for thinking about job displacement is one of the most useful mental models for this conversation. Not all automation has the same economic effect, and the distinction matters enormously for predicting which sectors contract vs. expand.
  • His candor about his son’s cancer treatment, woven into an episode about AI timelines, is a quiet reminder that the people thinking hardest about the future are also living fully in the present.
Watch original →