January 1, 2026 · Podcast · 1h 18min
The Intelligence Curse: When AI Replaces Human Labor, Who Holds the Power?
What happens when intelligence itself becomes a natural resource, and a small group controls the supply?
The Conversation
This is a cross-post from the Future of Life Institute podcast, hosted by Gus Docker, featuring Luke Drago, co-author of The Intelligence Curse and co-founder of Workshop Labs. The conversation circles a single uncomfortable question: even if we solve AI alignment perfectly, could the economic structure around AI still produce dystopian outcomes? Drago’s answer is a qualified yes, and he has a framework for thinking about why.
The Intelligence Curse
Drago’s central thesis draws a direct parallel to the resource curse in economics. In resource-rich countries like Nigeria or Venezuela, an extractive elite can maintain power without democratic legitimacy or a productive citizenry, simply because it sits on top of oil. The population’s labor and consent become irrelevant to the ruling class’s wealth.
The intelligence curse applies the same logic to AI. If AI systems can perform most economically valuable work, human labor loses its bargaining power. Whoever controls the AI infrastructure, whether governments, corporations, or a handful of labs, gains a dangerous asymmetry of power. The critical difference from previous technological revolutions: earlier automation always left humans with a comparative advantage in cognitive tasks. This time, the cognitive tasks themselves are being automated.
“In a future where AI systems power the economy and human labor is no longer much of a bargaining chip, whoever controls the AI could have a dangerous level of power.”
Drago is careful to distinguish this from the alignment problem. Even a perfectly aligned AI, one that does exactly what its owner wants, creates this dynamic. The problem isn’t that the AI rebels; it’s that it obeys too well, concentrating capability in too few hands.
Pyramid Replacement vs. Pyramid Extension
Drago introduces a useful framework for thinking about how AI integrates into economic structures: pyramid replacement versus pyramid extension.
Pyramid replacement is the default trajectory. You take an existing organizational pyramid, a company with layers of management and workers, and replace layers from the bottom up with AI. Entry-level analysts go first, then middle managers, then increasingly senior roles. The pyramid shrinks, concentrating power and wealth at the top.
Pyramid extension is the alternative. Instead of replacing people within existing structures, AI extends what individuals can do, creating new kinds of one-person or small-team enterprises that weren’t previously viable. The pyramid gets wider at the base rather than narrower.
The distinction matters because it determines whether AI creates an economy of a few massive AI-powered conglomerates or an economy of millions of AI-augmented individuals and small teams. Drago argues the default, without intervention, is replacement. Extension requires deliberate design choices.
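The distinction lends itself to a quick back-of-the-envelope illustration. Here is a minimal Python sketch of the two trajectories; the layer sizes, function names, and venture counts are all hypothetical (not figures from the episode), chosen only to show how replacement collapses headcount toward the top while extension widens the base:

```python
# Toy model of the two trajectories. All numbers and names are illustrative
# assumptions, not data from the conversation.

def replace_bottom_layers(pyramid: list[int], layers_automated: int) -> list[int]:
    """Pyramid replacement: automate whole layers from the bottom up,
    leaving human headcount only in the upper layers."""
    return pyramid[: max(0, len(pyramid) - layers_automated)]

def extend_base(pyramid: list[int], new_ventures: int, avg_team_size: int) -> list[int]:
    """Pyramid extension: AI makes new small or solo ventures viable,
    widening the base instead of shrinking it."""
    return pyramid + [new_ventures * avg_team_size]

# Hypothetical firm, layers listed from top (executives) to bottom (entry level).
corp = [10, 100, 1_000, 10_000]  # 11,110 people in total

print(replace_bottom_layers(corp, 2))  # [10, 100] -- 110 jobs left, all near the top
print(extend_base(corp, 5_000, 2))     # [10, 100, 1000, 10000, 10000] -- a wider base
```

Under these made-up numbers, two rounds of bottom-up replacement eliminate 99% of the headcount while leaving the apex untouched; the same AI capability, deployed as extension, adds as many new roles at the base as the old entry-level layer held.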
Why Open Source Isn’t Enough
Drago supports open-source AI but is clear-eyed about its limitations as a solution to power concentration. His argument has several layers:
Open source helps with access but not with power dynamics. Even if models are open, the compute infrastructure, the data pipelines, and the deployment infrastructure remain concentrated. Running a frontier model requires resources that most individuals and small organizations don’t have.
The “open source as democratization” narrative can be a cover. Some companies release open models strategically, not out of democratic commitment but to commoditize a complement, to undermine competitors, or to create ecosystem lock-in. Drago doesn’t name names, but the implication is clear.
True democratization requires more than model weights. It requires infrastructure that lets ordinary people run, customize, and control AI systems without depending on a handful of cloud providers. This is part of what Workshop Labs is building.
The Agent Alignment Problem (User-Level)
The conversation takes an interesting turn when discussing personal AI agents. Drago distinguishes between two types of alignment:
- Species-level alignment: making sure AI doesn’t destroy humanity (the classic alignment problem)
- User-level alignment: making sure your personal AI agent actually works for you, not for its provider
The second problem is arguably more immediate and tractable but is getting less attention. If your AI agent is provided by a company whose business model depends on advertising or data monetization, that agent has a structural conflict of interest. It may be perfectly “aligned” in the technical sense while systematically working against your interests.
Drago uses a vivid thought experiment: imagine a future brain-computer interface where half your cognitive processing is in the cloud, provided by a single vendor. They start you on a cheap plan. Then comes the premium tier. Then the ads. Then the price increases. The technology that saved your life becomes a tool for rent extraction. The scenario is extreme, but it crystallizes the incentive problem.
“One of the places I really agree with Sam Altman on is this concept of AI privilege. The idea that actually if you’re giving this much information to a system, it probably shouldn’t be used against you.”
The Apple Model and Incentive Design
Drago repeatedly returns to Apple as a model for AI companies. His argument: Apple makes money selling devices, which means Apple’s financial incentives align with the user wanting the device to work well. This structural alignment explains why your iPhone doesn’t serve you native ads while your social media feeds do.
The lesson for AI: companies that make money from users directly (through subscriptions or hardware) will build AI that genuinely serves users. Companies that make money from advertisers will build AI that serves advertisers while appearing to serve users. This isn’t a moral judgment but a structural prediction.
Workshop Labs, Drago’s company, is explicitly building on this model. They want to be the provider of AI tools where the user knows the AI works for them because the business model ensures it. They’re betting that users will pay for AI they can trust, especially as AI gets access to more personal data and more decision-making authority.
The Default Path Is Closing
The conversation’s most striking section is Drago’s advice for young people. His argument inverts conventional career wisdom:
The “safe” path, getting into a prestigious college, joining a Fortune 500 company, climbing the corporate ladder, is now the risky path. Those entry-level positions at consulting firms, investment banks, and large corporations are the first targets for automation. If your job is similar to what a thousand other people at your company do, you are “extremely automatable.”
The “risky” path, startups, moonshot projects, N-of-1 careers where you’re the only person doing what you do, is now paradoxically safer. These roles require judgment, adaptability, and unique positioning that resist automation.
“Those safer jobs are the first target for automation because companies with 500,000 people on their payroll are going to want to cut some of that payroll. If you are an N equals one person at a company, if you do an important job that nobody can replace by virtue of being there, you are much safer.”
Drago’s urgency here is palpable. He argues the window for these moonshots is still open but narrowing. The people who pivot now, while the technology is still maturing, have a first-mover advantage. Those who wait for the “safe” signal that it’s time to change will find the opportunities already taken.
Afterthoughts
This conversation is less about technical AI risk and more about political economy, the structures of power and incentive that determine who benefits from technological change. A few observations worth sitting with:
- The intelligence curse framework is powerful because it doesn’t require AI to be superintelligent or misaligned to produce bad outcomes. Ordinary, well-functioning AI in the wrong economic structure is sufficient.
- The pyramid replacement vs. extension distinction offers a concrete way to evaluate AI products and policies: does this tool make existing power structures more concentrated, or does it enable new participants?
- The gap between “open source” and “democratized” is larger than most AI discourse acknowledges. Access to model weights without access to compute, data, and infrastructure is a hollow form of openness.
- The career advice section, while aimed at young people, implies something broader: the entire middle of the economy is at risk of hollowing out, and individual responses (moonshots, N-of-1 careers) are necessary but insufficient without structural changes.
- The most honest moment may be Drago admitting that even if Workshop Labs succeeds in everything it sets out to do, the consulting jobs are still going away. The best case isn’t preserving the status quo; it’s building something better to replace it.