January 20, 2026 · Interview · 28min
Andrew Ng: I Won't Hire Engineers Who Can't Use AI, and AGI in Two Quarters Is Pure Hype
Andrew Ng draws a line in the sand: he won’t hire a software engineer who can’t use AI tools in a sophisticated way, and he extends that standard to marketers and HR professionals too. But the real provocation isn’t his hiring bar. It’s his argument that companies deliberately lowering the definition of AGI are creating a chain of real-world harm, from misguided career decisions by high school students to billions in misallocated corporate capital.
In this Davos 2026 interview with India’s Economic Times, Ng covers an unusual amount of ground in 28 minutes: why the “thousand flowers bloom” AI strategy fails, how China’s open-source models are becoming soft power instruments, and why US immigration policy is an “awful unforced error.” Through it all, his central message is pragmatic: AGI is nowhere close, but AI is already powerful enough to reshape entire industries. The question is whether people and nations will upskill fast enough.
The New Hiring Bar: Code or Get Left Behind
Ng’s position is unambiguous. He would not hire a software engineer today who doesn’t know how to use AI tools “in a very sophisticated way.” What’s less obvious is how far he extends this: when hiring marketers or HR professionals, he strongly prefers candidates who can write software using AI to enhance their own work.
“Frankly today, I would not hire a software engineer that does not know how to use AI tools in a very sophisticated way.”
The productivity gap between AI-skilled and non-AI-skilled workers is “dramatically” widening. For India’s massive IT services and outsourcing industry, this translates to existential risk. Ng frames it starkly: these are smart, hardworking people the world has trusted for decades, but if they don’t upskill fast enough, the job displacement “could be significant.”
AGI: A Word That Has Lost Its Meaning
Ng agrees with Yann LeCun that LLMs alone aren’t a path to AGI. But he goes further: “You can plug in any technology that exists today and say it by itself is not a path to AGI. That will be a true statement.” Achieving AGI will require breakthroughs “that none of us really know what exactly it is.”
His definition of AGI sets a high bar: AI that can do any intellectual task a human can, from spending five years writing a PhD thesis to learning to drive a truck through a jungle in minutes. By that standard, we’re very far away.
The more interesting argument is about the damage done by redefining AGI downward. Companies lower the bar to claim proximity, and this creates a specific harm chain:
- High school students make career decisions based on the belief that AGI will soon obsolete certain professions
- CEOs make large capital allocation decisions assuming AI capabilities that don’t yet exist
- The public develops inflated expectations about AI’s actual trajectory
“I’ve seen this mislead high school students that are now making different career decisions because they think AGI will obsolete certain careers.”
Yet Ng insists this isn’t a bearish view. Even without AGI, there’s enough work in business process automation and workflow re-engineering to keep everyone busy for a decade.
Why “Let a Thousand Flowers Bloom” Fails
Through his enterprise consulting work at AI Aspire (with partner Chrissy Tan), Ng has observed a consistent pattern: the most common corporate AI strategy, bottom-up innovation, mostly doesn’t work.
The failure mode is specific. Someone identifies a small piece of the business and makes it 10x more efficient with AI. That’s nice, but because the piece is small, the overall gain is a 5%-10% incremental improvement. These are point solutions that don’t compound.
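The arithmetic behind that ceiling is essentially Amdahl’s law, though Ng doesn’t invoke it by name: a 10x improvement to a step that accounts for only 5-10% of the overall workflow caps the end-to-end gain at roughly 5-10%. A minimal sketch of the bound (our illustration, not from the interview):

```python
# Back-of-the-envelope Amdahl's-law-style bound: speed up one step,
# then measure what happens to the whole workflow.

def overall_speedup(step_fraction: float, step_speedup: float) -> float:
    """Make one step `step_speedup`x faster; return the whole-workflow speedup."""
    return 1.0 / ((1.0 - step_fraction) + step_fraction / step_speedup)

for p in (0.05, 0.10, 0.25):
    gain = overall_speedup(p, 10.0) - 1.0
    print(f"step is {p:.0%} of workflow, made 10x faster -> overall gain ~ {gain:.1%}")
# step is 5% of workflow, made 10x faster -> overall gain ~ 4.7%
# step is 10% of workflow, made 10x faster -> overall gain ~ 9.9%
# step is 25% of workflow, made 10x faster -> overall gain ~ 29.0%
```

Only when AI touches a large fraction of the end-to-end process, the workflow re-engineering Ng describes below, does the overall number move meaningfully.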
“A lot of businesses have done the ‘let a thousand flowers bloom’ strategy and it’s mostly not working.”
What actually delivers value is understanding all the steps a business executes to create value, then re-engineering the broader workflow. This requires either top-down leadership or bottom-up innovators with a view of the bigger picture.
His challenge to every CEO: if you’re serious about AI, spend a few hours personally going deeper into the technology. Excitement isn’t enough; you need the knowledge to drive teams, allocate resources, and set priorities.
The Google Brain Bet and What’s Still Left in the Lemon
Ng recounts founding Google Brain 15 years ago with a mission that was “very controversial” at the time: build really large neural networks and throw all the data at them. People told him it was a bad career move. But he had data that gave him confidence even then, and 15-16 years on, the recipe of scale has proven spectacularly right.
But he doesn’t think the returns are exhausted. “We’re still not yet done squeezing the juice out of this particular lemon.” There’s more to be gained from scaling AI models further.
On the current model landscape, he sees “many strong horses in the race”: Claude Code did a tremendous job, Gemini 3 and Gemini CLI have “closed the gap significantly,” Opus 4 is incredible, GPT 5.1/5.2 are very strong, and OpenAI Codex is doing well.
AI Doesn’t Think Like Us (and Neuroscience Couldn’t Help)
Before and during Google Brain, Ng read “stacks of papers on neuroscience” trying to understand how the human brain works to inspire better AI algorithms. The exercise was “mostly not useful” because neuroscience itself barely understands how the brain works.
Today’s AI models absorb far more text and images than any human will consume in a lifetime, yet remain dumber than humans in key ways. The learning mechanism is fundamentally different from human cognition.
On whether AI is creative or conscious, Ng steps explicitly into philosophy: these concepts have no measurable scientific definitions. If AI behaves creatively, he’s happy to call it creative. But he flags this as “a philosophical rather than a scientific answer.” The airplane-bird analogy applies: we didn’t build airplanes by mimicking birds, but aerodynamics principles derived from studying birds were essential. A “theory of intelligence,” if one ever emerges, might similarly guide AI development. But no such theory exists.
Open-Source Models as Geopolitical Soft Power
For nations pursuing AI sovereignty, Ng offers a pragmatic path: invest in open-source and open-weight models rather than training from scratch. It’s far more efficient, and multiple nations can pool resources.
Ng’s framing of the geopolitical dimension is sharper than most discussions of it. The US leads in closed proprietary models, but China has released some of the best open-source models. When organizations adopt a Chinese model, user queries are more likely to get responses reflecting the producing nation’s values. This is becoming a “tremendous source of geopolitical influence” for China.
The Python analogy crystallizes his point: nobody needs to own Python, but you need to ensure no country with opposing interests controls its specification. AI model sovereignty follows the same logic.
US Immigration: An “Awful Unforced Error”
As an immigrant himself, Ng calls current US immigration policy “awful” and a “huge unforced error.” America’s strength has come from native-born talent working alongside smart immigrants from India and elsewhere.
The personal dimension surfaces: many of his friends have lived in the US for 5-10 years, with children born and raised there, and are still waiting for green cards while regulations shift. Securing the southern border may be legitimate, but hostility toward highly skilled talent and 17-year-olds wanting to attend US colleges is a “vast overcorrection.”
“I think it’s awful. I think it’s a huge unforced error on the part of the United States.”
India’s Leapfrog Moment
AI’s disruption means the old rules no longer hold. Established positions are being shaken, giving every nation a better chance to leapfrog. India has done this before: skipping landlines for mobile, and building quick commerce that delivers groceries in 8 minutes, something Americans find hard to believe.
“AI is so disruptive a lot of the old rules of the game no longer hold.”
Ng’s message to India is direct: learn the real skills, ignore the hype, and build something bold and new. The opportunity is there for nations that execute the transition. The risk of devastating displacement is there for those that hesitate.
Some Thoughts
This is a compact interview that covers more ground than most hour-long podcasts. A few things stand out:
- The AGI definition-lowering argument is the most original contribution. It’s not the standard “AGI is far away” take; it identifies a specific mechanism of harm: companies game the definition, which cascades into real decisions by students and executives. Few people have articulated this systemic problem so concretely.
- The enterprise AI diagnosis is immediately actionable: bottom-up innovation yields 5%-10% marginal gains; the real value is in workflow redesign. This directly challenges the prevailing “pilot first, scale later” corporate playbook.
- Framing China’s open-source models as values-embedding instruments rather than mere technical competitors is a geopolitical insight that most AI discourse misses.
- The neuroscience dead-end is a useful data point from someone who actually tried it: reading stacks of brain papers before Google Brain, only to find neuroscience too primitive to help. It’s a corrective to the persistent narrative that AI progress comes from understanding the brain.