When Will AI Be Smarter Than Humans? The Real Question We Should Be Asking
As the stock markets rally—Sensex surging past 75,000 and Nifty 50 gaining over 400 points—another kind of hype is capturing attention: Artificial General Intelligence (AGI). Everyone from OpenAI’s Sam Altman to Elon Musk and Google DeepMind’s Demis Hassabis is tossing around predictions about when machines will match or exceed human intellect. But here’s the thing—nobody really knows what AGI is, let alone when it will happen.
AGI conjures up images of Hollywood-style sentient machines—whether it’s the empathetic AI girlfriend from Her or the malevolent Skynet from Terminator. But the reality is far less cinematic and far more ambiguous. The term “AGI” is increasingly used as a catch-all for “something big is coming,” and that vagueness is a problem. The real issue isn’t whether machines will become human-like—it’s what these systems are actually capable of doing, and how they’re reshaping our world.
So, How Close Are We to AGI?
It depends on who you ask.
Altman and Anthropic’s Dario Amodei say AGI is just a couple of years away. Others, like DeepMind’s Hassabis and Meta’s Yann LeCun, estimate five to ten years. Then there’s a growing chorus of journalists and tech thinkers warning us to brace for impact.
But even those issuing the warnings are vague about what they mean. Some think AGI will be an AI that can perform most mental tasks as well as a human. Others believe it’ll be a system capable of Nobel-worthy innovation. Still others envision a machine that can navigate the physical world or outperform the smartest human alive. That’s a pretty wide spectrum.
Is “AGI” Even a Useful Term?
Short answer: not really.
The buzzword serves as a shorthand for “revolutionary AI is coming,” but it distracts more than it clarifies. Instead of chasing a moving target like AGI, it’s more useful to understand what specific AI systems can—and cannot—do right now.
Large Language Models (LLMs) like ChatGPT are impressive. They generate readable text, pass standardized tests, and even write code or poetry. But they also hallucinate facts, struggle with ambiguity, and can fail spectacularly outside their narrow domains. They’re powerful, yes—but not general. This is called the “jagged frontier” problem: being brilliant at one task and useless at a seemingly related one.
That’s not intelligence—it’s specialized pattern recognition at scale.
Human Intelligence Isn’t “General” Either
Part of the confusion stems from a flawed assumption: that human intelligence is a kind of universal benchmark. In reality, our mental capabilities evolved for very specific biological and social reasons. We’re good at surviving as humans, not at all tasks imaginable.
Other species have incredible cognitive abilities that we can’t replicate. Elephants remember migration routes spanning thousands of miles. Spiders interpret subtle vibrations in their webs. Octopuses distribute cognition across their limbs, with most of their neurons located outside the brain. None of these abilities is “general,” but each is extraordinary.
So, why expect machines to evolve into a human-like intelligence? They won’t. Nor should we want them to. The future of AI lies not in mimicking us, but in expanding what intelligence can mean.
Agentic AI: A Glimpse Into That Future
One of the next frontiers is “agentic” AI. These aren’t just chatbots that respond to queries. They’re systems that act—filling out forms, scheduling meetings, or drafting emails based on context. Zoom, for instance, is rolling out AI that can parse a meeting and handle follow-ups automatically.
Are they AGI? No. They’re more like hyper-specialized assistants—powerful but narrow.
And they might come in swarms. Imagine dozens or hundreds of these AI agents managing your workflows, finances, or home devices. Helpful? Absolutely. General-purpose human-like minds? Not even close.
And what happens when these swarms interact at scale? Flash crashes in stock markets have already shown how autonomous systems can spiral. Multiply that by billions of agents, and the potential for chaos becomes very real.
Embodied AI: Thinking With a Body
While LLMs are trained on language alone, another branch of AI is exploring embodiment—giving machines physical or simulated bodies so they can learn by interacting with the world. This approach mimics how humans develop intelligence through sensory experience and movement.
Could it lead to machines that truly “think”? Possibly. But they still won’t think like us. A robot that never sleeps, eats, or feels emotions won’t ponder its mortality or write sonnets about heartbreak. Its understanding of tasks will be mechanical, not emotional.
The Smarter Question
Rather than ask when AI will become smarter than humans, a better question is: What is this AI actually capable of doing?
It’s not about achieving some mythical milestone. It’s about understanding capabilities and consequences.
Can AI write contracts, diagnose illnesses, or design new materials? Yes—and that’s game-changing. But that doesn’t mean it understands law, medicine, or physics in any human sense. It means it’s very good at replicating certain outputs under certain conditions.
If you want to track the progress of AI, stop watching for signs of “AGI.” Instead, focus on the narrow, specialized, astonishingly capable systems that are already here—and what they mean for jobs, security, and society.
Because what comes next won’t be a human copycat.
It’ll be something else entirely.