"Godfather of AI" Warns of 20% Chance of Human Extinction
Writer RAG tool: build production-ready RAG apps in minutes
RAG in just a few lines of code? We’ve launched a predefined RAG tool on our developer platform, making it easy to bring your data into a Knowledge Graph and interact with it using AI. With a single API call, Writer’s LLMs will intelligently invoke the RAG tool to chat with your data.
Integrated into Writer’s full-stack platform, it eliminates the need for complex vendor RAG setups: you can build scalable, highly accurate AI workflows simply by passing the graph ID of your data as a parameter to the RAG tool, as the sketch below illustrates.
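For developers curious what that single API call might look like, here is a minimal sketch using Writer's Python SDK. The model name, placeholder graph ID, and exact tool schema are assumptions based on Writer's published Knowledge Graph pattern, not verbatim vendor code; check the current developer documentation before relying on them.

```python
# Minimal sketch: chatting with your data via Writer's Knowledge Graph RAG tool.
# Assumptions: the writerai SDK (`pip install writer-sdk`), a WRITER_API_KEY set
# in the environment, and "your-graph-id" replaced with a real Knowledge Graph ID.
from writerai import Writer

client = Writer()  # picks up WRITER_API_KEY from the environment

response = client.chat.chat(
    model="palmyra-x-004",  # assumed model name; substitute a current Palmyra model
    messages=[{"role": "user", "content": "Summarize our onboarding policy."}],
    tools=[
        {
            "type": "graph",  # the predefined RAG tool
            "function": {
                "description": "Internal company documentation",
                "graph_ids": ["your-graph-id"],  # the graph ID passed as a parameter
                "subqueries": False,
            },
        }
    ],
    tool_choice="auto",  # let the model decide when to query the graph
)

print(response.choices[0].message.content)
```

The design point is that retrieval is delegated to the platform: instead of wiring up a vector store, chunker, and retriever yourself, you hand the model a graph ID and it decides when to consult your data.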
Geoffrey Hinton, a British-Canadian computer scientist often referred to as the “godfather of AI,” has increased his estimate of the likelihood that artificial intelligence could lead to human extinction within the next three decades. Hinton, who was awarded the Nobel Prize in Physics this year for his groundbreaking work in AI, now places the chance of AI causing humanity’s downfall at 10-20% over the next 30 years. This is a significant increase from his previous estimate of a 10% chance.
In an interview on BBC Radio 4’s Today programme, Hinton discussed the rapid pace at which AI technology is evolving. When asked whether he had changed his assessment of a potential AI-driven catastrophe, he replied, “Not really, 10 to 20 percent,” a figure that quietly raises the upper bound of the one-in-ten odds he had previously given.
Hinton’s updated estimate comes amid growing concerns about the potential risks associated with the development of artificial intelligence. He explained that AI is advancing far more quickly than expected, which raises alarms about humanity’s ability to keep the technology under control. He pointed out that humans have never faced the threat of a more intelligent being, noting that in nature there are very few examples of a less intelligent entity controlling a more intelligent one. “If anything, we’re the less intelligent ones,” Hinton remarked. “Imagine yourself and a three-year-old; we’ll be the three-year-olds,” he added, a stark analogy meant to convey the scale of the threat.
Hinton’s remarks reflect deep concerns within the AI research community, where fears of artificial general intelligence (AGI) — systems that are smarter than humans — becoming uncontrollable have gained traction. In 2023, he resigned from his position at Google to speak more openly about the dangers of unconstrained AI development, warning that the technology could fall into the hands of bad actors who might use it for malicious purposes.
When asked to reflect on how far AI has come since he first started working in the field, Hinton admitted that he never imagined the technology would advance so rapidly. “The situation we’re in now is that most experts think AI will soon surpass human intelligence, and the pace of development is much faster than I expected,” he said, adding that the prospect of machines smarter than humans arriving within the next 20 years is “a very scary thought.”
Hinton is calling for urgent government regulation to mitigate the risks posed by AI. He warned that relying on the invisible hand of the market or the profit motives of large technology companies would not be sufficient to ensure AI is developed safely. “The only thing that can force these big companies to do more research on safety is government regulation,” Hinton stated. He stressed that the rapidly evolving nature of AI technology demands a coordinated effort to ensure it is developed responsibly and with proper safeguards in place.
Hinton is widely recognized as one of the three “godfathers of AI,” along with Yann LeCun and Yoshua Bengio. The trio received the ACM A.M. Turing Award — often referred to as the Nobel Prize of computing — for their groundbreaking contributions to the development of deep learning, a key component of modern AI systems. Despite the shared recognition, not all experts in the AI field share Hinton’s concerns about the existential risks posed by the technology.
Yann LeCun, who serves as Meta’s chief AI scientist, has downplayed the fears of an AI apocalypse, suggesting that AI could ultimately help humanity by solving pressing global challenges. LeCun has argued that the development of AGI does not necessarily pose an existential threat and, in fact, might help humanity avert catastrophe by providing solutions to issues like climate change or resource scarcity.
Hinton’s warnings come at a time when the development of AI is accelerating at an unprecedented rate, with significant breakthroughs in machine learning, natural language processing, and autonomous systems. As the technology continues to evolve, the debate over its potential risks and rewards has intensified. While some experts, like Hinton, urge caution and regulation, others remain optimistic about AI’s potential to improve human life and solve global problems.
The debate over AI’s future raises important questions about how society will navigate the challenges and opportunities presented by the technology. As Hinton and other AI pioneers continue to raise alarms about the risks of uncontrolled AI development, it is clear that the next few decades will be critical in shaping the role of artificial intelligence in our world.
Ultimately, the question remains: can humanity control the future of AI, or will the technology bring about our downfall? As AI systems grow ever more powerful, this existential question will continue to dominate discussions in both the scientific and policy-making communities.
What did you think of this week's issue? We take your feedback seriously.