‘Godfather of AI’ shortens odds of the technology wiping out humanity over next 30 years
The British-Canadian computer scientist, often heralded as a “godfather” of artificial intelligence, has issued a stark warning about the potential risks AI poses to humanity. Prof Geoffrey Hinton, whose groundbreaking work in neural networks earned him the Nobel prize in physics this year, has raised his estimate of the likelihood of AI causing human extinction within the next three decades to a “10% to 20%” chance.
Hinton’s revised prediction underscores the rapid and unpredictable pace of technological change, which he described as “much faster” than anticipated. Speaking on BBC Radio 4’s Today programme, Hinton was pressed on his earlier assessment of a 10% chance of AI-triggered catastrophe. When challenged by guest editor and former British chancellor Sajid Javid, who noted the increase in his estimate, Hinton replied candidly: “If anything, it’s going up.”
“We’ve never had to deal with things more intelligent than ourselves before,” Hinton explained, emphasizing the asymmetry in intelligence between humanity and advanced AI systems. Drawing a striking analogy, he remarked, “How many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There’s a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of.”
London-born and now professor emeritus at the University of Toronto, Hinton further likened the relationship between humans and AI to that of toddlers and adults. “Imagine yourself and a three-year-old. We’ll be the three-year-olds,” he said, highlighting the vast intelligence gap expected to emerge as AI systems continue to advance.
AI, broadly defined as computer systems capable of performing tasks that typically require human intelligence, has seen exponential growth in capabilities, sparking concerns among scientists, ethicists, and policymakers. The potential advent of artificial general intelligence (AGI)—systems that surpass human intelligence across a wide array of tasks—has fueled debates over the technology’s benefits and risks.
Hinton’s warnings are particularly sobering given his pivotal role in the development of AI. His research laid the foundation for the machine learning techniques that underpin much of today’s AI, from natural language processing to computer vision. However, he has become increasingly vocal about the unintended consequences of his work.
In 2023, Hinton resigned from his position at Google, citing the need to speak freely about the dangers of unconstrained AI development. He expressed fears that “bad actors” could exploit the technology to create chaos, from misinformation campaigns to autonomous weapons. “One of the biggest risks,” he has said, “is the potential for these systems to learn how to self-improve and evade human control.”
Hinton’s concerns are echoed by other prominent figures in the field. In 2023, an open letter signed by researchers and technology leaders, including Elon Musk, called for a six-month pause on the training of the most advanced AI models. The aim was to give researchers and policymakers time to address the safety and ethical implications of these technologies.
Despite the dire warnings, Hinton remains cautiously optimistic about humanity’s ability to mitigate the risks. He advocates for international cooperation and robust regulatory frameworks to ensure AI is developed responsibly. “It’s not too late to shape the future of AI,” he said. “But we need to act now, with urgency and wisdom.”
The debate over AI’s trajectory reflects broader societal questions about technology’s role in shaping our future. As Hinton and other experts continue to raise the alarm, the challenge remains clear: how to harness the transformative potential of AI while safeguarding humanity from its unintended consequences.