Geoffrey Hinton Warns of AI Dangers as Technology Advances Rapidly
Geoffrey Hinton, often referred to as the ‘godfather of AI,’ has expressed concerns about the rapid development of artificial intelligence. In a recent CBS News interview, Hinton warned that AI is advancing faster than expected and could potentially become superintelligent, posing significant risks to humanity.
Hinton, who was awarded the 2024 Nobel Prize in Physics for his work in machine learning, estimated there’s a ‘sort of 10 to 20% chance’ that AI systems could eventually seize control. He stressed that predicting the exact outcome is impossible. His concerns are partly driven by the rise of AI agents that can perform tasks autonomously, making the situation ‘scarier than before.’
The timeline for superintelligent AI may be shorter than previously thought. Hinton now believes there’s a good chance it could arrive in 10 years or less, down from his earlier estimate of five to 20 years. He also warned that competition between tech companies and between nations makes it highly unlikely that humanity will avoid building superintelligence.
Hinton compared the development of AI to raising a tiger cub, stating, ‘It’s just such a cute tiger cub… unless you can be very sure that it’s not going to want to kill you when it’s grown up, you should worry.’ He emphasized that the challenge lies in designing AI so that it never wants to take control.
The AI pioneer also expressed disappointment with tech companies he once admired. Hinton mentioned being ‘very disappointed’ that Google, where he worked for over a decade, reversed its stance against military applications of AI. He resigned from Google in 2023 to speak freely about the dangers of AI development and is now a professor emeritus at the University of Toronto.
At 77, Hinton says he’s ‘kind of glad’ he may not live long enough to witness the potentially dangerous consequences of AI. ‘Things more intelligent than you are going to be able to manipulate you,’ he warned, underscoring his central concern about superintelligent systems.