Google DeepMind CEO Demis Hassabis predicts that artificial general intelligence, in which computers possess human-level cognitive abilities, is just five to ten years away. In a recent interview with 60 Minutes, Hassabis discussed the rapid progress of AI, stating that it is on track to understand the world in nuanced ways and to be embedded in everyday life. “It’s moving incredibly fast,” Hassabis said, attributing the progress to an exponential curve of improvement driven by increased attention, resources, and talent in the field.
Hassabis, a pioneer in AI and co-founder of DeepMind (acquired by Google in 2014), has a background in computer science and neuroscience. His work earned him a share of a Nobel Prize for creating an AI model that predicted the structures of 200 million proteins in a single year, whereas mapping even one protein had previously taken years.
Recent Advancements in AI
DeepMind’s Project Astra, an AI companion that can see, hear, and chat about various topics, represents a significant step forward. When tested by 60 Minutes correspondent Scott Pelley, Astra identified artworks and created a narrative about a painting by Edward Hopper. Hassabis noted that AI systems often surprise him with their capabilities after being trained on the internet for months.
Future Developments
Google DeepMind is currently training its AI model Gemini to interact with the world, performing tasks such as booking tickets or shopping online. Robotics will also play a crucial role in advancing AI, potentially leading to humanoid robots that can perform useful tasks within the next couple of years.
Challenges and Concerns
While AI has made tremendous progress, Hassabis acknowledges that current systems lack imagination and are not self-aware. He emphasizes the need to build intelligent tools first and then use them to advance neuroscience before considering self-awareness.
Hassabis sees enormous potential benefits in AI, including the possibility of ending disease within the next decade by accelerating drug development. AI could also bring about “radical abundance” by eliminating scarcity. However, he also highlights the need for guardrails and safety limits to prevent misuse by bad actors and to maintain control as AI becomes more autonomous.
“We have to give these systems a value system and guidance, and some guardrails around that, much in the way that you would teach a child,” Hassabis said, emphasizing the importance of teaching morality to AI systems.
As AI continues to develop rapidly, the balance between progress and safety remains a critical challenge for researchers and developers.