The Evolution of Artificial Intelligence
Artificial intelligence has been around longer than most people realize. In 1950, Alan Turing proposed the famous ‘Turing test’ in his paper ‘Computing Machinery and Intelligence,’ offering it as a practical substitute for the question of whether machines can think. The test, which he termed the ‘imitation game,’ involves three participants: a human judge, a machine, and another human. The judge converses with both the human and the machine without knowing which is which and tries to tell their responses apart. If the judge cannot reliably distinguish them, the machine is considered to have passed the test.
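To make the structure concrete, here is a minimal sketch of the imitation game in Python. Everything in it is illustrative: the canned replies and the random ‘judge’ are placeholders rather than real participants, and the function names are invented for this sketch; the point is only the blind, three-party setup Turing described.

import random

# A toy sketch of the imitation game's structure, not a real evaluation.
# One respondent is a machine, one is a human; the judge sees only
# anonymized transcripts and must guess which label hides the machine.

def machine_reply(prompt: str) -> str:
    return "I enjoy thinking about questions like that."  # placeholder

def human_reply(prompt: str) -> str:
    return "Honestly, it depends on the day."  # placeholder

def play_round(judge) -> bool:
    """Return True if the judge correctly identifies the machine."""
    respondents = [("machine", machine_reply), ("human", human_reply)]
    random.shuffle(respondents)  # hide which respondent is which
    transcripts = {
        label: reply("What do you do on weekends?")
        for label, (_, reply) in zip("XY", respondents)
    }
    guess = judge(transcripts)  # the judge names "X" or "Y" as the machine
    machine_label = "X" if respondents[0][0] == "machine" else "Y"
    return guess == machine_label

# A judge guessing at random is right about half the time; a machine is
# said to "pass" when informed judges do no better than chance.
random_judge = lambda transcripts: random.choice(list(transcripts))
wins = sum(play_round(random_judge) for _ in range(10_000))
print(f"Random-judge accuracy: {wins / 10_000:.1%}")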
Recent years have seen significant advances in generative AI, enabling machines to learn from vast datasets and create new content autonomously. In controlled studies, these systems have now passed versions of the Turing test. The first machine widely reported to have done so was Eugene Goostman, a chatbot simulating a 13-year-old Ukrainian boy, which fooled 33% of judges in 2014, though that result was disputed. More recently, OpenAI’s GPT-4.5 and Meta’s Llama 3.1 405B convinced 73% and 53% of judges, respectively, that they were human.
The pace of AI development is accelerating, not decelerating. Microsoft, a major backer of OpenAI, has unveiled a quantum chip, Majorana 1, built on topological qubits; the company says the design charts a path to quantum computers that could one day solve problems beyond the reach of all of today’s classical computers combined. Integrating such hardware into next-generation AI architectures could, some argue, be a crucial step toward truly sentient artificial beings.
The Potential Risks of Advanced AI
Many scientists believe there’s a significant chance that AI could lead to human extinction within the next century. Advanced AI systems, particularly those paired with quantum hardware, could hypothetically design lethal pathogens, trigger nuclear launches, or map out strategies to eliminate humanity. Forecasters at the 2022 Existential-Risk Persuasion Tournament estimated a 6% chance of human extinction due to AI by 2100.
We’re still in the early stages of AI development, despite the rapid progress of recent years. Some researchers argue it’s not unreasonable to expect artificially conscious robots within the next decade. That prospect raises concerns about what could happen if such systems were to go rogue and decide that humans are no longer worth serving or protecting.
Some lawmakers have proposed mandating ‘kill switches’ to shut down AI systems, but tech companies have resisted the idea, citing concerns about stifling innovation. California Gov. Gavin Newsom vetoed a bill, SB 1047, in 2024 that included such a provision. The issue is not limited to the U.S.: countries like China are rapidly developing AI with little regard for the potential consequences, and China has already begun developing doglike and humanoid robot soldiers for battlefield use.
The development of AI is a global issue, and the race to build more advanced systems is intensifying. Even if U.S. tech companies halted their innovation, adversaries like China would likely press on. That makes rogue AI less a question of ‘if’ than of ‘when.’