The Dawn of the AI-Powered Scientist
In February of this year, Google unveiled a new AI system designed specifically for scientists, touting it as a collaborative tool for generating novel hypotheses and research plans. It is too early to gauge the practical impact of this particular tool, but one thing is increasingly clear: artificial intelligence (AI) is rapidly transforming science.
Last year, computer scientists were jointly awarded the Nobel Prize in Chemistry for developing an AI model capable of predicting the shape of virtually every known protein. Nobel Committee Chair Heiner Linke described the system as the realization of a "50-year-old dream," solving a challenge that had eluded scientists since the 1970s. The achievement underscores how AI can accelerate scientific breakthroughs, delivering discoveries that might otherwise be decades away.
However, the integration of AI into science is not without its shadows.
The Darker Side: The Rise of Scientific Misconduct
While AI offers immense promise for scientific advancement, it also has a darker side: it has made fabricating research alarmingly easy. Academic papers are retracted when their data or findings are found to be invalid, whether through fabrication, plagiarism, or honest error. Such retractions are on the rise, exceeding 10,000 in 2023, and the retracted papers had already been cited over 35,000 times.
One study found that a substantial proportion of scientists admitted to serious research fraud, double the previously reported rate. Retractions of biomedical papers have quadrupled over the past 20 years, with the majority stemming from misconduct.
Generative AI programs such as ChatGPT have made fabricating research remarkably simple. Two researchers demonstrated this vividly by using AI to generate 288 entirely fabricated academic finance papers, complete with predictions of stock returns. That was a deliberate demonstration, but the same capability could be used to invent clinical trial data or alter experimental results to conceal adverse outcomes, an alarming prospect.
There are already reported cases of AI-generated papers clearing peer review and being published, only to be retracted later over undisclosed AI use or serious errors. Some researchers are also using AI to conduct peer review itself, a cornerstone of scientific integrity that is typically time-consuming and unpaid.
In the most extreme scenario, AI may even write research papers that are then reviewed by other AIs, feeding the already exponential growth of scientific publishing even as the average novelty of each paper declines.
AI can also fabricate scientific results unintentionally. Generative AI systems are prone to confidently inventing answers, a phenomenon known as "hallucination." While the full impact on scientific papers is not yet understood, one study found that over half of AI-generated responses to coding questions contained errors, and human oversight failed to catch them nearly 40% of the time.
Navigating the Future: Maximizing Benefits and Minimizing Risks
Despite these concerning developments, scientists should not be discouraged from using AI. Specialized AI models have been solving complex scientific problems for years, and generative models such as ChatGPT hold the promise of general-purpose scientific assistants that can collaborate with researchers and streamline routine tasks.
For example, researchers at CSIRO are already designing AI lab robots that function as human-like assistants, automating repetitive tasks. As with any disruptive technology, AI presents both benefits and drawbacks.
The challenge for the scientific community lies in establishing appropriate policies and guardrails to maximize the benefits and mitigate the risks.
The potential for AI to reshape science and contribute to a better world is undeniable. As AI continues to develop, the scientific community must decide: should it embrace AI by developing a code of conduct that enforces ethical and responsible use in the scientific process, or allow a relatively small number of unscrupulous actors to discredit the field and squander the opportunity?