AI-Generated Studies Ignite Debate at ICLR
A controversy has erupted at this year’s International Conference on Learning Representations (ICLR), a prominent academic conference in the field of artificial intelligence, over the submission of AI-generated studies. At least three AI labs — Sakana, Intology, and Autoscience — have come under scrutiny for using AI to create papers that were accepted to ICLR workshops.
ICLR, like many academic conferences, relies on workshop organizers to review submitted studies for publication in their workshop tracks. While Sakana notified ICLR leaders before submitting its AI-generated papers and obtained the consent of peer reviewers, Intology and Autoscience proceeded without such disclosure, a spokesperson for ICLR confirmed to TechCrunch.
This practice has drawn sharp criticism from AI academics. Many took to social media to denounce the actions of Intology and Autoscience, accusing them of exploiting the scientific peer-review process. Critics pointed out that peer review, which is a time-consuming, labor-intensive, and largely volunteer-based process, was being used for benchmarking and advertising AI technology.
Prithviraj Ammanabrolu, an assistant computer science professor at UC San Diego, expressed his disapproval on X, stating, “All these AI scientist papers are using peer-reviewed venues as their human evals, but no one consented to providing this free labor. It makes me lose respect for all those involved regardless of how impressive the system is. Please disclose this to the editors.”
Academic peer review is a demanding undertaking. According to a recent Nature survey, 40% of academics spend between two and four hours reviewing a single study. The volume of submissions is also increasing: the number of papers submitted to the largest AI conference, NeurIPS, rose to 17,491 last year, a significant increase from the 12,345 submitted in 2023.
AI-generated content is not a new problem for academia. One analysis found that between 6.5% and 16.9% of papers submitted to AI conferences in 2023 likely contained synthetic text. However, AI companies using peer review to showcase and promote their technology is a relatively new phenomenon.
Intology touted its positive results at ICLR on X, claiming that its papers “received unanimously positive reviews.” The company further highlighted that workshop reviewers lauded one of its AI-generated studies for its “clever idea[s].” This approach was not well-received by the academic community.
Ashwinee Panda, a postdoctoral fellow at the University of Maryland, criticized the submissions on X, saying they showed a “lack of respect for human reviewers’ time.” Panda added that Sakana had sought their permission to participate in the experiment, which they declined.
Many academics are also skeptical about the value of peer-reviewing AI-generated papers. Sakana itself admitted that its AI made “embarrassing” citation errors and that only one of the three papers it tried to submit would have met conference standards. The company later withdrew its ICLR paper in the interest of transparency and respect for ICLR conventions.
Alexander Doria, the co-founder of AI startup Pleias, suggested that the recent submissions highlight the need for a regulated agency to conduct “high-quality” evaluations of AI-generated studies. Doria emphasized that evaluations should be conducted by researchers who are properly compensated for their time, stating, “Academia is not there to outsource free [AI] evals.”