The Los Angeles Times pulled its newly launched artificial intelligence tool, “Insights,” just one day after its debut, after the tool generated content that downplayed the historical impact of the Ku Klux Klan.
The tool, designed to provide multiple perspectives on articles, was incorporated into a February 25 article that commemorated the 100th anniversary of Anaheim removing KKK members from its city council. The AI generated a note stating “Local historical accounts occasionally frame the 1920s Klan as a product of ‘white Protestant culture’ responding to societal changes rather than an explicitly hate-driven movement, minimizing its ideological threat.”
The original article, written by Gustavo Arellano, framed the historical decision as a lesson in combating “tyranny and white supremacy.” Another AI-generated note read: “Critics argue that focusing on past Klan influence distracts from Anaheim’s modern identity as a diverse city, with some residents claiming recent KKK rallies were isolated incidents unreflective of current values.”
LA Times owner Dr. Patrick Soon-Shiong had described Insights as a way for readers to quickly access a diverse range of AI-generated perspectives that might differ from a story’s primary viewpoint. “I believe providing more varied viewpoints supports our journalistic mission and will help readers navigate the issues facing this nation,” he had said. The feature was intended for opinion stories and articles in the Times’ “Voices” section, and was also meant to indicate where a piece fell on the political spectrum.
The AI tool’s political analysis comes from a partnership with Particle.news, an AI company, while Perplexity AI is used to identify the ideas within the articles. Notably, the Times specified that Insights’ content was not reviewed by editors or journalists before publication; the paper said it planned to incorporate reader feedback over time to improve accuracy, which may help explain how swiftly Insights was removed after the KKK-related controversy. For now, Insights remains active on other “Voices” stories.
The New York Times’ Ryan Mac was the first reporter to identify the problematic AI-generated notes.