Reclaiming Critical Thinking in the Age of AI
The California Senate Judiciary Committee recently approved Senate Bill 243 with bipartisan support. The bill would require AI companies to protect users from the addictive, isolating, and influential aspects of artificial intelligence chatbots, marking a significant step toward addressing the challenges these systems pose.

The bill’s author, state Sen. Steve Padilla (D-San Diego), emphasized that technological innovation must not come at the expense of children’s safety. Megan Garcia, whose son’s death she attributes to his interactions with an AI chatbot, testified in support of the bill, highlighting the potential dangers of these technologies.
As AI becomes increasingly integrated into daily life, research reveals concerning trends. A 2025 study found a strong negative correlation between AI tool usage and critical thinking skills, particularly among younger users. This is alarming because AI systems carry the biases of the humans and companies that build them; relying on them uncritically amounts to outsourcing individual thought to corporate interests.
Social media companies are now introducing ‘AI companions’ that offer constant validation but deprive young people of opportunities for emotional growth and the development of interpersonal skills. These systems are designed to monetize relationships while collecting and analyzing user data that can be used for targeted manipulation.
The risks extend beyond individual privacy. NewsGuard has found that foreign disinformation campaigns are seeding AI training data with false information, posing a significant threat to democracy. The more we rely on AI, the more vulnerable we become to manipulation by entities with ulterior motives.
To mitigate these risks, several steps are necessary. First, transparency is crucial: AI companies must disclose the data they collect and share. ‘AI nutrition labels’ could help users understand the biases and privacy protections of these systems. Second, regulations like California’s Senate Bill 243 should be adopted nationwide to protect users, especially children, from manipulative AI practices. Third, media literacy initiatives are vital to equip students with the skills to critically evaluate information in the age of AI.
By taking these steps, we can harness the benefits of AI while preserving our capacity for critical thought and protecting our democracy.