AI Surveillance in Schools: A Double-Edged Sword
Across the United States, an increasing number of school districts are employing artificial intelligence (AI) to monitor the online activities of students, a move intended to safeguard children amid a rising mental health crisis and the persistent threat of school shootings. These surveillance tools, however, are sparking considerable debate regarding privacy violations and security vulnerabilities.
Surveillance software, such as that developed by the company Gaggle, scans school-issued devices 24/7, looking for indicators of potential danger. These can range from expressions of self-harm and suicidal thoughts to threats of violence. When a potential issue is detected, alerts are sent to school staff for review. The goal is to intervene and offer support when needed.
The Vancouver Case: A Cautionary Tale
In a recent incident, Vancouver Public Schools in Washington state inadvertently exposed nearly 3,500 sensitive student documents following a records request about its surveillance technology. The released documents, initially unredacted, revealed the personal struggles students were facing, providing a glimpse into the types of issues the surveillance software flagged.
These files contained student writings about depression, relationship issues, suicidal ideation, addiction, and eating disorders. They also featured poems, college essays, and even excerpts from AI chatbot interactions. The files remained readable by school staff and anyone else who could open them; security experts highlighted the considerable risk this posed.
The Technology: How It Works
Companies such as Gaggle, GoGuardian, and Securly use machine-learning algorithms to scan student online activity. The technology looks for keywords, phrases, or patterns that could indicate a problem. If the algorithm detects a potential issue, it sends a screenshot of the concerning activity to human reviewers.
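As a rough illustration of how keyword-based flagging might work (a minimal sketch only; the vendors' actual models and term lists are proprietary, and every pattern, threshold, and name below is an assumption), the core loop could look like this in Python:

```python
import re
from dataclasses import dataclass

# Hypothetical term list; real vendors use proprietary models and
# far larger, continuously updated lexicons.
RISK_PATTERNS = {
    "self_harm": re.compile(r"\b(hurt myself|self[- ]harm|cutting)\b", re.I),
    "suicide":   re.compile(r"\b(kill myself|end my life|suicide)\b", re.I),
    "violence":  re.compile(r"\b(shoot up|bring a gun|kill (him|her|them))\b", re.I),
}

@dataclass
class Alert:
    category: str   # which pattern group matched
    excerpt: str    # snippet queued for human review

def scan_text(text: str) -> list[Alert]:
    """Flag passages matching any risk pattern for human review."""
    alerts = []
    for category, pattern in RISK_PATTERNS.items():
        for match in pattern.finditer(text):
            start = max(0, match.start() - 40)
            alerts.append(Alert(category, text[start:match.end() + 40]))
    return alerts

# A flagged passage would be routed to a reviewer, not acted on
# automatically, mirroring the human-in-the-loop step described above.
for alert in scan_text("I've been thinking about how to end my life."):
    print(alert.category, "->", alert.excerpt)
```

Pattern matching of this kind inevitably produces false positives (song lyrics, fiction, health-class assignments), which is why the human-review step the article describes is central to the workflow.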
The latest contract Vancouver signed with Gaggle, in the summer of 2024, shows a price of $328,036 for three school years, about $109,000 a year, roughly what one additional counselor would cost. If a Gaggle employee deems the issue serious, the company alerts school officials, and in cases of imminent danger, it may contact law enforcement directly.
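The escalation path described above (human reviewer, then school officials, then law enforcement in emergencies) amounts to a tiered triage policy. The sketch below models that idea; the tier names and routing logic are illustrative assumptions, not Gaggle's actual procedure:

```python
from enum import Enum, auto

class Severity(Enum):
    """Hypothetical severity tiers assigned by a human reviewer."""
    LOW = auto()       # logged for trend review only
    SERIOUS = auto()   # school officials are notified
    IMMINENT = auto()  # emergency: law enforcement may be contacted

def route_alert(severity: Severity) -> str:
    """Return the illustrative action for a reviewed alert."""
    if severity is Severity.IMMINENT:
        return "notify school officials and contact law enforcement"
    if severity is Severity.SERIOUS:
        return "notify school officials"
    return "log for later review"

print(route_alert(Severity.SERIOUS))  # -> notify school officials
```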
Mixed Results and Unintended Consequences
While proponents argue that these tools allow schools to identify and support vulnerable students, the reality is more complex. Counselors report that the alerts have helped them reach struggling students, but some students simply find ways around the monitoring. Critics assert that these tools can erode trust between students and staff without reliably making schools safer.
One of the most concerning consequences is the potential to inadvertently out LGBTQ+ students. The Vancouver documents included multiple instances in which the software flagged students' writings about their sexual orientation or gender identity. Because LGBTQ+ students experience higher rates of mental health issues, they are especially vulnerable to misidentification and privacy breaches.
“We know that gay youth, especially those in more isolated environments, absolutely use the internet as a life preserver,” said Katy Pearce, a University of Washington professor.
Weighing the Benefits and Risks
While school officials maintain that the technology is important, its long-term impact remains unclear. Recent research finds little concrete evidence that digital surveillance significantly reduces suicide or violence among students.
“If you don’t have the right number of mental health counselors, issuing more alerts is not actually going to improve suicide prevention,” stated Benjamin Boudreaux, an AI ethics researcher.
The debate over this kind of surveillance is complicated. Parents may not always be aware of the practice or may struggle to opt their children out of it.
Some students express concerns about the loss of privacy. Others worry that constant monitoring may keep teens from building a private life and exploring their feelings.
As schools continue to grapple with the challenge of keeping students safe, AI-powered surveillance raises questions about privacy, trust, and effectiveness that districts have yet to answer.