Social Media Platforms Must Step Up to Protect LGBTQ+ Youth
In a climate of increasing political polarization, the safety and well-being of LGBTQ+ youth are under threat. As President Donald Trump’s rhetoric and policies target the LGBTQ+ community, social media platforms face mounting pressure to protect young users from misinformation and harmful content.
The rise of misinformation, coupled with the removal of crucial health information, has created a more dangerous environment for vulnerable teens. The CDC’s decision to remove LGBTQ+ health information from its website, along with Trump’s executive orders halting gender-affirming care, has exacerbated the challenges LGBTQ+ youth face. Now, Meta’s decision to end fact-checking on its platforms further threatens to make social media an unsafe space for young people, particularly teens who are already marginalized and vulnerable.
Adolescence is a time of self-discovery and exploration, and the internet and social media have, for many, become a place to explore questions about their sexual identity. However, this space is anything but safe when untrue statements like “LGBTQ+ is a mental illness” spread unchecked. These scientifically debunked statements are not just factual errors but direct assaults on teens’ sense of self, their mental health, and their well-being.
Studies show that victimization, including anti-LGBTQ+ harassment, strongly predicts self-harm and suicidal thoughts and behaviors among LGBTQ+ young people. Young people may internalize these harmful ideas, leading to confusion, shame, and mental health struggles such as anxiety, depression, or suicidal ideation.
Adults, including those who run tech companies, are responsible for creating safe and positive online experiences for young people. Experts at the American Academy of Pediatrics, through its Center of Excellence on Social Media and Youth Mental Health, specifically recommend platform policies that prevent the spread of untrustworthy and hateful content, as well as greater user control over settings.
Meta, at first, seemed responsive, launching Teen Accounts with features like sleep mode and limits on sensitive content. However, the removal of fact-checking undermines these efforts. This contradiction sends a troubling message: while Meta claims to prioritize the safety and well-being of young users, it simultaneously dismantles one of the key mechanisms ensuring information integrity.
Mark Zuckerberg framed his decision as a defense of “free expression.” But young people consistently express a desire for safer online spaces. According to the Pew Research Center, the majority of teens prioritize feeling safe over being able to speak their minds freely and want enhanced safety features and content moderation.
If Zuckerberg scraps fact-checking safeguards in favor of “Community Notes,” it is essential that those strategies be evidence-based, youth-centered, and community-driven. Social media companies must prioritize the following three approaches to ensure young people’s safety online:
- Partnering with LGBTQ+ and other advocacy groups: Ensuring that information shared is truthful, accurate, and rooted in the lived experiences of marginalized communities.
- Investing in youth-centered approaches: Utilizing moderated online communities and youth-led platforms to proactively address hate speech.
- Linking young people to mental health resources: Providing access to evidence-based, culturally informed mental health resources at every opportunity.
Zuckerberg’s move has been framed as protecting free speech. Instead, it appears to protect hate speech and misinformation at the cost of young people’s well-being. If Meta is sincere about improving its products, then Teen Accounts must be accountable to the truth.
Editor’s Note: This article reflects the perspective of the author, and does not necessarily represent the values or views of The Fulcrum.