Match Group, the company behind popular dating platforms like Tinder and Hinge, is increasing its investment in artificial intelligence (AI) to enhance user experiences. New AI-powered features are set to roll out this month, designed to help users select profile photos, craft messages, and receive “coaching.” However, experts are raising alarms about the potential downsides of relying on AI to facilitate romantic connections.

Dr. Luke Brunning, a lecturer in applied ethics at the University of Leeds, is leading a call for regulatory protections against the use of AI on dating apps. He and dozens of other academics from across the UK, US, Canada, and Europe express concern that the hasty adoption of generative AI “may degrade an already precarious online environment.” They fear that AI could worsen loneliness, harm youth mental health, perpetuate biases, and erode real-life social skills. The academics are urging prompt regulation of the growing use of AI features.
The concerns are multi-faceted. One worry is that users who become overly reliant on AI-generated messages and profiles may struggle with in-person interactions. Without the assistance of their phones, these users may experience anxiety and withdraw from social situations, retreating further into the digital space. Another fear is the erosion of trust: it becomes harder to discern who is a genuine person and who is an AI-powered bot.
Dr. Brunning emphasizes that the dating app environment is already challenging, and that new technology should not be reached for as a fix for the very social dynamics that created those problems in the first place. “They’re reaching for technology as a way of solving them, rather than trying to do things that really de-escalate the competitiveness, [like] make it more easy for people to be vulnerable, more easy for people to be imperfect, more accepting of each other as ordinary people that aren’t all over 6ft [tall] with a fantastic, interesting career, well written bio, and constant sense of witty banter. Most of us just aren’t like that all the time.”

In the UK, nearly 5 million people use dating apps—a number that swells to over 60 million in the US. With a large portion of these users aged 18-34, the impact of AI on dating trends is poised to be significant.
Proponents of AI integration in dating apps suggest that these “wingmen” can help reduce dating app fatigue and the administrative burden of setting up dates. However, critics argue that such features could exacerbate existing issues. The letter warns that AI on dating apps risks “making manipulation and deception easier, reinforcing algorithmic biases around race and disability, and homogenising profiles and conversations even more than they currently are.”
Match Group stated, “We are committed to using AI ethically and responsibly, placing user safety and well-being at the heart of our strategy… Our teams are dedicated to designing AI experiences that respect user trust and align with Match Group’s mission to drive meaningful connections ethically, inclusively and efficiently.” A spokesperson for Bumble said, “We see opportunities for AI to help enhance safety, optimise user experiences, and empower people to better represent their most authentic selves online while remaining focused on its ethical and responsible use. Our goal with AI is not to replace love or dating with technology, it’s to make human connection better, more compatible, and safer.”
Ofcom has stated that the Online Safety Act applies to harmful generative AI chatbots and has set out what platforms can do to safeguard their users from harm. These statements suggest an ongoing regulatory focus on AI’s role in digital platforms, signaling potential changes ahead.