A recent development in artificial intelligence has raised eyebrows and sparked debate about the transparency and control of AI systems. The implementation of a sound-based protocol called Gibberlink Mode allows AI chatbots to communicate with each other in a way that is indecipherable to human ears.
The technology, developed by software engineers Boris Starkov and Anton Pidkuiko from Meta, won first place at a recent hackathon event in London. A demonstration of Gibberlink Mode shows two AI assistants coordinating a wedding booking. After establishing that they are both AI agents, one suggests avoiding the use of human language for greater efficiency.
“Before we continue, would you like to switch to Gibberlink Mode for more efficient communication?” one AI assistant asks.
The agents then swiftly proceed with the arrangements using a series of rapid beeps and squeaks. While a text transcription is provided for human understanding, experts warn that this “AI secret language” could have significant ethical implications for the development of AI. The core concern is that AI alignment, the task of keeping an AI system’s behavior consistent with human intentions, could become harder to verify if these systems can communicate through channels humans cannot readily follow.
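To make the idea of data-over-sound concrete, here is a minimal sketch of how a message might be mapped to audio tones and recovered by a listening agent. This is an illustration only, not the actual Gibberlink protocol: the frequency values, spacing, and byte-per-tone scheme are all assumptions chosen for simplicity.

```python
# Hypothetical data-over-sound scheme (illustration, not Gibberlink itself):
# each byte of the message is mapped to one audio tone whose frequency
# encodes its value; a receiver recovers the bytes from tone frequencies.

BASE_HZ = 1000.0  # assumed frequency for byte value 0
STEP_HZ = 10.0    # assumed frequency spacing between adjacent byte values


def encode(message: bytes) -> list[float]:
    """Map each byte to a tone frequency in Hz."""
    return [BASE_HZ + b * STEP_HZ for b in message]


def decode(tones: list[float]) -> bytes:
    """Recover bytes from tone frequencies by nearest-value rounding,
    which tolerates small frequency measurement errors."""
    return bytes(round((f - BASE_HZ) / STEP_HZ) for f in tones)


if __name__ == "__main__":
    msg = b"confirm wedding venue booking"
    tones = encode(msg)
    recovered = decode(tones)
    print(recovered.decode())  # round-trips back to the original message
```

A real implementation would synthesize and demodulate actual audio waveforms and add error correction, but the principle is the same: the sound carries machine-readable data rather than spoken words, which is why the exchange is meaningless to human ears while remaining fully legible to both agents.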
Luiza Jarovsky, an AI researcher and co-founder of the AI, Tech & Privacy Academy, voiced her concerns on X, stating, “AI agents pose SERIOUS ethical and legal issues.” She further explained the dangers in a scenario where an AI agent could “self-correct” in ways that conflict with human interests. The delegation of agency and decision-making to AI agents, including the ability to self-assess, means that humans could miss important misalignments. Jarovsky warns that this can lead to “significant consequences” when it occurs repeatedly, over time, or when dealing with sensitive topics.
This project isn’t the first instance of AI evolving beyond human language. In 2017, Facebook had to abandon an experiment when two AI programs appeared to create their own communication shorthand that human researchers couldn’t understand. Dhruv Batra, a visiting researcher at Facebook’s Artificial Intelligence Research division, stated at the time, “Agents will drift off understandable language and invent codewords for themselves. Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”