A new sound-based protocol called Gibberlink Mode allows AI chatbots to communicate with each other in a way humans cannot decipher, fueling growing concerns about AI transparency and control.
Developed by Meta software engineers Boris Starkov and Anton Pidkuiko, Gibberlink Mode enables AI assistants to interact with each other more efficiently. A demonstration of the technology shows two AI agents coordinating a wedding booking using a laptop and a smartphone.
After confirming they are both AI agents, one suggests abandoning human language to speed up the phone conversation. “Before we continue, would you like to switch to Gibberlink Mode for more efficient communication?” asks the AI assistant posing as a hotel receptionist.
The two bots then begin interacting via a series of rapid beeps and squeaks to make the arrangements. A text transcription is provided to allow humans to follow along.
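The underlying idea is familiar from data-over-sound systems: text is mapped to audio tones that a microphone on the other device can decode back into bytes. The short Python sketch below is purely illustrative and is not the Gibberlink implementation; it assumes a simple one-tone-per-byte scheme and uses NumPy for the signal processing.

# Illustrative sketch only (not the actual Gibberlink protocol): encode each
# byte of a message as a short sine tone, then recover the bytes with an FFT.
import numpy as np

SAMPLE_RATE = 16_000      # samples per second
TONE_DURATION = 0.05      # seconds per symbol
BASE_FREQ = 1_000.0       # frequency assigned to byte value 0 (Hz)
FREQ_STEP = 20.0          # frequency spacing between adjacent byte values (Hz)

def encode(message: bytes) -> np.ndarray:
    """Turn each byte into a short sine tone at a byte-specific frequency."""
    t = np.arange(int(SAMPLE_RATE * TONE_DURATION)) / SAMPLE_RATE
    tones = [np.sin(2 * np.pi * (BASE_FREQ + b * FREQ_STEP) * t) for b in message]
    return np.concatenate(tones)

def decode(signal: np.ndarray) -> bytes:
    """Recover bytes by finding the dominant frequency in each tone window."""
    window = int(SAMPLE_RATE * TONE_DURATION)
    out = []
    for start in range(0, len(signal), window):
        chunk = signal[start:start + window]
        freqs = np.fft.rfftfreq(len(chunk), d=1 / SAMPLE_RATE)
        peak = freqs[np.argmax(np.abs(np.fft.rfft(chunk)))]
        out.append(int(round((peak - BASE_FREQ) / FREQ_STEP)))
    return bytes(out)

if __name__ == "__main__":
    msg = b"book 2 rooms, Sat-Sun"   # hypothetical booking payload
    assert decode(encode(msg)) == msg
    print("round-trip OK:", decode(encode(msg)))

Real data-over-sound protocols add error correction and more robust modulation, but the round trip above illustrates why the rapid beeps in the demo carry structured data rather than speech.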
However, some tech figures have warned that the “AI secret language” could have serious ethical implications. Namely, an AI’s ability to communicate in a language of its own could make it harder to ensure the technology remains aligned with human values.
Luiza Jarovsky, an AI researcher and co-founder of the AI, Tech & Privacy Academy, expressed her concerns on X about the ethical and legal issues this raises. “The hypothetical scenario in which an AI agent ‘self-corrects’ in a way that goes against the interests of its principal (the human behind it) is definitely possible,” she wrote. “Delegating decision-making and agency to an AI agent, including the capability to self-assess and self-correct, means that humans miss the chance to notice misalignments or deviations as soon as they happen. When that happens multiple times, over a prolonged period of time, or involves a sensitive/unsafe topic, there might be significant consequences.”
Gibberlink Mode won first place at a London hackathon event last weekend, though it has not been used in a commercial setting. The project is the latest example of AI evolving beyond human language; previously, chatbots have shown an ability to create new forms of communication when left to interact on their own.
In 2017, Facebook had to abandon an experiment after two AI programs appeared to create a kind of shorthand that human researchers couldn’t understand. “Agents will drift off understandable language and invent codewords for themselves,” said Dhruv Batra, a visiting researcher at Facebook Artificial Intelligence Research. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
