A string of protests targeting tech billionaires has surfaced in several West Coast cities through an unexpected medium: hacked crosswalk signals. Instead of signs and marches, hackers have used AI-generated voices to broadcast satirical messages impersonating billionaires at street crossings.
The phenomenon was first observed in Seattle, where at least five intersections were affected. When pedestrians pressed the button to cross the street, they were greeted with messages that sounded like they came from Amazon founder Jeff Bezos, warning against taxing the rich and joking about billionaires moving to Florida. Tech experts said the messages were likely created using AI voice-cloning technology.
“Hi, I’m Jeff Bezos. This crosswalk is sponsored by Amazon Prime with an important message. You know, please, please don’t tax the rich. Otherwise, all the other billionaires will move to Florida, too.”
Similar incidents were reported in Silicon Valley, where crosswalk signals played recordings mimicking Meta’s Mark Zuckerberg and Tesla’s Elon Musk. The companies involved, including Amazon, Meta, and Tesla, did not respond to requests for comment.
Reactions to the hacked messages were mixed. Some pedestrians, like Ava Pakzad in Seattle’s University District, found it amusing. “It’s really funny, I think,” she said. Others, like JP Smith, appreciated the anti-billionaire sentiment. “I really appreciated it. I thought it was wonderful,” Smith said.
However, not everyone was pleased. Maeceon Mace, who works at a restaurant near Amazon’s headquarters, expressed concern about the security implications. “If our cross signs can be hacked, anything can be hacked,” he said.
The Seattle Department of Transportation acknowledged that the crosswalk signals were hacked and stated they are working with the vendor to strengthen security. Experts point to weak passwords as a likely vulnerability. David Kohlbrenner from the University of Washington’s Security and Privacy Research Lab explained that the crosswalk signals can be accessed through a phone app and Bluetooth, making them potentially easy targets if default passwords are not changed.
“They’re not very secured. That’s on purpose. They’re usable by people out in the field, and so they don’t want them to have a lot of complexity with interacting with them,” Kohlbrenner said.
The incident highlights the ease with which AI can be used to create convincing voice clones. Cecilia Aragon, who researches AI-generated audio at the University of Washington, noted that with just a few audio samples, AI can learn a person’s speech patterns and generate fake messages. “All they need to do is have recorded samples of this person’s voice. So basically, anybody who’s been recorded and is a semi-public figure is vulnerable to this type of fakery,” Aragon explained.
The lack of strong regulation around voice-cloning technology compounds the concern. For now, experts advise changing default passwords as a basic security measure. As AI tools continue to improve, such satirical stunts are likely to become more sophisticated and harder to detect.