Amazon Echo’s New Policy: Cloud Processing Mandatory

Amazon’s Echo smart speaker, which runs the AI assistant Alexa, is changing how it processes user voice commands: going forward, consumers will no longer be able to opt out of having their recordings processed in the Amazon cloud. The shift has raised concerns among users and experts alike.
A common joke among users of home artificial intelligence (AI) assistants is that the algorithms are “always listening.” Consider the scenario: someone asks Alexa to play a Rolling Stones song and later sees an ad for Rolling Stones merchandise. Is this a coincidence? Probably not; it’s a form of targeted advertising that can make some people feel uneasy.
According to Umar Iqbal, an assistant professor of computer science and engineering at Washington University in St. Louis, users should be aware of certain aspects of the technology. “Consumers should be able to take advantage of the benefits generative AI provides,” Iqbal said. “The question is how to make these technologies meet user expectations, particularly in the context of privacy and security.”
Iqbal’s previous research examined how Amazon uses smart speaker interaction data to infer user interests, then uses these insights to personalize advertising.
These smart speaker-based AI assistants are activated by specific “wake words,” such as “Alexa,” which Amazon’s FAQ site describes as chosen words that signal the device to begin processing a command. Recently, some Amazon customers were alerted to a change in how the company processes voice commands given to Alexa.
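The wake-word mechanism can be illustrated with a minimal sketch. Real devices run an on-device acoustic model over raw audio; the toy version below instead matches words in a text stream, and every name in it (the wake-word set, the function) is hypothetical. The key idea is the same: everything heard before the wake word is discarded locally, and only what follows is captured for processing.

```python
# Toy sketch of wake-word gating (illustrative only; real devices use an
# on-device acoustic model, not text matching). All names are hypothetical.

WAKE_WORDS = {"alexa", "echo", "computer"}  # example wake words

def gate_commands(token_stream):
    """Return only the speech that follows a wake word.

    token_stream: an iterable of lowercase words standing in for a
    continuous transcription of ambient audio. Words heard before the
    wake word are dropped and never leave the device.
    """
    capturing = False
    command = []
    for token in token_stream:
        if token in WAKE_WORDS:
            capturing = True      # wake word heard: start capturing
            command = []          # discard anything captured earlier
            continue
        if capturing:
            command.append(token)
    return " ".join(command)

# Only the words after "alexa" would be sent on for processing.
heard = ["chatter", "alexa", "play", "paint", "it", "black"]
print(gate_commands(heard))  # → play paint it black
```

Under the policy change described below, the captured portion is what now must be sent to Amazon's cloud rather than processed locally.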
In the past, Amazon and other tech companies used voice-command data to personalize advertising; now Amazon also wants to use those commands to train Alexa+, the company’s next-generation AI assistant. Until recently, owners of three earlier Echo models could opt out of server-side processing and have voice commands handled locally. That option has been removed, and all commands are now sent to Amazon’s cloud computing centers for processing.
The announcement to Amazon customers and media stated that if a customer’s Echo device was set to “Don’t save recording,” the Voice ID feature would not function properly. As a result, those opting out of saving recordings would lose many of the personal assistant-style features that Voice ID offers.
Iqbal highlights the lack of transparency as a central issue. Users should have control over how their data is processed and ultimately deleted. “From my perspective, the lack of transparency results in a lack of trust,” he said. Being less open about the fate of recorded requests adds to existing anxieties about the technology.
Iqbal compares Alexa to chatbots such as ChatGPT: Alexa leverages large language models to process information, effectively making it a voice-based chatbot. Alexa runs on smart speaker hardware (such as the Echo) and on mobile devices, but those devices are not powerful enough to run large AI models, which is why Amazon sends recordings to the cloud for processing.
Iqbal points to alternatives that incorporate large language models while giving users more autonomy over their data. Apple products, for example, offer more safeguards for selecting which data gets sent to the cloud, along with better control over how that data is tracked.
“There are options that are more user- and privacy-friendly,” he said.
As AI assistants take on more consumer tasks, such as trip planning and scheduling, the threat to data security grows. The algorithms may exchange user information among themselves, potentially leaving that data vulnerable to manipulation. Iqbal and his McKelvey Engineering colleagues have built tools to counter these threats. One, known as “IsolateGPT,” keeps external tools isolated from one another, letting AI assistants function while also securing user data.
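The isolation principle can be sketched in a few lines. This is not IsolateGPT’s actual design (the real system mediates third-party tool execution inside an LLM-based assistant); it is a toy illustration, with hypothetical class and tool names, of the core idea that each external tool gets its own sandboxed data store and cannot read what another tool has saved.

```python
# Toy sketch of the isolation idea behind tools like IsolateGPT: each
# external tool operates in its own sandbox, so one tool cannot read
# another tool's user data. Names and structure are hypothetical.

class ToolSandbox:
    """Private data store for a single external tool."""
    def __init__(self, name):
        self.name = name
        self._data = {}           # visible only to this tool

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

class Assistant:
    """Routes every tool's data access through that tool's own sandbox."""
    def __init__(self):
        self._sandboxes = {}

    def sandbox_for(self, tool_name):
        return self._sandboxes.setdefault(tool_name, ToolSandbox(tool_name))

# A trip-planning tool stores an itinerary; a calendar tool cannot see it.
assistant = Assistant()
trips = assistant.sandbox_for("trip_planner")
trips.put("itinerary", "STL to LHR, May 3")

calendar = assistant.sandbox_for("calendar")
print(calendar.get("itinerary"))  # → None: isolated from the trip planner
```

The design choice here is that isolation is the default: a tool sees only its own sandbox unless the assistant explicitly brokers a handoff, which limits how far a compromised or malicious tool can reach.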