'I know what you said last summer'

User privacy is being trampled on, say civil liberties groups, as several Big Tech companies finally admit that they record and listen to our voice commands, conversations and even private chats

It was a given that Big Tech companies were screening our text communications and internet activity. However, the past few weeks have seen them concede that they also recorded our voices and hired humans to listen to the recordings.

Microsoft had humans listen in on Xbox voice commands and Skype calls; so did Amazon for Alexa, Apple for Siri and Google for its Assistant. Facebook has been collecting voice chats on Messenger and paying contractors to listen to and transcribe them.

In the case of Xbox, Microsoft claimed that the recording was triggered by mistake. However, the situation is far more complex, explains Ray Walsh, digital privacy expert at ProPrivacy.com.

"It is possible that some recordings were made by the Xbox when a ‘wake’ word was misheard by the AI. However, the underlying process of recording conversations and passing them to human contractors was no mistake. This was a systematically executed development process that purposefully appropriated people’s private conversations in order to help train’s Microsoft’s AI," he said.

"What it is important to remember, is that recordings affect not just the owners of voice assistants - but also third parties recorded without their knowledge when a voice assistant is triggered by a wake word."

The ‘wake word’ does not always work as intended. ProPrivacy lists plenty of instances to suggest that AI assistants often wake erroneously. There are also allegations of conversations being recorded even before the wake word is uttered, added Walsh.

This is a big blow to consumer assumptions that technologies such as voice assistants make it to market only after they have been developed, tested and improved in an isolated and secure environment.

"The reality is that voice assistants have been released as a work in progress and have been leveraging consumers’ private conversations to improve the underlying AI. Consumers have been acting as guinea pigs in order to allow tech firms to profit."

The common reason given by the Big Tech companies for this voice snooping is that it is necessary to develop and perfect their proprietary algorithms.

"While the idea of an algorithm listening to a conversation may not be troubling, the realisation that private and often sensitive information has been screened by humans, many thousands of times, is extremely concerning," said Walsh. 

"Tech firms have no right to intrude on consumers’ conversations without informing them that it is occurring. The idea that people’s conversations can be hoovered up without consent is perverse," he added.

Even though Microsoft cites the alibi of the wake word error in the case of the Xbox, the company has been a serial offender when it comes to voice snooping, Walsh said.

"This has been an ongoing part of Microsoft’s AI development process for some time, and is part of the research and development process used to iron out kinks in its AI voice recognition algorithms. This process is extremely troubling and has been happening without the consent or knowledge of consumers."

Such steps have faced significant consumer opposition, though it has been fragmented.

In the US, privacy advocacy groups such as the American Civil Liberties Union (ACLU) and the Electronic Frontier Foundation (EFF) often band together to pressure Big Tech firms to stop collecting people’s sensitive private information.

EFF previously applied pressure on Microsoft over the invasive level of telemetry data collected by the Windows 10 operating system. In January of 2019, 90 advocacy groups sent a letter to Amazon, Google, and Microsoft to request that they cease selling dangerous facial recognition technology to the government.

In 2006, the Center for Digital Democracy (CDD) and the US Public Interest Research Group (US PIRG) filed a complaint against Microsoft with the US Federal Trade Commission (FTC) over its "unfair and deceptive" methods of collecting data from its users. 

Ultimately, it’s the consumers’ choice to opt for a voice assistant service. Therefore, the onus of precaution also falls on them.

Risk is inherent in virtually all cloud-driven IoT systems, said Craig Young, principal security researcher at Tripwire. The sheer volume of data held by Big Tech companies makes them attractive targets for malicious hackers or rogue insiders. The best way to avoid incidents like this is to use technologies that can function without relying on someone else’s computer (e.g. vendor infrastructure) for each and every interaction, he suggested.

"At the present, this is somewhat daunting for the complexities of a voice assistant, but as the field and the underlying computer technology evolves, I think it will open new possibilities for products with the advanced features we’ve come to expect but without the sacrifice of privacy," he added. 
