Virtual assistant listening devices raise privacy and data protection concerns
Voice-activated digital assistants, such as Siri, Alexa, Cortana and Google Assistant, are becoming ever more popular and offer users an easy and convenient way to look up information and control Internet of Things (IoT)-connected appliances throughout the home. It has been predicted that over 55 percent of homes will acquire such devices within the next four years, and that they will one day outnumber humans worldwide. The same ‘listening’ technology is employed in mobile phones, and Google estimates that over one-fifth of searches made on phones are already made by voice.
However, while access to mobile devices is protected by some form of user authentication (a password, PIN code or biometric validation), the same is not true of most home digital assistants. Instead, these devices listen for a ‘hot word’ that activates them, at which point they begin recording voice data and streaming it to cloud-based servers, where it is interpreted by machine learning algorithms. Because such devices are potentially always on, they give tech companies, and potential hackers, another way to collect sensitive personal data.
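The ‘hot word’ mechanism described above can be illustrated with a minimal sketch. This is not any vendor’s actual implementation: the wake word, the text-based frames and the `process_audio` helper are all hypothetical, and a real device would run on-device keyword spotting over raw audio rather than over transcribed text. The point is the privacy-relevant design: audio captured before the hot word is discarded locally, while everything after it is sent onward.

```python
HOT_WORD = "alexa"  # hypothetical wake word for illustration


def process_audio(frames, hot_word=HOT_WORD):
    """Return only the frames captured after the hot word is heard.

    `frames` stands in for short chunks of captured audio; frames that
    precede the hot word are dropped on-device and never leave it.
    """
    streamed = []
    listening = False
    for frame in frames:
        if listening:
            streamed.append(frame)   # would be streamed to cloud servers
        elif hot_word in frame.lower():
            listening = True         # hot word detected: start streaming


    return streamed


captured = ["background chatter", "Alexa", "what is the weather", "in London"]
print(process_audio(captured))  # ['what is the weather', 'in London']
```

The privacy risk discussed in this article arises precisely at the `hot word in frame.lower()` check: a misrecognised word in background conversation flips the device into streaming mode without the user intending it.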
Privacy and data protection concerns
One U.S. family’s concerns were realised recently when it was reported that an Amazon Echo device recorded a private conversation and then sent it to a contact, an employee of one of the family members. What exactly happened remains unclear: Amazon’s investigation concluded that, in an unfortunate and unusual string of events, Alexa had ‘misheard’ a series of words in the background conversation and interpreted them as commands, while the family say they never heard Alexa ask for confirmation before it sent the recording.
Whatever the sequence of events, the incident highlights how such devices may be responsible for potentially serious breaches of personal data. Although Apple and Amazon state that users retain control over their data, that data can potentially be shared with third parties, and the incident shows how immature technology or a simple glitch may inadvertently result in a personal data breach with serious consequences. These are important considerations because the GDPR introduces the concepts of privacy by design and privacy by default, meaning that data controllers must adopt appropriate technical and organisational measures to demonstrate GDPR compliance.
Many businesses and service providers are showing an interest in the technology. For example, several banks, including Citibank and J.P. Morgan Chase, have announced plans to integrate their services with Apple’s Siri so that customers will be able to access their bank accounts using voice-controlled assistants. Concerns are also being raised about the arrival of virtual assistants in the workplace and the potential for private conversations to be inadvertently recorded.
The extension of voice technology to advertising is also a concern, as devices build up a profile of the user based on their activity and brand preferences. It may then be possible for advertisers to play targeted messages in the voice of a favourite personality. This is problematic if consumers cannot discern whether such messages are authentic endorsements or sponsored content.
Compliance with the GDPR
There is no explicit mention of voice-activated assistants or smart speakers in the General Data Protection Regulation (GDPR). However, Article 22 of the GDPR concerns automated individual decision-making and profiling, and states that unless explicit consent has been given “the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”.
All companies that sell home assistant devices currently address privacy concerns through their End User Licence Agreements or Privacy Statements, and there is as yet no consensus on whether operators providing these devices and services should take any additional specific measures. Regulators may in future require stronger privacy notices and specific ‘consent to recording’ announcements. For example, should smart speakers utter a simple verbal statement, and if so, what wording would be sufficient and how often should it be played? It is likely that the EU and national regulators will offer specific guidance in due course. A further challenge is that the GDPR requires parental consent before the personal details of minors, including audio recordings, can be stored and processed.
The GDPR also requires data controllers to report personal data breaches within 72 hours of becoming aware of them. This may be problematic in the context of artificial intelligence, where devices make decisions themselves based on machine learning: at which precise point along a chain of device-driven actions, taken without human confirmation, should the company have become aware that a data breach had occurred?
Artificial intelligence, virtual assistants and smart home technology are all relatively new, and they are here to stay. As these technologies evolve, so too will regulation and legislation. Certainly, the implementation of the GDPR, together with the rapid uptake of this technology and increased consumer awareness following the Facebook and Cambridge Analytica episode, is raising privacy and data protection concerns. Steps that consumers can take to protect their privacy include muting their device when not in use, deleting recordings and refraining from connecting financial accounts to their device. For tech companies, monitoring ongoing compliance with the GDPR is essential, and special attention should be paid to any future regulatory guidance.