Nearly everyone seems to be using voice technology in their daily life these days. It’s useful, convenient and pretty reliable for what we need. But, while it has its advantages for our busy lives, how trustworthy are Siri, Alexa, and Google Assistant when it comes to our privacy?
In April 2019, 41% of consumers reported privacy concerns related to their voice assistants, and more recent developments suggest those numbers may have grown, and that people's concerns are justified.
Here we look at the evolution of the voice assistant, privacy concerns surrounding voice tech, and consider how important such innovations are to the future of digital marketing.
The Evolution of the Voice Assistant
Before we dig into the heart of the issue, let's look at the evolution of the voice assistant.
Contrary to popular belief, voice technology is by no means a new concept. Around the mid-1960s, IBM introduced IBM Shoebox, the world's first voice recognition tool, an innovation which had the ability to process 16 spoken words and ten digits.
Between the late 1960s and early 1990s, voice recognition software evolved until Dragon launched the first-ever digital dictation tool available to consumers—a program with a retail price of $6,000.
Next came Clippy, Microsoft’s built-in desktop assistant. Though text-based rather than voice-activated, it made the concept of a digital assistant widely accessible to consumers across the globe before being killed off (to the delight of many) in 2001.
In 2011, a few years after the advent of the smartphone, Apple introduced Siri, a complex, conversational voice companion available to Apple device owners. In the years that followed, Amazon’s Alexa and the Google Assistant brought the voice assistant into homes across the globe, tapping into the digital ecosystem by allowing people to ask questions, make repeat purchases, play music on demand, and more.
Yes, the age of voice technology is upon us and it’s no longer a novelty for many—it’s just another convenient, functional component of everyday life. But, does the voice assistant value your privacy?
In Voice We Trust? Privacy Issues Surrounding Voice Technology
The rapid evolution of digital technology has opened countless doors for people looking for a richer consumer experience, as well as for marketers looking to provide it. With increased hyper-connectivity, however, come ever more privacy concerns.
Recent data regulations like the GDPR and the upcoming CCPA empower consumers to take control of their personal data, offering up information to third parties only when and where they feel comfortable doing so. Naturally, events like the Cambridge Analytica scandal remind us that organizations can, and will, try to breach these regulations for their own gain.
Now, with voice technology, there’s another layer of dystopian privacy invasion plaguing consumer trust—the inherent threat of your voice assistant eavesdropping on you.
As touched on by Matt Ferrell and a wave of recent media reports, a series of privacy missteps involving Apple, Google, and Amazon voice devices have highlighted privacy issues around voice assistant technology.
During the summer of 2019, Siri was shrouded in controversy as a Guardian report revealed that "a small proportion of Siri recordings are passed on to contractors working for the company around the world.”
The report states that these recordings, captured without users’ knowledge, featured confidential medical information, details of drug deals, and even sexual content. Not exactly private.
In response to the claims, Apple said that it views data privacy as a fundamental human right, that the clips were harmless, and that they were used only to enhance the technology.
Amazon and Google have also admitted to using listening-in techniques to enhance the artificial intelligence (AI) and machine learning (ML) capabilities of their voice assistants. In light of a recent investigation in Germany, Google has announced that it will pause listening to and transcribing conversations in the EU.
While it’s encouraging that these companies are starting to take ownership of their privacy missteps, one can’t help but feel it’s merely a case of their being caught with their virtual pants down.
If the whistle had not been blown on these voice-based data invasions, would companies like Apple, Amazon, and Google have implemented changes to protect their consumers? Perhaps not.
It’s doubtful that your voice assistant listens to everything you say from the moment it’s activated, but what is concerning is the lack of transparency around how consumers’ voices are recorded and reviewed.
The Choice to Opt Out
Clearly, human reviewers are needed to enhance the usability and conversational capabilities of voice-based assistants, but as with every other aspect of our digital footprint, we should be able to opt in or opt out. Consumers must be given the choice.
Speaking on this issue, Ryan Calo, faculty co-director of the University of Washington Tech Policy Lab, expressed doubts over access to what should, by rights, be private conversations:
“I worry about a trend where these systems begin to listen for more than just your affirmative command—it could listen for breaking glass or signs of distress, or a baby crying. All of a sudden, the system is listening for all kinds of things.”
As we’ve all seen, in the digital era, trust and transparency dictate the success of your digital marketing efforts. Without trust, it’s unlikely that your brand or business will thrive.
The voice tech market is growing exponentially, but if consumer paranoia surrounding voice assistants persists, the grand predictions of further growth could fall flat.