
Ethics and Algorithms—Exploring the Implications of AI


As speech technology scales new heights and becomes the foundation for new home and office applications, some observers have grown increasingly concerned about security and ethical issues. From data use and collection, to security holes, to the legal implications of having a device constantly listening to your conversations, speech technologies and AI have left experts racing to answer a host of questions—and, they hope, to find solutions.

Improvements in artificial intelligence and machine learning enable companies to gather more information—in the form of data—about their customers than ever before. As a result, vendors now tailor more personal services and experiences for consumers. This, however, raises questions about how vendors use customer data and what that use means for individuals’ privacy. Concerns about consumer data privacy aren’t unique to speech-enabled devices, but these devices—often at the heart of the discussion—bring many new considerations to the forefront.

Privacy is often at the center of many ethical issues in the digital age. The advent of the internet and the rise of Big Data, analytics, and AI and machine learning have made it possible for businesses to fairly accurately gauge a seemingly anonymous online visitor’s gender, ethnicity, native language, age, interests, and political ideology. This is true whether you’re just browsing the internet, typing a search into your Google search box, or perusing Facebook. But when voice and AI get thrown into the mix, a whole new can of ethical worms opens up.

Old and New Privacy Concerns Abound

The rising popularity of devices like Amazon’s Echo and Google Home has raised a lot of issues about privacy. From questions about data usage to sticky legal conundrums, Alexa and her ilk are forcing people to think about privacy in sometimes new and surprising ways.

For many internet users, having their online activities tracked—mostly by brands trying to sell them something—has become par for the course. Even less savvy web users have begun to understand just how much data they leave behind, thanks in part to news stories about Facebook’s role in Russian election tampering and the Cambridge Analytica scandal. But many were still surprised by the new and sometimes frightening privacy concerns that a speech-enabled device in the home brings to the fore.

Laws like the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act are already addressing how your data can be used and how you must be informed of its collection. Those regulations will ultimately govern data collected through speech-enabled and IoT devices—and GDPR fines are already being levied.

But just as the world is starting to take online privacy and data collection seriously, new worries are arising.

Virtual assistants like Amazon’s Alexa and Apple’s Siri have become quite popular. But these devices also create new security holes. The systems respond to anyone within earshot. An annoying but comical outgrowth of that always-on capability is that children can use the system to order new toys, or users can accidentally place phone calls they never intended to make. But there are more sinister side effects as well. “An individual working around a person’s home may use the voice system to help them break in,” says Deborah Dahl, principal at Conversational Technologies.

Vendors have been moving to address these kinds of problems. “Amazon has been working on individual voiceprints, so not everyone has access to a home system,” says Kevin Kelly, president of digital advertising agency Bigbuzz Marketing Group. The owner of the system will be able to enter a voiceprint when the system is set up and limit access to authorized users.
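That kind of voiceprint gating is usually implemented as speaker verification: the assistant compares an embedding of the incoming utterance against voiceprints enrolled during setup and honors the request only on a close match. Below is a minimal Python sketch of that idea; the placeholder embed() function, the 0.75 similarity threshold, and the function names are illustrative assumptions, not a description of Amazon’s actual system.

import hashlib

import numpy as np

ENROLLED_VOICEPRINTS = {}   # user name -> enrolled speaker embedding
MATCH_THRESHOLD = 0.75      # hypothetical similarity cutoff

def embed(audio: np.ndarray) -> np.ndarray:
    """Stand-in for a trained speaker-embedding model."""
    # A real assistant would run a neural encoder over the audio; here we
    # derive a deterministic unit vector from the samples so the sketch runs.
    seed = int.from_bytes(hashlib.sha256(audio.tobytes()).digest()[:4], "big")
    vec = np.random.default_rng(seed).standard_normal(128)
    return vec / np.linalg.norm(vec)

def enroll(user: str, audio: np.ndarray) -> None:
    """Record an owner's voiceprint when the system is set up."""
    ENROLLED_VOICEPRINTS[user] = embed(audio)

def is_authorized(audio: np.ndarray) -> bool:
    """Honor a spoken command only if it matches an enrolled voiceprint."""
    query = embed(audio)
    return any(float(query @ enrolled) >= MATCH_THRESHOLD
               for enrolled in ENROLLED_VOICEPRINTS.values())

In such a scheme, enroll() would run once per authorized household member, and sensitive actions (ordering products, placing calls, unlocking doors) would be allowed only when is_authorized() returns True.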

Meanwhile, improvements in voice synthesis have created other problems. In May 2018, Google unveiled Duplex, a Google Assistant capability that can place a call and make, for instance, a restaurant reservation or hair appointment. No more sitting on hold for you; just have Duplex do it. In this case, disclosure became the issue. The Google system sounds very human; the company even inserted pauses into sentences so Duplex would not sound like a robot. Does Duplex need to identify itself as a machine when it makes a call? The law is unclear on that point. “Technology raises interesting legal issues that frankly have no answer at this point,” explains Ken Dort, a partner in the Chicago office of national law firm Drinker Biddle & Reath.
