
Q&A: Deborah Dahl on Natural Language Understanding


Dr. Deborah Dahl, Principal, Conversational Technologies, recently answered the following questions about Natural Language Understanding:

Q: Tell us about the three-hour workshop you will present on April 26 at the SpeechTEK Conference in Washington, DC.

A: Natural language understanding (along with speech recognition) is one of the foundational technologies underlying the Voice-First revolution. When it works well, the user experience is natural, frictionless, and efficient. When it doesn't work well, the results are frustrating and irritating. This session brings attendees up to date on current natural language understanding technology, explaining how it works and what's going wrong when it doesn't. We cover current technologies, including traditional rule-based approaches as well as machine learning technologies such as deep learning. We also review current proprietary natural language application tools such as the Amazon Alexa Skills Kit, Google Dialogflow, and Microsoft LUIS, and discuss open-source alternatives. Attendees come away from the session with an understanding of current natural language technology, its capabilities, and future directions.

Q: How are Natural Language Processing (NLP) systems more than just the interactive Frequently Asked Questions (FAQ) lists often seen on product web pages?

A: NLP systems can be used to lead users to the answers to FAQs, just like keywords. However, they are more accurate, because they can recognize relevant FAQs that don't contain the keywords, and they can reject irrelevant FAQs that do. For example, a keyword search for "maintenance schedule" on a car's website will typically retrieve FAQs that contain either "maintenance" or "schedule," even if those FAQs don't discuss the maintenance schedule. Conversely, an NLP system will retrieve FAQs that refer to "scheduled servicing," which a keyword-based system would miss.
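The contrast above can be sketched in a few lines of code. This is a toy illustration only: the FAQ entries and the synonym table are invented, and real NLU systems use trained models rather than a hand-built synonym map, but the effect on retrieval is the same.

```python
# Toy contrast between keyword retrieval and a highly simplified
# NLU-style matcher. FAQ data and synonym table are hypothetical.

FAQS = [
    "What is the maintenance schedule for my car?",
    "How do I schedule a test drive?",    # contains "schedule" but is irrelevant
    "When is scheduled servicing due?",   # relevant, but lacks both keywords
]

def keyword_search(query, faqs):
    """Naive keyword search: return any FAQ containing any query word."""
    words = set(query.lower().split())
    return [f for f in faqs if words & set(f.lower().rstrip("?").split())]

# Stand-in for NLU: normalize words to shared concepts before matching,
# and require that all query concepts appear in the FAQ.
SYNONYMS = {"maintenance": "service", "servicing": "service",
            "scheduled": "schedule"}

def concepts(text):
    return {SYNONYMS.get(w, w) for w in text.lower().rstrip("?").split()}

def nlu_search(query, faqs):
    q = concepts(query)
    return [f for f in faqs if q <= concepts(f)]

print(keyword_search("maintenance schedule", FAQS))  # retrieves the irrelevant test-drive FAQ
print(nlu_search("maintenance schedule", FAQS))      # only the two relevant FAQs
```

The keyword search both over-retrieves (the test-drive FAQ) and under-retrieves (the "scheduled servicing" FAQ); the concept-level matcher gets both cases right.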

Q: It may take hours and hours to write the grammar rules, and it takes hours and hours to make up examples for training a learning system. How is the latter saving effort?

A: We shouldn't think of the advantages of machine learning over writing grammars in terms of saving effort. If a machine-learned system is trained on real data, collected from actual customers, it will be better able to process new inputs from customers than a system based on rules written by developers. It's just very difficult for developers to write rules that anticipate all the things that people might ask. Users will always surprise you! As new data comes in, it's also much easier to add it to a machine-learning-based system than to update a rule-based system. A machine-learning-based system can simply be retrained, but changing a rule-based system can mean rewriting many rules. Rewriting rules, in turn, can introduce errors, so updates require extensive testing.
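The "just retrain" point can be made concrete with a minimal sketch. The intents and example utterances below are hypothetical, and this bag-of-words classifier is far simpler than a production NLU model, but it shows the workflow: when a surprising new user phrasing arrives, you append it to the training data and retrain, with no rules to edit.

```python
from collections import Counter

def train(examples):
    """examples: list of (text, intent) pairs. Returns per-intent word counts."""
    model = {}
    for text, intent in examples:
        model.setdefault(intent, Counter()).update(text.lower().split())
    return model

def classify(model, text):
    """Score each intent by word overlap with its training counts."""
    words = text.lower().split()
    return max(model, key=lambda i: sum(model[i][w] for w in words))

# Hypothetical training data for a customer-service assistant.
examples = [
    ("what time do you open", "hours"),
    ("when do you close", "hours"),
    ("how much does shipping cost", "shipping"),
]

# A real user asks about "delivery charges" -- phrasing nobody anticipated.
# Updating the system is just: add the example, retrain.
examples.append(("what are the delivery charges", "shipping"))
model = train(examples)

print(classify(model, "delivery charges to my address"))  # -> "shipping"
```

In a rule-based system, covering "delivery charges" might mean editing a grammar and re-testing every rule it touches; here the only change is one more training example.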

Q: Why is NLP such a hot topic these days? Will there be an NLP winter like other AI winters? 

A: NLP is showing that it can solve real problems that couldn't be solved before, and it can now do so in a much more cost-effective way. I don't think we could really say this until the last few years. As NLP expands its capabilities, and the cost continues to decrease, more and more kinds of applications will become possible. For example, while there are a limited number of educational applications now, lower costs and increased abilities could greatly expand this area. I'm optimistic that there won't be a full-scale NLP winter. However, there is a danger that we'll start to think that there doesn't need to be more investment in NLP because it already works well for many applications. NLP still has a lot of room for improvement, and we don't want to lose sight of that.

Q: How can developers keep bias out of their NLP systems?

A: Developers need to keep focused on the data and on the users. It's very hard for developers to realize that they aren't necessarily representative of the end users. End users will have different goals, perspectives and knowledge from developers, so they will ask different questions, in different ways, than those anticipated by developers. The best advice for avoiding bias is to collect real data from real users and trust it.

Q: Is NLP a solved problem? If not, what's left to do?

A: NLP is by no means a solved problem! It's true that current NLP systems like Amazon Alexa, Google Assistant, and Apple Siri are far ahead of NLP systems from only a few years ago, but there are still many capabilities that they don't have yet. The most obvious limitation is that they can't carry on a conversation for more than a couple of exchanges, but there are many more subtle abilities that a sophisticated NLP system would need. For example, current systems can't explore alternatives or talk about hypothetical situations. They're also very poor at processing pronouns and other words that depend on context for their meanings.

Q: What are the three takeaways from this workshop?

A: Attendees will:

  • Understand the main technologies currently used for NLP
  • Learn about the capabilities and limitations of today's technologies
  • Learn about what's involved in developing an NLP system with current platforms

Register for the SpeechTEK Conference and Dr. Dahl’s workshop. There are still openings for SpeechTEK University workshops and presentations. Submit proposals here by October 11, 2002.

Related Articles

The Internet of Things Needs a Lingua Franca

With the proliferation of smart speakers, voice interaction with home devices is becoming increasingly common, and on the horizon are voice interactions with an ever greater number of smart environments—cities, offices, classrooms, factories, and healthcare settings. Developers will need to be on the same page

Q&A: Dr. Nava Shaked on Evaluation, Testing Methodology & Best Practices for Speech-Based Interaction Systems

Get a sneak peek into Dr. Nava Shaked's SpeechTEK workshop in this Q&A. Learn everything you need to know about evaluation, testing, and best practices for speech-based interactions.

Q&A: David Attwater on the Ins and Outs of Conversation Design

Everything you need to know about conversational design. Jim Larson talked to David Attwater, Senior Scientist, Enterprise Integration Group about his upcoming workshop, AI, and how human is too human?

Five Tips for Managing Voice Data in the GDPR Era

As the UK's Information Commissioner's Office orders the nation's tax authority to delete 5 million voice recordings under GDPR, we offer five tips for staying out of trouble with privacy regulators.

Speech Technology Magazine's People's Choice Awards 2019

The results of Speech Technology Magazine's People's Choice Awards voting are finally here. The people have spoken.

Q&A: Dahl and Normandin Explore Conversational Technology Platforms

At the 2019 SpeechTEK conference Yves Normandin of Nu Echo, Inc. and Deborah Dahl of Conversational Technologies, will present "A Comprehensive Guide to Technologies for Conversational Systems." Conference chair Jim Larson talked to Normandin and Dahl to get a sneak peek of the session, and learn about conversational system technologies.

Q&A: Bruce Balentine on Discoverability and VUI

At the 2019 SpeechTEK Conference (April 29-May 1), Bruce Balentine, design consultant specializing in speech, audio, and multimodal user interfaces, will be presenting "Discoverability in Spoken User Interfaces." Conference Chair Jim Larson interviewed Balentine to get a sneak peek at the session and talk about discoverability.