Video: 6 Ways to Improve VAs via Better Language Understanding
Learn more about conversational systems at the next SpeechTEK conference.
Watch Deborah Dahl's complete keynote from SpeechTEK 2019, Just Like Talking to a Person: How to Get There From Here, in the SpeechTEK Video Portal.
Interested in attending a SpeechTEK event? Visit SpeechTEK.com to sign up for conference alerts, discounts, and more.
Read the complete transcript of this clip:
Deborah Dahl: Getting better language understanding, in most cases, is an incremental task. Being able to process more complex language, like alternatives--"Are there any Mexican restaurants near here, besides Plaza Azteca? How can I get to Philadelphia without going on the Schuylkill?" To me those are just a matter of the developers saying, "Okay, we want to handle alternatives."
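The "besides Plaza Azteca" case above amounts to turning the alternative phrase into an exclusion filter on the search. A minimal sketch, not any vendor's actual API (the function and data here are hypothetical):

```python
# Illustrative sketch: handling an "alternatives" request like
# "Are there any Mexican restaurants near here, besides Plaza Azteca?"
# by parsing the "besides" phrase into an exclusion filter.

def find_restaurants(cuisine, restaurants, exclude=()):
    """Return restaurants matching the cuisine, minus excluded names."""
    excluded = {name.lower() for name in exclude}
    return [r for r in restaurants
            if r["cuisine"] == cuisine and r["name"].lower() not in excluded]

nearby = [
    {"name": "Plaza Azteca", "cuisine": "mexican"},
    {"name": "El Limon", "cuisine": "mexican"},
    {"name": "Han Dynasty", "cuisine": "chinese"},
]

# "besides Plaza Azteca" parses into the exclude list
print(find_restaurants("mexican", nearby, exclude=["Plaza Azteca"]))
# → [{'name': 'El Limon', 'cuisine': 'mexican'}]
```

The understanding work is in recognizing the "besides X" construction; once it is parsed, the back-end change is just a filter.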
The same thing goes for possibilities. "Is there a Mexican restaurant near here that's open now?" That's a question about a possibility. Multi-intent is a really, really useful feature, and it refers to cases where the user asks two things in the same question.
People are working hard on that because it is very natural for you to say things that are really two tasks. "Is there a Mexican restaurant near here? If so, make a reservation for me at one o'clock." There's two jobs that you're doing with your artificial assistant.
One that I think is really, really hard is general inference. So, all of the Siri, Alexa, and Google family will answer questions like, "Do I need an umbrella today?" They take that as a weather request. That sounds really cool. That sounds like it's so smart, but actually those have been hard-coded in the back end.
The way that you know they're not doing a general inferencing job is to ask them something a little bit more off the wall. So, if you ask them something like, "Can I wear flip-flops today?" you'd have to know what the weather's gonna be like, whether it's gonna be warm enough for flip-flops. Well, you also have to know what flip-flops are, that they're kind of open and your foot is out and exposed. So, if it's cold you don't want to wear flip-flops. Or if it's raining. You also need to know if someone has a lot of business meetings. You need to wear normal shoes to a business meeting, I think. Or maybe businesses are getting more casual, but maybe not that casual.
So, to answer that kind of a question, you need to do some kind of general reasoning about really arbitrary knowledge. "What is appropriate business wear?" I tried these with Siri and Alexa. Siri--I'm not sure why it did this--actually did give me a weather report. Alexa just said, "I don't know." So, what you would like is some general ability to reason about knowledge, which is very far away from being executable. And the last one that needs a lot of work, I think, is inferencing about time. So, if you said something like, "If I paid my bill yesterday, will my payment have arrived before the due date?" that would be very difficult with today's technology.
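The date arithmetic behind the bill-payment question is trivial; what's hard for today's assistants is the reasoning that maps the question onto it. A minimal sketch, where the function name and the three-day processing time are assumed example values:

```python
# Illustrative sketch of the temporal inference behind "If I paid my bill
# yesterday, will my payment have arrived before the due date?"
# The processing-time figure is an assumed example value.

from datetime import date, timedelta

def payment_arrives_on_time(paid_on, due_date, processing_days=3):
    """True if a payment made on paid_on clears on or before due_date."""
    arrival = paid_on + timedelta(days=processing_days)
    return arrival <= due_date

yesterday = date(2019, 4, 29)
due = date(2019, 5, 3)
print(payment_arrives_on_time(yesterday, due))  # → True
```

An assistant would have to resolve "yesterday" to a date, look up the due date and typical processing time, and recognize that the question calls for this comparison at all; that mapping is the unsolved part.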