Speech Technology Magazine


Guidelines for Designing Chatbots (Video)

Ulster University Professor Michael McTear identifies key resources for chatbot design best practices as well as tools and frameworks for building voice-user interfaces in this clip from SpeechTEK 2018.
By The Editors of Speech Technology - Posted Sep 7, 2018

Learn more about customer self-service at the next SpeechTEK conference.

Read the complete transcript of this clip:

Michael McTear: If you look at blogs on the internet, you will quite often find short articles with titles like, “11 Rules To Follow When Designing a Chatbot,” and it's a whole mish-mash of different things. "Make life easier." Well, it's very hard to know exactly how to implement that. "Have the chatbot introduce itself," and so on. Some of them are good advice, but it's not really all that helpful to just have maybe 10, 11 different little tips. You really need something more substantial than that, and my message is, really, why not learn from the past?

There's been lots and lots of work, and we've heard about it for 20 years in speech tech, about how to build these interfaces. People have written books about it. Why not actually look at some of these? There's one by Michael Cohen and colleagues, who then moved to Nuance and subsequently to Google. It's quite an old book now, but it's still highly recommended. There's another book by James Lewis, who works at IBM. It's particularly good because it has a lot of experimental data showing how this sort of prompt is better than that sort of prompt, how you do repairs, and all sorts of things. There's the most recent book on this by Cathy Pearl, and then there's a book on bots in particular by Amir Shevat.

There are lots of different things that, unfortunately, I think, people aren't really looking at.

At the moment, there aren't standards for intelligent agents. Debbie Dahl has published a book recently on W3C standards, and she runs a W3C Community Group website as well. But the problem on the development side is that there's a whole plethora of tools and frameworks now that enable you to build these things, sometimes without any coding at all, although when it gets more advanced, you do need that. There are code-based solutions, and there are NLU tools.
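The "without any coding at all" tools McTear mentions rest on the platform doing basic intent matching behind the scenes. As a rough illustration only, here is a minimal keyword-overlap sketch in Python; the intent names and training phrases are hypothetical, and real NLU tools use trained statistical models rather than keyword overlap:

```python
# Toy sketch of the intent matching that no-code chatbot tools automate.
# Intent names and keyword lists below are invented for illustration.

def classify(utterance, intents):
    """Return the intent whose keyword set best overlaps the utterance."""
    words = set(utterance.lower().split())
    best_intent, best_score = "fallback", 0
    for name, keywords in intents.items():
        score = len(words & set(keywords))
        if score > best_score:
            best_intent, best_score = name, score
    return best_intent

intents = {
    "greet": ["hello", "hi", "hey"],
    "check_order": ["order", "status", "package", "delivery"],
}

print(classify("hi there", intents))             # greet
print(classify("where is my package", intents))  # check_order
print(classify("tell me a joke", intents))       # fallback
```

The gap between this sketch and a production voice interface (speech recognition errors, confirmation and repair strategies, barge-in) is exactly the gap between "minutes" and the weeks of proper design McTear describes later.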

But one of the basic problems is that many developers nowadays are rushing into this without really having a detailed technical background in what is required to build a voice user interface, which is very, very different from building a visual interface. The requirements and the way these things work are totally different. If anybody was at David Attwater's tutorial yesterday, he went through a whole list of the ways in which a voice user interface is different from a graphical user interface.

Just a few things. You quite often see this sort of claim about how to build a conversational agent in minutes. I did show you how to do that, but with the caveat that that wouldn't be the end of it. You see this all the time in those videos on YouTube, but really, the more realistic figure is something like 60 days if you're going to do the whole thing properly.
