
W3C Looks to Implement Standards for Personal Assistant Interoperability

The World Wide Web Consortium (W3C) Voice Interaction Community Group, which promotes potential standards for conversational interfaces, has just published a new document describing an interoperable architecture for intelligent personal assistants (IPAs).

There are many areas where voice interaction standards are valuable, but the current focus in the community group is interoperability, whereby different personal assistants, including those developed by different organizations, can work together, exchanging information as appropriate to help users accomplish their goals.

The move comes as the W3C has seen an industry plagued by silos: thousands of IPAs that each work on only one platform. As a result, companies have to develop multiple versions of their customer service IPAs, and users might need multiple platforms to access all of the IPAs they need.

The barrier, according to the W3C, is the lack of interoperability. With interoperability standards in place, users, developers, and companies will all benefit. Users will be able to talk to one assistant, find another assistant that specializes in a different topic, get some information, and then come back. They won't need to worry about which platform hosts which assistant. In addition, they could accomplish complex tasks that require the cooperation of multiple assistants, such as planning a trip.

Interoperability standards will also benefit developers, who could code to a single set of standards that work across all platforms.

Finally, standards benefit companies, which could develop and maintain one enterprise assistant that works on all platforms with a single persona that represents their brands.

As a first step toward these goals, the W3C group has published version 1.2 of an architecture report. The report defines the components needed for interoperable conversational processing and how they relate to each other. The architecture consists of three components: the user-facing client, the dialogue components, and the back-end data/API components (a rough sketch of these follows the list below). These are very similar to the components of a traditional conversational AI system, but to address interoperability, two additional problems must also be solved:

  1. How does an assistant find other IPAs that can address users' stated goals? For example, users might ask their smart speaker IPA whether their bank has a customer service IPA and then be routed directly to the bank's assistant. Finding another IPA is handled by the Provider Selection Service component, which is part of the data/API components.
  2. How does the first IPA invoke the bank's IPA and provide it with any user information and context it needs, while ensuring that any private or secure information is protected? The interfaces that allow IPAs to communicate with each other will be addressed in the Community Group's next publication, which will define standard communication messages that can convey the needed information (a hypothetical sketch of such messages appears below).
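
As a rough illustration only, the report's three component layers might be modeled as follows. This sketch is not part of the W3C report; every interface and field name in it is an assumption.

```typescript
// A minimal sketch of the three component layers named in the report.
// All names here are illustrative assumptions, not part of the W3C draft.

// User-facing client: captures the user's request and renders the reply.
interface Client {
  captureUtterance(): Promise<string>;       // e.g., a speech-to-text result
  present(reply: string): Promise<void>;     // e.g., text-to-speech output
}

// Dialogue components: interpret the utterance and choose the next action.
interface DialogueManager {
  interpret(utterance: string, history: string[]): Promise<DialogueAct>;
}

// Back-end data/API components, which include the Provider Selection
// Service used to locate other IPAs (see item 1 above).
interface DataBackend {
  execute(act: DialogueAct): Promise<Record<string, unknown>>;
}

interface DialogueAct {
  intent: string;                            // e.g., "find_bank_ipa"
  slots: Record<string, string>;
}
```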

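To make the two interoperability steps concrete, here is a minimal, hypothetical sketch of what provider-selection and hand-off messages might look like. The group has not yet standardized these interfaces, so the endpoint, message shapes, and field names are all assumptions.

```typescript
// Hypothetical message shapes for the two steps above. The group's next
// publication will define the real standard messages; these are guesses.

// Step 1: query the Provider Selection Service for an IPA that can
// handle the user's stated goal.
interface ProviderQuery {
  goal: string;                   // e.g., "bank customer service"
  locale: string;                 // e.g., "en-US"
}

interface ProviderResult {
  providerId: string;             // identifier for the matched IPA
  endpoint: string;               // where invocation messages are sent
}

// Step 2: invoke the selected IPA, passing only context the user has
// agreed to share so that private information stays protected.
interface InvocationMessage {
  providerId: string;
  utterance: string;                       // the user's current request
  sharedContext: Record<string, string>;   // consented context only
}

// Route a request: select a provider, then hand the utterance to it.
async function routeToProvider(
  selectionServiceUrl: string,
  query: ProviderQuery,
  utterance: string
): Promise<void> {
  const res = await fetch(`${selectionServiceUrl}/select`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(query),
  });
  const provider = (await res.json()) as ProviderResult;
  const message: InvocationMessage = {
    providerId: provider.providerId,
    utterance,
    sharedContext: {},            // populate only with consented fields
  };
  await fetch(provider.endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(message),
  });
}
```
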
This work on interfaces is just beginning, and the group encourages the conversational AI community to get involved in the next steps. There are many levels of commitment: reviewing the architecture draft, joining the group (joining is free and does not require W3C membership), and implementing proposals.
