
W3C Solicits Comments on Multimodal Architecture

The Multimodal Interaction Working Group of the World Wide Web Consortium (W3C) has published a last-call draft of Multimodal Architecture and Interfaces. The document describes a loosely coupled architecture for multimodal user interfaces that allows for both co-resident and distributed implementations, focusing on the role of markup and scripting and on the use of well-defined interfaces between its constituents.

Deborah Dahl, principal at speech and language consulting firm Conversational Technologies and chair of the Multimodal Interaction Working Group, says the specification is "exciting" because it "takes a big step toward making multimodal components interoperable by specifying a common means of communication between different modalities, like speech recognizers, displays, biometrics, handwriting recognition, and so on."
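To give a sense of what that common means of communication looks like, the draft architecture defines XML life-cycle events (such as a StartRequest) that an interaction manager exchanges with modality components. The sketch below, in Python, builds one such event; the namespace and attribute names follow the working draft as best understood here and should be treated as illustrative rather than definitive.

    # Illustrative sketch of an MMI life-cycle event, not taken from the article.
    # Namespace and field names (Context, Source, Target, RequestID) are assumptions
    # based on the W3C Multimodal Architecture draft; consult the spec for the
    # authoritative format.
    import xml.etree.ElementTree as ET

    MMI_NS = "http://www.w3.org/2008/04/mmi-arch"  # namespace assumed from the draft
    ET.register_namespace("mmi", MMI_NS)

    def start_request(context, source, target, request_id):
        """Build a StartRequest event asking a modality component
        (for example, a speech recognizer) to begin processing."""
        root = ET.Element("{%s}mmi" % MMI_NS, {"version": "1.0"})
        ET.SubElement(root, "{%s}StartRequest" % MMI_NS, {
            "Context": context,        # shared context identifier for the interaction
            "Source": source,          # address of the interaction manager
            "Target": target,          # address of the modality component
            "RequestID": request_id,   # correlates the eventual StartResponse
        })
        return ET.tostring(root, encoding="unicode")

    print(start_request("ctx-1", "http://example.com/im", "http://example.com/asr", "req-42"))

Because every modality component handles the same small set of events, a recognizer, a display, or a handwriting engine can be swapped in or moved to another host without changing the interaction manager, which is the interoperability Dahl describes.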

Another nice feature, Dahl says, is that "it very naturally supports distributed applications, where different types of modalities are processed in different places, whether on one or more local devices, in the cloud, or on other servers."

This standard, once approved, "will also provide a good basis for a coming style of interaction called 'nomadic interfaces,' where the user interface can move from device to device as the user moves around," Dahl says.

The technical aspects of the specification are now largely finalized, and the W3C is soliciting comments. The last-call period ends February 15. Comments should be sent to www-multimodal@w3.org.
