
MRCP Enables New Speech Applications

Have you ever wished you could change your VoiceXML platform to use a speech synthesizer or speech recognizer from a different vendor?  Have you ever wanted to move your speech synthesizer or speech recognizer to a different server?  The Internet Engineering Task Force (IETF) is proposing a new standard that will provide this flexibility.

Media Resource Control Protocol Version 2 (MRCPv2) is a network protocol that provides a vendor-independent interface between speech media servers and speech application platforms. MRCPv2, produced by the IETF's Speech Services Control (SpeechSC) working group, is based on an earlier version developed jointly by Cisco, Nuance, and SpeechWorks (now ScanSoft).
    
The MRCPv2 protocol controls media processing resources over a network. It relies on a session management protocol, such as the Session Initiation Protocol (SIP), to establish a separate MRCPv2 control session between the client and the media server. MRCPv2 defines the following types of media processing resources (a request sketch follows the list):

  • Basic synthesizer — A speech synthesizer resource with very limited capabilities that plays back concatenated audio clips.
  • Speech synthesizer — A full-capability speech synthesizer that produces human-like speech from Speech Synthesis Markup Language (SSML) markup.
  • Recorder — A resource with endpointing capabilities for detecting the beginning and end of speech and saving the audio to a URI.
  • DTMF recognizer — A limited DTMF-only recognizer that can match telephone touchtones to a grammar and perform semantic interpretation based on semantic tags in the grammar.
  • Speech recognizer — A full speech recognizer that converts speech to text and interprets the results based on semantic tags in the grammar.
  • Speaker verification — A resource that authenticates a speaker by matching the voice to one or more saved voiceprints.
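
To make the protocol concrete, the sketch below shows roughly what a client-issued SPEAK request to a speech synthesizer resource could look like, written in Python. It is an illustration, not part of the specification: the server address, port number, channel identifier and request ID are hypothetical placeholders that would normally be obtained through the SIP/SDP negotiation that sets up the control session.

    import socket

    def build_speak_request(request_id: int, channel_id: str, ssml: str) -> bytes:
        # Assemble a minimal MRCPv2 SPEAK request: start-line, headers,
        # blank line, then the SSML body.
        body = ssml.encode("utf-8")
        headers = (
            f"Channel-Identifier: {channel_id}\r\n"
            f"Content-Type: application/ssml+xml\r\n"
            f"Content-Length: {len(body)}\r\n"
            "\r\n"
        ).encode("ascii")
        # The start-line carries the total message length, which includes the
        # start-line itself, so recompute until the value stabilizes.
        length = 0
        while True:
            start_line = f"MRCP/2.0 {length} SPEAK {request_id}\r\n".encode("ascii")
            total = len(start_line) + len(headers) + len(body)
            if total == length:
                break
            length = total
        return start_line + headers + body

    # Placeholder values: the media server address, port and channel identifier
    # would normally come from the SIP/SDP negotiation, not be hard-coded.
    ssml = ('<?xml version="1.0"?>'
            '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
            'xml:lang="en-US">Welcome to the MRCP demo.</speak>')
    request = build_speak_request(10001, "32AECB23433801@speechsynth", ssml)

    with socket.create_connection(("mediaserver.example.com", 32416)) as sock:
        sock.sendall(request)
        print(sock.recv(4096).decode(errors="replace"))  # e.g., an IN-PROGRESS response

The media server answers with an MRCP response (and later a SPEAK-COMPLETE event) on the same control connection, while the synthesized audio itself flows over the separately negotiated RTP media stream.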

MRCP is designed to support two important capabilities to make speech platforms more flexible:

  1. Service provider independence — Developers can switch between service providers.  For example, a developer might switch from a public-domain speaker recognition engine to a higher-quality (and more expensive) proprietary speaker recognition engine.
  2. Service location independence — Developers can move services among servers.  For example, if a server becomes saturated, another server can be installed and some of the services from the first server can be moved onto the second (see the configuration sketch below).
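
Both kinds of independence show up in practice as configuration changes rather than code changes. As a rough, hypothetical illustration (the resource-type names and URIs below are placeholders, not mandated by MRCP), a platform might keep a table mapping each resource type to the SIP address of the media server that currently provides it:

    # Hypothetical client-side configuration: each speech resource type maps to
    # the SIP URI of the media server that currently hosts it.  Switching
    # vendors, or moving a resource to a less loaded server, means editing this
    # table (or the config file behind it); the application code that issues
    # MRCP requests does not change.
    MEDIA_SERVERS = {
        "speechsynth": "sip:mresources@vendor-a.example.com",  # vendor A's TTS
        "speechrecog": "sip:mresources@vendor-b.example.com",  # vendor B's ASR
        "speakverify": "sip:mresources@verify.example.com",
    }

    def control_channel_uri(resource_type: str) -> str:
        # Return the SIP URI the platform should INVITE when it needs an
        # MRCPv2 control channel for the given resource type.
        return MEDIA_SERVERS[resource_type]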

Developers can leverage the benefits of MRCP to provide this flexibility to any application or platform that uses speech recognition, speech synthesis, and speaker authentication.  For example:

  • VoiceXML — MRCP provides media services to the VoiceXML platform. Developers continue to use speech application development methodology and tool kits to create VoiceXML 2.0/2.1 applications.  The VoiceXML platform uses MRCP to provide speech resource services.
  • SALT — Speech Application Language Tags (SALT) could be implemented using MRCP. SALT developers would be free to choose ASR and TTS services from any technology vendor.
  • W3C's aural CSS — The W3C's specification for aural Cascading Style Sheets (CSS) supports audio rendering of (X)HTML and XML pages by using an aural style sheet.  This would enable sight-impaired individuals to browse and interact with (X)HTML and XML pages on browsers supporting the W3C aural CSS style sheet.  Currently, neither Microsoft Internet Explorer™ nor Netscape Navigator™ supports aural CSS, but it is possible to build aural CSS plugins for Internet Explorer or Navigator that use MRCP to provide speech recognition and speech synthesis services.
  • Animated visual agents — Animated icons change their appearance, move around the screen and talk to users.  Popular visual agents from Microsoft include Peedy (a parrot), Merlin (a wizard), Robby (a robot) and Genie.  Animated visual agents will be used in entertainment applications on PCs, mobile devices, kiosks and Internet-enabled televisions.  Users will interact with a variety of artificial newscasters, program hosts, artificial characters and cartoons.  Generic animated agent software could use MRCP to provide the ASR and TTS services, while artists create the appearance and animation and developers create the dialogs and applications.
  • Small mobile devices — Mobile devices are becoming so small that QWERTY keypads are impractical. Users will use a stylus to point and write and a microphone to speak to these devices.  MRCP can be used to support the speech resources on remote servers accessed by mobile devices.

MRCP can support remote media services for the applications listed above, and for others that have not been invented yet. MRCP can do for the entire speech industry what VoiceXML did for the telephony industry: provide a standard platform for writing applications, one that lets media resources be accessed remotely and lets developers choose the technology vendors that best support their applications within their budgets.



Dr. James A. Larson is manager, advanced human input/output, Intel, and author of the home study course VoiceXMLGuide, http://www.vxmlguide.com.  He can be reached at jim@larson-tech.com.
