Loquendo TTS Powers W3C Standardization Activities
TURIN, Italy - Loquendo, as part of its ongoing commitment to W3C standardization activities, has donated its speech technologies, including the Loquendo TTS text-to-speech engine, the Loquendo ASR automatic speech recognition engine, and the VoxNauta(tm) speech platform, to the World Wide Web Consortium (W3C) for creating speech-enabled demos, especially in the field of Multimodal Interaction.
The W3C Multimodal Interaction Activity seeks to extend the Web to allow users to select the most appropriate mode of interaction for their current needs, while enabling developers to provide a user interface for whichever modes the user selects. Depending on the device, users will be able to provide input via speech, handwriting, and keystrokes, with output presented through displays, pre-recorded and synthetic speech, audio, and tactile mechanisms.
The first Multimodal Interaction Group demo powered by Loquendo technology was presented during the W3C Technical Plenary in Boston. The demo illustrated multimodal interaction and styling, concentrating on the visual and aural rendering of news expressed in RSS (Rich Site Summary). RSS is an XML document format used to share news headlines and other types of Web content. The format has been adopted by news syndication services, Web logs, and other online information services. Besides being used to create news summary Web pages, RSS can also be fed into stand-alone news browsers or headline viewers, PDAs, cell phones, email tickers, and even voice updates.
In this initial demo, the RSS data are rendered into an XHTML document for a visual browser, and concurrently into an SSML document which is fed into the Loquendo TTS speech synthesis engine for reading the news out loud. Future demos will also exploit Loquendo's ASR and VoiceXML technologies.
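The RSS-to-SSML rendering step described above can be sketched in a few lines. The sample feed, function name, and SSML markup below are illustrative assumptions, not taken from the actual demo:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed (hypothetical sample data).
RSS_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example News</title>
    <item><title>Headline one</title></item>
    <item><title>Headline two</title></item>
  </channel>
</rss>"""

def rss_to_ssml(rss_xml: str) -> str:
    """Render the titles of RSS items into an SSML document
    that a TTS engine can read aloud."""
    root = ET.fromstring(rss_xml)
    speak = ET.Element("speak", {
        "version": "1.0",
        "xmlns": "http://www.w3.org/2001/10/synthesis",
        "xml:lang": "en-US",
    })
    for item in root.iter("item"):
        # One paragraph per headline, followed by a short pause.
        p = ET.SubElement(speak, "p")
        p.text = item.findtext("title", default="")
        ET.SubElement(speak, "break", {"time": "500ms"})
    return ET.tostring(speak, encoding="unicode")

ssml = rss_to_ssml(RSS_FEED)
print(ssml)
```

In the same spirit, a second transform would render the feed into XHTML for the visual browser; the point is that one RSS source drives both the visual and the aural presentation concurrently.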