IBM and Opera Software Team to Develop Multimodal Browser

SAN FRANCISCO, CA - IBM and Opera announced that they will jointly develop a multimodal browser based on the XHTML+Voice (X+V) specification. The beta version of the browser, available this fall, will allow access to Web and voice information from a single mobile device.

This project builds upon IBM's and Opera's ongoing relationship. In 2001, IBM, Motorola, and Opera submitted the multimodal standard X+V to the standards body W3C. The markup language leverages existing standards, so developers can apply their current skills and resources to extend existing applications rather than building new ones from the ground up.

Multimodal technology allows the interchangeable use of multiple forms of input and output -- such as voice commands, keypads, or a stylus -- in the same interaction. As computing moves away from keyboard-reliant PCs toward devices such as handheld computers and cellular phones, multimodal technology becomes increasingly important, allowing end users to interact with technology in the way best suited to the situation.

"IBM and Opera Software are collaborating to develop speech technology by providing the tools necessary for multimodal applications," says Jon S. von Tetzchner, CEO, Opera Software ASA. "We look forward to seeing how this multimodal browser will help shape the evolution of mobile and wireless computing as we move into this next phase of e-business."

"As we move further into the pervasive computing model, where our phones, handhelds and even cars become our gateways to information access, the ability to interact with technology in the most natural and convenient way possible will be key," said Rod Adkins, General Manager, IBM Pervasive Computing Division. "Together with Opera, one of the leading providers of browser technology, IBM aims to build an interface that allows technology to adapt to end users, rather than forcing them to adapt to technology."
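As a rough illustration of how X+V reuses existing standards, the hypothetical fragment below embeds a VoiceXML form inside an XHTML page and wires it to a visual text field with XML Events. The element names follow the X+V approach of combining the two vocabularies, but the ids, prompt text, and grammar file name are invented for this sketch.

```xml
<!-- Hypothetical X+V fragment: an XHTML text field that can also be
     filled by voice. The VoiceXML form sits in the XHTML head and is
     attached to the field via XML Events. The ids and the grammar
     source (city.grxml) are invented for illustration. -->
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:vxml="http://www.w3.org/2001/vxml"
      xmlns:ev="http://www.w3.org/2001/xml-events">
  <head>
    <title>City lookup</title>
    <vxml:form id="voice_city">
      <vxml:field name="city">
        <vxml:prompt>Which city?</vxml:prompt>
        <vxml:grammar src="city.grxml"/>
        <vxml:filled>
          <!-- Copy the recognized value into the visual field -->
          <vxml:assign name="document.getElementById('city').value"
                       expr="city"/>
        </vxml:filled>
      </vxml:field>
    </vxml:form>
  </head>
  <body>
    <p>City: <input type="text" id="city"
                    ev:event="focus" ev:handler="#voice_city"/></p>
  </body>
</html>
```

The point of the design is visible here: the visual part is ordinary XHTML and the voice part is ordinary VoiceXML, so a developer who knows either standard can extend an existing page rather than starting over.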
Also in his keynote speech at the Vox conference today, Adkins urged the voice industry to introduce tools that make voice and multimodal development easier. "We don't make it easy to develop for voice," he said. "VoiceXML was a good start in standardizing the programming language and tags. Now, let's take the next step and do the same for how we build the user interface and dialogues." Adkins added that tools such as reusable dialog components -- chunks of code that can be used to build applications for different industries -- would greatly ease voice and multimodal development. "A developer should be able to use the same block of code to build a credit card application for retail as he'd use for a travel application," he said.
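Adkins's reusable-dialog idea maps naturally onto VoiceXML's subdialog mechanism. The hypothetical fragment below sketches how a shared credit-card dialog could be invoked from a retail checkout; the file name creditcard.vxml and the returned variable names are assumptions for this sketch, not part of any shipped component library.

```xml
<!-- Hypothetical VoiceXML fragment: a retail checkout form reusing a
     shared credit-card dialog. The same subdialog could equally be
     invoked from a travel-booking application. creditcard.vxml and
     the card.last4 return variable are invented for illustration. -->
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id="checkout">
    <subdialog name="card" src="creditcard.vxml">
      <filled>
        <prompt>
          Charging your card ending in <value expr="card.last4"/>.
        </prompt>
      </filled>
    </subdialog>
  </form>
</vxml>
```

Because the card-collection logic lives in its own document, each industry application only supplies the surrounding form, which is exactly the reuse Adkins describes.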