
W3C Advances EmotionML

The Multimodal Interaction Working Group of the World Wide Web Consortium (W3C) on Friday published a last call working draft of Emotion Markup Language (EmotionML 1.0) and the first public working draft of Vocabularies for EmotionML.

In explaining the releases, the W3C notes that as the Web becomes ubiquitous, interactive, and multimodal, technology increasingly needs to deal with human factors, including emotions. The present draft specification of Emotion Markup Language 1.0 aims to strike a balance between practical applicability and scientific grounding. The language is conceived as a "plug-in" language suitable for use in the following areas (a minimal example follows the list):

  • manual annotation of data;
  • automatic recognition of emotion-related states from user behavior; and
  • generation of emotion-related system behavior.
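
To give a flavor of the markup, a minimal standalone EmotionML document might look like the following. This is a sketch based on the drafts; the namespace URI and the "big6" category vocabulary reference are taken from the draft documents and could change before the specifications are finalized.

  <emotionml xmlns="http://www.w3.org/2009/10/emotionml"
             category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
    <!-- Manual annotation: one emotion, drawn from the "big6" category vocabulary -->
    <emotion>
      <category name="happiness"/>
    </emotion>
  </emotionml>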

Deborah Dahl, chairperson of the W3C Multimodal Interaction Working Group, notes that the group "is not expecting further technical changes" to EmotionML and that the only work remaining is to prepare it for implementation.

The first public working draft of Vocabularies for EmotionML represents a public collection of emotion vocabularies that can be used with EmotionML to represent emotions and related states. It was originally part of an earlier draft of the EmotionML specification, but was moved out of it so the W3C could more easily update, extend, and correct the list of vocabularies as required.

According to Dahl, these preliminary standards lay the foundation for interoperable emotion analysis technologies, such as detecting the emotional states of callers in a call center, and, on the output side, for making text-to-speech technologies more natural and expressive by providing a standard way to annotate emotions.
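
In the call-center scenario, for instance, a recognizer could report its best guess together with its certainty, along these lines (again a sketch: the draft defines a confidence attribute for annotations, but the element layout and values here are invented for illustration):

  <emotion xmlns="http://www.w3.org/2009/10/emotionml"
           category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
    <!-- Hypothetical automatic recognition result: the caller is probably angry -->
    <category name="anger" confidence="0.8"/>
  </emotion>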

"There is not a standard set of emotions right now," she says. "We're trying to bring together all the most commonly used emotional states in one list."

The EmotionML document, prepared by Marc Schröder of DFKI, Paolo Baggia and Enrico Zovato of Loquendo, Felix Burkhardt of Deutsche Telekom, Catherine Pelachaud of Telecom ParisTech, and Christian Peter of Fraunhofer Gesellschaft, can be accessed via a link on the W3C's Web site.

The vocabularies, which are being prepared by Schröder, Baggia, Burkhardt, Zovato, Peter, Pelachaud, Kazuyuki Ashimura of the W3C/Keio, and Alessandro Oltramari of CNR, can be viewed at http://www.w3.org/TR/2011/WD-emotion-voc-20110407. Comments can be sent to the W3C working group at www-multimodal@w3.org. The deadline for comments is June 7.
