
Speech in a Virtual World, Part II


Virtual worlds are still in their infancy. It is uncertain which technologies will be used in the future to assist people in enjoying an accessible in-world experience, but it is likely speech recognition will continue to be used to bridge the gap.

Because these worlds are driven by graphical user interfaces, navigating through them can prove challenging, if not impossible, for people who are blind or vision-impaired. Speech recognition is being used to compensate for inaccessibility, but standard assistive technologies, like screen readers, often don’t work on sites like Second Life (SL).

Currently, the biggest barriers to accessibility are the lack of metadata for content and the fact that virtual worlds are very information-dense. Because it is so easy to create content, residents often cram their in-world land with as many objects as possible, much as owners of prime real estate do in real life, resulting in an overwhelming amount of information packed into a small space.

In spite of this, programs have been designed specifically to integrate assistive technologies with SL so disabled users can participate. “People will create things more amazing than the environment itself,” says John Lester, Boston operations director of Linden Lab, the creator of SL. Two of these amazing things are TextSL and Max, the Virtual Guide Dog.

TextSL, a free download, harnesses the JAWS engine from Freedom Scientific to enable visually impaired users to access SL with a screen reader. TextSL supports commands for moving one’s avatar, interacting with other avatars, and getting information about one’s environment, such as the objects or avatars in the vicinity. It will also read the text in the chat window. The program, created by Eelke Folmer, an assistant professor of computer science and engineering at the University of Nevada, Reno, runs on Windows, Mac OS, and Linux.
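The command-driven interaction TextSL provides can be sketched as a simple dispatcher that maps typed commands to handlers and returns text for a screen reader to speak. This is a minimal illustration; the command names and canned responses here are assumptions, not TextSL’s actual implementation.

```python
# Minimal command-dispatch sketch in the spirit of TextSL's text interface.
# Command names and handlers are illustrative assumptions, not TextSL's API.

def cmd_move(direction):
    return f"Moving {direction}."

def cmd_nearby():
    # A real client would query the world; this stub returns placeholder text.
    return "Nearby: 2 avatars, 1 bench."

COMMANDS = {"move": cmd_move, "nearby": cmd_nearby}

def dispatch(line):
    """Parse a chat-style command line and return text for the screen reader."""
    parts = line.split()
    handler = COMMANDS.get(parts[0]) if parts else None
    if handler is None:
        return "Unknown command."
    return handler(*parts[1:])
```

For example, `dispatch("move north")` yields text a screen reader can speak immediately, while unrecognized input produces a spoken error rather than silence.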

Max, the Virtual Guide Dog, was created as a proof-of-concept to show that SL could be made accessible to people with all types of disabilities. Max was born after Louise Nicholson, who is legally blind, attached to her avatar a dog that informed others she needed help finding people, places, and things. Soon after, scripting was added to the dog. 

Max attaches to one’s avatar, and its radar function helps the user move around and interact with objects. Max can tell a user what she can reach out and touch, printing the information into the chat window. Max can also help a user find a person or place and transport the user to a desired location. If a device or object has a .WAV file associated with it, Max can play the audio file as well.
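The radar behavior described above, reporting what is within reach and printing it to the chat window, can be sketched as a simple proximity scan. The object model and reach distance below are hypothetical stand-ins, not the actual Second Life scripting API.

```python
# Hypothetical proximity scan in the spirit of Max's radar.
# WorldObject and the 3-meter reach are illustrative assumptions.
import math
from dataclasses import dataclass

@dataclass
class WorldObject:
    name: str
    description: str
    position: tuple  # (x, y, z) in meters

def radar_scan(avatar_pos, objects, reach=3.0):
    """Return chat-style lines for objects within 'reach' meters."""
    lines = []
    for obj in objects:
        dist = math.dist(avatar_pos, obj.position)
        if dist <= reach:
            # Prefer the description; fall back to the name if it is blank.
            label = obj.description or obj.name
            lines.append(f"Within reach ({dist:.1f} m): {label}")
    return lines

nearby = radar_scan(
    (0.0, 0.0, 0.0),
    [WorldObject("door", "wooden door to the library", (1.0, 2.0, 0.0)),
     WorldObject("object", "", (10.0, 0.0, 0.0))],
)
```

Note the fallback in `label`: when a builder leaves the description blank, all the scan can announce is the object’s name, which is exactly the labeling problem discussed below.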

“The power of SL is to bring people together, to allow interaction in a place,” Lester says. And lest anyone minimize the power of place, whether it’s D-Day, the Kennedy assassination, or the Challenger explosion, we remember where we were when we learned of the event, not the location at which the event occurred. People with disabilities, often less likely to have social interactions in real life, can benefit from these interactions in-world if they are afforded the opportunity.

Compliance Issues

One of the biggest obstacles to creating an accessible environment might be that the sites’ creators do not require builders to comply with Section 508 of the Rehabilitation Act, which governs making Web sites accessible to the blind. Max is Section 508-compliant, so providing an accessible solution is possible. But for programs like Max and TextSL to read descriptions to the user, objects must be labeled. Every object in SL has a name and a description field, but builders can leave the description field blank. When they do, the screen reader has no description to read back to the user.

And it’s a pretty big problem. Just over 40 percent of objects in SL were labeled simply “object,” and of the 100 most-used descriptions, only four or five were descriptive, according to a study conducted by Folmer. Folmer surveyed 8 percent to 10 percent of the millions of objects in SL and found that 40 percent to 50 percent of them had no descriptive labels at all. “If [the companies] would have required labeling from the beginning, we wouldn’t have this accessibility issue,” says Folmer, who is currently developing an algorithm that can automatically recognize what an object is.
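A label audit in the spirit of Folmer’s survey might look like the sketch below: it checks each description against a small set of generic labels and reports the fraction that give a screen reader nothing useful to say. The generic-label list is an illustrative assumption, not the study’s actual methodology.

```python
# Hypothetical label audit inspired by Folmer's survey.
# The GENERIC set is an illustrative assumption, not the study's criteria.
GENERIC = {"object", "", "no description", "thing"}

def audit_labels(descriptions):
    """Return the fraction of labels that are generic or blank."""
    generic = sum(1 for d in descriptions if d.strip().lower() in GENERIC)
    return generic / len(descriptions)

rate = audit_labels(["object", "", "wooden park bench", "Object", "oak tree"])
```

Here three of the five sample labels are generic, so the audit reports a rate of 0.6; run over a real inventory, such a count would quantify how much of a world is invisible to screen-reader users.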

Educating builders about the importance of creating labels might improve accessibility, but unless and until description tags become a requirement, some objects will inevitably remain untagged, leaving many users alone in the dark.


Robin Springer is president of Computer Talk (www.comptalk.com), a consulting firm specializing in the design and implementation of speech recognition and other hands-free technology services. She can be reached at 1-888-999-9161 or contactus@comptalk.com.
