Speech Technology Magazine


University Project Could Yield a New Mobile Speech Application

MIT researchers are developing a wheelchair that will navigate on its own in response to voice commands.
By Adam Boretz - Posted Nov 1, 2008

Professor Nicholas Roy and his colleagues at the Massachusetts Institute of Technology are working to develop a self-navigating wheelchair capable of learning different locations within a building and then transporting users to those locations in response to verbal commands.

Through speech recognition technology, a user will be able to speak commands—"Take me to the study" or "Go to the conservatory"—and the wheelchair will automatically maneuver from place to place based on a map stored in its memory.

"We’re using speech recognition developed inside the Computer Science and Artificial Intelligence Laboratory by the Spoken Language Systems Group," says Seth Teller, professor of computer science and engineering and head of the Robotics, Vision, and Sensor Networks Group at MIT’s Computer Science and Artificial Intelligence Laboratory. "What’s novel about what we’re doing is that we’re trying to resolve ambiguous utterances by the user. So if the user, for example, says 'Take me to the elevator bank,' and there are two or three elevator banks in the building, the system will generate a question back to the user—something like 'Which elevator bank did you mean?' It initiates a dialogue with the user rather than simply taking the speech as set in stone and trying to do the best it can with it."
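The clarifying dialogue Teller describes can be sketched in a few lines. This is an illustrative toy, not the MIT system's actual code: the function names, the substring matching, and the phrasing of the question are all assumptions.

```python
# Hypothetical sketch of ambiguity resolution: if a spoken destination
# matches more than one mapped location, ask the user instead of guessing.

def plan_destination(utterance, known_locations):
    """Return ("go", location) or ("ask", clarifying question)."""
    # Find every mapped location whose label contains the spoken phrase.
    matches = [loc for loc in known_locations if utterance in loc]
    if len(matches) == 1:
        return ("go", matches[0])
    if len(matches) > 1:
        options = ", ".join(matches)
        return ("ask", f"Which did you mean: {options}?")
    return ("ask", f"I don't know a place called '{utterance}'.")

rooms = ["north elevator bank", "south elevator bank", "study"]
action, detail = plan_destination("elevator bank", rooms)
# Two elevator banks match, so the chair asks rather than picking one.
print(action, "->", detail)
```

The key design point in the quote is that ambiguity produces a question back to the user, not a best-effort guess.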

Instead of relying on manually captured maps, the MIT wheelchair learns about its environment through a guided tour: the chair is pushed around a new environment while the user names rooms and locations. The prototype uses WiFi, reading signals from a network of WiFi nodes placed throughout the building, to generate its maps and navigate between locations.
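One simple way such a guided tour could work is WiFi fingerprinting: during the tour the chair stores a signal-strength "signature" for each named room, and later locates itself by finding the closest stored signature. This is a hypothetical sketch of that general technique; the function names, the -100 dBm default, and the Euclidean distance metric are assumptions, not details of the MIT prototype.

```python
# Toy WiFi-fingerprint localization: learn a per-room signature during the
# narrated tour, then match a fresh scan against the learned signatures.
import math

def learn_room(room_map, name, wifi_signature):
    """Record the signal strengths (dBm per access point) seen in a room."""
    room_map[name] = wifi_signature

def locate(room_map, current_signature):
    """Return the learned room whose signature best matches the current scan."""
    def distance(sig):
        keys = set(sig) | set(current_signature)
        # Treat an access point that is out of range as a very weak -100 dBm.
        return math.sqrt(sum(
            (sig.get(k, -100) - current_signature.get(k, -100)) ** 2
            for k in keys))
    return min(room_map, key=lambda name: distance(room_map[name]))

tour = {}
learn_room(tour, "study", {"ap1": -40, "ap2": -70})
learn_room(tour, "conservatory", {"ap1": -75, "ap2": -35})
# A later scan near the study's signature resolves to "study".
print(locate(tour, {"ap1": -42, "ap2": -68}))
```

The tour replaces any manual map-building step: naming a room while the chair is in it is all the calibration the sketch requires.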

"The other novel thing we’re doing is using speech to help the wheelchair learn a map of the environment," Teller says. "You get a brand new wheelchair with factory programming…and the first thing you do is give it a narrated guided tour of the space you’re in so it can learn the space layout."

Collaborating on the project with Teller and Roy—the project lead and an assistant professor of aeronautics and astronautics—is Bryan Reimer, a research scientist at MIT’s AgeLab.

According to Reimer, the chair will eventually be used in real trials at The Boston Home, a nursing home in Dorchester, Mass.

"At present we have a prototype that we’re deploying in the lab, and we do hope to begin trials with real users within the next couple of years at The Boston Home," Teller says. "There’s a lot of interesting stuff that happens when you try to roll out a system like this to a real community. Most, if not all, of the residents at The Boston Home have some sort of neuromuscular disability…and in many cases that slurs their speech, makes it soft, and that is going to compromise the accuracy of the speech system."

But Teller says the wheelchair’s design—particularly its ability to ask users clarifying questions—addresses those challenges. "Hopefully through repeated interactions—just the way that a human might initiate an interaction if he or she doesn’t understand what was just said—the wheelchair can prompt the user to try to get a better idea of what was just said," he explains.

Another issue, Teller says, is that some residents at The Boston Home can’t speak at all. 

"Clearly, for those people, we’re going to have to do something different," he says. "We are studying that issue. There are lots of alternative interfaces out there that involve head motion, sip and puff tubes, tongue motion, and eye gaze. And we’re looking at those."
