For Full Benefit, Speech Use Has to Be Meaningful
In Gartner’s annual surveys of chief medical informatics officers, voice recognition and natural language processing have consistently been identified as the emerging technologies that will most significantly impact patient care. These technologies permit doctors, radiologists, and other healthcare practitioners to type, in essence, with their tongues, entering data directly into electronic medical records (EMRs).
These EMRs have received a lot of attention lately, and the federal government has even become involved: As part of federal legislation enacted in 2009, Medicare and Medicaid will make available to hospitals, clinics, and private practices up to $27 billion in federal funds during the next 10 years to speed the creation of a nationwide EMR system.
An article in the New England Journal of Medicine recognized that the use of EMRs in the United States is inevitable, and said the new funding package seeks to extend their availability from just a few large institutions to the smaller clinics and practices where most Americans receive their healthcare. That provides greater opportunities for healthcare providers to incorporate speech technologies into their operations. It’s also prompting vendors of EMR systems to express an interest in integrating speech into their offerings, according to Michael Finke, chairman and CEO of M*Modal, a vendor of medical documentation solutions.
“Speech recognition can be the tool that helps enter clinical data into a health record,” he says. “When [doctors] document care is where [speech] can really help them.”
But the federal government is not just giving out grants to any healthcare provider to stock up on the latest dictation and transcription products. Caregivers must show a “meaningful use” of EMRs.
To qualify as a meaningful use, healthcare providers must show “specified improvements in care delivery” as a result of using the technologies, according to new federal regulations enacted late in the summer.
“Buying speech alone does not qualify,” Finke explains. “What qualifies is that what you capture [with speech] can be turned into action.”
That means the solution should do more than just allow a doctor to dictate to a template. The doctor should also be able to go into the documents created and easily identify key pieces of information, such as patient medications, allergies, vital signs, prior conditions, height, and weight.
In so doing, “patient records become more than a one-way street,” Finke says. “Speech can create a real-time feedback loop for the doctor.”
Seeing the opportunities unfold, Nuance Communications has expanded its reach in the medical field with the launch of a new transcription services offering. Nuance Transcription Services combines Nuance’s proprietary eScription speech recognition platform with its global medical transcription and editing services team.
“With the expansion of our fully integrated Nuance Transcription Services, we’re eliminating healthcare organizations’ need to administer and manage traditional clinical documentation processes,” said Janet Dillione, executive vice president and general manager of Nuance Healthcare, in a statement. “Nuance’s robust medical transcription offerings deliver tremendous value to healthcare organizations by supporting their path to [electronic health records] and meaningful use, and by enabling them to create cost-effective, high-quality medical records.”
The use of mobile devices by medical professionals will also continue to drive speech adoption, Finke adds, noting that it would be impractical for doctors to try to type on those devices’ small keyboards.