- EHR tools with voice recognition capabilities can reduce data entry duties for healthcare providers and free up time for face-to-face interaction with patients.
Hospitals and health systems are increasingly integrating voice recognition tools powered by artificial intelligence (AI) into EHR technology to improve EHR usability, boost clinical efficiency, and reduce administrative burden on providers.
However, questions surround the accuracy of EHR clinical documentation generated through voice recognition software, and some clinicians hesitate to use the technology in day-to-day operations. A July 2018 JAMA study found a 7.4 percent error rate in clinical documentation generated through voice recognition tools. Maintaining a high level of accuracy in clinical documentation when using voice recognition EHR tools is imperative for promoting clinician buy-in during health IT adoption.
Optimizing accuracy and clinician engagement were top of mind when New Hampshire-based Concord Hospital expanded its use of Nuance’s Dragon Medical One. The hospital went live with Dragon Medical One shortly after launching a new system-wide Cerner EHR implementation. Concord replaced its GE Centricity and McKesson Horizon Clinicals EHR systems with a single, integrated Cerner system in 2017.
“It was a huge project with the full replacement of our enterprise system, including revenue cycle,” Concord Hospital CMIO Paul Clark, MD, told EHRIntelligence.com.
While many clinicians in the hospital were new to voice recognition software, the hospital’s outpatient providers were already accustomed to the technology before going live with Cerner.
“On the outpatient side, we had a baseline of about 30 to 40 percent of our Centricity users already using Dragon,” said Clark. “We had a pretty high use of Dragon in Centricity. But when we converted to Cerner with Dragon Medical One, the use of voice recognition in all clinical areas became extremely high.”
“The place where it totally changed the experience was on the inpatient units,” said Clark. “Those docs were not using voice recognition. So that was a big conversion for them.”
This high rate of clinician engagement cut the hospital’s transcription costs by nearly 90 percent. Buy-in was especially high among clinicians working in surgery.
“Especially for post-op notes,” said Clark. “In the previous world, clinicians had to write an immediate post-op note. Then they’d come back later and dictate a note. Now, they use templates that allow them to dictate their immediate post-op note, and it also functions as their operative note, so they’re not doing it twice.”
Eliminating duplicate data entry from clinical documentation has been a significant time saver for clinicians in surgical care settings.
Voice recognition EHR tools have also helped to boost satisfaction among nurses.
“Instead of the nurses now typing a narrative from a patient call, they dictate it,” explained Clark. “And at least in one practice, we saw the triage time drop from 17 minutes down to 5 minutes.”
In addition to boosting clinical efficiency, the software generates high-quality notes, according to feedback nurses gave hospital leadership.
“They preferred it to typing,” said Clark. “A similar situation exists in our preoperative service area, where nurses had to do a lot of work to get people ready who were coming in for surgery. Deploying voice recognition in that space was huge for some nurses.”
While clinicians readily embraced the new technology, staff still needed to address the potential for error.
“There is some error rate, which is frustrating,” said Clark. “People have a hard time in my opinion with proofreading something they dictated in Dragon.”
Researchers in the recent JAMA study came to a similar conclusion. Zhou et al. found that the high error rate of notes generated by voice recognition software was partially attributable to inadequate provider review of clinical documentation.
Ensuring medical transcriptionists or clinicians thoroughly proofread notes before signing off on clinical documentation can significantly improve note accuracy. In the study, error rates plummeted to 0.3 percent after clinician review.
Clark has his own method of proofreading notes in a way that also helps to foster a stronger relationship with patients.
“My clinical work is in geriatrics and I use Dragon for all of my work,” said Clark. “One of the things that I've done — which I'm trying to get other people to do — is dictate my end-of-visit summary in the room, in the patient’s presence.”
“They hear what my instructions are,” continued Clark. “I get to clarify anything they didn't understand. And then when they leave, they get a copy of the exact same instructions. That’s pretty powerful. It also saves me time.”
Reviewing summary notes in the presence of patients may help providers to cut down on administrative burden and time spent after work hours completing clinical documentation review.
Clark stated that, with thorough proofreading, the accuracy of clinical documentation generated through voice recognition software is significantly higher than that of handwritten progress notes.
“We went from illegibility — where virtually no one could read a progress note — to having immediate access to a legible document now,” he said. “This has been a very big win for nursing, with the ability to understand the provider’s plan for the patient.”
Looking ahead, hospital leadership plans to further expand the use of voice recognition software to streamline non-clinical tasks for staff members.
“We’ve now had more and more requests from administration to use Dragon,” said Clark. “We’re looking for other opportunities to leverage voice recognition.”