Voice Recognition Accuracy Lacking in EHR Clinical Documentation

New research found a 7.4 percent error rate for EHR clinical documentation generated through voice recognition software.


By Kate Monica

EHR clinical documentation generated through voice recognition software may be error-prone, according to a recent JAMA study.

In the study, Zhou et al. aimed to identify and analyze errors at each stage of the voice recognition-assisted dictation process and to recommend ways to keep EHR clinical documentation highly accurate.

“Speech recognition technology is being adopted at increasing rates at health care institutions across the country owing to its many advantages,” wrote researchers. “Documentation is one of the most time-consuming parts of using EHR technology, and SR technology promises to improve documentation efficiency and save clinicians time.”

Researchers from Brigham and Women’s Hospital, Harvard Medical School, Geisinger Commonwealth School of Medicine, and other institutions collected 83 office notes, 75 discharge summaries, and 59 operative notes dictated by 144 physicians at Partners HealthCare System and UC Health during the 2016 calendar year.

Researchers analyzed this total of 217 clinical documentation notes dictated by physicians using Dragon Medical 360 and Nuance’s eScription voice recognition software and calculated error rates for each.

Ultimately, researchers reported a 7.4 percent error rate across all EHR clinical documentation notes.

This disconcertingly high error rate underscores the need for clinicians and other hospital staff to review notes thoroughly before signing off on clinical documentation. After review by either a medical transcriptionist or a clinician, researchers found that error rates decreased significantly.

Specifically, error rates plummeted to 0.4 percent after transcriptionist review and 0.3 percent in the final version of EHR clinical documentation reviewed and signed by clinicians.
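Some quick arithmetic puts those reported reductions in relative terms. This is an illustrative sketch only: the per-stage rates come from the article, but the calculation itself is not part of the study.

```python
# Stage-level error rates reported in the article (percent).
dictation_rate = 7.4        # raw speech-recognition output
transcriptionist_rate = 0.4 # after medical transcriptionist review
signed_rate = 0.3           # final clinician-signed note

def relative_reduction(before, after):
    """Percent of errors eliminated between two documentation stages."""
    return (before - after) / before * 100

print(f"Transcriptionist review eliminates "
      f"{relative_reduction(dictation_rate, transcriptionist_rate):.1f}% of dictation errors")
print(f"Final signed notes eliminate "
      f"{relative_reduction(dictation_rate, signed_rate):.1f}% of dictation errors")
```

In other words, review at either stage removes roughly 95 percent of the errors present in the raw dictated output, which is why the researchers focus on whether that review step actually happens.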

While error rates were slightly lower after clinician review, researchers pointed out that requiring clinicians to review notes themselves, rather than allowing medical transcriptionists to assist, may add to their administrative burden.

“Clinicians face pressure to decrease documentation time and often only superficially review their notes before signing them,” wrote researchers. “Fully shifting the editing responsibility from transcriptionists to clinicians may lead to increased documentation errors if clinicians are unable to adequately review their notes.”

Healthcare organizations can adopt either front-end or back-end speech recognition software. Back-end systems automatically convert clinicians’ dictations to text, and medical transcriptionists then review and edit the resulting clinical documentation to ensure accuracy.

Designating note review as a responsibility of medical transcriptionists frees up physicians to focus on other aspects of care delivery, which can boost clinical productivity and reduce administrative burden.

However, many hospitals are currently adopting front-end speech recognition systems that require clinicians to review and edit their notes themselves. These systems may increase rather than reduce administrative burden on providers.

“Although adoption of SR technology is intended to ease some of the burden of documentation, that even readily apparent pieces of information at times remain uncorrected raises concerns about whether physicians have sufficient time and resources to review their dictated notes, even to a superficial degree,” stated researchers.

Without sufficient review, inaccuracies may appear in the final version of physician notes. Documentation errors that affect clinically relevant information may pose a threat to patient safety.

The prevalence of dictation errors in EHR clinical documentation, combined with insufficient clinician review, suggests speech recognition software developers should focus on better integrating voice recognition use and review into existing clinical workflows.

“These findings demonstrate the necessity of further studies investigating clinicians’ use of and satisfaction with SR technology, its ability to integrate with clinicians’ existing workflows, and its effect on documentation quality and efficiency compared with other documentation methods,” researchers maintained.

Ensuring that speech recognition use and review integrate seamlessly into clinician workflows may help clinicians identify and resolve dictation errors more easily, improving EHR clinical documentation accuracy and patient safety.

“In addition, these findings indicate a need not only for clinical quality assurance and auditing programs, but also for clinician training and education to raise awareness of these errors and strategies to reduce them,” concluded researchers.

Overall, making it easy for clinicians to adequately review EHR clinical documentation generated through voice recognition software will help the technology lessen administrative burden on providers as intended.
