Technologies to reduce EHR usability burden
Technology innovations have long promised to relieve burden by freeing physicians from manual EHR tasks.
- Technologies on this page
- Medical scribes
- Virtual scribes
- Medical speech recognition
- Ambient speech recognition
- AI assistants
Today there is clear evidence that technologies and technology-enabled services can relieve some EHR usability burden by inserting technology or technology-assisted services into the EHR workflow to eliminate or streamline tasks.
Medical scribes
$2,500 to $4,500 per month
Medical scribes are professionals who record information during clinical visits in real time and can perform other EHR tasks under physician supervision.
Virtual scribes
$1,000 to $1,200 per month
Virtual scribes reduce potential intrusiveness by removing the third party from the exam room and reduce costs by scaling and offshoring the scribe’s work.
Medical speech recognition
Enterprise: $25 to $75 per month
Single physician: $200 per month
Medical speech recognition allows physicians to use their voices to dictate into their EHRs more easily. Its accuracy and EHR integration have evolved to provide many physicians relief from documentation burden. However, physicians still must navigate the EHR to enter and edit information.
Related reading: Medical Speech Recognition: Dragon Medical (KLAS Research)
Ambient speech recognition
$1,800 per month
Ambient speech recognition technology "listens" to the conversation between the physician, patient, and caregivers throughout the encounter, capturing history and exam details, and works with an external staffed service to create a visit note.
AI assistants
$150 to $200 per month
AI-powered, voice-enabled digital assistants can help physicians complete documentation and other administrative tasks. Although AI assistants still have room to mature, they are the only documentation solutions on the market that show promise in fully automating the EHR documentation process.
Differences between AI and medical speech recognition
Unlike AI assistants, legacy speech recognition systems did not leverage the deep learning needed for voice-to-text conversion and lacked natural language understanding.
Although newer speech recognition solutions have progressed in these areas, AI assistants better understand natural language and can detect intent (i.e., commands from the user).
Whereas speech recognition solutions require the user to specify where dictation should be placed in the EHR note, an AI assistant's built-in model of documentation recognizes embedded commands and knows where to place text. This ability enables the physician to generate a note without having to navigate or edit in the EHR.
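To make the distinction concrete, the sketch below shows, in a purely illustrative way, what "recognizing embedded commands" might look like: dictated utterances are routed to the right note section based on cue phrases, with no need for the physician to name a target field. All phrase lists, section names, and function names here are hypothetical; a real assistant would use a trained natural-language-understanding model, not string matching.

```python
# Illustrative sketch only: routing dictated text to EHR note sections.
# Legacy dictation requires the physician to name the target field;
# an AI assistant's documentation model infers it from embedded
# commands in the utterance. Everything below is hypothetical.

NOTE_SECTIONS = ["history", "exam", "assessment", "plan"]

# Hypothetical cue phrases an assistant might treat as embedded commands.
COMMAND_CUES = {
    "history": ["patient reports", "presents with"],
    "exam": ["on exam", "exam shows"],
    "assessment": ["assessment", "impression"],
    "plan": ["plan", "follow up"],
}

def route_utterance(utterance: str) -> tuple[str, str]:
    """Return (section, text) by matching embedded command cues.

    Falls back to 'history' when no cue matches; a real system would
    rely on natural language understanding rather than keyword lookup.
    """
    lowered = utterance.lower()
    for section, cues in COMMAND_CUES.items():
        for cue in cues:
            if cue in lowered:
                return section, utterance
    return "history", utterance

# Build a note from free dictation, with no manual field navigation.
note = {section: [] for section in NOTE_SECTIONS}
for line in [
    "Patient reports three days of cough and fever.",
    "On exam, lungs are clear to auscultation.",
    "Plan: start amoxicillin and follow up in one week.",
]:
    section, text = route_utterance(line)
    note[section].append(text)

print(note["exam"])  # the exam finding lands in the exam section
```

The design point is the routing itself: because the placement decision lives in the assistant's model rather than in the physician's workflow, the physician can speak continuously instead of steering a cursor through the EHR.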