Did you notice that doctor, standing over there, talking to himself? He hasn’t cracked under the pressure. He’s using a microphone to dictate his notes directly into the clinical information system.
Right now, providers (meaning anyone licensed to place an order) record their notes in many different ways, ranging from handwritten notes to automated dictation. Until recently, dictation was done using back-end speech recognition.
“With back-end speech recognition, a physician picks up the phone, enters the MRN and visit code, and talks,” explains Dr. Eric Grafstein, the Clinical and Systems Transformation (CST) project’s Chief Medical Information Officer, VCH/PHC. The dictation goes through a speech recognition engine that captures the words in a document, and a transcriptionist then reviews and edits it. Once completed, the document goes into the current clinical information system, in the section where transcriptions reside.
Transcription can take a few days or more if corrections are needed.
“In Emerg, that’s a real issue,” says Dr. Grafstein. “My heart sinks when I see someone who was recently discharged, but the report isn’t available and I don’t have access to the chart, so I don’t know what happened.”
Full control over dictation
Thanks to the Health Information Management team, providers at VCH, PHSA and PHC will soon begin using front-end speech recognition software from M*Modal called “Fluency Flex.”
“Front-end speech recognition does away with the phone and the transcriptionist in the background, and puts a microphone in the physician’s hand,” continues Dr. Grafstein. “The history, operative reports and so on are saved directly into the electronic chart and are immediately available to everyone. Physicians can control how quickly the note is available for other clinicians to view and understand what has happened to the patient.”
“None of us are big typers,” says Dr. Johanna Bonilla, CST’s Physician Experience Team Manager. “It’s easier to dictate. It keeps the storytelling flowing rather than shortening the narrative. It’s important to preserve that side of documenting.”
With front-end speech recognition, providers can also create auto text – snippets of commonly used language or regularly dictated content – and insert it with a single command, such as “insert normal physical exam.” They can then edit the auto text and see the final output immediately. By contrast, a transcriptionist today will simply leave a blank if they can’t understand something a provider has dictated.
An important step to electronic provider documentation
Front-end speech recognition is an important step toward electronic provider documentation, which is being introduced as part of the CST project’s new Cerner clinical information system.
The goal is to have front-end speech recognition in place well before the new clinical information system becomes active, to ensure providers and clinicians have as much time as possible to train and become familiar with the new tool.
This will also help reduce some of the complexity associated with the activation of the new clinical information system, which is targeted to begin with VCH Coastal Group 1/LGH-Sea-to-Sky.
“This project is for patients,” affirms Dr. Bonilla. “It will preserve the continuum of the patient’s story. Having complete information at hand will also improve the experience for all clinicians.”