One of the emerging concepts in this area is ambient intelligence: sensors such as microphones and cameras feed data to an AI that analyzes what is happening in a clinical encounter and can generate the visit note, place orders, write prescriptions, make referrals, and carry out other follow-up actions arising from the encounter. This could be very powerful once it reaches maturity, since clinicians can forget key aspects of encounters, and it could eventually outperform a human at following up on a visit. Speech recognition software can transcribe encounters roughly three times faster than a human typing into a clinical system, potentially freeing up a couple of hours a day for a typical caregiver who sees 20 to 30 patients a day.
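The time-savings claim above can be sanity-checked with back-of-the-envelope arithmetic. The per-note typing time below is an illustrative assumption, not a measured figure; only the 3x speedup and the 20-30 patient caseload come from the text.

```python
# Rough estimate of daily documentation time saved by a 3x-faster
# transcription workflow. The 9-minute typing baseline is an
# assumed, illustrative value.

def daily_minutes_saved(patients_per_day: int,
                        typing_minutes_per_note: float = 9.0,
                        speedup: float = 3.0) -> float:
    """Minutes saved per day if transcription is `speedup` times faster."""
    automated = typing_minutes_per_note / speedup
    return patients_per_day * (typing_minutes_per_note - automated)

for patients in (20, 30):
    hours = daily_minutes_saved(patients) / 60
    print(f"{patients} patients/day -> ~{hours:.1f} hours saved")
```

Under those assumptions, the estimate lands at roughly 2 to 3 hours per day, consistent with the "couple of hours" figure cited above.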
One of the benefits of AI technologies is that they’re capable of learning and adapting with each interaction. This helps improve the outputs they generate over time and allows for a degree of personalization for each user. The more feedback that AI products receive from users through regular interactions, the better they can become at serving the unique needs of a particular practice, system or provider.
OrthoIndy, an orthopedic practice in Indianapolis, used Saykara to improve clinical workflows and make documentation easier and more accurate. Saykara is an AI medical assistant that listens to encounters, creates notes, and files them in the appropriate sections of the EHR. It can also make referrals, prescribe medications, and assist with coding. It produces a fully structured note that flows directly into an Allscripts electronic health record, so physicians only need to review and sign off on the notes. OrthoIndy was drawn to Saykara because it promised to remove physicians from hands-on documentation.
At a broad level, the true potential of ambient intelligence technology lies in going beyond documentation to become an intelligent assistant that listens effectively for key issues and to-dos. The level of integration between emerging technology tools and core clinical platforms such as EHRs is a significant factor in driving adoption. Today, the fundamental challenge for voice recognition in ambient computing is the same as that facing AI applications in general in the healthcare context.
Epic has worked with AI-powered voice solutions company Suki to integrate its generative AI assistant into its EHR software through its ambient API. Suki Assistant helps clinicians complete time-consuming administrative tasks. The company uses generative AI and large language models to listen to the patient-doctor conversation, identify the clinically relevant portions, and then summarize that information as suggestions for the note, without human intervention. Clinicians can then review and accept, reject or edit content suggestions to ensure the accuracy of final notes. Once notes are complete, they sync back to Epic. Suki has reported that its ambient note-generation technology can reduce documentation time per note by as much as 72% for family medicine physicians. The company's voice-enabled digital assistant has also been integrated into other EHR platforms, including Athenahealth, Cerner and Elation Health.
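The review step described above (clinicians accepting, rejecting, or editing AI-drafted note sections before the note syncs back to the EHR) can be sketched in code. All names and structures here are hypothetical illustrations; this is not Suki's or Epic's actual API.

```python
# Hypothetical sketch of the clinician review loop for AI-suggested
# note content. Section names and the decision format are assumptions
# for illustration only.

from dataclasses import dataclass

@dataclass
class Suggestion:
    section: str   # e.g. "HPI", "Assessment", "Plan"
    text: str      # AI-drafted content for that section

def review(suggestions, decisions):
    """Apply clinician decisions: 'accept', 'reject', or replacement text."""
    final_note = {}
    for s in suggestions:
        decision = decisions.get(s.section, "reject")
        if decision == "accept":
            final_note[s.section] = s.text
        elif decision != "reject":       # clinician supplied an edit
            final_note[s.section] = decision
    return final_note

drafted = [Suggestion("HPI", "Patient reports knee pain for two weeks."),
           Suggestion("Plan", "MRI of the left knee; follow up in 1 week.")]
note = review(drafted, {"HPI": "accept",
                        "Plan": "MRI of left knee; follow up in 2 weeks."})
print(note)
```

The key design point the workflow illustrates is that nothing enters the final record without an explicit clinician decision: accepted text passes through unchanged, edits override the draft, and anything unreviewed is dropped.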
Several challenges present barriers to adoption and must be continuously addressed before the technology can become universal. One is achieving clear voice capture in a noisy, busy clinical setting, particularly when complex medical vocabulary is involved. In some situations it may be necessary to add visual support (such as the screen of an Amazon Echo Show device, or integrating the voice assistant with on-screen displays on the computer). Clinical guidelines and graphical displays of data are more effective when we both “show” and “tell.” There are also obvious logistical factors: Wi-Fi access and the need for a secure place to keep the device can be challenging in some hospital units.