Next-Gen Conversational AI Will Support Our Health in Crisis and Beyond

by William Miranda

The COVID-19 virus is causing unprecedented changes in our social lives, our healthcare, the economy, and technology. The pandemic is changing the way we interact with people in our public spaces and our daily routines. People are hyperaware of germs and viruses, and there is a real desire to move away from touch-controlled devices to reduce the spread of pathogens from surface contact. At home, we leverage our smart speakers and displays to supply the information and communication we need via voice prompts (and without the need to sanitize them). On the go, we use our AirPods or Galaxy Buds for touchless, voice-based interactions, entertainment, navigation, and communication. Fitness trackers, such as Fitbits and similar wearables, monitor our heart rate and sleep patterns. But as of now, a voice-driven holistic health monitoring device remains absent.

Let’s look at a few ways miniaturized conversational AI devices can advance to support better health in times of crisis and beyond. I ask you to use your imagination here. These devices do not exist in this form—yet. Perhaps a few on-market devices can do some of the following tasks, but not with the depth and range required for holistic healthcare.

In the near future, voice-driven wearables monitor patients and aid in diagnosing infection.

In some cases, COVID-19 symptoms can be deceptively mild. Generally, patients are not good observers of their health status, and with rushed testing procedures, diagnoses are often inaccurate or arrive too late. Infection can present in many ways within a human body—fever, elevated heart rate, the sound of a cough. With a voice-driven system of telemetry, the patient can receive conversational alerts about changes in health status and proactively seek healthcare. A patient wearing a voice-enabled monitoring device (an augmented wrist band with microphone recording capability) can be alerted to changes in baseline health metrics and take the necessary action. They can walk into a hospital while their device continues to monitor their condition before interfacing with a medical attendant. Of course, the wearable would anonymize the patient data. Let’s examine how this voice-enabled device could help patients get the proper care.

Dave, an office worker, is presenting with early symptoms of a viral infection but is not aware of it. He is wearing a voice-enabled telemetry wristband. Amazon’s Alexa (the embedded voice agent), monitoring Dave via biosensors on the wristband, notices a small change from baseline.

Alexa: Dave, I’ve noticed a small change in your health readings. How do you feel currently?

Dave: I feel okay, I guess. Maybe a little tired. Alexa, run a vitals report.

Alexa: Dave, your heart rate is increased over your normal rhythm, your temperature is slightly elevated, and your breathing appears somewhat labored. I want to run a cough analysis. Please cough three times, near your telemetry wristband, while keeping your distance from other people.

Dave: [Coughs as instructed]

Alexa: Dave, preliminary audio analysis of your cough indicates you may be at the beginning of a viral infection. I suggest you go to the nearest medical facility to be diagnosed by a medical professional.

Dave goes to his local doctor’s office, where he is checked by a healthcare professional. His voice-enabled device displays his telemetry data through Bluetooth on his phone for the doctor to review. Based on these data, and Dave’s present condition, the doctor advises Dave to rest and self-quarantine for 14 days.
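At its core, the alert that opens Dave’s scenario is a comparison of a current reading against the wearer’s personal baseline. Here is one minimal way such a check might be sketched in Python; the function name, sample heart-rate values, and the two-standard-deviation threshold are illustrative assumptions, not part of any real product.

```python
from statistics import mean, stdev

def detect_deviation(baseline_readings, current, z_threshold=2.0):
    """Flag a vital-sign reading that drifts more than z_threshold
    standard deviations away from the wearer's personal baseline."""
    mu = mean(baseline_readings)
    sigma = stdev(baseline_readings)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

# Hypothetical resting heart-rate samples from previous days (bpm).
baseline_hr = [62, 64, 61, 63, 65, 62, 63, 64]

detect_deviation(baseline_hr, 78)  # elevated -> True, trigger a prompt
detect_deviation(baseline_hr, 63)  # normal -> False, stay quiet
```

A real device would run checks like this across several signals at once (temperature, respiration, heart rate) and only speak up when the combined picture warrants it, which is roughly what Alexa does in the dialogue above.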

A future generation of intelligent hearables gauge our health state and optimize our well-being.

The ear canal can be thought of as the body’s I/O port. In and around the ear canal, several bioreadings can be monitored, such as blood pressure, oxygen level, and heart rate. In this area of skin, slight variances in electrical currents can signal stress levels and emotional intensity. Soon, augmented smart devices that pop into our ears—call them “hearables”—will monitor our biosignals to reveal when we are emotionally stressed and our brains overtaxed. These hearables will accomplish this by aggregating and analyzing our unique physiological data with a neural network. In time, the hearable will get better at identifying and predicting our stress levels.


Let’s look at an example. Jean contracted COVID-19 three months ago and has been wearing a hearable for a month because her physician recommended keeping an eye on her bioreadings. Recently, the device has detected a difference in the patterns of sounds when she speaks, indicating an elevated level of stress. Jean’s hearable voices an alert about the detected pattern and recommends an audio meditation session and a follow-up visit with her physician.
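The aggregation step described above—combining several biosignals into a single stress estimate—could be sketched as a simple weighted score before any neural network enters the picture. Everything below is hypothetical: the signal names, the weights, and the alert threshold are placeholders for what a trained model would learn.

```python
def stress_score(heart_rate, skin_conductance, speech_pitch_var,
                 weights=(0.4, 0.35, 0.25)):
    """Combine biosignals into a 0-1 stress score.
    Each input is assumed pre-normalized to the wearer's own 0-1 range."""
    w_hr, w_sc, w_pv = weights
    return w_hr * heart_rate + w_sc * skin_conductance + w_pv * speech_pitch_var

def should_alert(score, threshold=0.7):
    """Decide whether the hearable should speak up."""
    return score >= threshold

# Jean's readings are all near the top of her personal range.
should_alert(stress_score(0.9, 0.8, 0.85))  # -> True, recommend meditation
```

In practice the weights would not be hand-set; a model trained on the wearer’s own history would replace this linear combination, which is what lets the device “get better” over time.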

Next-level AI chatbots perform symptom analysis and prescreen for treatment.

Telehealth hotlines set up to support those with symptoms of COVID-19 are being overwhelmed with callers, in some cases causing a complete loss of service. Here is where AI chatbots can step in and help. The CDC website offers a text-based chatbot to help website users screen their symptoms to determine if they are a likely candidate for COVID-19 testing. Ask Amazon’s Alexa right now, “Do I have the coronavirus?” and she asks if you are experiencing related symptoms and provides links to other health resources.

The current generation of chatbots is generally logic-driven. Chatbots ask a question, and the user responds by selecting one of several answers, which, in turn, informs the chatbot’s next response. Soon, next-level conversational AI chatbots will serve as always-available health assistants, giving suggestions about how to improve or maintain health, reminding people to take their medication, ordering prescriptions, and booking or rescheduling physician appointments. But most importantly, they will have real natural language understanding and machine-learning capability, and be able to work out more relevant and helpful answers. While chatbots can offer self-help, they cannot make a definitive diagnosis. However, this pre-emptive patient care will relieve the stress on our healthcare systems by triaging patient inflow to waiting rooms where a virus can quickly spread.
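The “logic-driven” design described above is essentially a decision tree: each answer selects the next question until an outcome is reached. A minimal sketch of that structure follows; the questions and outcomes are invented for illustration and are not taken from the CDC tool or any real screener.

```python
# Each node holds a question plus an answer -> next-node (or outcome) map.
SCREENING_TREE = {
    "start": {
        "question": "Do you have a fever or a new cough?",
        "answers": {"yes": "contact", "no": "self_care"},
    },
    "contact": {
        "question": "Have you been near someone with a confirmed case?",
        "answers": {"yes": "get_tested", "no": "monitor"},
    },
}

OUTCOMES = {
    "self_care": "No screening flags raised; monitor how you feel.",
    "monitor": "Self-isolate and watch your symptoms for 14 days.",
    "get_tested": "Please contact a testing site for an appointment.",
}

def run_screening(answers):
    """Walk the decision tree with a list of user answers and
    return either an outcome or the next question to ask."""
    node = "start"
    for answer in answers:
        node = SCREENING_TREE[node]["answers"][answer]
        if node in OUTCOMES:
            return OUTCOMES[node]
    return SCREENING_TREE[node]["question"]
```

The limitation is visible in the data structure itself: every path must be authored in advance. The next-level chatbots described above would replace the fixed answer keys with natural language understanding, so users could describe symptoms in their own words.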

In summary, COVID-19 will drive the development of sophisticated conversational AI devices to help alleviate some of the strain on our doctors and nurses in fighting future pandemics. These devices will make their way into our daily lives and become routine healthcare assistants. Without a doubt, companies such as Apple and Google are working on these concepts right now. Hopefully, to the benefit of all, we’ll be able to build these devices within the next three to five years.

AirPods is a registered trademark of Apple Inc.
Amazon Alexa is a registered product of Amazon Inc.
Bluetooth is a registered trademark of Bluetooth SIG, Inc.
Galaxy Buds are a registered product of Samsung.


If you’d like to find out how Ogilvy Health can help ideate with voice and conversational AI, please contact William Miranda at or Sandeep Jambhekar at