Why I’ve been dreading chatbots in healthcare
for the Opinion column of Innovation Origins
Over the past months, while curating news for AI Health Hub, I’ve encountered several articles discussing recent partnerships between medical and tech businesses, or press releases from companies diving into generative AI for healthcare. Examples include Microsoft’s collaboration with Nuance, HCA Healthcare teaming up with Google Cloud, and Amazon’s announcement of AWS HealthScribe for clinical documentation.
There is a wide range of applications of generative AI in healthcare. In diagnostics, it can enhance medical imaging by generating high-resolution scans. It can help with drug discovery and refine patient-specific treatment plans. As a final umbrella example, there is Natural Language Processing (NLP) in healthcare, which is being actively researched and applied in various ways, such as summarizing clinical documents, mining medical literature, and transcribing doctor-patient conversations from voice to text.
However, there is one potential application that I find daunting: healthcare chatbots.
The reason? Perhaps it is that I come from the South of Italy, but let me explain. Having lived abroad for many years, I’ve noticed striking differences in interpersonal communication and interaction compared to my background. Southern Italian culture tends to embrace warmth and expressiveness in social encounters, very often in the medical context as well. This contrasts with the more reserved attitude I’ve observed in Northern European countries, my personal experience being with the Netherlands and Belgium. There is no right way of communicating, but I can sadly recall feeling very scared, lonely, and not cared for during my first medical encounters after I moved to the Netherlands. It did not help that I only spoke English, that it was my first time living on my own and abroad, and that I was getting to know a totally different culture and communication style.
How can a chatbot manage patients?
I felt this way while having a conversation with a flesh-and-blood doctor sitting right across from me, or while talking on the phone with the doctor’s assistant. How can I not dread a time when we will be triaged by a chatbot just to (maybe, hopefully) get the opportunity to visit the very busy doctors? If exercising empathy and warmth in these critical situations can sometimes be hard for a human, how could a chatbot manage patients? There would be so many things to consider before bringing such a solution into a clinic.
For instance, I cannot imagine chatbots that are not tailored to diverse communication styles, especially considering cultural differences. I can vividly imagine the elderly women in my small Southern Italian hometown getting upset and reacting strongly if met with a tone lacking warmth or understanding. For us, communication can deeply affect relationships.
Another point is overcoming the prejudice about chatbot capabilities, which I believe many of us share. Year after year, I have watched airlines, banks, and e-commerce platforms introduce their latest digital team members in customer service: chatbots with human names. My experience with these has been consistently disappointing, as they have yet to resolve a single one of my issues. Typically, I resort to customer service only after exhausting all my options for finding a solution on my own, and the chatbots rarely offer anything beyond information I could easily find on the website. So I usually go through the motions of the conversation, longing for the moment I get redirected to their human colleague. It takes me two or three minutes just to land in a long queue to speak with someone. Very inefficient.
What about using chatbots when you are afraid, worried, or feeling a sense of urgency? I guess it would depend on how the whole service is designed for patient management. Can a human ever be reached at the end of the line? If the chatbot is used for triage or symptom assessment, would it be supervised by a medical expert, and would its errors be easy to catch and remedy?
A helpful chatbot
Despite all of my worries, I have to admit there is one chatbot success story that keeps me hopeful for certain use cases: the one in a women’s health app called Flo.
While I’ve personally never hesitated to share symptoms or medical details, even with male doctors, I recognize that every woman’s comfort level is different. Within my circle, some prefer consultations with female gynecologists, for example. Despite my openness, I do find myself questioning my symptoms all the time, and realistically, I can’t consult my doctor throughout the 28-day cycle. But I have found a solution: an app that lets me easily log any of my symptoms at any point in the cycle, prompting a chatbot to engage, analyze them, and ultimately suggest contacting a doctor if needed. So a chatbot can come in handy both for people who struggle to share detailed information about their condition and for extremely curious, borderline hypochondriac people like myself.
What makes this chatbot acceptable to me? To begin with, I input my symptoms without expecting any diagnosis or decision; it simply guides me in interpreting what I log. Moreover, I gain insights based on symptoms logged across multiple menstrual cycles. It also hits the right tone – friendly yet formal – while consistently displaying disclaimers and prompting me to confirm that I understand the information is not a substitute for medical consultation.
I’d like to highlight this app not as a paid promotion, but because, as a paying customer, I find it a standout example of a healthcare application. I love its design, content, and functionality. Exploring their website, you can learn about their Medical Affairs and Science & Research teams. You can also find an overview of the Medical Experts they work with, including information on their specialties and CVs. Finally, they have made their pro subscription free in regions where reliable health information is scarce.
When considering the use of chatbots, there are many factors to weigh, and I am sure there is much more to explore beyond what I’ve touched upon in this piece. I do recognize their value for education and information dissemination, particularly compared to endless Google searches, or when you do not have continuous or easy access to a doctor. However, I remain quite skeptical about their practical application within a clinical setting.