Pitfalls of Chatbots in Medicine 

July 18, 2023

In the last several years, conversational chatbots have become a popular tool in medical care, public health research, and even medical education. During the Covid-19 pandemic, chatbots filled in gaps in patient care that resulted from social distancing practices and overburdened healthcare systems, aiding with tasks such as communicating information about vaccines and providing mental health support (3). With the current race to apply chatbots to medicine, fueled in large part by the successes of ChatGPT, it is critical to understand the pitfalls of this technology. 

While chatbots can provide myriad benefits to medicine, these technologies also carry significant risks, including exacerbating existing bias within the healthcare system, jeopardizing data privacy, and disseminating inaccurate medical information. As a result, chatbots must be designed, implemented, and regulated intentionally to ensure that they make medicine more equitable for diverse patient populations and enhance the patient-physician relationship. 

To begin, conversational chatbots like ChatGPT are liable to spread misinformation that is not supported by current medical research (6). Language models like ChatGPT derive their “intelligence” from the data sets on which they are trained, so the accuracy of the information they provide is wholly dependent on the quality of that training data. Currently, the companies developing conversational chatbots have a disproportionate amount of control over the validation and standardization of the information used to train these models (6). Medical researchers and experts are needed to verify and corroborate the medical information that chatbots provide to patients. Furthermore, the responses that conversational chatbots generate can be highly sensitive to context and can even promote conspiracy theories, which risks misleading less educated or less technologically literate populations (6). 

The data used to train chatbots can also make these models vulnerable to perpetuating systemic discrimination toward racial minorities, women, and other historically underserved populations in medicine (6). If chatbots are trained on data that over-represents hegemonic groups, they may provide inaccurate or prejudiced medical information to minority populations (6), who may consequently receive lower-quality care from chatbots and other medical AI technologies. Relatedly, chatbots often serve as a patient’s first point of contact with the healthcare system, so the racial, ethnic, and gendered appearance of a chatbot’s avatar can affect a patient’s level of trust in the medical information it provides (4). 

Finally, chatbots in medicine risk violating data privacy and patient confidentiality. Artificial intelligence technologies used in healthcare settings are given access to vast amounts of confidential patient information (2), and they must be designed to disclose that information only to authorized users. In order to interpret combinations of data from medical texts, lab results, and electronic health records, generalist medical artificial intelligence (GMAI) needs access to both personal medical data and historical medical knowledge (2). At present, data security and privacy are not being given sufficient priority in the development of these technologies. 

Chatbots are likely to become further integrated into our healthcare systems, offering more accessible and personalized care. However, especially in medicine, they must be intentionally designed to avoid these pitfalls, promote health equity, and improve trust between patients and healthcare systems. 

References 

  1. Baumgartner, Christian. “The potential impact of ChatGPT in clinical and translational medicine.” Clinical and Translational Medicine, vol. 13, no. 3, 2023, e1206. doi:10.1002/ctm2.1206 
  2. Binns, Corey. “The promise—and pitfalls—of medical AI headed our way.” Stanford News, 12 Apr. 2023, news.stanford.edu/press-releases/2023/04/12/advances-generalizable-medical-ai/ 
  3. Fournier-Tombs, Eleonore, and Juliette McHardy. “A medical ethics framework for conversational AI.” Journal of Medical Internet Research, 13 Apr. 2023. doi:10.2196/43068 
  4. Mason, Kara. “Do Chatbot Avatars Prompt Bias in Health Care?” University of Colorado School of Medicine, 5 June 2023, https://news.cuanschutz.edu/medicine/do-chatbot-avatars-prompt-bias-in-health-care 
  5. Palanica, Adam, et al. “Physicians’ Perceptions of Chatbots in Health Care: Cross-Sectional Web-Based Survey.” Journal of Medical Internet Research, vol. 21, no. 4, e12887, 5 Apr. 2019. doi:10.2196/12887 
  6. Temsah, Omar, et al. “Overview of Early ChatGPT’s Presence in Medical Literature: Insights From a Hybrid Literature Review by ChatGPT and Human Experts.” Cureus, vol. 15, no. 4, e37281, 8 Apr. 2023. doi:10.7759/cureus.37281