Foundation metrics for evaluating effectiveness of healthcare conversations powered by generative AI.

Journal: npj Digital Medicine

Volume: 7

Issue: 1

Year of Publication: 2024

Affiliated Institutions: University of California, Irvine, CA, USA; HealthUnity, Palo Alto, CA, USA; Stanford University, Stanford, CA, USA; National Institute of Standards and Technology (NIST), Gaithersburg, MD, USA.

Abstract Summary

Generative artificial intelligence is set to revolutionize healthcare delivery by transforming traditional patient care into a more personalized, efficient, and proactive process. Chatbots, serving as interactive conversational models, are likely to drive this patient-centered transformation. By providing services such as diagnosis, personalized lifestyle recommendations, dynamic scheduling of follow-ups, and mental health support, they aim to substantially improve patient health outcomes while reducing the workload burden on healthcare providers. The life-critical nature of healthcare applications calls for a unified and comprehensive set of evaluation metrics for conversational models. Evaluation metrics proposed for generic large language models (LLMs) fail to capture medical and health concepts and their significance in promoting patients' well-being; moreover, they neglect pivotal user-centered aspects such as trust-building, ethics, personalization, empathy, user comprehension, and emotional support. This paper surveys state-of-the-art LLM-based evaluation metrics applicable to the assessment of interactive conversational models in healthcare. We then present a comprehensive set of evaluation metrics designed to assess the performance of healthcare chatbots from an end-user perspective, covering language processing abilities, impact on real-world clinical tasks, and effectiveness in user-interactive conversations. Finally, we discuss the challenges of defining and implementing these metrics, with particular emphasis on confounding factors such as the target audience, evaluation methods, and prompting techniques involved in the evaluation process.

Authors & Co-authors: Abbasian, Khatibi, Azimi, Oniani, Shakeri Hossein Abad, Thieme, Sriram, Yang, Wang, Lin, Gevaert, Li, Jain, Rahmani

Statistics
Authors: 14
Identifiers
Article Number: 82
ISSN: 2398-6352
Study Population
Male, Female
Publication Country
England