Large language models in healthcare: a checklist for safe, ethical and responsible use of generative AI language models

Yves Saint James Aquino1, Stacy Carter1

1University of Wollongong, Wollongong, NSW, Australia

Abstract

Chat Generative Pre-trained Transformer, or ChatGPT, developed by OpenAI, has become one of the most popular examples of a large language model (LLM), a type of generative artificial intelligence (AI) designed to process and generate text-based content. ChatGPT can produce sophisticated, human-like text responses and can be deployed for conversational or writing tasks. Since ChatGPT’s launch, studies have attempted to demonstrate ChatGPT’s and other LLMs’ performance in tasks including medical knowledge assessment tests, laboratory report interpretation, and clinical evidence synthesis. In clinical practice, numerous reports have emerged of doctors using ChatGPT to deal with paperwork, including patient discharge summaries. As with other AI applications in healthcare, LLMs are being touted as a disruptive technology with the potential to fundamentally change healthcare practices. However, LLMs also carry risks that may not be immediately apparent to end users in healthcare. This paper proposes a checklist for the safe, ethical and responsible use of LLMs in healthcare. We developed the checklist to support healthcare practitioners in considering the possible ethical issues that arise from the use of LLMs. The checklist draws clinicians’ attention to key dimensions of LLM application in practice and draws out a range of practical and ethical issues, including bias, privacy, and evidence base. We plan to work with partners to encourage the implementation of this checklist in clinical and policy decision-making in healthcare, with the goal of enabling the benefits of LLMs in healthcare while protecting patients, clinicians and health systems from the significant risks entailed.
