Mr Joel Seah1
1National University of Singapore, Singapore
Biography:
Bio to come
Abstract:
Informed consent is a fundamental requirement in human subjects research, and the principle of ‘Respect for Persons’ obligates researchers to recognize individuals’ autonomy in deliberation and decision-making by providing relevant information in an understandable manner. Unfortunately, informed consent forms (ICFs) often fall short of this ethical obligation: typically lengthy, complex, and/or hard to understand, such ICFs hinder people’s ability to make an informed choice about participation.
In part, the lack of writing and editorial skills and expertise among researchers and Institutional Review Board (IRB) staff contributes to the creation of such poor-quality ICFs and templates. Anecdotal evidence suggests that by co-piloting with Large Language Models (LLMs) such as OpenAI’s ChatGPT, researchers and IRBs could develop concise, readable, and easy-to-understand ICFs more effectively and efficiently, especially in circumstances where resources like time, money, and manpower are scarce or limited. Moreover, a human-in-the-loop approach, in which a human checks and verifies the model’s outputs, would address the common concerns of ‘hallucinations’ and performance ‘consistency’ in LLMs.
With the advent and democratization of LLMs, ‘Respect for Persons’ necessitates that researchers and IRBs leverage this technology by developing, testing, and validating ICF-specific LLMs. If this work demonstrates that co-piloted LLMs are indeed significantly better than a sole human at creating quality ICFs, we ought to consider adopting them in the writing process to enhance prospective participants’ autonomy. Consequently, further work and evaluation would also be needed to facilitate the responsible deployment and utilization of such LLMs.