Digital Doppelgangers in Healthcare: From Substituted Judgment to "Life" Extension

Dr Brian Earp1

1National University of Singapore, Singapore

Biography:

Brian D. Earp, PhD is Associate Professor of Biomedical Ethics at the National University of Singapore. Brian directs the Oxford-NUS Centre for Neuroethics and Society at NUS and Oxford, and is Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University and The Hastings Center.

Brian D. Earp, PhD

Centre for Biomedical Ethics, National University of Singapore, Singapore and Uehiro Centre for Practical Ethics, University of Oxford, Oxford, UK

brian.earp@gmail.com

Abstract:

It is currently possible to 'fine-tune' large language models (LLMs), such as ChatGPT, on individual-specific information (e.g., large volumes of written text or speech transcriptions), allowing for the creation of a 'digital doppelganger' or psychological 'digital twin' of a person (Porsdam Mann, Earp, et al., 2023).

With colleagues, I recently proposed that an appropriately fine-tuned LLM could encode an individual's preferences and values, such that it could potentially be used as a 'personalized patient preference predictor' (P4) to aid with proxy decision-making in cases of incapacity (Earp et al., 2024). In this talk, I discuss forthcoming work that builds on this idea by considering other potential uses to which a personalized LLM like the 'P4' could be put.

In particular, I explore whether an LLM-based digital doppelganger like the P4 could help an individual achieve some of the underlying aims or purported goods associated with life-extension projects, even if it could not literally extend a person's biological life, subjective consciousness, or personhood. I argue that, in certain circumstances and with respect to certain types of aims (e.g., leaving a legacy, maintaining aspects of valued relationships), a digital doppelganger could serve as a "second-best" option to life extension, and should therefore be included in ethical discussions about the latter. I also consider a number of objections to this idea and offer some provisional responses.
