Computer knows best? The need for value-flexible AI in patient care 

Dr Rosalind McDougall1

1University of Melbourne, Melbourne, Australia

Artificial intelligence is increasingly being developed for use in clinical care. AI-based diagnosis has been shown to be highly accurate in several clinical contexts compared with experienced doctors, and AI systems can analyse vast bodies of published evidence to generate ranked treatment options for an individual patient. This paper aims to contribute to a systematic discussion of how AI relates to our current conceptual frameworks in medical ethics. Specifically, I investigate the relationship between AI and the ethical ideal of shared decision-making in healthcare, focusing on AI systems that generate treatment recommendations, such as Watson for Oncology. I argue that involving AI in ranking treatment options poses a substantial threat to shared decision-making. Ranking treatment options involves value judgements. If these value judgements are fixed and covert in AI systems, we risk a return to paternalistic medical care. However, if designed in an ethically informed way, AI could offer a powerful means of supporting shared decision-making: it could be used to incorporate explicit value reflection, promoting patient autonomy. I put forward the concept of value-flexible AI, that is, AI that can respond to the values and treatment goals of individual patients, and argue that, in the context of patient care, there is an urgent need for AI systems to be designed to be value-flexible.


Biography:

Dr Rosalind McDougall is a Senior Lecturer in Health Ethics in the School of Population and Global Health, University of Melbourne, Australia.  She has published widely in clinical ethics and reproductive ethics.  She recently co-edited When Doctors and Parents Disagree: Ethics, Paediatrics and the Zone of Parental Discretion (Federation Press, 2016).
