Does Use Of Artificial Intelligence Impose Additional Informed Consent Obligations On Health Practitioners?

Dr Alison Weightman1,2,3, Dr Simon Coghlan4, A/Prof Philip Clayton1,2,3

1Adelaide Medical School, University of Adelaide, Australia, 2Australia and New Zealand Dialysis and Transplant (ANZDATA) Registry, SAHMRI, Australia, 3Central and Northern Adelaide Renal and Transplantation Service, Australia, 4Centre for Artificial Intelligence and Digital Ethics, School of Computing and Information Systems, University of Melbourne, Australia

Biography:

Dr Alison Weightman is a clinical nephrologist at the Royal Adelaide Hospital. She has previously completed a Master of Bioethics, with her research thesis investigating respect for autonomy in living kidney donation. She is currently undertaking a PhD in decision making and informed consent for deceased donor kidney transplants.

Abstract:

In Ethics and Governance of Artificial Intelligence (AI) for Health, the World Health Organisation (WHO) recommends that clinicians be upfront about all AI use in healthcare. However, we suggest such broad disclosure requirements are unnecessary and impose excessively burdensome obligations for patient consent. Disclosure and informed consent obligations occur on a spectrum, permitting tacit consent in some circumstances while requiring formal, detailed disclosure and documentation procedures in others. As AI applications in healthcare likewise vary from minor (for example, AI assisting with record keeping) to substantial (for example, AI performing procedures), clinicians should be able to use existing informed consent guidance to determine situation-specific disclosure and consent requirements. Eyal suggests informed consent requirements are influenced by: (1) riskiness of the proposed treatment or intervention, (2) invasiveness, (3) controversy, (4) practitioner uncertainty, and (5) degree of impact (‘critical life choices’). Based on these, more rigorous disclosure and consent for AI use should occur where AI is associated with increased risk (e.g. due to lack of outcome data), increased invasiveness (e.g. robotic surgery), increased controversy (e.g. AI recommendations at odds with those of the treating doctor), practitioner uncertainty (e.g. choosing between multiple appropriate treatment options) or high impact (e.g. AI recommending withdrawal of life support). We propose that AI use in non-controversial, low risk settings (e.g. assistance in test interpretation) need not necessarily attract additional disclosure and informed consent obligations, whereas AI use in high stakes decision making should require disclosure and patient consent prior to use.
