Dr Anantharaman Muralidharan1, Dr Sinead Prince1, Dr James Edgar Lim Tjen Yao1
1Centre For Biomedical Ethics, Yong Loo Lin School of Medicine, National University Of Singapore, Singapore
Biography:
Sinead is a Research Fellow at the Centre for Biomedical Ethics at National University of Singapore. Her PhD research focuses on the role of autonomy and well-being in evaluating the ethics of genetic enhancement, and her current research focuses on the ethics of medical-based AI devices.
Abstract:
Many guidelines on AI in healthcare emphasize the importance of algorithmic explainability. A decision-making algorithm is explainable if, in addition to generating a decision, it also generates information about how the decision was reached. Algorithmic explainability has been claimed to be necessary for AI to be trustworthy, contestable, actionable, justifiable and fair. The dilemma is that black-box AI cannot provide such explanations, forcing a ‘trade-off’: explainability for accuracy. If explainability is what constitutes justifiability, we must forgo the benefits of black-box AI.
This panel claims that justifiability is distinct from explainability, and that the use of algorithms in decision-making can therefore be justified without their being explainable. Thus, this panel challenges the common presumption in favour of algorithmic explainability and, in particular, examines the following questions:
1. If justifiability is a matter of making explicit to patients and physicians alike why the AI’s decision is correct, can justifiability be achieved without explaining AI decisions to the patient or physician?
2. What is the purpose of explainability in healthcare, and in what conditions can this purpose be achieved without requiring black-box AI to provide explanations?
3. What is the definition and purpose of trust and trustworthiness in justifying medical AI?