Should AI-enabled Medical Devices be Explainable?

Rita Matulionyte1

1Macquarie University Law School, Macquarie Park, NSW, Australia

Abstract

Despite its exponential growth, artificial intelligence (AI) in healthcare faces various challenges. One of them is the lack of explainability of healthcare AI tools, which arguably contributes to insufficient trust in AI technologies and to quality, accountability, and liability issues. The aim of this paper is to examine whether, why, and to what extent explainability should be demanded in relation to AI-enabled medical devices and their outputs. To examine these questions, we conducted a critical analysis of the interdisciplinary literature (computer science, health science, and legal science) on this topic, together with a pilot empirical study: two focus group discussions involving clinicians, patients’ representatives, and AI researchers. We conclude that the role of AI explainability in relation to AI-enabled medical devices is a limited one. We argue that technical explainability can address only a limited range of the challenges associated with AI in healthcare and is likely to achieve fewer goals than is sometimes expected. So far, AI explainability (XAI) techniques lack quality guarantees and are of limited assistance in increasing trust in AI among clinicians. The ability of XAI to improve clinical decision-making (e.g., by exposing AI errors) is limited and often overstated, and the role of explainability in allocating accountability between AI developers and clinicians remains uncertain. The study recommends that, instead of focusing on the technical explainability of AI-enabled medical devices, priority should be given to establishing more general transparency around AI development and quality assurance.

Biography

Bio to come
