Regulation of AI in healthcare: A cautionary tale considering horses and zebras

Dr Susannah Louise Jacobson1, Dr Bernadette Richards3, Dr Yves Aquino2

1Adelaide Law School, Adelaide, Australia, 2University of Wollongong, ACHEEV, Wollongong, Australia, 3Future Health Technologies, Singapore-ETH Centre, Singapore

The purpose of this presentation is to engage with the challenges and misconceptions surrounding the regulation of artificial intelligence (“AI”) in healthcare. The discussion will highlight the complexity of AI applications, which can change over time and whose inner workings are not currently fully explainable. We address three common misconceptions about healthcare AI. First, there is no single role for AI in the clinical setting, given the diversity of applications, which include diagnostic tools, drug delivery systems and monitoring programs. Second, rather than inhibiting change, regulation can shape, guide and encourage innovation. Third, regulatory concerns about data management in healthcare AI should not focus narrowly on personal data management and privacy laws alone; such a narrow focus risks overlooking other significant harms, including the known prevalence of bias in the data that informs healthcare delivery. Using the medical aphorism of the ‘zebra’, which describes a surprising diagnosis or a rare case, result or disease, we illustrate that data management principles must minimise the risk of AI exacerbating existing health inequalities.

Given the numerous challenges and misconceptions about AI in healthcare explored in our discussion, we argue against the temptation to create a ‘law of the horse’ (a narrow governance regime for diverse challenges). Instead, we propose identifying common regulatory principles and values that underpin the provision of healthcare. In the context of regulating AI in healthcare, the concept of patient safety underscores the need to evaluate and prevent potential harms, including the risks of under- and overdiagnosis, clinicians’ overreliance on technology, and the promotion of bias and health disparities, among others.


Bio to come.