Deskilling or upskilling? Professional perspectives about the impact of medical artificial intelligence on clinical skills

Dr Yves Saint James Aquino1, Prof Stacy Carter1, Prof Wendy Rogers2, Prof Nehmat Houssami3, Dr Chris Degeling1, Prof Annette Braunack-Mayer1

1University of Wollongong, School of Health and Society, Australian Centre for Health Engagement, Evidence and Values, Wollongong, Australia, 2Macquarie University, Department of Philosophy and Department of Clinical Medicine, Sydney, Australia, 3University of Sydney, Faculty of Medicine and Health, School of Public Health, Sydney, Australia

The rapid development of artificial intelligence (AI) in medicine raises concerns about the risk of clinical deskilling, that is, the deterioration of the clinical decision-making capacities of healthcare workers. This paper reports the preliminary findings of a qualitative study exploring the perspectives of professional stakeholders working on AI applications in diagnosis and screening. The study involves in-depth interviews with clinicians, medical technologists, screening program managers, consumer health representatives, regulators and developers. First, our analysis examines various conceptions of the potential role of AI, ranging from augmenting clinical skills (for example, as diagnostic decision aids) to automating specific clinical tasks. Second, our analysis explores conflicting stakeholder perspectives on the deskilling associated with AI augmentation and/or automation. One view frames deskilling as a negative consequence of AI augmentation/automation, whereby implementing AI entails problematic tendencies for clinicians (and patients), including potential overreliance on technology, loss of foundational clinical skills and decreased confidence in clinicians’ judgment. A competing view contends that deskilling is a positive—if not necessary—effect of upskilling, wherein medical AI takes over menial tasks and enables clinicians to perform the more complex and meaningful aspects of the clinical encounter. Finally, our analysis investigates the normative assumptions about essential and dispensable clinical skills that underpin informants’ views on the value of clinical deskilling associated with AI augmentation/automation. Our findings demonstrate that professional experts have varied epistemic and normative understandings of the role of AI in diagnosis and screening, the normative implications of clinical deskilling, and what counts as an essential skill in clinical decision making. This work is funded by NHMRC 1181960.


Biography:

Yves is a doctor and philosopher of medicine interested in bioethics, public health ethics and ethics of AI. He is a postdoctoral fellow at the Australian Centre for Health Engagement, Evidence and Values (ACHEEV), University of Wollongong. He completed his PhD in bioethics at the Department of Philosophy, Macquarie University.
