Trust as a Moral Power: Implications for AI in Medicine
Muralidharan Anantharaman1, G. Owen Schaefer1
1National University of Singapore, Singapore
Abstract
Trust (or lack thereof) and trustworthiness have been identified as central concerns in the development and deployment of artificial intelligence, especially in medical practice. However, AI raises challenges for mainstream conceptual accounts of trust. Accounts of trust in the literature are, for the most part, mentalist: trust consists of a belief, an affect, or some combination of such mental states. Unfortunately, mentalist accounts of trust face two types of problems. First, they are implicitly committed to a strong form of reliabilism on which the aptness of trust is calibrated to the recipient's overall reliability; yet intuitively, judgments of reliability and the aptness of trust can come apart. Second, mentalist accounts have difficulty accounting for the voluntary nature of trust: we can choose to trust or not to trust others. In this paper, we offer a novel conception of trust that better aligns with these intuitions: trust is the exercise of a moral power. On this account, to trust someone is to perform a kind of communicative action that can change what duties or permissions others have towards us. This illuminates the justification for a trust gap between humans and AI in medicine: only persons are moral agents, and so trust as a moral power applies more fittingly to humans than to AI. We further explore the implications of our conception of trust for AI in medicine, especially as it relates to the doctor-patient relationship.
Biography
Murali is a research fellow at the Centre for Biomedical Ethics at the National University of Singapore. He works on questions of epistemic, moral, and political justification as they bear on IRBs and AI. He has published in Bioethics and Ethical Theory and Moral Practice.