Dr Alaa Ali S Almohanna1, Professor Khin Than Win2, Professor Alberto Nettel Aguirre3, Dr Yves Saint James Aquino1
1The Australian Centre for Health Engagement, Evidence and Values (ACHEEV), School of Health and Society, Faculty of the Arts, Social Sciences and Humanities, The University of Wollongong (UOW), Wollongong, Australia, 2School of Computing and Information Technology, Faculty of Engineering and Information Services, The University of Wollongong (UOW), Wollongong, Australia, 3School of Mathematics and Applied Statistics, Faculty of Engineering and Information Services, The University of Wollongong (UOW), Wollongong, Australia
Biography:
1. Dr Alaa Almohanna (she/her) holds a PhD in Computing and Information Technology from the University of Wollongong (UOW). Alaa is a research associate at UOW's Australian Centre for Health Engagement, Evidence and Values (ACHEEV). Her interdisciplinary research focuses on Human-Computer Interaction (HCI) and the ethical implications of emerging technologies, with a particular emphasis on leveraging artificial intelligence (AI) for inclusive solutions. Her work bridges technology and health informatics, integrating insights from various fields to address complex challenges in behaviour change and ethical AI applications.
2. Dr Yves Saint James Aquino (he/him) is a physician and philosopher with expertise in theoretical and applied ethics, empirical bioethics, and philosophy of medicine. His program of research currently focuses on the ethical, legal and social implications of artificial intelligence applications in healthcare. Yves is a research fellow at UOW's Australian Centre for Health Engagement, Evidence and Values (ACHEEV), and a member of Wiser Healthcare, a multi-institutional collaboration conducting research that will reduce overdiagnosis and overtreatment in Australia and around the world. He is one of the editors-in-chief of Research Ethics (Sage Publications).
Abstract:
Aim:
Algorithmic bias arising from a lack of diversity and representativeness in the datasets used to train artificial intelligence (AI) systems risks perpetuating health inequities. In this scoping review, we systematically analyse existing local and international policy and governance frameworks to identify gaps in guidelines for operationalising diversity in datasets, and to recommend provisions for the equitable representation of marginalised identities in AI development and deployment.
Method:
We conducted a systematic search of grey literature following Arksey and O'Malley's framework for scoping reviews. Policy documents, reports, and regulatory frameworks from Australia, EU member states, the United Kingdom, Canada, and the United States were identified and reviewed.
Results:
A total of 92 policy documents were included in the review: Australia (4), US (27), UK (21), EU member states (8), New Zealand (6), Canada (4), and intergovernmental organisations (22). Our analysis identified 1) recommendations or strategies to ensure equitable representation, 2) variations in the operationalisation of socially constructed categories of difference (e.g. race, ethnicity, gender), and 3) inconsistencies in the operational definitions of diversity, representativeness, and inclusion.
Conclusion:
Policy documents commonly state that algorithmic bias in AI should be addressed, and some acknowledge the need to ensure data diversity, but there is no consistent guidance for implementing data diversity in practice. Without clear normative and practical direction, ensuring equitable representation in AI systems will be difficult. Interdisciplinary work is required to develop clear and standardised guidance on how to define, document, represent and manage social categories in datasets used to develop AI.