A conjoint analysis survey of community views on Artificial Intelligence (AI) for clinical decision making in healthcare in Singapore

Toh Hui Jin1

Abstract:

Background:

Singapore is at the forefront of integrating novel artificial intelligence (AI) into healthcare. Despite AI's growing use and improving quality in clinical decision-making, concerns persist about its potential to harm patients due to a lack of transparency and explainability. Implementing AI in healthcare requires building public trust, which involves alignment with local ethical and social norms. However, there is limited data on community attitudes towards AI use in clinical decision-making in Singapore.

Aim and Methods:

Using choice-based conjoint analysis, we conducted an online survey on how Singaporeans compare different principles related to AI decision-making in healthcare. Our survey was adapted from a survey instrument that was developed for the Danish population and included six attributes: decision, severity, explainability, quality, responsibility, and discrimination.
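In choice-based conjoint analysis, each attribute's relative importance is typically derived from the spread (range) of its estimated part-worth utilities across levels, divided by the sum of ranges over all attributes. The sketch below illustrates this calculation for the six attributes named above, using entirely hypothetical part-worth values (not the study's estimates) for illustration only:

```python
# Illustrative sketch of relative-importance computation in choice-based
# conjoint analysis. The part-worth utilities below are HYPOTHETICAL values
# invented for demonstration; they are not the study's estimates.

# Hypothetical part-worth utilities for each level of each attribute
part_worths = {
    "decision":       [0.10, -0.10],
    "severity":       [0.15, -0.15],
    "explainability": [0.60, 0.00, -0.60],
    "quality":        [0.20, -0.20],
    "responsibility": [0.70, 0.00, -0.70],
    "discrimination": [0.35, -0.35],
}

# Range of utilities within each attribute
ranges = {attr: max(u) - min(u) for attr, u in part_worths.items()}
total = sum(ranges.values())

# Relative importance: each attribute's range as a share of the total
importance = {attr: 100 * r / total for attr, r in ranges.items()}

for attr, imp in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{attr}: {imp:.1f}%")
```

Importances computed this way always sum to 100%, which is why they can be read as the share of influence each attribute exerts on respondents' choices.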

Results:

A total of 596 respondents completed this survey. Slightly more than half (51%) of the respondents reported some or a lot of fear that AI would unintentionally harm humans, while 87% had some or a lot of fear that AI would increase surveillance. Responsibility was found to be the most important attribute (relative importance=31.5%), followed by explainability (27.7%) and discrimination (15.9%). The most valued attribute levels were: AI recommendations that are as explainable as a doctor's, doctors retaining responsibility for treatment decisions, and AI systems that were tested for discrimination.

Conclusion:

While having AI outperform doctors is desirable, principles like human oversight, transparency, and fairness are more important. As Singapore continues to advance AI in clinical decision making, fulfilling these key requirements will be crucial for establishing public trust.
