Designing user-centric explanations for medical imaging with informed machine learning

May 19, 2023

Oberste, L., Rüffer, F., Aydingül, O., Rink, J., and Heinzl, A. (2023). Designing user-centric explanations for medical imaging with informed machine learning. In Design Science Research for a New Society: Society 5.0: 18th International Conference on Design Science Research in Information Systems and Technology, DESRIST 2023, Pretoria, South Africa, May 31 – June 2, 2023, Proceedings (pp. 470–484). Lecture Notes in Computer Science. Berlin: Springer.

Abstract

A flawed algorithm released into clinical practice can cause unintended harm to patient health. Risks, regulation, responsibility, and ethics shape clinical users' demand to understand and rely on the outputs of artificial intelligence. Explainable artificial intelligence (XAI) offers methods to render a model's behavior understandable from different perspectives. Extant XAI, however, is mainly data-driven and designed to meet developers' demands to correct models rather than clinical users' expectations that explanations reflect clinically relevant information. Informed machine learning (IML), which utilizes prior knowledge jointly with data to generate predictions, is a promising paradigm for enriching XAI with medical knowledge. To explore how IML can be used to generate explanations that are congruent with clinical users' demands and useful for medical decision-making, we conduct Action Design Research (ADR) in collaboration with a team of radiologists. We propose an IML-based XAI system that provides clinically relevant explanations of diagnostic imaging predictions. With the help of ADR, we reduce the gap between implementation and user evaluation and demonstrate the effectiveness of the system in a real-world application with clinicians. In developing design principles for using IML for user-centric XAI in diagnostic imaging, the study demonstrates that an IML-based design adequately reflects clinicians' conceptions. In this way, IML fosters greater understandability and trustworthiness of AI-enabled diagnostic imaging.

Keywords