User-Centric Explainability in Healthcare: A Knowledge-Level Perspective of Informed Machine Learning

15.08.2023

L. Oberste and A. Heinzl, "User-Centric Explainability in Healthcare: A Knowledge-Level Perspective of Informed Machine Learning," IEEE Transactions on Artificial Intelligence, vol. 4, no. 4, pp. 840-857, Aug. 2023, doi: 10.1109/TAI.2022.3227225.

Impact Statement:

Most investigations of the explainability challenge are conducted from a developer-oriented perspective, typically characterizing end users only by their role or machine learning expertise. However, users are far more heterogeneous, with varying backgrounds, experiences, and needs. This motivates a recent surge of interest in explanations that account for multifaceted user requirements. Yet how to effectively develop user-centric explanations remains unclear, and research lacks an understanding of what role users' knowledge plays in developing satisfactory explanations. This synopsis acknowledges the potential of knowledge-informed machine learning for richer explanations. It is among the first to investigate how this paradigm strengthens user understanding from a knowledge perspective. It pinpoints knowledge characteristics of the fit between system explanations and users, which can guide the design of more user-centric clinical information systems.

Abstract:

Explaining increasingly complex machine learning will remain crucial for coping with risks, regulations, responsibilities, and human support in healthcare. However, extant explainable systems mostly provide explanations that mismatch clinical users' conceptions and fail to meet their expectation of leveraging validated and clinically relevant information. A key to more user-centric and satisfying explanations lies in combining data-driven and knowledge-based systems, i.e., utilizing prior knowledge jointly with the patterns learned from data. In this article, we conduct a structured review of knowledge-informed machine learning in healthcare and build on a framework to characterize user knowledge and the prior knowledge embodied in explanations. Specifically, we explicate the types and contexts of knowledge to examine the fit between knowledge-informed approaches and users. Our results highlight that knowledge-informed machine learning is a promising paradigm for enriching former data-driven systems, yielding explanations that can increase formal understanding, convey useful medical knowledge, and are more intuitive. Although knowledge-informed explanations comply with medical conceptions, it still needs to be investigated whether they increase medical users' acceptance of, and trust in, clinical machine learning-based information systems.