Bruckert, Sebastian and Finzel, Bettina and Schmid, Ute (2020) The Next Generation of Medical Decision Support: A Roadmap Toward Transparent Expert Companions. Frontiers in Artificial Intelligence, 3. ISSN 2624-8212
Abstract
The increasing quality and performance of artificial intelligence (AI) in general, and machine learning (ML) in particular, has been accompanied by wider use of these approaches in everyday life. As part of this development, ML classifiers have also gained importance for diagnosing diseases in biomedical engineering and the medical sciences. However, many of these ubiquitous high-performing ML algorithms are black boxes, leading to opaque and incomprehensible systems that complicate human interpretation of single predictions or of the whole prediction process. This poses a serious challenge for human decision makers, who must develop trust, which is essential in life-changing decision tasks. This paper addresses the question of how expert companion systems for decision support can be designed to be interpretable and therefore transparent and comprehensible for humans. In addition, an approach to interactive ML and human-in-the-loop learning is demonstrated in order to integrate human expert knowledge into ML models, so that humans and machines act as companions within a critical decision task. We especially address the problem of Semantic Alignment between ML classifiers and their human users as a prerequisite for semantically relevant and useful explanations and interactions. Our roadmap paper presents and discusses an interdisciplinary yet integrated Comprehensible Artificial Intelligence (cAI) transition framework with regard to the task of medical diagnosis. We explain and integrate relevant concepts and research areas to provide the reader with a hands-on cookbook for achieving the transition from opaque black-box models to interactive, transparent, comprehensible, and trustworthy systems. To make our approach tangible, we present suitable state-of-the-art methods for the medical domain and include a realization concept for our framework. The emphasis is on the concept of Mutual Explanations (ME), which we introduce as a dialog-based, incremental process in order to provide human ML users not only with trust but also with stronger participation in the learning process.
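To make the dialog-based, incremental character of Mutual Explanations concrete, the following is a minimal sketch of one such human-in-the-loop round: the model predicts and explains itself, the expert corrects it, and the correction is folded back into the model's knowledge. All names here (MutualExplanationSession, the toy rule-overlap classifier, the example findings and labels) are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of a Mutual Explanations dialog round (not from the paper).
from dataclasses import dataclass, field


@dataclass
class Prediction:
    label: str
    explanation: list[str]  # human-readable findings supporting the label


@dataclass
class MutualExplanationSession:
    """Human-in-the-loop loop: the model explains, the expert corrects,
    and corrections are incorporated incrementally."""
    rules: dict[str, set[str]] = field(default_factory=dict)  # label -> known findings

    def predict(self, findings: set[str]) -> Prediction:
        # Pick the label whose known findings overlap most with the case.
        best, best_overlap = "unknown", set()
        for label, required in self.rules.items():
            overlap = required & findings
            if len(overlap) > len(best_overlap):
                best, best_overlap = label, overlap
        return Prediction(best, sorted(best_overlap))

    def incorporate_correction(self, findings: set[str], correct_label: str) -> None:
        # Expert feedback: associate the observed findings with the correct label.
        self.rules.setdefault(correct_label, set()).update(findings)


# Usage: one round of the dialog.
session = MutualExplanationSession(rules={"pneumonia": {"fever", "infiltrate"}})
case = {"fever", "cough"}
pred = session.predict(case)
print(pred.label, "because of", pred.explanation)   # model explains its prediction
session.incorporate_correction(case, "bronchitis")  # expert corrects the model
print(session.predict(case).label)                   # next prediction reflects the correction
```

The point of the sketch is the shape of the interaction, not the toy classifier: each round yields an inspectable explanation, and the expert's correction changes the model's future behavior, which is the incremental participation the abstract describes.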
| Item Type: | Article |
|---|---|
| Subjects: | Impact Archive > Multidisciplinary |
| Depositing User: | Managing Editor |
| Date Deposited: | 27 Jan 2023 12:41 |
| Last Modified: | 01 Mar 2024 03:53 |
| URI: | http://research.sdpublishers.net/id/eprint/1214 |