Chapter 14: Interpretable artificial intelligence systems in medical imaging: review and theoretical framework
The development of Interpretable Artificial Intelligence (AI) has drawn substantial attention to the effect of AI on augmenting human decision-making. In this chapter, we review the literature on medical imaging to develop a framework of Interpretable AI systems that enable the diagnostic process. We identify three components that constitute Interpretable AI systems, namely human agents, data, and machine learning (ML) models, and discuss their classifications and dimensions. Using the workflow of AI-augmented breast screening in the UK as an example, we identify the tensions that may emerge as human agents work with ML models and data. We discuss how these tensions may impact the performance of Interpretable AI systems in the diagnostic process and conclude with implications for further research.
