Explainability

The Explainability principle states that medical AI tools should provide clinically meaningful information about the logic behind their decisions. Medicine is a high-stakes discipline that requires transparency, reliability, and accountability, yet machine learning techniques often produce complex models that are black boxes in nature. Explainability is considered desirable from technological, medical, ethical, legal, and patient perspectives. It is also a complex task, with challenges that must be carefully addressed during AI development and evaluation to ensure that AI explanations are clinically meaningful and beneficial to end-users.

To this end, two recommendations for Explainability are defined in the FUTURE-AI framework. At the design phase, it should first be established with end-users and domain experts whether explainable AI is needed for the medical AI tool in question. If so, the specific goals and approaches for explainability should be defined (Explainability 1). After implementation, the selected explainability approaches should be evaluated, both quantitatively using in silico methods and qualitatively with end-users, to assess their impact on user satisfaction and performance (Explainability 2).
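As an illustration of what a quantitative, in silico evaluation of explanations might look like, the sketch below computes permutation feature importance for a tabular classifier with scikit-learn. The dataset, model, and parameters are illustrative assumptions for this sketch only; FUTURE-AI does not prescribe a particular library or method.

```python
# Minimal sketch of a quantitative explainability check, assuming a tabular
# clinical classifier built with scikit-learn; dataset and model are
# illustrative placeholders, not part of the FUTURE-AI framework itself.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance estimates how much each input feature contributes to
# predictive performance; large drops indicate features the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

# Report the five most influential features with their variability.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25} {result.importances_mean[idx]:.3f} "
          f"± {result.importances_std[idx]:.3f}")
```

Such global feature importance scores can then be reviewed with domain experts to judge whether the model's reasoning is clinically plausible, complementing the qualitative user studies described above.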

Recommendation: Explainability 1. Define explainability needs (Stage: Design)
Practical steps:
  • Assess need for explainability
  • Define explainability goals
  • Identify suitable approaches
  • Anticipate potential limitations
Examples of approaches and methods:
  • Global vs. local explanations
  • Feature importance
  • Decision trees
  • Attention maps

Recommendation: Explainability 2. Evaluate explainability (Stage: Evaluation)
Practical steps:
  • Apply explainable AI methods
  • Assess explanation correctness
  • Evaluate user understanding
  • Identify explanation limitations
Examples of approaches and methods:
  • Feature importance analysis
  • Saliency maps
  • User comprehension studies
  • Explanation consistency checks
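The "explanation consistency checks" listed under Explainability 2 can also be prototyped in silico. The hedged sketch below recomputes permutation importances on two disjoint halves of an evaluation set and compares their rankings; the dataset, model, and rank-correlation criterion are illustrative choices, not requirements of the framework.

```python
# Hedged sketch of an explanation consistency check: feature attributions are
# recomputed on two disjoint halves of the evaluation data and their rankings
# compared. Dataset and model are illustrative, not prescribed by FUTURE-AI.
from scipy.stats import spearmanr
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_eval, y_train, y_eval = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Split the evaluation set in two and compute attributions independently.
half = len(X_eval) // 2
imp_a = permutation_importance(model, X_eval[:half], y_eval[:half],
                               n_repeats=10, random_state=0).importances_mean
imp_b = permutation_importance(model, X_eval[half:], y_eval[half:],
                               n_repeats=10, random_state=0).importances_mean

# High rank correlation suggests the explanation is stable across patient
# subsets; low values flag explanations that may mislead end-users.
rho, _ = spearmanr(imp_a, imp_b)
print(f"Explanation rank consistency (Spearman rho): {rho:.2f}")
```

A check of this kind addresses only the stability of explanations; their correctness and usefulness still need to be assessed with end-users, as recommended in Explainability 2.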