The sixth and final principle of the FUTURE-AI guidelines is Explainability, which states that medical AI algorithms should provide clinicians with meaningful and actionable explanations for their predictions. Explainability provides insight into the algorithmic mechanisms behind the AI decision-making process, thereby allowing clinicians to validate and scrutinise these decisions. While local explainability highlights the reasons behind a particular prediction of the AI model for an individual image, global explainability identifies the common characteristics that the AI model considers important for a given image analysis task. Attribution maps (or heat-maps) are commonly used visual explainability methods in medical AI; they highlight the regions of the input image that the AI model considers most relevant for its prediction, as sketched below.
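As an illustration only, the following is a minimal sketch of a gradient-based attribution (saliency) map, assuming a hypothetical PyTorch image classifier; the model, the toy input, and the target class are assumptions made for this example and are not prescribed by the guidelines.

```python
# Minimal sketch: gradient-based saliency (attribution) map for a
# hypothetical PyTorch classifier. The model and inputs below are
# illustrative placeholders, not part of the FUTURE-AI guidelines.
import torch
import torch.nn as nn


def saliency_map(model: nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return a per-pixel attribution map: |d score(target_class) / d input|.

    image: tensor of shape (1, C, H, W), already preprocessed for the model.
    """
    model.eval()
    image = image.clone().requires_grad_(True)

    # Forward pass: score of the class we want to explain.
    score = model(image)[0, target_class]

    # Backward pass: gradient of that score with respect to the input pixels.
    score.backward()

    # Collapse the channel dimension; large values mark pixels whose small
    # changes most affect the prediction (the "relevant regions").
    return image.grad.detach().abs().max(dim=1)[0].squeeze(0)  # shape (H, W)


if __name__ == "__main__":
    # Toy usage with a small randomly initialised CNN (illustration only).
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 2),
    )
    dummy_scan = torch.randn(1, 1, 64, 64)  # placeholder "medical image"
    heatmap = saliency_map(model, dummy_scan, target_class=1)
    print(heatmap.shape)  # torch.Size([64, 64])
```

To assess and achieve explainability in medical AI, we recommend the following quality checks: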