On this page, we provide definitions and justifications for each of the six guiding principles and give an overview of the FUTURE-AI recommendations.

The following table provides a summary of the recommendations, together with the proposed level of compliance (i.e. recommended vs. highly recommended).


List of the FUTURE-AI recommendations, together with the expected compliance level for proof-of-concept (low ML-TRL, "Res.") and deployable (high ML-TRL, "Dep.") AI tools (+: recommended, ++: highly recommended).

| Principle | # | Recommendation | Res. | Dep. |
|---|---|---|---|---|
| Fairness (F) | 1 | Define any potential sources of bias from an early stage | ++ | ++ |
| | 2 | Collect information on individuals' and data attributes | + | + |
| | 3 | Evaluate potential biases and, when needed, bias correction measures | + | ++ |
| Universality (U) | 1 | Define intended clinical settings and cross-setting variations | ++ | ++ |
| | 2 | Use community-defined standards (e.g. clinical definitions, technical standards) | + | + |
| | 3 | Evaluate using external datasets and/or multiple sites | ++ | ++ |
| | 4 | Evaluate and demonstrate local clinical validity | + | ++ |
| Traceability (T) | 1 | Implement a risk management process throughout the AI lifecycle | + | ++ |
| | 2 | Provide documentation (e.g. technical, clinical) | ++ | ++ |
| | 3 | Define mechanisms for quality control of the AI inputs and outputs | + | ++ |
| | 4 | Implement a system for periodic auditing and updating | + | ++ |
| | 5 | Implement a logging system for usage recording | + | ++ |
| | 6 | Establish mechanisms for AI governance | + | ++ |
| Usability (U) | 1 | Define intended use and user requirements from an early stage | ++ | ++ |
| | 2 | Establish mechanisms for human-AI interactions and oversight | + | ++ |
| | 3 | Provide training materials and activities (e.g. tutorials, hands-on sessions) | + | ++ |
| | 4 | Evaluate user experience and acceptance with independent end-users | + | ++ |
| | 5 | Evaluate clinical utility and safety (e.g. effectiveness, harm, cost-benefit) | + | ++ |
| Robustness (R) | 1 | Define sources of data variation from an early stage | ++ | ++ |
| | 2 | Train with representative real-world data | ++ | ++ |
| | 3 | Evaluate and optimise robustness against real-world variations | ++ | ++ |
| Explainability (E) | 1 | Define the need and requirements for explainability with end-users | ++ | ++ |
| | 2 | Evaluate explainability with end-users (e.g. correctness, impact on users) | + | + |
| General | 1 | Engage inter-disciplinary stakeholders throughout the AI lifecycle | ++ | ++ |
| | 2 | Implement measures for data privacy and security | ++ | ++ |
| | 3 | Implement measures to address identified AI risks | ++ | ++ |
| | 4 | Define an adequate evaluation plan (e.g. datasets, metrics, reference methods) | ++ | ++ |
| | 5 | Identify and comply with applicable AI regulatory requirements | + | ++ |
| | 6 | Investigate and address application-specific ethical issues | + | ++ |
| | 7 | Investigate and address social and societal issues | + | + |