Traceability

The Traceability principle states that medical AI tools should be developed together with mechanisms for documenting and monitoring the complete trajectory of the AI tool, from development and validation to deployment and usage. This will increase transparency and accountability by providing detailed and continuous information on the AI tool throughout its lifetime to clinicians, healthcare organisations, citizens and patients, AI developers and relevant authorities. AI traceability will also enable continuous auditing of AI models, identification of their risks and limitations, and updating of the models when needed.

To this end, six recommendations for Traceability are defined in the FUTURE-AI framework. First, a system for risk management should be implemented throughout the AI lifecycle, covering risk identification, assessment, mitigation, monitoring and reporting (Traceability 1). To increase transparency, relevant documentation should be provided for the stakeholder groups of interest, including AI information leaflets, technical documentation and/or scientific publications (Traceability 2). After deployment, continuous quality control of AI inputs and outputs should be implemented to identify inconsistent input data and implausible AI outputs (e.g. using uncertainty estimation), and to trigger necessary model updates (Traceability 3). Furthermore, periodic auditing and updating of AI tools should be implemented (e.g. yearly) to detect and address any potential issues or performance degradation (Traceability 4). To increase traceability and accountability, an AI logging system should be implemented to keep a record of the usage of the AI tool, including, for instance, user actions, accessed and used datasets, and identified issues (Traceability 5). Finally, mechanisms for human oversight and governance should be implemented to enable selected users to flag AI errors or risks, overrule AI decisions and apply human judgement instead, assign roles and responsibilities, and maintain the AI system over time (Traceability 6).

The recommendations are detailed below, each with its practical steps, examples of approaches and methods, and the lifecycle stage at which it applies.
Traceability 1. Implement risk management
  Practical steps:
  • Identify potential risks
  • Assess likelihood and impact
  • Define mitigation measures
  • Monitor risks and mitigations
  Examples of approaches and methods:
  • Risk management file
  • Risk-benefit analysis
  • Mitigation strategies (e.g. warnings, system shutdown)
  Stage: Design
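As a concrete illustration of a risk management file, the sketch below keeps a minimal risk register in code. It is a sketch only: the RiskEntry fields, the 1-5 scales and the likelihood × impact scoring are illustrative assumptions, not part of the FUTURE-AI framework.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One row of a risk management file (fields are illustrative)."""
    risk_id: str
    description: str
    likelihood: int   # 1 (rare) to 5 (almost certain)
    impact: int       # 1 (negligible) to 5 (catastrophic)
    mitigation: str
    status: str = "open"
    last_reviewed: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, a common risk-matrix convention
        return self.likelihood * self.impact

register = [
    RiskEntry("R-001", "Model underperforms on an underrepresented subgroup",
              likelihood=3, impact=4,
              mitigation="Subgroup performance monitoring; warning to users"),
    RiskEntry("R-002", "Corrupted input images yield implausible outputs",
              likelihood=2, impact=5,
              mitigation="Input validation; automatic system shutdown"),
]

# Surface the highest-scoring risks for the risk-benefit analysis
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    flag = "HIGH" if entry.score >= 12 else "monitor"
    print(f"{entry.risk_id} [{flag}] score={entry.score}: {entry.description}")
```

Keeping the register machine-readable makes risk monitoring and reporting repeatable across the AI lifecycle.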
Traceability 2. Provide documentation
  Practical steps:
  • Define documentation needs
  • Create documentation
  • Ensure documentation completeness
  • Update documentation regularly
  Examples of approaches and methods:
  • AI information leaflet
  • Technical document
  • Scientific publication
  • Risk management file
  Stage: Development
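Documentation completeness can be checked automatically if the technical documentation is kept in a structured form, such as a model card. The sketch below is hypothetical: the section names and card contents are assumptions, loosely modelled on common model-card templates rather than prescribed by the framework.

```python
REQUIRED_SECTIONS = [
    "intended_use", "training_data", "evaluation_data",
    "performance", "limitations", "risks_and_mitigations", "version",
]

# Hypothetical model card for a fictitious decision-support tool
model_card = {
    "intended_use": "Decision support for a hypothetical imaging task",
    "training_data": "Fictitious multi-centre dataset, 2015-2022",
    "evaluation_data": "Held-out external test set",
    "performance": {"auroc": 0.91, "sensitivity": 0.88},
    "limitations": "Not validated on paediatric patients",
    "risks_and_mitigations": "See risk management file entries R-001, R-002",
    "version": "1.2.0",
}

def check_completeness(card: dict) -> list[str]:
    """Return the required sections missing or empty in the card."""
    return [s for s in REQUIRED_SECTIONS if not card.get(s)]

missing = check_completeness(model_card)
if missing:
    raise ValueError(f"Documentation incomplete, missing sections: {missing}")
print("Model card complete.")
```

Running such a check in a release pipeline helps keep documentation complete and regularly updated.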
Traceability 3. Implement continuous quality control
  Practical steps:
  • Define quality control measures
  • Implement monitoring mechanisms
  • Provide uncertainty estimates
  • Calibrate uncertainty estimates
  Examples of approaches and methods:
  • Input data validation
  • Output plausibility checks
  • Uncertainty quantification methods
  • Calibration techniques
  Stage: Evaluation
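As one possible realisation of an output plausibility check, the sketch below estimates predictive uncertainty from the spread of a small model ensemble and flags high-entropy cases for human review. The ensemble approach, the synthetic predictions and the flagging threshold are assumptions for illustration; in practice the threshold would be calibrated on validation data.

```python
import numpy as np

def ensemble_uncertainty(member_probs: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """member_probs: (n_members, n_cases, n_classes) predicted probabilities.

    Returns the ensemble-mean prediction and the predictive entropy per case.
    """
    mean_probs = member_probs.mean(axis=0)
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=-1)
    return mean_probs, entropy

# Hypothetical predictions from a 5-member ensemble on 3 cases, 2 classes
rng = np.random.default_rng(0)
probs = rng.dirichlet(alpha=[2.0, 2.0], size=(5, 3))

mean_probs, entropy = ensemble_uncertainty(probs)

THRESHOLD = 0.6  # assumption: tuned per tool on a validation set
for i, (p, h) in enumerate(zip(mean_probs, entropy)):
    status = "REVIEW (uncertain output)" if h > THRESHOLD else "ok"
    print(f"case {i}: p={p.round(2)}, entropy={h:.2f} -> {status}")
```

Routing flagged cases to a clinician, rather than silently returning the prediction, turns uncertainty quantification into a deployable quality control measure.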
Traceability 4. Implement periodic auditing and updating
  Practical steps:
  • Define auditing schedule
  • Perform periodic evaluations
  • Identify necessary updates
  • Implement and validate updates
  Examples of approaches and methods:
  • Annual performance reviews
  • Drift detection methods
  • Model updating techniques
  • Validation of updated models
  Stage: Deployment
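Drift detection is one common trigger for a model update between scheduled audits. The sketch below applies a two-sample Kolmogorov-Smirnov test per input feature using scipy; the feature names, significance level and simulated data are assumptions for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

ALPHA = 0.01  # assumption: significance level set by the auditing protocol

def detect_drift(reference: np.ndarray, current: np.ndarray,
                 feature_names: list[str]) -> list[str]:
    """Two-sample KS test per feature; returns the drifted feature names."""
    drifted = []
    for j, name in enumerate(feature_names):
        stat, p_value = ks_2samp(reference[:, j], current[:, j])
        if p_value < ALPHA:
            drifted.append(name)
    return drifted

# Hypothetical data: the second feature's distribution has shifted
rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, size=(500, 2))
current = np.column_stack([
    rng.normal(0.0, 1.0, size=500),
    rng.normal(0.8, 1.0, size=500),  # simulated drift
])

drifted = detect_drift(reference, current, ["age_norm", "intensity_mean"])
if drifted:
    print(f"Drift detected in {drifted}: schedule re-validation / model update")
```

Any update triggered this way should itself be validated before redeployment, as the recommendation's final step requires.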
Traceability 5. Implement AI logging
  Practical steps:
  • Design logging system
  • Implement user action tracking
  • Record AI predictions and decisions
  • Analyse logged data
  Examples of approaches and methods:
  • User activity logs
  • AI decision logs
  • Time-series visualisations
  • Log analysis tools
  Stage: Deployment
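An AI decision log can be kept as append-only, structured records that capture both the AI output and the user's action on it. The sketch below uses Python's standard logging module to write JSON lines; the record fields and file name are illustrative assumptions.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_decision_log")
handler = logging.FileHandler("ai_decisions.jsonl")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_decision(case_id: str, model_version: str, prediction: str,
                 confidence: float, user: str, user_action: str) -> None:
    """Append one structured record per AI decision (fields illustrative)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "model_version": model_version,
        "prediction": prediction,
        "confidence": confidence,
        "user": user,
        "user_action": user_action,  # e.g. "accepted", "overruled", "flagged"
    }
    logger.info(json.dumps(record))

log_decision("case-0042", "1.2.0", "positive", 0.87,
             user="clinician_17", user_action="accepted")
```

One record per decision, with the model version and the user's response, is enough to reconstruct usage later for audits and time-series analysis of the logs.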
Traceability 6. Implement AI governance
  Practical steps:
  • Define governance structure
  • Assign responsibilities
  • Establish accountability mechanisms
  • Implement oversight procedures
  Examples of approaches and methods:
  • AI ethics boards
  • Clear role assignments
  • Liability frameworks
  • Regular governance reviews
  Stage: Deployment
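Human oversight can also be enforced in software by making the accountable user's decision authoritative over the AI output and recording every override. The sketch below is an assumed wrapper, not a mechanism prescribed by the framework; the field and function names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FinalDecision:
    ai_output: str
    final_output: str
    decided_by: str        # the accountable user, never the AI alone
    overruled: bool
    rationale: Optional[str] = None

def apply_oversight(ai_output: str, user: str,
                    user_override: Optional[str] = None,
                    rationale: Optional[str] = None) -> FinalDecision:
    """The human decision always takes precedence over the AI output."""
    if user_override is not None and user_override != ai_output:
        return FinalDecision(ai_output, user_override, user,
                             overruled=True, rationale=rationale)
    return FinalDecision(ai_output, ai_output, user, overruled=False)

decision = apply_oversight("positive", user="clinician_17",
                           user_override="negative",
                           rationale="Finding inconsistent with clinical history")
print(decision)
```

Recording who overruled which output, and why, supports the role assignments and accountability mechanisms that this recommendation calls for.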