Traceability

The Traceability principle states that medical AI tools should be developed together with mechanisms for documenting and monitoring the complete trajectory of the AI tool, from development and validation to deployment and usage. This will increase transparency and accountability by providing detailed and continuous information on the AI tools during their lifetime to clinicians, healthcare organisations, citizens and patients, AI developers and relevant authorities. AI traceability will also enable continuous auditing of AI models, identification of risks and limitations, and updating of the AI models when needed.

To this end, six recommendations for Traceability are defined in the FUTURE-AI framework. First, a system for risk management should be implemented throughout the AI lifecycle, including risk identification, assessment, mitigation, monitoring and reporting (Traceability 1). To increase transparency, relevant documentation should be provided for the stakeholder groups of interest, including AI information leaflets, technical documentation, and/or scientific publications (Traceability 2). After deployment, continuous quality control of AI inputs and outputs should be implemented to identify inconsistent input data and implausible AI outputs (e.g. using uncertainty estimation), and to apply necessary model updates (Traceability 3). Furthermore, periodic auditing and updating of AI tools should be implemented (e.g. yearly) to detect and address any potential issues or performance degradation (Traceability 4). To increase traceability and accountability, an AI logging system should be implemented to keep a record of the usage of the AI tool, including, for instance, user actions, accessed and used datasets, and identified issues (Traceability 5). Finally, mechanisms for human oversight and governance should be implemented, to enable selected users to flag AI errors or risks, overrule AI decisions, use human judgement instead, assign roles and responsibilities, and maintain the AI system over time (Traceability 6).

For each recommendation, the corresponding operations and illustrative examples are summarised in the tables below.
Implement a risk management process (Traceability 1)

| Operation | Examples |
| --- | --- |
| Identify all possible clinical, technical, ethical, and societal risks | Bias against under-represented subgroups, limited generalisability to low-resource facilities, data drift, lack of acceptance by end users, sensitivity to noisy inputs |
| Identify all possible operational risks | Misuse of the AI tool, application outside the target population, use by non-target users, hardware failure, incorrect data annotations, adversarial attacks |
| Assess the likelihood of each risk | Very likely, likely, possible, rare |
| Assess the consequences of each risk | Patient harm, discrimination, lack of transparency, loss of autonomy, patient reidentification |
| Prioritise all risks according to their likelihood and consequences (see the sketch below) | Risk of bias vs risk of patient reidentification |
| Define mitigation measures to be applied during AI development | Data enhancement, data augmentation, bias correction techniques, domain adaptation, transfer learning, continuous learning |
| Define mitigation measures to be applied after deployment | Warnings to users, system shutdown, reprocessing of input data, acquisition of new input data, alternative procedure, human judgment only |
| Set up a mechanism to monitor and manage risks over time | Periodic risk assessment every six months |
| Create a comprehensive risk management file | Including all risks, their likelihood and consequences, risk mitigation measures, and the risk monitoring strategy |
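As an illustration of risk prioritisation, the sketch below implements a minimal risk register in Python. The 4-point ordinal scales and the likelihood × severity prioritisation rule are illustrative assumptions, not requirements of the FUTURE-AI framework.

```python
from dataclasses import dataclass

# Ordinal scales for likelihood and severity. The 4-point scales and the
# likelihood x severity rule are illustrative assumptions for this sketch.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "very likely": 4}
SEVERITY = {"minor": 1, "moderate": 2, "major": 3, "critical": 4}

@dataclass
class Risk:
    description: str   # eg "bias against under-represented subgroups"
    likelihood: str    # key into LIKELIHOOD
    severity: str      # key into SEVERITY
    mitigation: str    # planned mitigation measure

    @property
    def priority(self) -> int:
        # Simple risk matrix: priority = likelihood x severity.
        return LIKELIHOOD[self.likelihood] * SEVERITY[self.severity]

# A miniature risk management file with two of the risks named above.
register = [
    Risk("Bias against under-represented subgroups", "likely", "major",
         "Bias correction and data augmentation during development"),
    Risk("Patient reidentification", "rare", "critical",
         "Anonymise patient IDs and encrypt log files"),
]

# Review risks in descending order of priority.
for risk in sorted(register, key=lambda r: r.priority, reverse=True):
    print(f"[priority {risk.priority:2d}] {risk.description} -> {risk.mitigation}")
```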
Provide documentation (Traceability 2)

| Operation | Examples |
| --- | --- |
| Report evaluation results in publications using AI reporting guidelines | Peer-reviewed scientific publication using the TRIPOD-AI reporting guideline |
| Create technical documentation for the AI tool (see the sketch below) | AI passport, model cards (including model hyperparameters, training and testing data, evaluations, limitations, etc) |
| Create clinical documentation for the AI tool | Guidelines for clinical use, AI information leaflet (including intended use, conditions and diseases, targeted populations, instructions, potential benefits, contraindications) |
| Provide the risk management file | Including identified risks, mitigation measures, and monitoring measures |
| Create user and training documentation | User manuals, training materials, troubleshooting guides, FAQs |
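Technical documentation can also be kept in machine-readable form so that it travels with the released model. The sketch below assumes a model-card-style JSON record; all field names and values are illustrative placeholders, not a mandated schema.

```python
import json

# A model-card-style record for technical documentation. Every field name
# and value here is an illustrative placeholder, not real evaluation data.
model_card = {
    "model_name": "chest-xray-classifier",        # hypothetical tool
    "version": "1.2.0",
    "intended_use": "Triage support for adult chest radiographs",
    "hyperparameters": {"architecture": "DenseNet-121", "learning_rate": 1e-4},
    "training_data": "Hospital A, 2018-2022 (placeholder description)",
    "testing_data": "Hospital B, external validation (placeholder description)",
    "evaluation": {"metrics": ["AUROC", "sensitivity", "specificity"]},
    "limitations": [
        "Not validated in paediatric patients",
        "Performance on portable radiographs unknown",
    ],
}

# Serialise alongside the released model so the documentation travels with it.
with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```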
Define mechanisms for quality control of AI inputs and outputs (Traceability 3)

| Operation | Examples |
| --- | --- |
| Implement mechanisms to identify erroneous input data (see the sketch below) | Missing-value or out-of-distribution detector, automated image quality assessment |
| Implement mechanisms to detect implausible AI outputs | Postprocessing sanity checks, anomaly detection algorithm |
| Provide calibrated uncertainty estimates | Calibrated uncertainty estimates per patient or data point |
| Implement a system for continuous quality monitoring | Real-time dashboard tracking data quality and performance metrics |
| Implement a feedback mechanism for users to report issues | Feedback portal enabling clinicians to report discrepancies or anomalies |
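A minimal sketch of input and output quality control is shown below, combining a crude out-of-distribution check on inputs with a predictive-entropy uncertainty flag on outputs. The z-score and entropy thresholds are illustrative assumptions that would need tuning per tool.

```python
import numpy as np

# Reference statistics assumed to come from the training data; the z-score
# and entropy thresholds are illustrative values, not validated settings.
TRAIN_MEAN, TRAIN_STD = 0.0, 1.0
Z_THRESHOLD = 4.0          # flag inputs far outside the training distribution
ENTROPY_THRESHOLD = 0.6    # flag predictions with high predictive entropy

def check_input(x: np.ndarray) -> list[str]:
    """Flag missing values and crude out-of-distribution inputs."""
    issues = []
    if np.isnan(x).any():
        issues.append("missing values in input")
    z = np.abs((x - TRAIN_MEAN) / TRAIN_STD)
    if (z > Z_THRESHOLD).any():
        issues.append("input outside training distribution")
    return issues

def check_output(probs: np.ndarray) -> list[str]:
    """Flag implausible or highly uncertain predictions."""
    issues = []
    if not np.isclose(probs.sum(), 1.0, atol=1e-3):
        issues.append("probabilities do not sum to 1 (postprocessing error)")
    # Normalised predictive entropy as a simple per-case uncertainty estimate.
    entropy = -(probs * np.log(probs + 1e-12)).sum() / np.log(len(probs))
    if entropy > ENTROPY_THRESHOLD:
        issues.append(f"high predictive uncertainty (entropy={entropy:.2f})")
    return issues

# Example: an outlying input and a borderline prediction both raise flags.
print(check_input(np.array([0.2, -0.5, 5.1])))
print(check_output(np.array([0.45, 0.40, 0.15])))
```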
Implement a system for periodic auditing and updating (Traceability 4)

| Operation | Examples |
| --- | --- |
| Define a schedule for periodic audits | Biannual or annual |
| Define audit criteria and metrics | Accuracy, consistency, fairness, data security |
| Define datasets for periodic audits | Newly acquired prospective dataset from the local hospital |
| Implement mechanisms to detect data or concept drift (see the sketch below) | Detecting shifts in input data distributions |
| Assign the role of auditor(s) for the AI tool | Internal auditing team, third-party company |
| Update the AI tool based on audit results | Updating the AI model, re-evaluating the AI model, adjusting operational protocols, continuous learning |
| Implement a reporting system for audits and subsequent updates | Automatic sharing of detailed reports with healthcare managers and clinicians |
| Monitor the impact of AI updates | Impact on system performance and user satisfaction |
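For drift detection, one common approach is a two-sample Kolmogorov-Smirnov test comparing the distribution of each input feature at validation time against the distribution observed since deployment. The sketch below follows that approach; the per-feature testing and the 0.01 significance level are illustrative choices, not prescribed by the framework.

```python
import numpy as np
from scipy.stats import ks_2samp

# Significance level for flagging drift; an illustrative choice to be tuned.
ALPHA = 0.01

def detect_drift(reference: np.ndarray, current: np.ndarray) -> list[int]:
    """Return indices of features whose distribution appears to have drifted.

    reference: (n_samples, n_features) data used at validation time
    current:   (m_samples, n_features) data observed since deployment
    """
    drifted = []
    for j in range(reference.shape[1]):
        # Two-sample KS test per feature.
        stat, p_value = ks_2samp(reference[:, j], current[:, j])
        if p_value < ALPHA:
            drifted.append(j)
    return drifted

# Synthetic demonstration: shift one feature and confirm it is detected.
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(1000, 3))
cur = ref.copy()
cur[:, 1] += 0.5  # simulate a shift in the second feature

print(detect_drift(ref, cur))  # -> [1]
```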
Implement a logging system for usage recording (Traceability 5)

| Operation | Examples |
| --- | --- |
| Implement a logging framework capturing all interactions (see the sketch below) | User actions, AI inputs, AI outputs, clinical decisions |
| Define the data to be logged | Timestamp, user ID, patient ID (anonymised), action details, results |
| Implement mechanisms for data capture | Software to automatically record every data item and operation |
| Implement mechanisms for data security | Encrypted log files, privacy-preserving techniques |
| Provide access to logs for auditing and troubleshooting | By defining authorised personnel, eg, healthcare or IT managers |
| Implement a mechanism for end users to log any issues | A user interface to enter information about operational anomalies |
| Implement log analysis | Time-series statistics and visualisations to detect unusual activities and alert administrators |
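The sketch below shows one way to implement such structured usage logging with Python's standard logging module. The field set mirrors the examples above; the file name and the hashing scheme for anonymising patient IDs are illustrative assumptions (in practice a keyed hash with a secret would be preferable to a plain hash).

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Structured JSON logging of each AI interaction; file name is illustrative.
logging.basicConfig(filename="ai_usage.log", level=logging.INFO,
                    format="%(message)s")
logger = logging.getLogger("ai_tool_usage")

def anonymise(patient_id: str) -> str:
    # One-way hash so logs never store the raw identifier. A keyed hash
    # (HMAC with a secret) would be needed in practice to resist guessing.
    return hashlib.sha256(patient_id.encode()).hexdigest()[:16]

def log_interaction(user_id: str, patient_id: str,
                    action: str, result: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "patient_id": anonymise(patient_id),
        "action": action,
        "result": result,
    }
    logger.info(json.dumps(entry))

# Example: record a prediction and a clinician override of the AI output.
log_interaction("clin-042", "MRN123456", "run_inference", "probability=0.87")
log_interaction("clin-042", "MRN123456", "override_ai_decision",
                "human judgement used")
```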
Establish mechanisms for AI governance (Traceability 6)

| Operation | Examples |
| --- | --- |
| Assign roles for the AI tool's governance | For periodic auditing, maintenance, supervision (eg, healthcare manager) |
| Define responsibilities for AI-related errors | Responsibilities of clinicians, healthcare centres, AI developers, and manufacturers |
| Define mechanisms for accountability | Individual vs collective accountability/liability, compensation, support for patients |