The FUTURE-AI framework defines seven general recommendations that span all of its principles for trustworthy AI in healthcare. The first four are highly recommended (++) for both research and deployable tools: engaging interdisciplinary stakeholders throughout the AI lifecycle (eg, through educational seminars and consensus meetings); implementing measures for data privacy and security, including deidentification and encryption; implementing measures to address identified AI risks, such as bias correction and robustness enhancement; and defining an adequate evaluation plan with appropriate datasets, metrics, and reference methods.
The final three recommendations have varying levels of compliance requirements. Identifying and complying with applicable AI regulatory requirements (such as FDA's Software as a Medical Device (SaMD) framework and the EU's Medical Device Regulation (MDR)) and investigating application-specific ethical issues are recommended (+) for research but highly recommended (++) for deployable tools. Investigating and addressing social and societal issues is recommended (+) for both research and deployable tools, covering aspects such as workforce impact, environmental sustainability, and public engagement[1]. This graduated approach reflects the growing weight of regulatory compliance and ethical considerations as AI tools move from research to deployment. The table below summarises the operations and examples for each recommendation; illustrative code sketches for several of the more technical operations (privacy, robustness, fairness, evaluation metrics, and environmental footprint) follow the table.
| Recommendations | Operations | Examples |
|---|---|---|
| Engage interdisciplinary stakeholders (general 1) | Identify all relevant stakeholders | Patients, GPs, nurses, ethicists, data managers |
| | Provide information on the AI tool and AI | Educational seminars, training materials, webinars |
| | Set up communication channels with stakeholders | Regular group meetings, one-to-one interviews, virtual platforms |
| | Organise cocreation consensus meetings | One-day cocreation workshop with n=15 multidisciplinary stakeholders |
| | Use qualitative methods to gather feedback | Online surveys, focus groups, narrative interviews |
| Implement measures for data privacy and security (general 2) | Implement measures to ensure data privacy and security | Data deidentification, federated learning, differential privacy, encryption |
| | Implement measures against malicious attacks | Firewalls, intrusion detection systems, regular security audits |
| | Adhere to applicable data protection regulations | General Data Protection Regulation, Health Insurance Portability and Accountability Act |
| | Define suitable data governance mechanisms | Access control, logging systems |
| Implement measures to address identified AI risks (general 3) | Implement a baseline AI model and identify its limitations | Bias, lack of generalisability |
| | Implement methods to enhance robustness to real-world variations | Regularisation, data augmentation, data harmonisation, domain adaptation |
| | Implement methods to enhance fairness across subgroups | Data resampling, bias-free representations, equalised odds postprocessing |
| Define an adequate evaluation plan (general 4) | Identify the dimensions of trustworthy AI to be evaluated | Robustness, clinical safety, fairness, data drift, usability, explainability |
| | Select appropriate testing datasets | External dataset from a new hospital, public benchmarking dataset |
| | Compare the AI tool against the standard of care | Conventional risk predictors, visual assessment by a radiologist, decision by a clinician |
| | Select adequate evaluation metrics | F1 score for classification, concordance index for survival, statistical parity for fairness |
| Identify and comply with applicable AI regulatory requirements (general 5) | Engage regulatory experts to investigate regulatory requirements | Regulatory consultants from the intended local settings |
| | Identify specific regulations based on the AI tool's intended markets | FDA's SaMD framework in the United States, MDR and AI Act in the EU |
| | Define a list of milestones towards regulatory compliance | MDR certification: technical verification, pivotal clinical trial, risk and quality management, postmarket follow-up |
| Investigate and address application-specific ethical issues (general 6) | Consult ethicists on ethical considerations | Ethicists specialised in medical AI and/or in the application domain |
| | Assess whether the AI tool's design is aligned with relevant ethical values | Rights to autonomy, information, consent, confidentiality, equity |
| | Identify application-specific ethical issues | Ethical risks for a paediatric AI tool (eg, emotional impact on children) |
| | Comply with local ethical AI frameworks | AI ethics guidelines from Europe, the United Kingdom, the United States, Canada, China, India, Japan, Australia |
| Investigate and address social and societal issues (general 7) | Investigate the AI tool's social and environmental impact | Workforce displacement, worsened working conditions and relations, deskilling, dehumanisation of care, reduced health literacy, increased carbon footprint, negative public perception |
| | Define mitigations to enhance the AI tool's social and environmental impact | Interfaces for physician-patient communication, workforce training, educational programmes, energy-efficient computing practices, public engagement initiatives |
| | Optimise algorithms for energy efficiency | Develop and use energy-efficient algorithms that minimise computational demands; techniques such as model pruning, quantisation, and edge computing can reduce the energy required for AI tasks |
| | Promote responsible data usage | Collect and process only the necessary amount of data; implement federated learning techniques to minimise data transfers |
| | Monitor and report the environmental impact of the AI tool | Regularly monitor and report on the environmental impact of AI systems used in healthcare, including energy usage, carbon emissions, and waste generation |
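To make some of these operations concrete, the following sketches illustrate how a team might begin implementing them. First, for the privacy operations under general 2, a minimal Python sketch of two of the listed techniques: deidentification via salted pseudonymisation, and a differentially private count query using the Laplace mechanism. The record fields, salt handling, and epsilon value are illustrative assumptions, not part of the FUTURE-AI framework.

```python
import hashlib
import secrets

import numpy as np

# Hypothetical patient records; field names are illustrative only.
records = [
    {"patient_id": "NHS-0001", "age": 64, "diagnosis": "T2DM"},
    {"patient_id": "NHS-0002", "age": 71, "diagnosis": "CHF"},
]

SALT = secrets.token_bytes(16)  # kept secret, stored separately from the data


def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + patient_id.encode()).hexdigest()[:16]


deidentified = [
    {**{k: v for k, v in r.items() if k != "patient_id"},
     "pseudonym": pseudonymise(r["patient_id"])}
    for r in records
]


def dp_count(matching_records, epsilon: float = 1.0) -> float:
    """Differentially private count via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one patient
    changes the count by at most 1), so the noise scale is 1/epsilon.
    """
    rng = np.random.default_rng()
    return len(matching_records) + rng.laplace(loc=0.0, scale=1.0 / epsilon)


print(dp_count([r for r in deidentified if r["age"] > 65]))
```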
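The robustness operation under general 3 lists data augmentation among its methods. Below is a minimal sketch of augmentation for imaging data, assuming intensities normalised to [0, 1]; the flip probability, gain/bias ranges, and noise level are hypothetical parameters that would be tuned to the expected acquisition variation.

```python
import numpy as np

rng = np.random.default_rng(42)


def augment(image: np.ndarray) -> np.ndarray:
    """Apply simple augmentations that mimic real-world acquisition
    variation: orientation, scanner intensity gain/bias, and noise."""
    out = image.astype(np.float32)
    if rng.random() < 0.5:                   # random horizontal flip
        out = out[:, ::-1]
    gain = rng.uniform(0.9, 1.1)             # scanner-to-scanner intensity gain
    bias = rng.uniform(-0.05, 0.05)          # baseline intensity shift
    out = out * gain + bias
    out += rng.normal(0.0, 0.01, out.shape)  # acquisition noise
    return np.clip(out, 0.0, 1.0)


# Toy 2D "scan"; a real pipeline would load DICOM or NIfTI data.
scan = rng.random((128, 128))
augmented = augment(scan)
```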
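For the fairness operation under general 3, one of the simplest listed strategies is data resampling: oversampling under-represented subgroups so each contributes equally to training. The sketch below assumes a single categorical group attribute; equalised odds postprocessing would instead adjust decision thresholds per subgroup after training.

```python
import numpy as np

rng = np.random.default_rng(0)


def resample_to_parity(X, y, group):
    """Oversample minority subgroups so every group is equally
    represented in the training set (one simple bias mitigation)."""
    X, y, group = map(np.asarray, (X, y, group))
    groups, counts = np.unique(group, return_counts=True)
    target = counts.max()
    idx = np.concatenate([
        rng.choice(np.flatnonzero(group == g), size=target, replace=True)
        for g in groups
    ])
    return X[idx], y[idx], group[idx]


# Toy imbalanced cohort: 80 patients in group "A", 20 in group "B".
X = rng.normal(size=(100, 5))
y = rng.integers(0, 2, 100)
group = np.array(["A"] * 80 + ["B"] * 20)
X_bal, y_bal, group_bal = resample_to_parity(X, y, group)
print(np.unique(group_bal, return_counts=True))  # equal counts per group
```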
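The metrics row under general 4 names the F1 score, the concordance index, and statistical parity. A minimal sketch of all three follows, using scikit-learn's f1_score for classification and hand-rolled implementations of the other two; the toy arrays are placeholders for an external test set.

```python
import numpy as np
from sklearn.metrics import f1_score  # standard classification metric


def statistical_parity_difference(y_pred, group):
    """Gap in positive-prediction rates across subgroups (0 = parity)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)


def concordance_index(times, events, risk):
    """Naive O(n^2) concordance index for survival models: the share of
    comparable patient pairs in which the patient with the higher
    predicted risk experienced the event earlier."""
    concordant, comparable = 0.0, 0
    for i in range(len(times)):
        for j in range(len(times)):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable


# Toy predictions; a real evaluation would use an external test set.
y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1])
group = np.array(["F", "F", "M", "M", "F", "M"])
print("F1:", f1_score(y_true, y_pred))
print("Statistical parity difference:",
      statistical_parity_difference(y_pred, group))

times = np.array([5.0, 8.0, 3.0, 12.0])   # follow-up times
events = np.array([1, 0, 1, 1])           # 1 = event observed, 0 = censored
risk = np.array([0.7, 0.3, 0.9, 0.2])     # predicted risk scores
print("C-index:", concordance_index(times, events, risk))
```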
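The energy-efficiency operation under general 7 names model pruning and quantisation. The sketch below shows both on a raw weight matrix, assuming a 50% pruning fraction and symmetric int8 quantisation; production systems would typically rely on their framework's built-in tooling rather than this hand-rolled version.

```python
import numpy as np


def prune_by_magnitude(weights: np.ndarray, fraction: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude weights; sparse models need
    fewer operations, and hence less energy, at inference time."""
    threshold = np.quantile(np.abs(weights), fraction)
    return np.where(np.abs(weights) < threshold, 0.0, weights)


def quantise_int8(weights: np.ndarray):
    """Symmetric post-training quantisation to int8: roughly a 4x
    memory reduction versus float32, at some cost in precision."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale


w = np.random.default_rng(1).normal(size=(256, 256)).astype(np.float32)
w_pruned = prune_by_magnitude(w, fraction=0.5)
q, scale = quantise_int8(w_pruned)
print(f"non-zero weights: {np.count_nonzero(w_pruned)} of {w.size}")
print(f"memory: {q.nbytes} bytes (int8) vs {w.nbytes} bytes (float32)")
```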
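The responsible-data-usage operation recommends federated learning to minimise data transfers. Below is a minimal FedAvg-style sketch for logistic regression, assuming equally sized sites: each site computes a local update on data that never leaves it, and only the model weights are averaged centrally.

```python
import numpy as np


def local_update(weights, X, y, lr=0.1):
    """One gradient-descent step of logistic regression computed at a
    single site; the patient data stay local."""
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad


def federated_round(global_w, sites):
    """FedAvg-style round: each site updates locally, then only the
    weight vectors (not the records) are sent back and averaged."""
    updates = [local_update(global_w, X, y) for X, y in sites]
    return np.mean(updates, axis=0)


# Three hypothetical hospitals with 50 patients and 3 features each.
rng = np.random.default_rng(0)
sites = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50)) for _ in range(3)]

w = np.zeros(3)
for _ in range(20):
    w = federated_round(w, sites)
print("federated model weights:", w)
```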
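Finally, for the monitoring operation, dedicated tools such as the codecarbon library can track emissions of machine learning workloads; the sketch below shows the underlying idea with a rough wall-clock estimate. The power draw and grid carbon intensity constants are assumptions that would need to be measured or sourced per site.

```python
import time

# Illustrative constants: actual values depend on the hardware and the
# local electricity grid, and should be measured or sourced per site.
AVG_POWER_WATTS = 250.0        # assumed average draw of the inference server
GRID_KG_CO2_PER_KWH = 0.4      # assumed grid carbon intensity


def run_with_footprint(fn, *args, **kwargs):
    """Run a workload and return a rough energy/carbon estimate
    suitable for periodic environmental reporting."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    hours = (time.perf_counter() - start) / 3600.0
    kwh = AVG_POWER_WATTS * hours / 1000.0
    return result, kwh, kwh * GRID_KG_CO2_PER_KWH


# Stand-in workload; in practice this would wrap training or inference.
_, energy_kwh, kg_co2 = run_with_footprint(sum, range(10_000_000))
print(f"energy: {energy_kwh:.6f} kWh, emissions: {kg_co2:.6f} kg CO2e")
```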