The Usability principle states that end-users should be able to use a medical AI tool to achieve a clinical goal efficiently and safely in their real-world environment. On the one hand, this means that end-users should be able to use the AI tool’s functionalities and interfaces easily and with minimal errors. On the other hand, the AI tool should be clinically useful and safe, e.g. improve clinicians’ productivity and/or lead to better health outcomes for patients while avoiding harm.
To this end, the FUTURE-AI framework defines four recommendations for Usability. First, through a human-centred approach, target end-users (e.g. general practitioners, specialists, nurses, patients, hospital managers) should be engaged from an early stage to define the AI tool’s intended use, user requirements and human-AI interfaces (Usability 1). Second, training materials and training activities should be provided for all intended end-users, to ensure adequate usage of the AI tool, minimise errors and thus patient harm, and increase AI literacy (Usability 2). Third, at the evaluation stage, usability within the local clinical workflows, including human factors that may affect use of the AI tool (e.g. satisfaction, confidence, ergonomics, learnability), should be assessed with representative and diverse end-users (Usability 3). Fourth, the clinical utility and safety of the AI tool should be evaluated and compared with the current standard of care, to estimate the benefits as well as the potential harms for citizens, clinicians and/or health organisations (Usability 4).
| Recommendation | Description |
| --- | --- |
| Usability 1. Define user requirements | The AI developers should engage clinical experts, end-users (e.g. patients, physicians) and other relevant stakeholders (e.g. data managers, administrators) from an early stage, to compile information on the AI tool’s intended use and end-user requirements (e.g. human-AI interfaces), as well as on human factors that may impact the usage of the AI tool (e.g. ergonomics, intuitiveness, experience, learnability). |
| Usability 2. Provide training | To facilitate the best usage of the AI tool, minimise errors and harm, and increase AI literacy, the developers should provide training materials (e.g. tutorials, manuals, examples) in accessible language and/or training activities (e.g. hands-on sessions), taking into account the diversity of end-users (e.g. clinical specialists, nurses, technicians, citizens or administrators). |
| Usability 3. Evaluate clinical usability | To facilitate adoption, the usability of the AI tool should be evaluated in the real world with representative and diverse end-users (e.g. with respect to sex, gender, age, clinical role, digital proficiency and (dis)ability). The usability tests should gather evidence on the users’ satisfaction, performance and productivity, and should also verify whether the AI tool affects the behaviour and decision making of the end-users (an illustrative scoring sketch is given after this table). |
| Usability 4. Evaluate clinical utility | The AI tool should be evaluated for its clinical utility and safety. The clinical evaluations should demonstrate benefits for the clinician (e.g. increased productivity, improved care), for the patient (e.g. earlier diagnosis, better outcomes) and/or for the healthcare organisation (e.g. reduced costs, optimised workflows) when compared with the current standard of care. Additionally, it is important to show, for example through a randomised clinical trial, that the AI tool is safe and does not cause harm to individuals or specific groups (an illustrative outcome comparison is given after this table). |
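As an illustration of the kind of quantitative usability evidence referred to in Usability 3, the sketch below scores a standard usability questionnaire, the System Usability Scale (SUS), separately for each end-user group. FUTURE-AI does not prescribe a particular instrument; the choice of SUS, the group labels and all responses below are assumptions made purely for illustration.

```python
"""Minimal sketch, assuming the System Usability Scale (SUS) is used as the
usability instrument. The cohort data are hypothetical."""

from statistics import mean, stdev

def sus_score(responses: list[int]) -> float:
    """Convert ten Likert responses (1-5) into a 0-100 SUS score.

    Odd-numbered items are positively worded (score = response - 1);
    even-numbered items are negatively worded (score = 5 - response).
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even index = odd item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Hypothetical responses, grouped by clinical role to check that usability
# holds across diverse end-users (Usability 3).
cohort = {
    "radiologists": [[4, 2, 5, 1, 4, 2, 5, 2, 4, 2], [5, 1, 4, 2, 4, 1, 5, 2, 5, 1]],
    "nurses":       [[3, 3, 4, 2, 3, 3, 4, 3, 3, 2], [4, 2, 3, 3, 4, 2, 4, 2, 3, 3]],
}
for role, sheets in cohort.items():
    scores = [sus_score(s) for s in sheets]
    print(f"{role}: mean SUS = {mean(scores):.1f} (sd = {stdev(scores):.1f})")
```

Reporting scores per end-user group, rather than a single pooled average, makes it visible when the tool is usable for one role (e.g. specialists) but not for another (e.g. nurses or technicians).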
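Similarly, for Usability 4, the following minimal sketch compares a binary outcome (e.g. correct early diagnosis) between an AI-assisted arm and a standard-of-care arm using a simple risk difference with a Wald confidence interval. The counts, arm labels and choice of statistic are assumptions for illustration only; a real clinical evaluation would follow a pre-registered protocol with appropriate statistical oversight.

```python
"""Minimal sketch, assuming a two-arm comparison of a binary outcome between
an AI-assisted workflow and the current standard of care. All counts are
hypothetical."""

from math import sqrt

def risk_difference_ci(events_a: int, n_a: int, events_b: int, n_b: int,
                       z: float = 1.96) -> tuple[float, float, float]:
    """Risk difference (arm A minus arm B) with a Wald 95% confidence interval."""
    p_a, p_b = events_a / n_a, events_b / n_b
    diff = p_a - p_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff, diff - z * se, diff + z * se

# Hypothetical trial counts: patients with a correct early diagnosis.
diff, lo, hi = risk_difference_ci(events_a=168, n_a=200,   # AI-assisted arm
                                  events_b=142, n_b=200)   # standard of care
print(f"Risk difference = {diff:+.3f} (95% CI {lo:+.3f} to {hi:+.3f})")
# A confidence interval excluding zero would suggest a benefit over the
# standard of care; potential harms (e.g. incorrect diagnoses triggered by
# the AI tool) should be tracked and reported with the same rigour.
```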