The Usability principle states that medical AI solutions should be usable, acceptable and deployable in real-world practice by their end-users, such as physicians, specialists and data managers. To ensure acceptance and adoption, AI tools should facilitate image analysis tasks, including the visualisation and interpretation of complex medical images, with increased productivity and satisfaction. It is important that AI algorithms are developed while taking into account the human factors of each clinical task, by employing human-centred approaches with user engagement throughout the AI development process, including iterative usability testing. To ensure the deployability of medical AI tools, their effectiveness must be estimated and their integration into current clinical workflows must be demonstrated. To ensure the Usability of AI solutions in medicine and healthcare, we propose the following recommendations:

  1. User engagement: AI developers should continuously and actively engage end-users such as radiologists, specialists and/or patients in the AI production lifecycle, including at the design, implementation, evaluation and monitoring phases.
  2. User requirements: To understand and integrate the users’ needs and expectations, user requirements and user feedback should be continuously compiled from the end-users and domain experts, such as by organising co-creation workshops, hands-on sessions and pilot tests.
  3. User interfaces: Each medical AI solution should be developed together with its own user interface, which should be specifically designed and implemented to facilitate use of its image analysis and machine learning functionalities.
  4. Usability testing: Usability testing should be an integral part of the AI evaluation process, to assess, in addition to the model’s performance, the user’s satisfaction, efficiency, understanding, and intention to use the AI tool in the clinical environment.
  5. Usability metrics: For each AI tool, usability metrics and questionnaires should be carefully defined to gather qualitative and quantitative feedback on key aspects of the tool’s usability (e.g. by adapting the System Usability Scale).
  6. In-silico validation: To accelerate the usability tests, emulated in-silico trials should be implemented by re-using retrospective data in a prospective fashion and simulating real clinical conditions, while measuring the user’s behaviour and agreement with the AI predictions.
  7. Clinical integration: AI developers should make sure their AI technologies can be integrated, both technically and clinically, into existing workflows in clinical practice.
  8. External evaluation: For a more objective and trusted validation of the tool in clinical practice, an external evaluation should be performed by independent, third-party evaluators who did not take part in the design, development and pilot testing of the AI tool.
  9. Training material: The AI developers should provide user manuals and training resources to help end-users, including those with no expertise in AI, make best use of the AI tool’s capabilities.
  10. Usability monitoring: The AI manufacturers should implement mechanisms to monitor the user’s behaviour and experience, and to identify potential changes in user needs over time.
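To illustrate recommendation 5, the standard scoring rule of the System Usability Scale can be computed as follows. This is a minimal sketch, assuming ten responses on a 1-5 Likert scale; the function name is ours, not part of any specific toolkit:

```python
# Sketch: scoring the System Usability Scale (SUS) from ten responses,
# each on a 1-5 Likert scale. Odd-numbered items are positively worded
# (contribution = response - 1); even-numbered items are negatively
# worded (contribution = 5 - response). The summed contributions are
# multiplied by 2.5 to yield a score between 0 and 100.

def sus_score(responses):
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# Example: a fairly positive questionnaire
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # → 85.0
```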
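For recommendation 6, the user's agreement with the AI predictions collected during an emulated in-silico trial can be summarised with a chance-corrected agreement statistic such as Cohen's kappa. The sketch below is illustrative (the variable names are hypothetical) and assumes categorical decisions recorded per case:

```python
# Sketch: Cohen's kappa between a reader's decisions and the AI
# predictions over the same cases. Kappa corrects the raw agreement
# rate for the agreement expected by chance, given each rater's
# label frequencies.

def cohens_kappa(reader, ai):
    if len(reader) != len(ai) or not reader:
        raise ValueError("expects two equal-length, non-empty decision lists")
    n = len(reader)
    observed = sum(r == a for r, a in zip(reader, ai)) / n
    labels = set(reader) | set(ai)
    expected = sum((reader.count(l) / n) * (ai.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical binary decisions (1 = positive finding) for six cases
reader = [1, 1, 0, 1, 0, 0]
ai     = [1, 1, 0, 0, 0, 1]
print(round(cohens_kappa(reader, ai), 3))  # → 0.333
```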
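For recommendation 10, usability monitoring can be as simple as recording timestamped usage events for offline analysis. The following sketch is a hypothetical interface, not a prescribed design; the class and event fields are assumptions for illustration:

```python
# Sketch: a minimal usage-event recorder (hypothetical interface).
# Each event captures who did what and when, so that shifts in user
# behaviour and needs can be analysed over time.
import json
import time

class UsageMonitor:
    def __init__(self):
        self.events = []

    def log(self, user_id, action, **details):
        # Record one timestamped interaction, e.g. opening a study
        # or accepting/rejecting an AI suggestion.
        self.events.append({
            "timestamp": time.time(),
            "user": user_id,
            "action": action,
            "details": details,
        })

    def export(self):
        # Serialise events as JSON for offline trend analysis.
        return json.dumps(self.events)
```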