While a certain degree of diversity in the design and implementation of AI solutions in medicine is both expected and desirable to promote innovation and differentiation, the Universality principle recommends defining and applying standards during algorithm development, evaluation and deployment. Such standards, spanning technical, clinical, ethical and regulatory aspects, will achieve at least three key objectives: (1) they will enable the development of AI technologies that are more interoperable and applicable across clinical centres, radiology units and geographical locations; (2) they will promote a culture of quality, safety and trust in medical AI based on well-proven, widely accepted frameworks; and (3) they will facilitate co-creation and cooperation in medical AI between AI developers, manufacturers, radiologists, physicians, data managers and healthcare bodies, based on a unified language and common approaches. For increased universality in medical AI, we propose the following recommendations:

  1. Definition of clinical tasks: AI developers should ensure that the clinical tasks they aim to address are based on universal clinical definitions, such as those established by recognised not-for-profit medical societies in the area of interest.
  2. AI programming standards: Developers should use established software design conventions, coding standards, and proven libraries and frameworks in medical AI (e.g. PyTorch, TensorFlow) to enhance interoperability, quality, maintenance and integration.
  3. Image annotations: AI algorithms should be developed based on image annotation and labelling standards (e.g. contouring systems, bounding boxes, lesion categories) to improve reproducibility and applicability in clinical practice.
  4. Biomarkers: Universal definitions and calculation methods for estimating biomarkers should be employed when building feature-based AI models in medicine, such as the conventions defined by the Image Biomarker Standardisation Initiative (IBSI) for medical imaging.
  5. Evaluation criteria: When evaluating and reporting the AI model’s performance and properties, universal evaluation criteria and metrics should be used based on those established by the scientific community.
  6. Reference datasets: When possible, AI models should be evaluated on open-access public datasets that are representative of real-world clinical cases, to enable more objective benchmarking.
  7. Reporting guidelines: Medical AI studies and results should be disseminated following established reporting guidelines, such as TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis).
  8. Transferability to resource-limited settings: To ensure AI solutions can be applied universally in global radiology, they should be tested and optimised for transferability to resource-limited settings, taking into account possible variations in medical equipment.
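As an illustration of recommendation 3, agreement between a predicted and a reference bounding-box annotation is conventionally scored with intersection-over-union (IoU). The sketch below assumes the common (x_min, y_min, x_max, y_max) box convention; the function name is illustrative, not part of any named standard.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes.

    Boxes are (x_min, y_min, x_max, y_max) tuples, an assumed but
    widely used annotation convention for object detection.
    """
    # Coordinates of the overlapping rectangle (if any).
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])

    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


# Two 10x10 boxes overlapping on a 5x5 patch: IoU = 25 / 175
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```

Reporting IoU against a fixed box convention makes annotation quality comparable across centres, which is precisely the kind of interoperability the recommendation targets.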
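For recommendation 4, first-order intensity statistics over a region of interest (ROI) are among the imaging biomarkers standardised by IBSI. The sketch below is illustrative only: it uses the population (1/N) variance, and a production implementation should be verified against the exact IBSI reference definitions and benchmark values.

```python
import math


def first_order_features(roi_intensities):
    """Illustrative first-order intensity statistics for an ROI.

    Input is a flat list of voxel intensities inside the ROI.
    Feature names and the 1/N variance are assumptions for this
    sketch, not a certified IBSI-compliant implementation.
    """
    n = len(roi_intensities)
    mean = sum(roi_intensities) / n
    variance = sum((x - mean) ** 2 for x in roi_intensities) / n
    energy = sum(x * x for x in roi_intensities)
    return {
        "mean": mean,
        "variance": variance,
        "energy": energy,
        "rms": math.sqrt(energy / n),  # root mean square intensity
    }


print(first_order_features([1.0, 2.0, 3.0, 4.0]))
```

Pinning biomarker definitions to a shared reference such as IBSI is what makes feature-based models trained at one centre reproducible at another.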
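Recommendation 5 can likewise be made concrete: the sketch below computes several of the evaluation metrics most commonly reported for binary classifiers in medical AI (sensitivity, specificity, precision, F1-score). The function name and the toy labels are illustrative; real evaluations should also report confidence intervals and use well-tested library implementations.

```python
def binary_metrics(y_true, y_pred):
    """Standard binary-classification metrics from 0/1 label lists."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)

    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # recall / TPR
    specificity = tn / (tn + fp) if (tn + fp) else 0.0  # TNR
    precision = tp / (tp + fp) if (tp + fp) else 0.0    # PPV
    denom = precision + sensitivity
    f1 = 2 * precision * sensitivity / denom if denom else 0.0
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "precision": precision,
        "f1": f1,
    }


# Toy example: 8 cases, one false negative and one false positive.
print(binary_metrics([1, 1, 1, 1, 0, 0, 0, 0],
                     [1, 1, 1, 0, 0, 0, 1, 0]))
```

Using the same universally defined metrics across studies is what allows performance claims from different groups to be compared at all.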