Fairness

The Fairness principle states that medical AI tools should maintain the same performance across individuals and groups of individuals, including under-represented and disadvantaged groups. AI-driven medical care should be provided equally to all citizens, regardless of their sex, gender, ethnicity, age, socio-economic status and (dis)abilities, among other attributes. Fair medical AI tools should be developed such that potential AI biases are minimised as far as possible or, where they cannot be removed, identified and reported.

To this end, the FUTURE-AI framework defines three recommendations for Fairness. First, AI developers, together with domain experts, should define fairness for their specific use case and make an inventory of potential sources of bias (Fairness 1). Second, to facilitate verification of AI fairness and non-discrimination, information on the subjects’ relevant attributes should be included in the datasets (Fairness 2). Finally, whenever such data are available, the development team should apply bias detection and correction methods to obtain the best possible trade-off between fairness and accuracy (Fairness 3).

Fairness 1. Define sources of bias

Bias in medical AI is application-specific.1 At the design phase, the development team should identify the possible types and sources of bias for their AI tool.2 These may include group attributes (e.g. sex, gender, age, ethnicity, socio-economic status, geography), the individuals’ medical profiles (e.g. comorbidities or disabilities), as well as human biases introduced during data labelling, data curation or the selection of input features.
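As a minimal sketch of how such an inventory can be turned into a concrete check, the Python snippet below audits subgroup representation in a tabular cohort, flagging subgroups whose share of the dataset falls below a chosen threshold. The column names (sex, age_group, site) and the 10% threshold are hypothetical assumptions, not part of the FUTURE-AI framework.

```python
# Design-phase representation check: for each candidate attribute,
# report how well each subgroup is represented in the dataset.
# Column names and the threshold below are illustrative placeholders.
import pandas as pd


def representation_report(df: pd.DataFrame, attributes, min_fraction: float = 0.10) -> pd.DataFrame:
    """Flag subgroups whose share of the dataset falls below min_fraction."""
    rows = []
    for attr in attributes:
        counts = df[attr].value_counts(dropna=False)
        for value, n in counts.items():
            frac = n / len(df)
            rows.append({
                "attribute": attr,
                "subgroup": value,
                "n": int(n),
                "fraction": round(frac, 3),
                "under_represented": frac < min_fraction,
            })
    return pd.DataFrame(rows)


# Example usage with a hypothetical cohort table:
# report = representation_report(cohort, ["sex", "age_group", "site"])
# print(report[report["under_represented"]])
```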
Fairness 2. Collect data on attributes

To identify biases and apply measures for increased fairness, relevant attributes of the individuals, such as sex, gender, age, ethnicity, risk factors, comorbidities or disabilities, should be collected. Collection of these attributes should be subject to informed consent and approval by ethics committees, to ensure an appropriate balance between the benefits of non-discrimination and the risks of re-identification.
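As an illustration only, the sketch below shows one way such attributes could be stored alongside a pseudonymised identifier and an explicit consent flag; the field names are assumptions for the example, not a prescribed FUTURE-AI schema.

```python
# Illustrative record for subject attributes used in fairness audits.
# Attributes are linked to a pseudonymised identifier, and age is banded
# to limit re-identification risk; all field names are hypothetical.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class SubjectAttributes:
    subject_id: str                      # pseudonymised identifier, not a direct identifier
    consent_for_fairness_audit: bool     # explicit consent to use attributes for bias analysis
    sex: Optional[str] = None
    gender: Optional[str] = None
    age_band: Optional[str] = None       # e.g. "60-69" rather than exact date of birth
    ethnicity: Optional[str] = None
    comorbidities: Optional[List[str]] = None
    disability: Optional[str] = None
```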
Fairness 3. Evaluate & correct biases

When possible, i.e. when the individuals’ attributes are included in the data, bias detection methods should be applied using fairness metrics.3,4 To correct for any identified biases, mitigation measures should be applied (e.g. data re-sampling, bias-free representations, equalised odds post-processing)5-9 and tested to verify their impact on both the tool’s fairness and the model’s accuracy. Importantly, any potential bias should be documented and reported to inform the end-users and citizens (see Traceability 2).
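The sketch below illustrates bias detection on a synthetic example: it computes per-group true-positive and false-positive rates and an equalised-odds gap for binary predictions and a single sensitive attribute. The data, group labels and error rates are simulated and purely illustrative.

```python
# Minimal sketch of bias detection with group-wise fairness metrics,
# assuming binary labels/predictions and one sensitive attribute.
# All data, group names and rates below are synthetic and illustrative.
import numpy as np


def group_rates(y_true, y_pred, groups):
    """Per-group true-positive rate (TPR), false-positive rate (FPR) and size."""
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        yt, yp = y_true[m], y_pred[m]
        tpr = float((yp[yt == 1] == 1).mean()) if (yt == 1).any() else float("nan")
        fpr = float((yp[yt == 0] == 1).mean()) if (yt == 0).any() else float("nan")
        rates[g] = {"TPR": tpr, "FPR": fpr, "n": int(m.sum())}
    return rates


def equalised_odds_gap(rates):
    """Largest between-group difference in TPR or FPR (0 = equalised odds)."""
    tprs = [r["TPR"] for r in rates.values()]
    fprs = [r["FPR"] for r in rates.values()]
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 1000
    groups = rng.choice(["A", "B"], size=n, p=[0.7, 0.3])  # group B under-represented
    y_true = rng.integers(0, 2, size=n)

    # Simulate a classifier that misses more true positives in group B.
    miss = rng.random(n) < np.where(groups == "A", 0.15, 0.40)
    false_alarm = rng.random(n) < 0.10
    y_pred = np.where(y_true == 1, (~miss).astype(int), false_alarm.astype(int))

    rates = group_rates(y_true, y_pred, groups)
    print(rates)  # per-group TPR/FPR and sample sizes
    print(f"Equalised-odds gap: {equalised_odds_gap(rates):.2f}")  # large gap -> mitigate
```

In practice, open-source toolkits such as Fairlearn or AIF360 provide similar metrics together with mitigation algorithms (e.g. re-weighting, equalised odds post-processing), and any mitigation step should be re-evaluated for its joint effect on fairness and accuracy before deployment.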