Fairness

The Fairness principle states that medical AI tools should perform equally well across individuals and groups of individuals. AI-driven medical care should be provided equally to all citizens, regardless of their sex, gender, ethnicity, age, socioeconomic status and (dis)abilities, among other attributes. Fair medical AI tools should be developed such that potential biases are minimised as much as possible or, where they cannot be removed, identified and reported.

| Recommendations | Operations | Examples |
| --- | --- | --- |
| Define any potential sources of bias (Fairness 1) | Engage relevant stakeholders to define the sources of bias | Patients, clinicians, epidemiologists, ethicists, social carers |
| | Define standard attributes that might affect the AI tool's fairness | Sex, age, socioeconomic status |
| | Identify application-specific sources of bias beyond standard attributes | Skin colour for skin cancer detection, breast density for breast cancer detection |
| | Identify all possible human biases | Data labelling, data curation |
| Collect information on individuals' and data attributes (Fairness 2) | Request approval for collecting data on personal attributes | Sex, age, ethnicity, socioeconomic status |
| | Collect information on standard attributes of the individuals | Sex, age, nationality, education |
| | Include application-specific information relevant for the fairness analysis | Skin colour, breast density, presence of implants, comorbidity |
| | Estimate data distributions across subgroups | Male v female, across ethnic groups |
| Evaluate fairness and bias correction measures (Fairness 3) | Select attributes and factors for fairness evaluation | Sex, age, skin colour, comorbidity |
| | Define fairness metrics and criteria | Statistical parity difference between −0.1 and 0.1 defined as fair |
| | Evaluate fairness and identify biases | Fair with respect to age, biased with respect to sex |
| | Evaluate bias mitigation measures | Training data resampling, equalised odds postprocessing |
| | Evaluate impact of mitigation measures on model performance | Data resampling removed the sex bias but reduced model performance |
| | Report identified and uncorrected biases | In the AI information leaflet and technical documentation |
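
As a minimal sketch of the "Estimate data distributions across subgroups" operation in the table above (Fairness 2), the snippet below tabulates subgroup sizes and label prevalence with pandas. The dataset and column names (`sex`, `age`, `label`) are hypothetical placeholders, not part of any prescribed schema.

```python
import pandas as pd

# Hypothetical cohort table; in practice this would be loaded from the
# study database (e.g. pd.read_csv("cohort.csv")).
df = pd.DataFrame({
    "sex":   ["F", "M", "F", "M", "F", "M", "F", "F"],
    "age":   [34, 71, 52, 45, 63, 58, 29, 77],
    "label": [1, 0, 1, 0, 1, 0, 0, 1],   # 1 = disease present
})

# Subgroup sizes (male v female).
print("Subgroup sizes:\n", df["sex"].value_counts(), "\n")

# Label prevalence within each subgroup: large differences here flag
# potential sampling or labelling bias to investigate further.
print("Label prevalence per subgroup:\n", df.groupby("sex")["label"].mean(), "\n")

# Cross-tabulation of subgroup membership against outcome labels.
print(pd.crosstab(df["sex"], df["label"], normalize="index"))
```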
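For the "Define fairness metrics and criteria" and "Evaluate fairness and identify biases" operations (Fairness 3), the sketch below computes the statistical parity difference between two subgroups and applies the example criterion from the table (fair if the difference lies between −0.1 and 0.1). The predictions, group labels and the choice of privileged group are illustrative assumptions.

```python
import numpy as np

def statistical_parity_difference(y_pred, group, privileged, unprivileged):
    """P(y_hat = 1 | unprivileged group) - P(y_hat = 1 | privileged group)."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_unpriv = y_pred[group == unprivileged].mean()
    rate_priv = y_pred[group == privileged].mean()
    return rate_unpriv - rate_priv

# Hypothetical model outputs and subgroup membership for a test set.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
sex    = np.array(["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"])

spd = statistical_parity_difference(y_pred, sex, privileged="M", unprivileged="F")
print(f"Statistical parity difference (F vs M): {spd:+.2f}")

# Example criterion from the table: fair if the difference is within +/-0.1.
if -0.1 <= spd <= 0.1:
    print("Within the predefined fairness range.")
else:
    print("Outside the predefined range: report and investigate the bias.")
```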
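The table also lists equalised odds postprocessing as a mitigation example. Before applying such a method, the per-group true-positive and false-positive rates can be compared directly; the sketch below does this with scikit-learn's confusion matrix. The ground truth, predictions and group assignments are again made up for illustration.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def group_rates(y_true, y_pred, group, value):
    """True-positive and false-positive rate within one subgroup."""
    mask = np.asarray(group) == value
    tn, fp, fn, tp = confusion_matrix(
        np.asarray(y_true)[mask], np.asarray(y_pred)[mask], labels=[0, 1]
    ).ravel()
    tpr = tp / (tp + fn) if (tp + fn) else float("nan")
    fpr = fp / (fp + tn) if (fp + tn) else float("nan")
    return tpr, fpr

# Hypothetical ground truth, predictions and subgroup labels.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])
sex    = np.array(["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"])

tpr_f, fpr_f = group_rates(y_true, y_pred, sex, "F")
tpr_m, fpr_m = group_rates(y_true, y_pred, sex, "M")

# Equalised odds asks for similar TPR and FPR across groups;
# large gaps indicate a bias to mitigate (e.g. by postprocessing).
print(f"TPR gap (F - M): {tpr_f - tpr_m:+.2f}")
print(f"FPR gap (F - M): {fpr_f - fpr_m:+.2f}")
```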
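Finally, for "Evaluate bias mitigation measures" and "Evaluate impact of mitigation measures on model performance", one simple mitigation is to resample the training data so that subgroups are equally represented, then retrain and compare both the fairness metric and the overall performance (e.g. AUC) before and after. The sketch below shows one way to oversample the under-represented subgroup with pandas; the data, columns and downstream model are placeholders.

```python
import pandas as pd

# Hypothetical training table with a subgroup column and an outcome label.
train = pd.DataFrame({
    "sex":     ["F"] * 20 + ["M"] * 80,
    "feature": range(100),
    "label":   [i % 2 for i in range(100)],
})

# Oversample each subgroup (with replacement) up to the largest group size.
target = train["sex"].value_counts().max()
balanced = pd.concat(
    [
        g.sample(n=target, replace=True, random_state=0)
        for _, g in train.groupby("sex")
    ],
    ignore_index=True,
)
print(balanced["sex"].value_counts())

# In a full workflow one would now retrain the model on `balanced`,
# recompute the fairness metric (e.g. statistical parity difference) and
# the overall performance, and report any trade-off, since resampling can
# remove a bias at some cost in model performance.
```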