The Fairness principle states that medical AI tools should maintain the same performance across individuals and groups of individuals, including under-represented and disadvantaged groups. AI-driven medical care should be provided equally to all citizens, regardless of their sex, gender, ethnicity, age, socio-economic status and (dis)abilities, among other attributes. Fair medical AI tools should be developed such that potential AI biases are minimised, or else identified and reported.
To this end, three recommendations for Fairness are defined in the FUTURE-AI framework. First, AI developers, together with domain experts, should define fairness for their specific use case and make an inventory of potential sources of bias (Fairness 1). Second, to facilitate the verification of AI fairness and non-discrimination, information on the subjects' relevant attributes should be included in the datasets (Fairness 2). Finally, whenever such data are available, the development team should apply bias detection and correction methods to obtain the best possible trade-off between fairness and accuracy (Fairness 3).
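To make the bias detection step concrete, the following is a minimal sketch of a subgroup performance audit, assuming NumPy and scikit-learn are available. The synthetic data, the "sex" attribute, and the 0.5 decision threshold are illustrative assumptions, not prescribed by the FUTURE-AI framework; in practice the attributes audited would be those inventoried under Fairness 1 and recorded under Fairness 2.

```python
# Sketch: audit a binary classifier's performance per protected subgroup.
# All data below is synthetic and the attribute "sex" is illustrative.
import numpy as np
from sklearn.metrics import recall_score, roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical evaluation set: labels, model scores, and one protected attribute.
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(0.6 * y_true + rng.normal(0.2, 0.25, size=1000), 0.0, 1.0)
sex = rng.choice(["female", "male"], size=1000)
y_pred = (y_score >= 0.5).astype(int)  # assumed operating threshold

# Report per-group sensitivity and AUC, then the gap between groups.
sensitivities = {}
for group in np.unique(sex):
    mask = sex == group
    sensitivities[group] = recall_score(y_true[mask], y_pred[mask])
    auc = roc_auc_score(y_true[mask], y_score[mask])
    print(f"{group}: sensitivity={sensitivities[group]:.3f}, AUC={auc:.3f}")

gap = max(sensitivities.values()) - min(sensitivities.values())
print(f"sensitivity gap between groups: {gap:.3f}")
```

A reported metric gap of this kind is one common operationalisation of fairness (equality of opportunity on sensitivity); which metric is appropriate should follow from the use-case-specific definition of fairness agreed under Fairness 1.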
| Recommendation | Practical steps | Examples of approaches and methods | Stage |
|---|---|---|---|
| Fairness 1. Define sources of bias | | | Design |
| Fairness 2. Collect information on individual and data attributes | | | Development |
| Fairness 3. Evaluate fairness | | | Evaluation |
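As one illustration of the bias correction methods referred to under Fairness 3, the sketch below applies a simple post-processing strategy: choosing a separate decision threshold per group so that each group reaches roughly the same sensitivity. This is only one approach among many (reweighting, resampling, and in-processing constraints are others), and the data, group labels, and the 0.85 target are hypothetical assumptions.

```python
# Sketch: post-processing bias correction via per-group decision thresholds.
# Synthetic data; group labels "A"/"B" and the simulated score shift are illustrative.
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=1000)
groups = rng.choice(["A", "B"], size=1000)
# Simulate a model whose scores are systematically lower for group "B".
shift = np.where(groups == "B", -0.15, 0.0)
y_score = np.clip(0.6 * y_true + shift + rng.normal(0.2, 0.25, size=1000), 0.0, 1.0)

def equalise_sensitivity(y_true, y_score, groups, target_tpr=0.85):
    """Return, per group, the highest threshold reaching at least target_tpr."""
    thresholds = {}
    for g in np.unique(groups):
        pos = np.sort(y_score[(groups == g) & (y_true == 1)])[::-1]
        # Thresholding at the k-th highest positive score yields TPR >= k / len(pos).
        k = max(1, int(np.ceil(target_tpr * len(pos))))
        thresholds[g] = pos[min(k, len(pos)) - 1]
    return thresholds

thresholds = equalise_sensitivity(y_true, y_score, groups)
y_pred = np.array([s >= thresholds[g] for s, g in zip(y_score, groups)], dtype=int)
print(thresholds)
```

Note the trade-off this makes explicit: lowering the threshold for the disadvantaged group equalises sensitivity but typically raises that group's false-positive rate, which is exactly the fairness-accuracy balance that Fairness 3 asks development teams to evaluate and report.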