If artificial intelligence (AI) systems shape decisions that affect people’s lives, they should do so fairly. This should go without saying, given that potential applications of AI include automated hiring systems as well as tools used in education, finance and criminal justice.
But ensuring the fairness of AI systems is far more complex than it might sound. Despite years of research, there is still no consensus on what fairness means, how it should be measured, or whether it can ever be fully achieved.
Fairness inherently depends on context. What counts as fair in one domain may be inappropriate or even harmful in another. In criminal justice, fairness may prioritise avoiding disproportionate harm to particular communities. In education, it may focus on equal opportunity and long-term outcomes.
In finance, it often involves balancing access to credit with risk assessment. Because AI systems must be formalised mathematically, researchers translate fairness into technical definitions expressed through metrics that specify how outcomes should be distributed across groups.
These metrics are useful tools, but they are not neutral. Each encodes assumptions about which differences matter and which trade-offs are acceptable.
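To make this concrete, two widely used group fairness metrics, demographic parity (equal positive-prediction rates across groups) and equal opportunity (equal true-positive rates across groups), can be sketched in a few lines. The function names and the toy data below are invented for illustration; the point is that a model can satisfy one metric while violating the other.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between group 0 and group 1."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return rate_0 - rate_1

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true-positive rates (among actual positives) between groups."""
    tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return tpr_0 - tpr_1

# Toy data: both groups receive positive predictions at the same rate (0.25),
# yet qualified members of group 1 are never correctly approved.
y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y_pred = np.array([1, 0, 0, 0, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_diff(y_pred, group))          # 0.0  (parity satisfied)
print(equal_opportunity_diff(y_true, y_pred, group))   # 0.5  (opportunity violated)
```

Here demographic parity holds exactly, while the true-positive rate gap is 0.5: the same predictions look fair under one definition and unfair under the other, which is precisely the trade-off each metric quietly encodes.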








