Envisioning is an emerging technology research institute and advisory.


2011 — 2026


Algorithmic Bias

Systematic unfairness embedded in algorithmic outputs due to biased data or design.

Year: 2016 · Generality: 792

Algorithmic bias occurs when machine learning models or automated decision systems produce outputs that systematically disadvantage certain individuals or groups based on characteristics such as race, gender, age, or socioeconomic status. This bias can originate from multiple sources: training data that reflects historical inequalities, design choices that inadvertently encode human prejudices, or feedback loops that amplify small initial disparities over time. Because modern ML systems learn statistical patterns from data, they readily absorb and reproduce whatever biases exist in that data — sometimes in subtle, hard-to-detect ways.

Detecting algorithmic bias typically involves auditing model outputs across demographic subgroups and measuring disparities in error rates, approval rates, or other outcomes. Common fairness metrics include demographic parity (equal outcome rates across groups), equalized odds (equal true and false positive rates), and individual fairness (similar treatment for similar individuals). A persistent challenge is that these metrics are often mathematically incompatible with one another, meaning that optimizing for one definition of fairness can worsen another — a tension that has no purely technical resolution and requires normative judgment about what fairness means in a given context.
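The subgroup audit described above can be sketched in a few lines. This is an illustrative example with made-up predictions, not a real audit: it computes the demographic parity gap (difference in positive-prediction rates) and the equalized odds gap (worst-case difference in true and false positive rates) between two groups.

```python
import numpy as np

# Illustrative audit data: group membership, true labels, and model predictions
# for eight individuals (hypothetical values, chosen for demonstration only).
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])

# Demographic parity: compare positive-prediction rates across groups.
def positive_rate(g):
    return y_pred[group == g].mean()

dp_gap = abs(positive_rate(0) - positive_rate(1))

# Equalized odds: compare true-positive and false-positive rates across groups.
def tpr(g):
    return y_pred[(group == g) & (y_true == 1)].mean()

def fpr(g):
    return y_pred[(group == g) & (y_true == 0)].mean()

eo_gap = max(abs(tpr(0) - tpr(1)), abs(fpr(0) - fpr(1)))

print(f"demographic parity gap: {dp_gap:.2f}")
print(f"equalized odds gap: {eo_gap:.2f}")
```

A gap of zero on one metric does not imply a gap of zero on the others, which is the incompatibility the paragraph above refers to: a deployed audit must choose which gaps matter for the context at hand.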

Mitigation strategies operate at multiple stages of the ML pipeline. Pre-processing techniques rebalance or re-weight training data to reduce representational skew. In-processing methods incorporate fairness constraints directly into the model's objective function during training. Post-processing approaches adjust model outputs after the fact to equalize outcomes across groups. None of these methods eliminates bias entirely, and each involves trade-offs with predictive accuracy or other performance goals.
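As a minimal sketch of the pre-processing stage, the re-weighting idea can be implemented by giving each training example the weight P(group) · P(label) / P(group, label), so that group membership and label are statistically independent under the weighted distribution (this follows the style of classic reweighing schemes; the function name and data are illustrative assumptions, not from the text).

```python
from collections import Counter

def reweigh(groups, labels):
    """Assign each example the weight P(group) * P(label) / P(group, label).

    Under these weights, group and label are independent, reducing
    representational skew before training. Sketch only, not a full method.
    """
    n = len(groups)
    p_group = Counter(groups)          # counts per group
    p_label = Counter(labels)          # counts per label
    p_joint = Counter(zip(groups, labels))  # counts per (group, label) pair
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Hypothetical skewed sample: group "a" is mostly labeled 1, group "b" mostly 0.
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweigh(groups, labels)
```

Here the over-represented pairs (group "a" with label 1, group "b" with label 0) are down-weighted and the rare pairs up-weighted, illustrating the trade-off noted above: the weighted data no longer matches the observed distribution, which can cost predictive accuracy.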

The stakes are highest in domains where algorithmic decisions carry serious real-world consequences — criminal risk scoring, loan approvals, medical diagnosis, and hiring. In these settings, biased models can entrench and amplify existing social inequalities at scale, affecting millions of people with limited opportunity for recourse. This has driven growing regulatory attention, with frameworks such as the EU AI Act requiring bias assessments for high-risk systems, and has made algorithmic fairness a central concern in responsible AI development.

Related

Bias

Systematic errors in data or algorithms that produce unfair or skewed outcomes.

Generality: 854
Fairness-Aware Machine Learning

Building ML algorithms that produce equitable outcomes across demographic groups.

Generality: 694
Historical Bias

Bias in AI systems inherited from prejudiced or unrepresentative historical training data.

Generality: 626
De-Biasing

Techniques that reduce unfair bias in machine learning models and their outputs.

Generality: 694
In-Group Bias

AI systems unfairly favoring certain demographic groups due to biased training data.

Generality: 520
Sampling Bias

A data flaw where training samples misrepresent the true population, distorting model behavior.

Generality: 794