Envisioning is an emerging technology research institute and advisory.


Bias

Systematic errors in data or algorithms that produce unfair or skewed outcomes.

Year: 1986 · Generality: 854

In machine learning, bias refers to systematic errors that cause a model to produce skewed, inaccurate, or unfair outputs. It manifests in several distinct but related forms: data bias, algorithmic bias, and societal bias. Data bias occurs when training datasets fail to accurately represent the target population — for example, a facial recognition system trained predominantly on light-skinned faces will perform poorly on darker-skinned individuals. Algorithmic bias emerges when modeling choices, objective functions, or feature engineering inadvertently encode or amplify existing prejudices, even when no discriminatory intent is present. Societal bias, the third form, reflects prejudices and structural inequities in the wider culture, which seep into both the data and the design choices behind a system.
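A representation gap of the kind described above can be checked directly. The sketch below compares subgroup shares in a dataset against reference population shares; the skin-tone labels and the 50/50 reference split are hypothetical, illustrative numbers, not real data.

```python
from collections import Counter

def representation_gap(labels, population_shares):
    """For each group, return (share in dataset) - (share in reference population)."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {g: counts.get(g, 0) / total - share
            for g, share in population_shares.items()}

# Hypothetical group annotations for a face dataset (illustrative only):
# 80 light-skinned samples vs 20 dark-skinned, against an assumed 50/50 population.
samples = ["light"] * 80 + ["dark"] * 20
gaps = representation_gap(samples, {"light": 0.5, "dark": 0.5})
# A large positive gap means the group is over-represented in the training data,
# a large negative gap that it is under-represented.
```

An audit like this only catches missing representation, not label errors or measurement bias, but it is a cheap first check before training.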

In a purely technical sense, bias also has a precise statistical meaning: it is one half of the bias-variance tradeoff, describing the error introduced when a model makes overly simplistic assumptions about the data-generating process. A high-bias model underfits the training data, failing to capture meaningful patterns. This statistical definition and the fairness-related definition are conceptually linked — both describe systematic, non-random errors — but they operate at different levels of abstraction and concern different stakeholders.
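The statistical sense of bias described above can be demonstrated in a few lines: a linear model fit to data with a quadratic ground truth makes an overly simplistic assumption and underfits, leaving a large error even on its own training data. This is a minimal sketch with synthetic data, not a full bias-variance decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 100)
y = x**2 + rng.normal(0, 0.1, size=x.shape)  # quadratic ground truth plus small noise

def train_error(degree):
    """Mean squared error of a polynomial fit on its own training data."""
    coeffs = np.polyfit(x, y, degree)
    preds = np.polyval(coeffs, x)
    return np.mean((y - preds) ** 2)

# The degree-1 (high-bias) model cannot represent the curvature of the data,
# so its training error stays far above that of the degree-2 model.
err_linear, err_quadratic = train_error(1), train_error(2)
```

Note that the high-bias model's error is systematic: no amount of additional training data fixes it, because the model class itself cannot express the true pattern.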

The societal dimension of bias has become increasingly critical as AI systems are deployed in high-stakes domains such as hiring, credit scoring, medical diagnosis, and criminal justice. When models trained on historically biased data are used to make consequential decisions, they risk perpetuating and even amplifying existing inequalities. Research by scholars like Joy Buolamwini and Timnit Gebru demonstrated measurable disparities in commercial AI systems, galvanizing the field of algorithmic fairness and prompting regulatory attention worldwide.

Addressing bias requires intervention at multiple stages of the machine learning pipeline. Practitioners can audit training data for representation gaps, apply fairness constraints during model training, use post-processing techniques to equalize outcomes across groups, and conduct ongoing monitoring after deployment. No single mitigation strategy eliminates bias entirely, and different fairness criteria — such as demographic parity, equalized odds, or individual fairness — can be mathematically incompatible with one another. This makes bias one of the most technically and ethically complex challenges in modern AI development.
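One of the fairness criteria named above, demographic parity, can be measured as the gap between groups' positive-prediction rates. The sketch below computes that gap for a toy set of model outputs; the predictions and group labels are hypothetical, chosen only to make the arithmetic visible.

```python
def demographic_parity_gap(preds, groups):
    """Largest difference in positive-prediction rate between any two groups (0 = parity)."""
    rates = {}
    for g in set(groups):
        selected = [p for p, gr in zip(preds, groups) if gr == g]
        rates[g] = sum(selected) / len(selected)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Hypothetical binary hiring-model outputs (1 = positive decision) for two groups.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # group "a" selected at 0.8, "b" at 0.2
```

Driving this gap to zero, however, can conflict with other criteria such as equalized odds when base rates differ between groups — which is exactly the mathematical incompatibility the paragraph above refers to.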

Related

Algorithmic Bias
Systematic unfairness embedded in algorithmic outputs due to biased data or design.
Generality: 792

De-Biasing
Techniques that reduce unfair bias in machine learning models and their outputs.
Generality: 694

Sampling Bias
A data flaw where training samples misrepresent the true population, distorting model behavior.
Generality: 794

Historical Bias
Bias in AI systems inherited from prejudiced or unrepresentative historical training data.
Generality: 626

In-Group Bias
AI systems unfairly favoring certain demographic groups due to biased training data.
Generality: 520

Fairness-Aware Machine Learning
Building ML algorithms that produce equitable outcomes across demographic groups.
Generality: 694