Envisioning is an emerging technology research institute and advisory.

Out-group Homogeneity Bias

The tendency to perceive members of an out-group as more similar to one another than members of one's own group.

Year: 2010 · Generality: 380

Out-group homogeneity bias is a well-documented cognitive phenomenon in which people perceive members of groups they do not belong to as more uniform and interchangeable than members of their own group. In human psychology, this manifests as the intuition that "they all look alike" or share the same attitudes, while one's own group is seen as richly diverse. When this bias is embedded in training data — which is collected, labeled, and curated by humans — machine learning models can inherit and amplify it, learning to make coarser, less individualized predictions about people belonging to demographic, cultural, or social groups that are underrepresented or viewed as "other" by the data's creators.

In practice, out-group homogeneity bias in ML systems emerges when training datasets lack sufficient diversity or when labelers apply less granular distinctions to out-group members. A facial recognition system trained predominantly on images of one demographic may learn finer-grained features for that group while collapsing distinctions within others, leading to higher error rates for underrepresented groups. Similarly, natural language models may associate out-group identities with a narrower range of attributes, reinforcing stereotypes in downstream applications like resume screening, content moderation, or risk assessment tools.
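The link between training-data imbalance and higher error rates for underrepresented groups can be sketched with a toy simulation. This is illustrative only: the two-class nearest-centroid setup, the sample sizes, and all numbers are assumptions for the sketch, not results from any real recognition system. A classifier for a "minority" group, trained on far fewer examples per class, ends up with noisier class representations and a higher test error on the same underlying task:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_trial(n_train, n_test=2000, sd=1.0):
    """Train a nearest-centroid classifier on two classes (true means 0 and 1)
    using n_train samples per class; return its test error rate."""
    # Estimate each class centroid from the training sample.
    # Fewer samples -> noisier centroid estimates -> coarser representation.
    c0 = rng.normal(0.0, sd, n_train).mean()
    c1 = rng.normal(1.0, sd, n_train).mean()
    # Draw test points, half from each class.
    x0 = rng.normal(0.0, sd, n_test)
    x1 = rng.normal(1.0, sd, n_test)
    # Classify each test point by its nearest estimated centroid.
    err0 = np.mean(np.abs(x0 - c1) < np.abs(x0 - c0))
    err1 = np.mean(np.abs(x1 - c0) < np.abs(x1 - c1))
    return (err0 + err1) / 2

reps = 300
# "Majority" group: the same task with ample training data per class.
maj = np.mean([run_trial(n_train=200) for _ in range(reps)])
# "Minority" group: identical task, but only 5 training samples per class.
mino = np.mean([run_trial(n_train=5) for _ in range(reps)])

print(f"majority error: {maj:.3f}")
print(f"minority error: {mino:.3f}")
```

The task itself is equally hard for both groups; only the amount of training data differs, yet the underrepresented group reliably sees worse performance — a small-scale analogue of the facial-recognition disparity described above.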

Addressing this bias requires both technical and procedural interventions. On the data side, this means actively auditing datasets for representational imbalances and ensuring diverse, high-quality labeling across all groups. Algorithmically, fairness constraints and disaggregated evaluation metrics can surface differential performance before deployment. More broadly, recognizing out-group homogeneity bias underscores why AI fairness cannot be reduced to aggregate accuracy alone — equitable systems must treat individuals within every group with the same degree of nuance and specificity, regardless of their relationship to the majority represented in training data.
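As a minimal illustration of disaggregated evaluation, the sketch below computes per-group accuracy instead of a single aggregate score. The records, labels, and group names are hypothetical, chosen only to show how an acceptable-looking aggregate can mask a performance gap between groups:

```python
from collections import defaultdict

# Hypothetical model outputs: (true label, predicted label, group) triples.
records = [
    ("approve", "approve", "group_a"),
    ("approve", "approve", "group_a"),
    ("deny",    "deny",    "group_a"),
    ("approve", "deny",    "group_a"),
    ("approve", "deny",    "group_b"),
    ("deny",    "approve", "group_b"),
    ("approve", "approve", "group_b"),
    ("deny",    "deny",    "group_b"),
]

correct = defaultdict(int)
total = defaultdict(int)
for truth, pred, group in records:
    total[group] += 1
    correct[group] += (truth == pred)

# One number hides the disparity; per-group numbers surface it.
overall = sum(correct.values()) / sum(total.values())
accuracy = {g: correct[g] / total[g] for g in total}

print(f"aggregate accuracy: {overall:.3f}")   # looks tolerable in isolation
print(f"per-group accuracy: {accuracy}")      # reveals the gap between groups
```

Here the aggregate accuracy is 0.625, while the two groups sit at 0.75 and 0.50 — exactly the kind of differential performance that aggregate-only evaluation would miss before deployment.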

Related

In-Group Bias

AI systems unfairly favoring certain demographic groups due to biased training data.

Generality: 520
Participation Bias

A dataset imbalance where certain groups are over- or underrepresented, skewing model outcomes.

Generality: 524
Coverage Bias

A dataset imbalance where underrepresented groups cause skewed model performance.

Generality: 520
Bias

Systematic errors in data or algorithms that produce unfair or skewed outcomes.

Generality: 854
Historical Bias

Bias in AI systems inherited from prejudiced or unrepresentative historical training data.

Generality: 626
Sampling Bias

A data flaw where training samples misrepresent the true population, distorting model behavior.

Generality: 794