
Envisioning is an emerging technology research institute and advisory.




Algorithmic Auditing

Systematic evaluation of AI systems for bias, fairness, compliance, and performance

Algorithmic auditing involves systematic evaluation of automated systems—AI models, algorithms, and decision-making systems—to assess their performance, fairness, bias, robustness, security, and compliance with regulations and ethical standards. Audits examine how algorithms work, what data they use, how they make decisions, and what outcomes they produce, using techniques including code review, statistical analysis of inputs and outputs, testing for bias and discrimination, red-teaming to find vulnerabilities, and continuous monitoring of system behavior. Audits are often conducted by independent third parties to ensure objectivity and build trust.
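The statistical output analysis described above can be illustrated with a minimal demographic-parity check, one of the most common fairness tests in audits. This is a sketch only: the group labels and decision samples are hypothetical, and real audits use richer metrics and significance testing.

```python
# Minimal sketch of a statistical output audit: compare positive-decision
# rates across groups (demographic parity). All data here is hypothetical.

def positive_rate(decisions):
    """Fraction of decisions that are positive (1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in positive-decision rates between any two groups."""
    rates = {g: positive_rate(d) for g, d in decisions_by_group.items()}
    values = list(rates.values())
    return max(values) - min(values), rates

# Hypothetical audit sample: 1 = approved, 0 = rejected.
sample = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% approval
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 37.5% approval
}

gap, rates = demographic_parity_gap(sample)
print(f"approval rates: {rates}")
print(f"parity gap: {gap:.3f}")   # a 0.375 gap would be flagged for review
```

In practice an auditor would run such checks over large production samples and combine them with the other techniques listed above (code review, red-teaming, continuous monitoring) rather than relying on any single metric.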

The technology addresses growing concerns about algorithmic decision-making as AI systems are deployed in critical applications affecting people's lives, rights, and opportunities. Auditing provides transparency, accountability, and assurance that systems work as intended and don't cause harm. Regular audits can identify problems before they cause damage, ensure compliance with regulations, and build public trust in automated systems. Applications include auditing hiring algorithms for discrimination, evaluating credit scoring systems for fairness, assessing AI systems used in criminal justice, and ensuring compliance with regulations like GDPR or AI governance frameworks. Companies, research institutions, and standards bodies are developing auditing methodologies and tools.
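For the hiring-algorithm use case mentioned above, a widely used screening test is the EEOC "four-fifths rule": a group whose selection rate falls below 80% of the highest group's rate is treated as evidence of adverse impact. The sketch below applies that rule to hypothetical applicant counts; real audits would add statistical significance tests before drawing conclusions.

```python
# Sketch of the four-fifths (80%) rule often applied when auditing hiring
# algorithms for adverse impact. Applicant counts here are hypothetical.

def selection_rates(selected, applicants):
    """Per-group selection rate: number selected / number who applied."""
    return {g: selected[g] / applicants[g] for g in applicants}

def four_fifths_check(selected, applicants, threshold=0.8):
    """Return (impact_ratios, flagged_groups) under the four-fifths rule."""
    rates = selection_rates(selected, applicants)
    best = max(rates.values())
    ratios = {g: r / best for g, r in rates.items()}
    flagged = [g for g, ratio in ratios.items() if ratio < threshold]
    return ratios, flagged

# Hypothetical screening outcomes from a resume-ranking model.
applicants = {"group_a": 200, "group_b": 150}
selected = {"group_a": 60, "group_b": 27}   # 30% vs 18% selection rate

ratios, flagged = four_fifths_check(selected, applicants)
print(ratios)    # group_b impact ratio = 0.18 / 0.30 = 0.6
print(flagged)   # group_b falls below 0.8 and warrants investigation
```

A flag from this rule is a trigger for deeper investigation, not proof of discrimination; auditors would next examine the model's features, training data, and decision thresholds.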

At TRL 5, algorithmic auditing methodologies and tools are available, though standardization and widespread adoption are still maturing. The field faces challenges including the complexity of auditing black-box AI systems, defining appropriate standards and metrics, ensuring auditors have the necessary access and expertise, and keeping audits current as systems evolve. However, as regulations mandate algorithmic accountability and trust becomes essential for AI adoption, auditing grows increasingly important. By providing mechanisms for transparency and accountability, auditing could enable responsible deployment of AI: identifying and preventing harmful algorithmic decisions, ensuring fairness, and building trust. For audits to be meaningful and trustworthy, however, they require appropriate standards, methodologies, and auditor independence.
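The black-box challenge noted above is often tackled by probing a system through its inputs and outputs alone. One such probe is a counterfactual test: query the model twice per record, flipping only a sensitive attribute, and count how often the decision changes. The model below is a hypothetical stand-in for an opaque scoring API, included only so the sketch runs end to end.

```python
# Sketch of a black-box audit probe: flip only the sensitive attribute
# and measure how often the decision changes. The "model" is hypothetical;
# a real auditor would call an opaque scoring API instead.

def opaque_model(record):
    """Hypothetical black-box scorer (auditor sees only inputs/outputs)."""
    score = record["income"] / 1000 + (5 if record["group"] == "a" else 0)
    return 1 if score >= 50 else 0

def counterfactual_flip_rate(model, records, attr="group", values=("a", "b")):
    """Share of records whose decision changes when only `attr` is flipped."""
    flips = 0
    for rec in records:
        original = model(rec)
        flipped_value = values[1] if rec[attr] == values[0] else values[0]
        counterfactual = dict(rec, **{attr: flipped_value})
        if model(counterfactual) != original:
            flips += 1
    return flips / len(records)

records = [{"income": inc, "group": "a"} for inc in (44000, 46000, 48000, 52000)]
rate = counterfactual_flip_rate(opaque_model, records)
print(f"{rate:.0%} of sampled decisions depend on the sensitive attribute")
```

Because it needs no access to model internals, this style of probe is one way auditors work around limited access, though it cannot explain *why* a dependency exists; that requires the explainability techniques covered under XAI below.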

TRL: 5/9 (Validated)
Impact: 3/5
Investment: 3/5
Category: Ethics & Security

Related Organizations

NIST · United States · Government Agency · Standards Body · 98%
The US federal agency leading the global competition to select and standardize post-quantum cryptographic algorithms.

Algorithmic Justice League · United States · Nonprofit · Researcher · 95%
An organization that combines art and research to illuminate the social implications and harms of AI systems.

Credo AI · United States · Startup · Developer · 95%
Provides an AI governance platform that helps enterprises measure and monitor the fairness and performance of their AI systems.

ORCAA · United States · Company · Developer · 95%
A boutique consultancy founded by Cathy O'Neil that develops methodologies for auditing algorithmic risk.

Holistic AI · United Kingdom · Startup · Developer · 92%
A software platform for AI governance, risk management, and compliance.

AlgorithmWatch · Germany · Nonprofit · Researcher · 90%
A non-profit research and advocacy organization that audits automated decision-making systems, specifically focusing on social media platforms and recommender systems in Europe.

Arthur AI · United States · Startup · Developer · 90%
A model monitoring platform that specializes in explainability, bias detection, and performance tracking.

Eticas · Spain · Company · Developer · 90%
Conducts algorithmic audits and impact assessments to identify bias and inefficiency in automated systems.

Fiddler AI · United States · Startup · Developer · 88%
Provides Model Performance Management (MPM) to monitor, explain, and analyze AI models in production.

Citadel AI · Japan · Startup · Developer · 85%
Automated testing and monitoring for AI reliability, focusing on the Japanese and global markets.

Supporting Evidence

Evidence data is not available for this technology yet.

Connections

Automated Compliance · Software
AI-driven systems that monitor and verify regulatory adherence in real time
TRL 7/9 · Impact 4/5 · Investment 3/5

Explainable Artificial Intelligence (XAI) · Software
AI systems designed to explain their decisions and reasoning in human-understandable terms
TRL 5/9 · Impact 3/5 · Investment 5/5

Synthetic Media Forensics · Ethics & Security
Detection and analysis tools for identifying AI-generated images, video, and audio
TRL 5/9 · Impact 3/5 · Investment 5/5
