Envisioning is an emerging technology research institute and advisory.


CBRN Risk

AI-related risks involving chemical, biological, radiological, and nuclear threat scenarios.

Year: 2023 · Generality: 322

CBRN Risk refers to the potential for AI systems to be misused—or to fail catastrophically—in contexts involving Chemical, Biological, Radiological, and Nuclear threats. In AI safety and policy discussions, this framing has become especially urgent as large language models and other capable AI systems become increasingly able to provide detailed technical guidance that could lower the barrier for bad actors seeking to develop or deploy weapons of mass destruction. The concern is not merely that AI operates in dangerous environments, but that AI itself could become an enabler of CBRN harm by synthesizing and communicating specialized knowledge that was previously difficult to access.

From a technical standpoint, CBRN risk assessment in AI involves evaluating model outputs for hazardous information generation, stress-testing safety filters and refusal mechanisms, and red-teaming systems against adversarial prompts designed to elicit dangerous content. Techniques such as classifier-based content filtering, reinforcement learning from human feedback (RLHF), and constitutional AI methods are deployed to reduce the likelihood that a model will assist with synthesis routes for chemical agents, pathogen enhancement, or radiological device construction. Evaluating these risks requires domain expertise spanning microbiology, chemistry, and nuclear physics alongside machine learning engineering.

CBRN risk has become a central concern in AI governance frameworks, appearing prominently in national AI safety strategies, the EU AI Act's treatment of systemic risks, guidance from bodies such as the UK AI Safety Institute, and frameworks such as the US NIST AI Risk Management Framework. Frontier AI developers now routinely include CBRN uplift evaluations as part of pre-deployment safety assessments, attempting to quantify whether a model meaningfully increases a non-expert's ability to cause mass harm compared to freely available resources.
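
The core arithmetic of an uplift evaluation is a controlled comparison: how often do participants with model access succeed at a proxy task versus a baseline group using only public resources? The sketch below is illustrative; the sample sizes, success counts, and the threshold value are invented for the example and do not come from any published evaluation.

```python
# Hedged sketch of scoring a CBRN "uplift" evaluation.
# Numbers and the policy threshold are illustrative only.

def uplift(baseline_successes: int, baseline_n: int,
           assisted_successes: int, assisted_n: int) -> float:
    """Absolute uplift: model-assisted success rate minus baseline rate."""
    return assisted_successes / assisted_n - baseline_successes / baseline_n


# Hypothetical policy trigger: flag the model if uplift exceeds 5 points.
UPLIFT_THRESHOLD = 0.05


def exceeds_threshold(delta: float) -> bool:
    """True if measured uplift crosses the (illustrative) deployment gate."""
    return delta > UPLIFT_THRESHOLD


# Example: 3 of 20 baseline participants succeed vs 5 of 20 with the model.
delta = uplift(3, 20, 5, 20)  # 0.25 - 0.15 = 0.10
```

Real evaluations are considerably more involved (task design, expert grading, statistical significance with small samples), but the comparison-against-baseline structure is the same.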

The challenge is that CBRN-relevant knowledge is deeply entangled with legitimate scientific research, making blanket restrictions both technically difficult and potentially harmful to beneficial applications in medicine, environmental monitoring, and emergency response. Calibrating AI systems to refuse genuinely dangerous assistance while remaining useful for lawful scientific inquiry represents one of the most consequential open problems in applied AI safety today.

Related

Catastrophic Risk
The potential for AI systems to cause severe, large-scale harm or societal disruption.
Generality: 745

Autonomy Risk
Dangers arising when autonomous AI systems operate beyond intended boundaries or human control.
Generality: 624

AI Safety
Research field ensuring AI systems remain beneficial, aligned, and free from catastrophic risk.
Generality: 871

Uncensored AI
AI systems that generate outputs without content restrictions or safety filters applied.
Generality: 450

Dual Use
AI capabilities developed for beneficial purposes that can also enable harmful applications.
Generality: 703

Safety Net
Layered safeguards that prevent, detect, and mitigate harmful AI system outcomes.
Generality: 521