Envisioning is an emerging technology research institute and advisory.

2011 — 2026


Dual Use

AI capabilities developed for beneficial purposes that can also enable harmful applications.

Year: 2014 · Generality: 703

Dual use refers to the property of technologies, research, or knowledge that can serve both constructive and destructive ends. In the context of AI and machine learning, this means that systems designed to advance medicine, scientific discovery, or economic productivity can often be repurposed—sometimes with minimal modification—for surveillance, autonomous weapons, disinformation campaigns, or cyberattacks. The same large language model that assists with writing and education can generate targeted propaganda. The same computer vision system that aids medical imaging can power facial recognition for authoritarian control. This inherent versatility is what makes dual use a foundational concern in AI ethics and governance.

The challenge is structural rather than incidental. Unlike a physical weapon, AI capabilities are encoded in software, models, and datasets that can be copied, fine-tuned, and redeployed at near-zero marginal cost. A model trained to synthesize proteins for drug discovery may also lower the barrier to designing biological agents; diffusion models built for creative image generation can produce non-consensual synthetic media. Because the same underlying architectures, training data, and techniques drive both beneficial and harmful applications, restricting harmful use without impeding beneficial development is genuinely difficult. This is what distinguishes AI dual use from simpler cases of technology misuse.

Addressing dual use in AI requires coordinated responses across multiple levels. At the research level, this includes pre-publication risk assessments, staged release strategies, and red-teaming to anticipate misuse before deployment. At the organizational level, it involves access controls, use-case restrictions, and monitoring of downstream applications. At the policy level, governments and international bodies are developing export controls, liability frameworks, and norms around particularly dangerous capability thresholds—such as those enabling weapons of mass destruction or large-scale manipulation. The EU AI Act, U.S. executive orders on AI safety, and multilateral discussions at forums like the UN reflect growing institutional recognition of dual-use risks.

Dual use is not a problem that can be fully solved, but it can be managed through deliberate design choices, governance structures, and ongoing vigilance. Researchers and developers bear particular responsibility for anticipating misuse pathways, since they possess the deepest understanding of what their systems can do. Treating dual use as a core design consideration—rather than an afterthought—is increasingly seen as a professional and ethical obligation in the AI field.

Related

Dual Use Foundational Model

Powerful general-purpose AI systems adaptable for both beneficial and harmful applications.

Generality: 646
AI Misuse

Deliberate application of AI systems in ways that cause harm or violate ethical norms.

Generality: 739
Catastrophic Risk

The potential for AI systems to cause severe, large-scale harm or societal disruption.

Generality: 745
Ethical AI

Developing AI systems that are fair, transparent, accountable, and beneficial to society.

Generality: 853
Responsible AI

Developing and deploying AI systems that are ethical, fair, transparent, and accountable.

Generality: 834
AI Safety

Research field ensuring AI systems remain beneficial, aligned, and free from catastrophic risk.

Generality: 871