AI capabilities developed for beneficial purposes that can also enable harmful applications.
Dual use refers to the property of technologies, research, or knowledge that can serve both constructive and destructive ends. In the context of AI and machine learning, this means that systems designed to advance medicine, scientific discovery, or economic productivity can often be repurposed—sometimes with minimal modification—for surveillance, autonomous weapons, disinformation campaigns, or cyberattacks. The same large language model that assists with writing and education can generate targeted propaganda. The same computer vision system that aids medical imaging can power facial recognition for authoritarian control. This inherent versatility is what makes dual use a foundational concern in AI ethics and governance.
The challenge is structural rather than incidental. Unlike physical weapons, AI capabilities are encoded in software, models, and datasets that can be copied, fine-tuned, and redeployed at near-zero marginal cost. A model trained to design proteins for drug discovery may also lower the barrier to engineering biological agents. Diffusion models built for creative image generation can produce non-consensual synthetic media. Because the same underlying architecture, training data, and techniques drive both beneficial and harmful applications, restricting harmful use without impeding beneficial development is genuinely difficult. This distinguishes AI dual use from simpler cases of technology misuse.
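The near-zero cost of redeployment is easy to see in practice. The sketch below, assuming the Hugging Face transformers and datasets libraries, shows how few lines it takes to load a publicly released base model and fine-tune it on an arbitrary new text corpus; the model name and data file are illustrative placeholders, and the same recipe applies whatever the new domain happens to be.

```python
# A minimal fine-tuning sketch using Hugging Face transformers/datasets.
# "gpt2" and "domain_corpus.txt" are illustrative placeholders: the same
# few lines repurpose any released checkpoint toward any text domain.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token             # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")  # copied base weights

# Any plain-text corpus can stand in for the new target domain.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="repurposed-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the adapted model costs a fraction of the original training run
```

The point of the sketch is not the specific libraries but the economics: once weights are released, anyone with modest compute can steer them toward a new purpose, which is why release decisions carry so much weight in dual-use debates.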
Addressing dual use in AI requires coordinated responses across multiple levels. At the research level, this includes pre-publication risk assessments, staged release strategies, and red-teaming to anticipate misuse before deployment. At the organizational level, it involves access controls, use-case restrictions, and monitoring of downstream applications. At the policy level, governments and international bodies are developing export controls, liability frameworks, and norms around particularly dangerous capability thresholds—such as those enabling weapons of mass destruction or large-scale manipulation. The EU AI Act, U.S. executive orders on AI safety, and multilateral discussions at forums like the UN reflect growing institutional recognition of dual-use risks.
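At the organizational level, the controls named above often reduce to a policy check in front of the model endpoint plus logging for downstream review. The following plain-Python sketch is illustrative only: the tier names, use-case categories, and deny list are assumptions for the example, not any particular provider's API.

```python
# A minimal sketch of an organizational access-control and monitoring layer.
# Tiers, use-case categories, and the prohibited list are illustrative assumptions.
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model_access_audit")

# Use cases an organization might bar outright under its dual-use policy.
PROHIBITED_USES = {"biological_agent_design", "mass_surveillance",
                   "political_disinformation"}

# Capabilities gated behind a vetted-access tier rather than open release.
RESTRICTED_TIER_USES = {"protein_design", "face_matching"}

@dataclass
class Client:
    client_id: str
    tier: str                                  # "open" or "vetted"
    approved_uses: set = field(default_factory=set)

def authorize(client: Client, requested_use: str) -> bool:
    """Gate a model request and record denials for downstream monitoring."""
    if requested_use in PROHIBITED_USES:
        audit_log.warning("DENY %s: prohibited use %r",
                          client.client_id, requested_use)
        return False
    if requested_use in RESTRICTED_TIER_USES and client.tier != "vetted":
        audit_log.warning("DENY %s: %r requires vetted tier",
                          client.client_id, requested_use)
        return False
    if requested_use not in client.approved_uses:
        audit_log.warning("DENY %s: %r not in approved uses",
                          client.client_id, requested_use)
        return False
    audit_log.info("ALLOW %s: %r", client.client_id, requested_use)
    return True

# Example: a vetted research lab approved only for protein design.
lab = Client("lab-042", tier="vetted", approved_uses={"protein_design"})
assert authorize(lab, "protein_design")
assert not authorize(lab, "mass_surveillance")
```

The audit trail matters as much as the gate itself: denied requests are exactly the signal that monitoring of downstream applications is meant to surface.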
Dual use is not a problem that can be fully solved, but it can be managed through deliberate design choices, governance structures, and ongoing vigilance. Researchers and developers bear particular responsibility for anticipating misuse pathways, since they possess the deepest understanding of what their systems can do. Treating dual use as a core design consideration—rather than an afterthought—is increasingly seen as a professional and ethical obligation in the AI field.