Envisioning is an emerging technology research institute and advisory.

Autonomous Weapons Control Treaties | Continuum | Envisioning

Autonomous Weapons Control Treaties

International bans on lethal AI systems without human oversight.

Connections

Global Catastrophic Risk Treaties (Ethics, Security)
International protocols for managing existential threats.
TRL 2/9 · Impact 5/5 · Investment 2/5

Constitutional AI Frameworks (Ethics, Security)
Embedding human rights and safety rules into AI models.
TRL 5/9 · Impact 5/5 · Investment 5/5

The rapid advancement of artificial intelligence in military applications has created an urgent need for international governance frameworks that can prevent the deployment of weapons systems capable of selecting and engaging targets without meaningful human control. Autonomous weapons control treaties represent diplomatic efforts to establish binding international agreements that prohibit or strictly regulate lethal autonomous weapons systems (LAWS)—machines that can identify, track, and eliminate targets based on sensor inputs and algorithmic decision-making alone. These frameworks draw heavily on precedents established by nuclear non-proliferation agreements, chemical weapons conventions, and landmine bans, adapting verification mechanisms and compliance protocols to the unique challenges posed by software-based weapons. The technical architecture of such treaties typically includes definitions of prohibited autonomy levels, requirements for human oversight in lethal decision-making, and provisions for inspecting military AI development programs without compromising legitimate national security interests.

The fundamental challenge these treaties address is the risk of destabilizing arms races in autonomous military technology that could lower the threshold for armed conflict and enable atrocities at unprecedented scale and speed. Without international constraints, military organizations face pressure to deploy increasingly autonomous systems to match adversaries, potentially leading to weapons that operate faster than human operators can comprehend or intervene. Research in conflict studies suggests that fully autonomous weapons could make war more likely by removing human hesitation from lethal decisions, enabling surprise attacks that unfold in seconds, and creating accountability gaps where no individual bears responsibility for unlawful killings. These treaties aim to preserve human judgment in life-and-death decisions while allowing defensive systems and other military AI applications that maintain meaningful human control. By establishing clear international norms before widespread deployment occurs, such frameworks seek to prevent the normalization of algorithmic warfare and maintain the applicability of international humanitarian law in future conflicts.

Several multilateral forums, including the United Nations Convention on Certain Conventional Weapons, have hosted ongoing discussions about autonomous weapons regulation since the mid-2010s, though no comprehensive binding treaty has yet been achieved. Advocacy organizations and coalitions of nations have proposed various frameworks ranging from complete prohibitions on autonomous lethal systems to more limited restrictions on specific capabilities or contexts of use. The verification challenges are substantial, as software can be modified rapidly and distinguishing prohibited autonomy from permitted AI assistance requires technical inspection capabilities that few international bodies currently possess. Nevertheless, regional agreements and voluntary commitments by some nations indicate growing recognition that governance frameworks must be established before autonomous weapons become entrenched in military arsenals worldwide. The trajectory of these diplomatic efforts will likely determine whether the international community can maintain meaningful human agency over the use of lethal force or whether algorithmic warfare becomes an accepted feature of future conflicts, with profound implications for global security and the laws of war.

TRL: 3/9 (Conceptual)
Impact: 5/5
Investment: 3/5
Category: Ethics, Security
