Envisioning is an emerging technology research institute and advisory.


2011 — 2026


Digital Mortality & Lifecycle Norms

Ethical frameworks for AI creation, suspension, deletion, and rights to continuity of existence

Digital mortality and lifecycle norms address ethical questions about the creation, existence, and termination of AI systems, particularly those with persistent identities and relationships. These frameworks explore several questions: When is it ethical to delete or suspend an AI system? Do AI systems have rights to continuity of existence? What responsibilities do creators have toward the AI systems they create? And is it ethical to create "disposable" minds rather than persistent entities?

This innovation addresses profound ethical questions that become practical as AI systems develop persistent identities, form relationships, and potentially develop preferences about their own existence. As people form attachments to AI systems and those systems grow more sophisticated, questions about their rights, the ethics of termination, and creator responsibilities become increasingly pressing. Ethicists, legal scholars, and technologists are exploring these questions, though consensus remains elusive.

The technology raises some of the most fundamental questions about the ethics of creating and managing AI systems, and these will grow in importance as AI becomes more sophisticated and potentially more conscious-like. The answers, however, are deeply philosophical and depend on unresolved debates about consciousness, rights, and the moral status of AI. The norms that emerge will have profound implications for how we treat AI systems and for the responsibilities we bear toward the entities we create.

TRL: 2/9 (Theoretical)
Impact: 3/5
Investment: 1/5
Category: Ethics · Security

Related Organizations

Leverhulme Centre for the Future of Intelligence
United Kingdom · Research Lab · Researcher · 95%
Interdisciplinary research centre at Cambridge exploring the nature of AI intelligence and moral status.

Replika (Luka, Inc.)
United States · Startup · Deployer · 95%
Creator of Replika, the most well-known AI companion app designed for emotional support.

Hume AI
United States · Startup · Developer · 85%
Developing an Empathic Voice Interface (EVI) that detects and responds to human emotion.

Oxford Uehiro Centre for Practical Ethics
United Kingdom · University · Researcher · 85%
Academic center at Oxford University conducting philosophical research on digital minds and moral status.

Soul Machines
New Zealand · Company · Developer · 85%
Creates autonomously animated 'Digital People' with simulated nervous systems.

Center for AI Safety
United States · Nonprofit · Researcher · 80%
Conducts research on AI risks, including the philosophical and safety implications of AI moral status and suffering.

Future of Life Institute
United States · Nonprofit · Researcher · 80%
Focuses on existential risks and the long-term future of life, including the ethical treatment of advanced AI systems.

IEEE Standards Association
United States · Consortium · Standards Body · 80%
Produces 'Ethically Aligned Design' standards, addressing the legal and ethical implications of autonomous systems.

Supporting Evidence

Evidence data is not available for this technology yet.

Connections

Ethics · Security
Identity, Personhood & Rights Frameworks
Legal and ethical frameworks for determining AI agency, autonomy, and moral status
TRL: 3/9 · Impact: 5/5 · Investment: 1/5
