Envisioning is an emerging technology research institute and advisory.


Generative Physics Engines

Machine learning models that infer and adapt physical behaviors in virtual environments in real time

Traditional physics engines in virtual environments rely on pre-programmed rules and rigid mathematical models to simulate how objects interact, move, and respond to forces. While effective for many applications, these systems require extensive manual configuration of material properties, collision behaviors, and environmental parameters. Generative physics engines represent a fundamental shift in this paradigm by employing machine learning models that can infer, predict, and adapt physical behaviors in real time. Rather than relying solely on classical physics equations, these systems use neural networks trained on vast datasets of real-world physical interactions to generate plausible—or deliberately implausible—simulations. The underlying architecture typically combines physics-informed neural networks with generative models that can interpolate between different physical regimes, allowing the system to simulate materials and dynamics it has never explicitly been programmed to handle. This approach enables the engine to make educated predictions about how novel objects or unusual scenarios should behave, filling gaps in its knowledge through learned patterns rather than hard-coded rules.
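The hybrid architecture described above can be sketched in miniature: a classical integrator supplies the baseline step, and a small neural network adds a learned correction on top of it. This is an illustrative toy under stated assumptions, not any particular engine's design; the names (`classical_step`, `LearnedResidual`, `generative_step`) are hypothetical, and the network here is random and untrained, standing in for one fitted to real-world interaction data.

```python
import numpy as np

rng = np.random.default_rng(0)

def classical_step(pos, vel, dt=0.01, g=-9.81):
    """Baseline hard-coded rule: constant-gravity Euler integration."""
    vel = vel + np.array([0.0, g]) * dt
    pos = pos + vel * dt
    return pos, vel

class LearnedResidual:
    """Toy two-layer MLP mapping state -> a small velocity correction.
    In a real system its weights would be trained on observed dynamics."""
    def __init__(self, dim=4, hidden=16):
        self.w1 = rng.normal(0.0, 0.1, (dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, 2))

    def __call__(self, state):
        h = np.tanh(state @ self.w1)
        return h @ self.w2  # learned correction to velocity

residual = LearnedResidual()

def generative_step(pos, vel, dt=0.01):
    # Physics-informed split: analytic baseline plus learned adjustment.
    pos, vel = classical_step(pos, vel, dt)
    state = np.concatenate([pos, vel])  # (x, y, vx, vy)
    vel = vel + residual(state) * dt
    return pos, vel

# Simulate one second of a projectile launched horizontally from y=1.
pos, vel = np.array([0.0, 1.0]), np.array([1.0, 0.0])
for _ in range(100):
    pos, vel = generative_step(pos, vel)
print(pos)
```

The design point is that the analytic term keeps the simulation physically grounded while the residual term is free to encode behaviors that were never hand-coded.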

For developers of immersive experiences, this technology addresses a longstanding challenge: the enormous effort required to create convincing virtual worlds with rich, responsive physics. Traditional engines demand that every material property be manually specified and every interaction carefully tuned, creating bottlenecks in content creation and limiting the spontaneity of virtual environments. Generative physics engines dramatically reduce this burden by automatically inferring appropriate behaviors based on visual and contextual cues. More significantly, they enable entirely new categories of experience that blend realism with creative expression. A virtual environment might shift seamlessly from obeying conventional physics to adopting dream-like or stylized dynamics based on narrative context, user emotion, or artistic intent. This capability is particularly valuable for therapeutic applications, artistic installations, and experimental storytelling where the malleability of physical laws becomes a creative tool rather than a constraint.
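The idea of shifting between conventional and dream-like dynamics can be illustrated with a single blend parameter that interpolates gravity and damping between two regimes. This is a hand-rolled sketch, not a learned model; the `blend` parameter and both regimes are assumptions chosen purely for illustration.

```python
import numpy as np

def step(pos, vel, blend, dt=0.02):
    """blend = 0.0 -> realistic physics; blend = 1.0 -> dream-like drift."""
    g_real = np.array([0.0, -9.81])   # conventional gravity
    g_dream = np.array([0.0, 0.4])    # gentle upward drift
    drag_real, drag_dream = 0.0, 0.9  # heavy damping in the dream regime
    # Interpolate the physical parameters between the two regimes.
    g = (1.0 - blend) * g_real + blend * g_dream
    drag = (1.0 - blend) * drag_real + blend * drag_dream
    vel = (vel + g * dt) * (1.0 - drag * dt)
    pos = pos + vel * dt
    return pos, vel

# Run the same object for four seconds under each regime.
p_r, v_r = np.zeros(2), np.zeros(2)
p_d, v_d = np.zeros(2), np.zeros(2)
for _ in range(200):
    p_r, v_r = step(p_r, v_r, blend=0.0)
    p_d, v_d = step(p_d, v_d, blend=1.0)
print(p_r[1], p_d[1])  # realistic object falls; dream-regime object floats upward
```

In a generative engine the blend would not be a hand-tuned scalar but an output inferred from narrative context, user state, or artistic intent, and the regimes themselves would be learned rather than specified.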

Early implementations of generative physics are emerging in research laboratories and specialized creative tools, with some game engines beginning to incorporate machine learning-assisted physics prediction for secondary effects like cloth simulation and particle systems. The technology shows particular promise in virtual production environments where directors can experiment with impossible physics for visual effects, and in training simulations where adaptive difficulty might involve subtly altering physical responses. As spatial computing platforms mature and demand grows for more responsive, believable virtual worlds, generative physics engines are positioned to become foundational infrastructure. The trajectory points toward hybrid systems that can fluidly transition between strict physical accuracy—essential for engineering simulations or scientific visualization—and expressive, context-aware dynamics that serve narrative and emotional goals. This flexibility represents a crucial evolution in how we construct and inhabit digital spaces, moving beyond the binary choice between realistic simulation and artistic abstraction toward a continuum where physical behavior itself becomes a dynamic, generative element of the experience.

TRL: 2/9 (Theoretical)
Impact: 3/5
Investment: 3/5
Category: Software

Related Organizations

Google DeepMind · United Kingdom · Research Lab · Researcher · 95%
Developers of the Gemini family of models, which are trained from the start to be multimodal across text, images, video, and audio.

NVIDIA · United States · Company · Developer · 95%
Developing foundation models for robotics (Project GR00T) and vision-language models like VILA.

MIT CSAIL · United States · University · Researcher · 90%
Research lab hosting Josh Tenenbaum's Computational Cognitive Science group, a leader in probabilistic programming and neuro-symbolic models.

OpenAI · United States · Company · Developer · 90%
Creator of GPT-4o, a natively multimodal model capable of reasoning across audio, vision, and text in real time.

Electronic Arts (SEED) · United States · Research Lab · Researcher · 85%
Search for Extraordinary Experiences Division, researching deep learning for real-time physics and animation.

ETH Zurich (Interactive Geometry Lab) · Switzerland · University · Researcher · 85%
Researches differentiable physics and neural simulation for computer graphics.

Runway · United States · Startup · Developer · 85%
Applied AI research company shaping the next era of art, entertainment and human creativity.

Ubisoft La Forge · Canada · Research Lab · Researcher · 85%
The R&D branch of Ubisoft bridging academic research and game production.

Unity · United States · Company · Developer · 80%
Creators of the Unity Engine and the ML-Agents toolkit, which allows researchers to train intelligent agents within game environments.

Supporting Evidence

Evidence data is not available for this technology yet.

Connections

Inverse Rendering Engines (Software)
Extracts 3D geometry, materials, and lighting from photographs and video
TRL: 4/9 · Impact: 4/5 · Investment: 4/5

Embodied AI Agents (Software)
AI systems that perceive and navigate 3D spaces like physical or virtual worlds
TRL: 3/9 · Impact: 4/5 · Investment: 4/5
