
Envisioning is an emerging technology research institute and advisory.


Volumetric Capture Rigs

Multi-camera arrays that record people and spaces as navigable 3D video

Volumetric capture rigs represent a convergence of multiple sensing technologies designed to record three-dimensional reality as dynamic digital assets. Unlike traditional video, which captures flat images from a single viewpoint, these systems employ arrays of synchronized RGB cameras, depth sensors, and lidar units positioned around a subject or environment. The cameras capture color and texture information from multiple angles simultaneously, while depth sensors and lidar measure the precise spatial coordinates of surfaces and objects in real time. Advanced processing algorithms then fuse these data streams into a unified volumetric representation—essentially a moving point cloud or mesh that preserves the full three-dimensional geometry and appearance of the captured subject. The resulting datasets are time-varying 3D models that viewers can observe from any angle, enabling a fundamentally different form of visual media that bridges the gap between traditional video and computer-generated imagery. Some systems operate in fixed studio environments with dozens of precisely calibrated cameras, while emerging portable rigs compress this capability into mobile configurations suitable for location shooting.
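The fusion step described above starts with each depth sensor's frame being back-projected into 3D and colored from the aligned RGB image. The following is a minimal numpy sketch of that per-camera step, assuming an idealized pinhole model; the function name, intrinsics, and synthetic frame are illustrative, not any vendor's actual pipeline. Merging the clouds produced by every calibrated camera in the array yields one fused volumetric frame.

```python
import numpy as np

def depth_to_point_cloud(depth, rgb, fx, fy, cx, cy, extrinsic):
    """Back-project a depth map into a colored 3D point cloud.

    depth: (H, W) metric depths (0 marks missing samples); rgb: (H, W, 3);
    fx, fy, cx, cy: pinhole intrinsics; extrinsic: 4x4 camera-to-world
    transform from rig calibration, placing points in the shared rig frame.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    # Pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)  # homogeneous coords
    pts_world = (extrinsic @ pts_cam.T).T[:, :3]            # into the rig frame
    return pts_world, rgb[valid]

# Tiny synthetic frame: 2x2 depth map, identity extrinsic (camera at origin).
depth = np.array([[1.0, 2.0], [0.0, 1.5]])       # one missing sample
rgb = np.full((2, 2, 3), 128, dtype=np.uint8)
pts, cols = depth_to_point_cloud(depth, rgb, fx=1.0, fy=1.0,
                                 cx=0.5, cy=0.5, extrinsic=np.eye(4))
# pts holds three colored 3D points, one per valid depth pixel; repeating
# this for every camera and concatenating the results gives the moving
# point cloud that downstream meshing and compression stages consume.
```

In a real rig the extrinsics come from a calibration pass over all cameras, and the per-frame clouds are typically meshed and textured before delivery.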

The entertainment and enterprise sectors face persistent challenges in creating realistic digital representations of human performance and physical spaces. Traditional motion capture requires actors to wear specialized suits covered in markers, limiting natural movement and requiring extensive post-production to add realistic appearance. Photogrammetry can produce high-quality static 3D models but struggles with moving subjects. Volumetric capture addresses these limitations by recording both geometry and appearance simultaneously, preserving subtle details like fabric movement, facial expressions, and lighting interactions that are difficult to recreate artificially. For the film and gaming industries, this enables the integration of real human performances into virtual environments with unprecedented fidelity. In enterprise contexts, volumetric capture supports immersive training scenarios where learners can observe complex procedures from optimal vantage points, or review recorded events from multiple perspectives. The technology also enables new forms of remote collaboration, where participants appear as realistic three-dimensional presences rather than flat video feeds, preserving spatial relationships and non-verbal communication cues that are lost in conventional video conferencing.

Major production studios have deployed permanent volumetric capture stages for entertainment content, while research institutions and technology companies are exploring applications in medical training, sports analysis, and cultural preservation. Museums and heritage organizations are beginning to use portable volumetric systems to create interactive archives of performances, ceremonies, and historical sites, capturing not just static geometry but the movement and atmosphere of living cultural practices. In the telepresence domain, early deployments indicate that volumetric representations significantly improve the sense of co-presence compared to traditional video, particularly in scenarios requiring spatial reasoning or collaborative physical tasks. As processing capabilities improve and capture systems become more compact and affordable, the technology is expanding beyond specialized studios into broader commercial and educational use. The convergence of volumetric capture with real-time rendering engines and spatial computing platforms positions this approach as a foundational element in the emerging ecosystem of immersive media, where the boundaries between physical and digital presence continue to blur.

TRL: 5/9 (Validated)
Impact: 4/5
Investment: 4/5
Category: Hardware

Related Organizations

4Dviews (France · Company · Developer · 95%)
Manufactures the HOLOSYS volumetric capture system used by studios worldwide for high-fidelity 3D video.

Dimension Studio (United Kingdom · Company · Deployer · 95%)
A leading volumetric production studio that has produced high-profile volumetric experiences for fashion and music.

Metastage (United States · Company · Deployer · 95%)
A premier volumetric capture stage in Los Angeles, utilizing Microsoft Mixed Reality Capture technology.
Microsoft (United States · Company · Developer · 95%)
Developed the Mixed Reality Capture Studios technology that licensed volumetric capture stages such as Metastage use for high-fidelity 3D video.
Arcturus (United States · Company · Developer · 90%)
Creators of HoloSuite, a post-production and streaming platform for volumetric video, enabling adaptive streaming of 3D data.

Canon (Japan · Company · Developer · 90%)
Multinational corporation specializing in optical, imaging, and industrial products.

Evercoast (United States · Company · Developer · 85%)
Provides a software platform for the capture, rendering, and streaming of volumetric video.

Scatter (United States · Company · Developer · 85%)
Creators of Depthkit, a software tool allowing volumetric capture using accessible depth sensors.

IO Industries (Canada · Company · Developer · 80%)
Manufactures compact, high-speed video cameras (Volucam) specifically designed for synchronized volumetric capture arrays.

Volograms (Ireland · Startup · Developer · 80%)
AI-powered software that enables volumetric capture using standard smartphones rather than expensive studio rigs.


Same technology in other hubs

Soma · Volumetric Capture Arrays
Synchronized camera rigs that capture full 3D human performance from all angles

Vortex · Portable Volumetric Capture Rigs
Mobile camera arrays that capture subjects as navigable 3D models from multiple angles

Vortex · Volumetric Capture Stages
Multi-camera studios that record performers as 3D digital assets instead of flat video

Pixels · Volumetric Capture Studios
Multi-camera rigs that record actors as navigable 3D holograms for games and XR

Connections

Avatar Embodiment Systems (Software)
Real-time systems translating human motion and expression into digital avatars
TRL 4/9 · Impact 4/5 · Investment 3/5

Spatial Journalism (Applications)
Immersive news experiences using volumetric video and spatial audio to place viewers inside events
TRL 5/9 · Impact 4/5 · Investment 3/5

Light Field Displays (Hardware)
Displays that recreate 3D scenes by controlling individual light rays for natural depth perception
TRL 4/9 · Impact 5/5 · Investment 4/5

Inverse Rendering Engines (Software)
Engines that extract 3D geometry, materials, and lighting from photographs and video
TRL 4/9 · Impact 4/5 · Investment 4/5

Spatial Design Collaboration (Applications)
Real-time co-creation of 3D environments using mixed reality workspaces
TRL 6/9 · Impact 5/5 · Investment 4/5
