
Envisioning is an emerging technology research institute and advisory.


2011 — 2026


Spatial Audio Broadcasting

Object-based audio pipelines that preserve 3D sound metadata from studio to listener's device

Spatial audio broadcasting takes object-based mixes authored in Dolby Atmos, MPEG-H Audio, or Sony 360 Reality Audio and carries them intact through contribution networks, playout, and streaming apps. Instead of a fixed stereo downmix, metadata describing each sound object's position, priority, and language tag travels alongside the compressed audio, allowing client devices to render the mix for earbuds, soundbars, or car speaker arrays. Broadcasters tie the pipeline into editorial systems so accessibility tracks, alternate commentators, and ASMR-style mixes can be toggled in real time.
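The per-object metadata is what lets each client device do its own rendering. A minimal sketch, assuming a mono object panned to stereo from its azimuth tag (the constant-power panning law and function names here are illustrative, not any real renderer's algorithm):

```python
import math

# Sketch of device-side rendering from object metadata: one mono sound
# object plus its azimuth metadata is rendered into a stereo feed. The
# constant-power panning law is illustrative, not a renderer's actual
# algorithm.

def render_object_stereo(samples, azimuth_deg):
    """Pan one mono object to stereo from its azimuth metadata
    (0 = front, +90 = hard left, -90 = hard right)."""
    pan = (max(-90.0, min(90.0, azimuth_deg)) + 90.0) / 180.0  # 0 = right, 1 = left
    left_gain = math.sin(pan * math.pi / 2)
    right_gain = math.cos(pan * math.pi / 2)
    return [(s * left_gain, s * right_gain) for s in samples]

hard_left = render_object_stereo([1.0], azimuth_deg=90.0)  # all energy left
centered = render_object_stereo([1.0], azimuth_deg=0.0)    # equal power both sides
```

The same metadata would drive a different gain calculation on a soundbar or a car array; the point is that the rendering decision moves from the mix stage to the device.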

Sports leagues deliver “player POV” mixes with isolated on-field mics, drama series stream height channels so rain falls convincingly overhead, and connected cars adjust mixes based on seat occupancy. Brands sponsor premium audio layers (e.g., director’s commentary) and targeted ads that spatially wrap around the listener without overpowering the main content.

Adoption (TRL 7) faces workflow friction: mixing engineers must monitor multiple renderers, and legacy distribution infrastructure wasn’t designed to carry object metadata. Standards work at SMPTE, DVB, and CTA aims to ensure consistent labeling of objects and presentations so OTT apps know how to surface user options. As more smart TVs and phones ship with spatial renderers, object audio becomes a baseline expectation, much like HDR for video.
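A hypothetical sketch of how an OTT app might turn consistently labeled presentations into user options; the field names below are invented for illustration and do not come from any SMPTE, DVB, or CTA specification:

```python
# Hypothetical presentation metadata as an OTT app might receive it after
# decoding; field names are invented for illustration, not drawn from any
# real standard.
PRESENTATIONS = [
    {"id": "main", "lang": "en", "accessibility": None, "label": "Main stadium mix"},
    {"id": "ad", "lang": "en", "accessibility": "audio-description", "label": "Audio description"},
    {"id": "alt", "lang": "es", "accessibility": None, "label": "Spanish commentary"},
]

def options_for(lang, accessibility=None):
    """List the presentations a player UI should offer for a viewer's
    language preference and optional accessibility need."""
    return [
        p for p in PRESENTATIONS
        if p["lang"] == lang
        and (accessibility is None or p["accessibility"] == accessibility)
    ]

english = options_for("en")                       # main mix plus audio description
described = options_for("en", "audio-description")
```

Consistent labels are what make this filtering possible across apps and devices; without them, each service would need bespoke logic per broadcaster.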

TRL: 7/9 (Operational)
Impact: 5/5
Investment: 4/5
Category: Applications

Related Organizations

Dolby Laboratories

United States · Company

99%

Creators of Dolby Atmos, providing the end-to-end infrastructure (encoding, monitoring, delivery) for live spatial audio broadcasting.

Developer
Fraunhofer IIS

Germany · Research Lab

95%

Developer of MPEG-H 3D Audio, the object-based Next Generation Audio codec used in broadcast and streaming (e.g., South Korea’s ATSC 3.0 UHD service).

Researcher
Sony

Japan · Company

95%

Developer of 360 Reality Audio (360RA), an object-based spatial audio format used in live music broadcasting and streaming.

Developer
Calrec Audio

United Kingdom · Company

90%

Leading provider of broadcast audio mixing consoles that support Dolby Atmos workflows for live sports coverage.

Developer
Lawo

Germany · Company

90%

Manufactures IP-based audio mixing consoles (mc² series) capable of handling immersive/spatial audio production for live broadcast.

Developer
NHK

Japan · Research Lab

90%

NHK Science & Technology Research Laboratories developed the 22.2 multichannel sound system, a precursor to and high-end variant of spatial audio broadcasting.

Researcher
Ateme

France · Company

85%

Video compression and delivery company that integrates Next Generation Audio (Dolby Atmos/MPEG-H) encoding into broadcast headends.

Developer

European Broadcasting Union (EBU)

Switzerland · Consortium

85%

Develops the ADM (Audio Definition Model) and EAR (EBU ADM Renderer) to standardize object-based audio exchange.

Standards Body
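The EBU's ADM work can be made concrete with a minimal sketch of an object description built in code. Element and attribute names below loosely follow ITU-R BS.2076 (the standard behind ADM); the IDs and values are illustrative, and this is not a conformant ADM document:

```python
import xml.etree.ElementTree as ET

# Minimal ADM-style description of one object's position over one time
# interval. Element/attribute names loosely follow ITU-R BS.2076; the ID
# and values are illustrative, not a conformant ADM document.
block = ET.Element(
    "audioBlockFormat",
    audioBlockFormatID="AB_00031001_00000001",  # illustrative ID
    rtime="00:00:00.00000",
    duration="00:00:02.00000",
)
for coordinate, value in (("azimuth", "30.0"), ("elevation", "10.0"), ("distance", "1.0")):
    position = ET.SubElement(block, "position", coordinate=coordinate)
    position.text = value

adm_snippet = ET.tostring(block, encoding="unicode")
```

Time-stamped blocks like this are what let a downstream renderer (such as the EAR) recompute object trajectories for whatever loudspeaker layout it finds.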
Genelec

Finland · Company

80%

Manufacturer of active monitoring systems used in OB trucks and studios for mixing immersive audio formats.

Developer

Supporting Evidence

Evidence data is not available for this technology yet.

Connections

Applications
Volumetric Concert Streaming

Livestreamed concerts captured in 3D, letting remote viewers walk around the stage in real time

TRL: 5/9 · Impact: 4/5 · Investment: 4/5
Software
Procedural Audio Generation Suites

AI engines that generate adaptive sound effects and music from scene metadata and visual cues

TRL: 5/9 · Impact: 4/5 · Investment: 3/5
Hardware
Acoustic Holography Speakers

Phased-array transducers that sculpt focused audio beams and 3D sound shapes in mid-air

TRL: 5/9 · Impact: 4/5 · Investment: 3/5
Software
Multi-Sensory Synchronization Protocols

Timing protocols that align visuals, audio, haptics, scent, and lighting across devices

TRL: 5/9 · Impact: 3/5 · Investment: 3/5
Hardware
Olfactory Media Synthesizers

Devices that synthesize and release scents in sync with media to deepen immersion

TRL: 3/9 · Impact: 2/5 · Investment: 2/5

Book a research session

Bring this signal into a focused decision sprint with analyst-led framing and synthesis.
Research Sessions