Envisioning is an emerging technology research institute and advisory.


Photonic Interconnects for AI Supercomputers

Lightmatter's Passage M1000 photonic superchip uses light instead of electrical signals to connect AI processors, eliminating the interconnect bottleneck that leaves GPUs idle 30-50% of the time in large AI training clusters.

Lightmatter, a Boston-based startup, unveiled its Passage M1000 3D photonic superchip in March 2025, billed as the world's fastest AI interconnect. It uses integrated photonics to replace the electrical links that connect GPUs in data centers, transmitting data between processors as light with dramatically lower latency and energy consumption than copper links or conventional pluggable optics. Celestial AI, Ayar Labs, and Lightelligence are pursuing complementary photonic approaches for memory disaggregation and I/O.

The interconnect bottleneck is a critical but underappreciated constraint on AI scaling. In clusters of thousands of GPUs training frontier models, processors spend 30-50% of their time waiting for data from other processors. Electrical interconnects consume significant power and generate heat, and their bandwidth doesn't scale with the exponentially growing demands of AI models. Photonic interconnects offer 10-100x better energy efficiency per bit transferred, enabling larger clusters to train larger models without proportionally increasing power consumption.
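The energy argument can be made concrete with a rough sketch. The per-bit figures below are illustrative assumptions of my own, not vendor specifications, chosen to sit inside the 10-100x range cited above:

```python
# Back-of-envelope comparison of interconnect energy per bit.
# Both pJ/bit constants are assumptions for illustration only.

ELECTRICAL_PJ_PER_BIT = 5.0   # assumed: typical electrical SerDes link
PHOTONIC_PJ_PER_BIT = 0.25    # assumed: integrated silicon-photonic link

def transfer_energy_joules(gigabytes: float, pj_per_bit: float) -> float:
    """Energy to move `gigabytes` of data across one link."""
    bits = gigabytes * 8e9
    return bits * pj_per_bit * 1e-12

# Moving 1 PB (1e6 GB) of gradient and activation traffic:
electrical_j = transfer_energy_joules(1e6, ELECTRICAL_PJ_PER_BIT)
photonic_j = transfer_energy_joules(1e6, PHOTONIC_PJ_PER_BIT)

print(f"electrical: {electrical_j / 1e3:.0f} kJ, "
      f"photonic: {photonic_j / 1e3:.0f} kJ, "
      f"ratio: {electrical_j / photonic_j:.0f}x")
```

Under these assumed constants, moving a petabyte costs about 40 kJ electrically versus 2 kJ photonically; at cluster scale, that gap compounds across every training step.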

Lightmatter's dual strategy, Passage for interconnect and Envise for photonic AI compute, positions it to address the communication and computation bottlenecks simultaneously. The company's interposer shipped in 2025, with the full chiplet following in 2026. If photonic interconnects become standard in AI data centers, they could reduce the energy cost of AI training by 30-40% while enabling clusters of unprecedented scale. The insight is characteristically non-obvious: the limiting factor for AI scaling isn't just the processors but the wires between them.
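The 30-40% figure can be sanity-checked with simple arithmetic, under the assumption (mine, not the article's) that the interconnect and the idle GPU time it causes account for roughly 35-45% of a training run's energy budget:

```python
# Sanity-check the claimed 30-40% training-energy reduction.
# The interconnect_share values below are assumptions for illustration.

def total_energy_reduction(interconnect_share: float,
                           efficiency_gain: float) -> float:
    """Fraction of total training energy saved when the interconnect
    portion of the budget becomes `efficiency_gain` times cheaper."""
    remaining = (1.0 - interconnect_share) + interconnect_share / efficiency_gain
    return 1.0 - remaining

# With an assumed 10x more efficient interconnect:
for share in (0.35, 0.40, 0.45):
    print(f"interconnect share {share:.0%} -> "
          f"total saving {total_energy_reduction(share, 10):.1%}")
```

With a 10x efficiency gain, interconnect shares of 35-45% yield total savings of roughly 32-41%, consistent with the range quoted above.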

TRL: 6/9 (Demonstrated)
Impact: 4/5
Investment: 5/5
Category: Hardware
