Lightmatter, a Boston-based startup, unveiled its Passage M1000 3D photonic superchip in March 2025, billing it as the world's fastest AI interconnect. The chip uses integrated silicon photonics to replace the electrical links that connect GPUs in data centers, transmitting data between processors as light with substantially lower latency and energy consumption than copper or conventional optical fiber connections. Celestial AI, Ayar Labs, and Lightelligence are pursuing complementary photonic approaches for memory disaggregation and I/O.
The interconnect bottleneck is a critical but underappreciated constraint on AI scaling. In clusters of thousands of GPUs training frontier models, processors spend 30-50% of their time waiting for data from other processors. Electrical interconnects consume significant power and generate heat, and their bandwidth doesn't scale with the exponentially growing demands of AI models. Photonic interconnects offer 10-100x better energy efficiency per bit transferred, enabling larger clusters to train larger models without proportionally increasing power consumption.
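The per-bit efficiency claim can be made concrete with a back-of-envelope calculation. The sketch below is illustrative only: the pJ/bit figures are assumed ballpark values chosen to sit inside the 10-100x range cited above, not measured numbers for any specific product.

```python
# Back-of-envelope: interconnect energy to move one large gradient payload
# across a GPU fabric. All constants are illustrative assumptions.

ELECTRICAL_PJ_PER_BIT = 5.0  # assumed: representative electrical SerDes link
PHOTONIC_PJ_PER_BIT = 0.5    # assumed: representative photonic I/O target

def interconnect_energy_joules(bytes_moved: float, pj_per_bit: float) -> float:
    """Energy in joules to move `bytes_moved` bytes at `pj_per_bit`."""
    bits = bytes_moved * 8
    return bits * pj_per_bit * 1e-12  # picojoules -> joules

# Moving 100 GB of gradients once across the fabric:
payload_bytes = 100e9
e_elec = interconnect_energy_joules(payload_bytes, ELECTRICAL_PJ_PER_BIT)
e_phot = interconnect_energy_joules(payload_bytes, PHOTONIC_PJ_PER_BIT)

print(f"electrical: {e_elec:.1f} J, photonic: {e_phot:.1f} J, "
      f"ratio: {e_elec / e_phot:.0f}x")
# -> electrical: 4.0 J, photonic: 0.4 J, ratio: 10x
```

A single transfer costs only joules, but frontier training repeats collective communication millions of times across thousands of links, which is how a 10x per-bit difference compounds into a meaningful share of cluster power.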
Lightmatter's dual strategy, with Passage for interconnect and Envise for photonic AI compute, positions it to address the communication and computation bottlenecks simultaneously. The company's interposer shipped in 2025, with the full chiplet following in 2026. If photonic interconnects become standard in AI data centers, they could cut the energy cost of AI training by an estimated 30-40% while enabling clusters of unprecedented scale. The underlying insight is non-obvious: the limiting factor for AI scaling isn't just the processors but the wires between them.