AI-Based Signal Processing for Hearing

Technologies enhancing speech clarity in noisy environments, featuring Denoiser, Beamformer, Echo Canceller, and Coneformer™ for voice isolation by angle and distance, optimized for low-latency, low-power DSPs/SoCs such as Airoha, Qualcomm, NXP, BES, Wuqi, ARM, and STMicro.
AI-based signal processing for hearing uses advanced algorithms including denoisers, beamformers, echo cancellers, and spatial audio processing to enhance speech clarity in challenging acoustic environments. These technologies work together to isolate target speech from background noise, cancel echoes and reverberation, and focus on voices arriving from specific directions or distances. The Coneformer technology enables voice isolation by angle and distance, allowing a hearing device to lock onto a specific speaker even in crowded, noisy environments. The algorithms are optimized for the low-latency, low-power digital signal processors (DSPs) and systems-on-chip (SoCs) commonly used in hearing aids, earbuds, and other audio devices.
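The directional focusing described above is conventionally built on microphone-array beamforming. The sketch below is a minimal delay-and-sum beamformer for a linear array, purely illustrative and not the Coneformer algorithm: the function name, array geometry, and parameters are all assumptions for this example. Each channel is phase-shifted in the frequency domain to compensate for its time of arrival from the chosen steering angle, then the channels are averaged so that sound from that angle adds coherently while off-axis sound partially cancels.

```python
import numpy as np

def delay_and_sum(mic_signals, mic_positions, angle_deg, fs, c=343.0):
    """Steer a linear mic array toward angle_deg (0 = broadside).

    mic_signals:   (n_mics, n_samples) array of time-domain channels
    mic_positions: per-mic position along the array axis, in meters
    fs:            sample rate in Hz
    c:             speed of sound in m/s
    """
    mic_signals = np.asarray(mic_signals, dtype=float)
    n_mics, n_samples = mic_signals.shape
    theta = np.deg2rad(angle_deg)
    # Time-of-arrival offset of each mic for a plane wave from angle theta
    delays = np.asarray(mic_positions) * np.sin(theta) / c  # seconds
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    out = np.zeros(n_samples)
    for sig, tau in zip(mic_signals, delays):
        # Fractional delay applied as a linear phase shift per frequency bin
        spectrum = np.fft.rfft(sig) * np.exp(2j * np.pi * freqs * tau)
        out += np.fft.irfft(spectrum, n=n_samples)
    return out / n_mics
```

For a source at the steering angle the delayed channels align exactly, so averaging leaves the target signal intact; interferers from other angles arrive with mismatched phases and are attenuated. Production hearing devices replace this fixed steering with adaptive or learned spatial filters, but the delay-and-sum structure is the common starting point.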

The technology dramatically improves speech understanding in noisy restaurants, crowded spaces, phone calls, and other challenging listening situations. Low-latency processing keeps the sound natural, with no perceptible delay between lip movement and audio, while low-power optimization lets battery-powered devices run these algorithms continuously. The spatial processing capabilities allow users to focus on specific speakers or conversations, making group conversations and meetings far more accessible. Applications include hearing aids, true wireless earbuds, communication devices, and any audio system where speech clarity is critical. The result is hearing devices that remain effective in real-world conditions, benefiting both people with hearing challenges and anyone listening in noisy environments.
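The low-latency, always-on operation described above is typically achieved with frame-based processing: audio is handled in short overlapping windows so each output frame is ready within milliseconds. As a simple stand-in for the AI denoiser, the sketch below implements classical spectral subtraction with a windowed overlap-add pipeline; the function name, frame sizes, and the noise-estimation strategy (noise profile taken from the first few frames) are illustrative assumptions, not the product's method.

```python
import numpy as np

def spectral_gate(noisy, frame_len=256, hop=128, noise_frames=10, alpha=2.0):
    """Frame-based spectral subtraction (a classical denoiser sketch).

    Estimates a noise magnitude profile from the first `noise_frames`
    frames, subtracts a scaled copy of it from every frame's spectrum
    (with a spectral floor to avoid negative magnitudes), and
    reconstructs the signal by windowed overlap-add.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(noisy) - frame_len) // hop
    frames = np.stack(
        [noisy[i * hop:i * hop + frame_len] * window for i in range(n_frames)]
    )
    spectra = np.fft.rfft(frames, axis=1)
    noise_mag = np.abs(spectra[:noise_frames]).mean(axis=0)  # noise profile
    mag = np.abs(spectra)
    # Subtract the noise estimate; keep at least 10% of the original
    # magnitude as a floor to limit musical-noise artifacts
    clean_mag = np.maximum(mag - alpha * noise_mag, 0.1 * mag)
    clean = np.fft.irfft(
        clean_mag * np.exp(1j * np.angle(spectra)), n=frame_len, axis=1
    )
    # Overlap-add with window-squared normalization
    out = np.zeros(len(noisy))
    norm = np.zeros(len(noisy))
    for i in range(n_frames):
        out[i * hop:i * hop + frame_len] += clean[i] * window
        norm[i * hop:i * hop + frame_len] += window ** 2
    return out / np.maximum(norm, 1e-8)
```

Because each frame is only a few milliseconds long, the algorithmic latency is one frame plus the hop, which is why this structure, also used by neural denoisers operating on spectral frames, suits real-time hearing devices. Deployed systems replace the fixed subtraction rule with a learned per-bin gain.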

Technology Readiness Level
5/9 (Validated)
Category
Wearables Health Sensing