Neural networks that incorporate higher-order spectral features to capture nonlinear signal interactions.
Bispectral Neural Networks (BNNs) are a specialized class of neural architectures that integrate bispectral analysis directly into the learning pipeline. The bispectrum is a higher-order spectral representation that characterizes the statistical dependencies between triplets of frequency components in a signal. Unlike the standard power spectrum, which discards phase information, the bispectrum preserves phase relationships and captures nonlinear interactions such as quadratic phase coupling, making it sensitive to non-Gaussian structure that conventional second-order spectral methods miss entirely. BNNs exploit this richer representation by feeding bispectral features into neural network layers, either as preprocessed inputs or through learned bispectral transformations embedded within the architecture itself.
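The triple-product form of the estimate makes this concrete: the bispectrum at a frequency pair (f1, f2) is the average of X(f1) X(f2) X*(f1 + f2) over realizations or segments (the so-called direct method). The sketch below is a minimal NumPy implementation of that segment-averaged estimator; the function name, segment length, and windowing choices are illustrative and not drawn from any particular BNN implementation.

```python
import numpy as np

def bispectrum(x, nperseg=256, fs=1.0):
    """Minimal direct (FFT-based) bispectrum estimate.

    Averages the triple product X(f1) * X(f2) * conj(X(f1 + f2))
    over non-overlapping, windowed segments of the signal and returns
    a complex 2-D map over the bifrequency plane (f1, f2).
    """
    x = np.asarray(x, dtype=float)
    nfft = nperseg
    nseg = len(x) // nperseg
    k = nfft // 4                      # keep f1 + f2 below the Nyquist frequency
    window = np.hanning(nperseg)
    idx = np.arange(k)
    B = np.zeros((k, k), dtype=complex)
    for i in range(nseg):
        seg = x[i * nperseg:(i + 1) * nperseg]
        X = np.fft.fft((seg - seg.mean()) * window, nfft)
        Xk = X[:k]
        # Accumulate the triple product on the reduced bifrequency grid
        B += np.outer(Xk, Xk) * np.conj(X[idx[:, None] + idx[None, :]])
    freqs = idx * fs / nfft
    return B / nseg, freqs
```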
The core mechanism involves computing the bispectrum of an input signal, typically via the Fourier transform of the third-order cumulant, and using the resulting two-dimensional frequency map as a feature space. Neural network components then learn discriminative patterns within this space, effectively combining the statistical expressiveness of higher-order analysis with the representational power of deep learning. This hybrid approach allows the model to detect subtle phase couplings, harmonic relationships, and nonlinear resonances that would be invisible to networks operating on raw signals or standard spectral features alone.
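As one plausible instance of the "preprocessed inputs" variant described above, the log-magnitude of that two-dimensional bispectrum map can be treated as a single-channel image and passed to a small convolutional network. The PyTorch module below is a sketch under that assumption; the class name, layer sizes, and the log1p scaling are illustrative choices rather than a standard BNN architecture.

```python
import torch
import torch.nn as nn

class BispectralClassifier(nn.Module):
    """Toy classifier over a precomputed bispectrum magnitude map."""

    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),     # fixed-size output regardless of map size
        )
        self.head = nn.Linear(16 * 4 * 4, n_classes)

    def forward(self, bispec_map):
        # bispec_map: (batch, k, k) real-valued, e.g. log1p(|B|) from the estimator above
        z = self.features(bispec_map.unsqueeze(1))
        return self.head(z.flatten(1))

# Example: turn the complex bispectrum into a network input
# B, _ = bispectrum(signal, nperseg=256)
# x = torch.log1p(torch.from_numpy(np.abs(B)).float()).unsqueeze(0)
# logits = BispectralClassifier(n_classes=2)(x)
```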
BNNs have found practical application in domains where signals exhibit complex, non-Gaussian, or nonstationary behavior. In biomedical signal processing, they have been applied to EEG and ECG analysis, where nonlinear neural dynamics produce characteristic bispectral signatures. In mechanical fault detection, bispectral features reveal gear mesh interactions and bearing defects that linear methods overlook. Communications and radar signal classification have also benefited, as bispectral representations are inherently robust to additive Gaussian noise, whose third-order cumulants vanish and therefore contribute nothing to the bispectrum in expectation.
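That Gaussian-noise suppression can be checked numerically. The script below reuses the bispectrum helper sketched earlier, builds a quadratically phase-coupled triplet (with illustrative frequencies chosen to fall on exact FFT bins), adds white Gaussian noise, and compares the resulting bispectrum peak against that of noise alone; the specific frequencies and amplitudes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, nperseg, nseg = 1000.0, 256, 64
t = np.arange(nseg * nperseg) / fs

# Quadratically phase-coupled triplet: the 156.25 Hz component's phase is
# the sum of the phases at 62.5 Hz and 93.75 Hz (all on exact FFT bins).
p1, p2 = rng.uniform(0, 2 * np.pi, size=2)
sig = (np.cos(2 * np.pi * 62.5 * t + p1)
       + np.cos(2 * np.pi * 93.75 * t + p2)
       + np.cos(2 * np.pi * 156.25 * t + p1 + p2))
noise = rng.normal(scale=1.0, size=t.size)

B_coupled, _ = bispectrum(sig + noise, nperseg=nperseg, fs=fs)
B_noise, _ = bispectrum(noise, nperseg=nperseg, fs=fs)

# The coupled triplet yields a pronounced peak near (62.5 Hz, 93.75 Hz);
# the noise-only estimate stays comparatively flat and keeps shrinking
# as more segments are averaged.
print(np.abs(B_coupled).max(), np.abs(B_noise).max())
```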
The relevance of bispectral methods to machine learning grew substantially in the late 2010s as researchers sought principled ways to inject domain-specific signal structure into neural architectures. BNNs represent a broader trend of physics- and signal-theory-informed deep learning, where mathematical properties of the problem domain are encoded into the model rather than left entirely to data-driven discovery. Their primary limitation is computational cost: the bifrequency plane contains on the order of N² points for an N-point transform, so bispectrum estimation scales quadratically with frequency resolution, making efficient approximations an active area of research.
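A quick back-of-the-envelope illustration of that scaling, using the same nfft // 4 reduction as the earlier sketch: doubling the FFT length quadruples the number of bifrequency points that must be estimated and stored.

```python
# Quadratic growth of the (reduced) bifrequency map with FFT length
for nfft in (256, 512, 1024, 2048):
    k = nfft // 4
    print(f"nfft={nfft:5d}  map={k}x{k}  ({k * k} points)")
```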