Mapping inputs through sinusoidal functions to help models capture complex, periodic patterns.
Fourier features are a machine learning technique that transforms raw input data into a higher-dimensional representation using sine and cosine basis functions derived from Fourier analysis. Rather than being fed directly into a model, coordinates or raw signals are projected through a set of sinusoidal functions at various frequencies, producing a richer feature vector, as in the sketch below. This mapping allows models, particularly neural networks, to represent oscillatory, periodic, and high-frequency structure in data that would otherwise be difficult to capture with standard linear or polynomial transformations.
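As a concrete illustration, the NumPy sketch below maps each input coordinate to sine and cosine features at several frequencies. The function name and the power-of-two frequency ladder are illustrative choices, not part of any fixed specification; the number and spacing of frequencies are hyperparameters.

```python
import numpy as np

def fourier_features(x, num_frequencies=6):
    """Map inputs to [sin(2^k * pi * x), cos(2^k * pi * x)] features.

    The power-of-two frequency ladder used here is one common convention;
    frequencies can also be sampled randomly or tuned per task.
    """
    x = np.atleast_2d(x)                       # shape (n_samples, n_dims)
    freqs = 2.0 ** np.arange(num_frequencies)  # 1, 2, 4, ..., 2^(L-1)
    angles = np.pi * x[..., None] * freqs      # shape (n_samples, n_dims, L)
    feats = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return feats.reshape(x.shape[0], -1)       # shape (n_samples, n_dims * 2L)

# Example: encode 2-D coordinates in [0, 1]^2 before passing them to a model.
coords = np.random.rand(4, 2)
encoded = fourier_features(coords, num_frequencies=6)
print(encoded.shape)  # (4, 24): each input dimension expands to 12 sinusoidal features
```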
The practical motivation for Fourier features became especially clear in the context of neural radiance fields (NeRF) and implicit neural representations, where networks must reconstruct fine spatial detail from continuous coordinate inputs. Without positional encoding via Fourier features, multilayer perceptrons tend to exhibit a strong spectral bias, preferring low-frequency solutions and struggling to fit high-frequency details. By explicitly encoding inputs at multiple frequency scales, Fourier features counteract this bias and dramatically improve a network's ability to learn sharp, detailed functions. A related approach, random Fourier features, uses randomly sampled frequencies to approximate kernel functions such as the RBF kernel, enabling scalable kernel methods for large datasets.
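To make the kernel-approximation idea concrete, the following minimal sketch follows the Rahimi and Recht style of random Fourier features for a Gaussian (RBF) kernel with unit lengthscale; the helper name `make_rff_map` and the specific feature counts are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_rff_map(dim, num_features=2048, lengthscale=1.0):
    """Build a random Fourier feature map z(.) whose inner products
    approximate the RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * lengthscale^2)).
    """
    # Frequencies sampled from the Gaussian spectral density of the RBF kernel,
    # plus random phase offsets; these must be reused for every input compared.
    W = rng.normal(scale=1.0 / lengthscale, size=(dim, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return lambda X: np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

# Sanity check: compare the approximate and exact kernel matrices on a few points.
X = rng.normal(size=(5, 3))
z = make_rff_map(dim=3, num_features=4096)
approx = z(X) @ z(X).T
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
exact = np.exp(-0.5 * sq_dists)
print(np.max(np.abs(approx - exact)))  # error shrinks as num_features grows
```

Note that the sampled frequencies and phases define the feature map itself, so the same draw must be applied to training and test data; increasing the number of random features tightens the kernel approximation at the cost of a wider feature vector.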
The technique connects to classical signal processing theory but found renewed relevance in machine learning around 2020, when Tancik et al. demonstrated that Fourier positional encodings were critical to the success of coordinate-based neural networks. Since then, Fourier features have become a standard component in architectures dealing with spatial data, audio, and physics simulations, as well as in transformer models where positional encodings serve a similar frequency-decomposition role.
Fourier features matter because they give practitioners a principled, computationally lightweight way to inject inductive bias about frequency content into a model. Instead of hoping a deep network will discover periodic structure on its own, the encoding makes that structure explicit from the first layer. This leads to faster convergence, better generalization on tasks involving spatial or temporal regularity, and improved fidelity in generative and reconstruction tasks where preserving fine-grained detail is essential.