A model that estimates complex or unknown mappings from inputs to outputs.
A function approximator is any computational model that learns to estimate an unknown or intractable mapping between inputs and outputs from data. Rather than deriving an exact analytical form for a target function, a function approximator fits a parameterized model to observed input-output pairs, capturing the underlying relationship as closely as possible. Common examples include neural networks, decision trees, radial basis function networks, and polynomial regression — each offering different tradeoffs between expressiveness, sample efficiency, and computational cost.
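Of the examples above, polynomial regression is the simplest to show end to end. The sketch below fits a low-degree polynomial to noisy samples of a target function (sin here stands in for a mapping with no convenient closed form); the degree, noise level, and random seed are illustrative choices, not prescribed by the text.

```python
import numpy as np

# Noisy observations of an "unknown" target mapping (sin as a stand-in).
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200)
y = np.sin(x) + rng.normal(scale=0.05, size=x.shape)

# Fit a degree-5 polynomial: a small parameterized function approximator
# trained by least squares on observed input-output pairs.
coeffs = np.polyfit(x, y, deg=5)
y_hat = np.polyval(coeffs, x)

# Measure how closely the approximator captures the underlying relationship.
mse = np.mean((y_hat - np.sin(x)) ** 2)
print(f"MSE against the noise-free target: {mse:.5f}")
```

Swapping the polynomial for a neural network or a tree ensemble changes the expressiveness and cost tradeoffs the paragraph mentions, but not the overall recipe: parameterize, fit to pairs, evaluate the discrepancy.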
The mechanics of function approximation typically involve minimizing a loss that measures the discrepancy between the approximator's predictions and the true target values. In supervised learning, this means fitting to labeled training examples. In reinforcement learning, function approximators play a central role in scaling algorithms to large or continuous state spaces: rather than storing a value or policy for every possible state in a lookup table, an approximator generalizes across states, enabling agents to handle problems that would otherwise be computationally intractable. Deep Q-Networks (DQN), for instance, use a neural network to approximate the action-value function across high-dimensional inputs like raw pixels.
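The loss-minimization loop described above can be sketched in a few lines. This is a minimal illustration, not a DQN: a linear model fit by gradient descent on mean squared error, with the data-generating parameters (3 and -1), learning rate, and iteration count chosen purely for the demo.

```python
import numpy as np

# Synthetic supervised data from a known linear mapping y = 3x - 1 + noise.
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 256)
y = 3.0 * x - 1.0 + rng.normal(scale=0.1, size=x.shape)

# Parameterized approximator: y_hat = w*x + b.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(200):
    err = (w * x + b) - y
    # Gradient of the mean squared error with respect to w and b.
    w -= lr * 2.0 * np.mean(err * x)
    b -= lr * 2.0 * np.mean(err)

print(f"learned w={w:.2f}, b={b:.2f}")  # should approach the true (3, -1)
```

A DQN follows the same pattern at much larger scale: the linear model becomes a deep network, the targets become bootstrapped action-value estimates, and the gradient step is taken on minibatches sampled from replayed experience.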
The choice of approximator architecture matters enormously. Universal approximation theorems establish that sufficiently large neural networks can represent any continuous function on a compact domain to arbitrary precision, but this theoretical guarantee says nothing about how efficiently a network learns from finite data. Inductive biases — such as convolutional structure for spatial data or recurrent connections for sequences — help approximators generalize more effectively by encoding prior knowledge about the problem domain. Regularization techniques, including dropout and weight decay, further prevent overfitting when data is scarce.
Function approximators are foundational to modern machine learning. Nearly every practical ML system — from image classifiers to language models to robotic controllers — is, at its core, a function approximator trained to map raw inputs to useful outputs. Their ability to generalize from examples to unseen inputs is what makes data-driven AI viable, and advances in approximator design, particularly deep learning, have driven much of the field's progress over the past two decades.