Envisioning is an emerging technology research institute and advisory.


Function Approximation

Using parameterized models to estimate unknown functions from observed data.

Year: 1988 · Generality: 838

Function approximation is the practice of using a parameterized model to represent an unknown or intractable function based on observed input-output pairs. In machine learning, this arises constantly: a regression model approximates a continuous target function, a classifier approximates a decision boundary, and a neural network approximates the mapping from raw inputs to predictions. The core assumption is that even though the true underlying function may be impossibly complex to express analytically, a sufficiently flexible model trained on enough data can capture its essential behavior.
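As a minimal sketch of this loop, the snippet below fits a low-degree polynomial to noisy samples of a function we pretend is unknown: observe input-output pairs, pick a parameterized family, and optimize its coefficients against a discrepancy measure. The target function, noise level, and cubic model family are illustrative assumptions, not from the source.

```python
import numpy as np

# Pretend sin(2x) is an unknown function we only see through noisy samples.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = np.sin(2 * x) + rng.normal(scale=0.05, size=x.size)

# Choose a parameterized model family (cubic polynomials) and fit its
# coefficients by least squares against the observed input-output pairs.
coeffs = np.polyfit(x, y, deg=3)
approx = np.polyval(coeffs, x)

# Measure how closely the fitted model tracks the true underlying function.
rmse = np.sqrt(np.mean((approx - np.sin(2 * x)) ** 2))
print(round(rmse, 3))
```

With enough samples relative to the model's flexibility, the fitted polynomial captures the essential shape of the target despite never seeing it directly.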

The mechanics of function approximation involve choosing a model family — polynomials, kernel methods, decision trees, or neural networks — and then optimizing its parameters to minimize some measure of discrepancy between the model's outputs and the true function values observed in training data. The choice of model family encodes inductive biases about the function's structure, such as smoothness or linearity, and heavily influences generalization. Regularization techniques help prevent the approximator from overfitting to noise rather than learning the true underlying function.

Function approximation is especially critical in reinforcement learning, where agents must estimate value functions or policies over enormous or continuous state spaces that cannot be enumerated explicitly. Tabular methods break down in these settings, and function approximators — particularly neural networks — allow agents to generalize across similar states. This combination, known as deep reinforcement learning, enabled landmark achievements such as superhuman game-playing agents and robotic control systems.
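A hedged sketch of the simplest case, linear value-function approximation with TD(0) on a toy random walk: instead of a lookup table, the state value is a dot product between learned weights and a feature vector. The feature map, step size, and episode count are assumptions of this example, not a reference implementation.

```python
import numpy as np

# Toy 1-D random walk: states 0..4, terminating at either end, with
# reward +1 for reaching the right end. The true values of states 1..3
# are 0.25, 0.5, and 0.75.
rng = np.random.default_rng(2)

def features(s):
    # Hypothetical feature map: normalized position plus a bias term.
    return np.array([s / 4.0, 1.0])

w = np.zeros(2)          # parameters of the linear value approximator
alpha, gamma = 0.1, 1.0  # step size and discount factor

for _ in range(2000):
    s = 2                                  # every episode starts in the middle
    while 0 < s < 4:
        s2 = s + rng.choice([-1, 1])       # random step left or right
        r = 1.0 if s2 == 4 else 0.0
        v_next = 0.0 if s2 in (0, 4) else w @ features(s2)
        # TD(0) update: nudge w toward the bootstrapped one-step target.
        td_error = r + gamma * v_next - w @ features(s)
        w += alpha * td_error * features(s)
        s = s2

print([round(float(w @ features(s)), 2) for s in (1, 2, 3)])
```

Because the value estimate is shared through the weights rather than stored per state, an update at one state generalizes to similar states — the property that makes approximation viable where tables are not.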

The theoretical backbone of neural function approximation is the universal approximation theorem, which establishes that feedforward networks with sufficient capacity can approximate any well-behaved function to arbitrary precision. While this result guarantees expressive power in principle, it says nothing about how easily that approximation can be learned from finite data or how well it will generalize. Understanding the gap between approximation capacity and practical learnability remains an active area of research, touching on generalization theory, optimization landscapes, and the geometry of high-dimensional function spaces.
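One way to see expressive power apart from learnability is to fix a hidden layer with random weights and fit only the linear readout by least squares. This random-feature sketch (the layer width, weight scales, and target function are illustrative assumptions) shows a one-hidden-layer tanh network approximating a smooth function closely even though the hidden layer is never trained:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-np.pi, np.pi, 200)[:, None]
y = np.sin(x).ravel()

# One hidden layer of 50 tanh units with fixed random weights and biases.
W = rng.normal(scale=2.0, size=(1, 50))
b = rng.normal(scale=2.0, size=50)
H = np.tanh(x @ W + b)          # hidden activations, shape (200, 50)

# Fit only the output layer by least squares.
c, *_ = np.linalg.lstsq(H, y, rcond=None)
rmse = np.sqrt(np.mean((H @ c - y) ** 2))
print(round(rmse, 4))
```

The gap the paragraph describes is visible here in miniature: capacity to represent the target is cheap, while questions of how hidden features should be learned from data, and how the result generalizes beyond the sampled interval, are where the difficulty lives.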

Related

  • Function Approximator: A model that estimates complex or unknown mappings from inputs to outputs. (Generality: 794)
  • Universal Approximation Theorem: A single hidden-layer neural network can approximate any continuous function arbitrarily well. (Generality: 720)
  • Universality Hypothesis: The claim that sufficiently expressive models can approximate any learnable function. (Generality: 720)
  • Parameterized Model: A model whose behavior is governed by learnable numerical values called parameters. (Generality: 875)
  • Radial Basis Function Network: A neural network using radial basis functions as hidden-layer activations for function approximation. (Generality: 563)
  • Objective Function: A mathematical function that quantifies what a machine learning model is optimizing. (Generality: 908)