Envisioning is an emerging technology research institute and advisory.

2011 — 2026


Computational Complexity Theory

A framework classifying problems by the computational resources required to solve them.

Year: 1971
Generality: 875

Computational complexity theory is a branch of theoretical computer science that categorizes problems according to the time, space, or other resources a computational model needs to solve them. Problems are grouped into complexity classes — most famously P (solvable in polynomial time) and NP (verifiable in polynomial time) — and the relationships between these classes define the landscape of what is efficiently computable. The central open question, whether P equals NP, remains one of the most consequential unsolved problems in all of mathematics and science.
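The asymmetry between solving and verifying can be made concrete with subset-sum, a classic NP-complete problem. The sketch below is a minimal illustration (function names are ours, not a standard library API): checking a proposed answer is cheap, while the only known exact approach scans exponentially many candidates.

```python
from collections import Counter
from itertools import combinations

def verify(nums, target, certificate):
    # NP side: checking a proposed sub-multiset takes polynomial time.
    return sum(certificate) == target and not Counter(certificate) - Counter(nums)

def brute_force(nums, target):
    # Solving side: no known polynomial algorithm; this scans all 2^n subsets.
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None
```

Verification stays cheap as inputs grow; exhaustive search does not, and that gap is precisely what the P versus NP question formalizes.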

For machine learning and AI, complexity theory plays a foundational role in determining which learning problems are tractable. Training certain neural network architectures, solving constraint satisfaction problems, and optimizing combinatorial objectives can all be shown to be NP-hard in the worst case, meaning no known algorithm can guarantee efficient solutions at scale. These hardness results shape how researchers approach algorithm design — motivating approximation algorithms, heuristics, and probabilistic methods that trade exactness for speed.
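One standard example of trading exactness for speed is the greedy heuristic for set cover, an NP-hard problem where repeatedly picking the set that covers the most uncovered elements yields a logarithmic approximation guarantee. A minimal sketch (names are illustrative):

```python
def greedy_set_cover(universe, subsets):
    """Greedy heuristic for NP-hard set cover: each round, pick the subset
    covering the most still-uncovered elements (ln n approximation)."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("universe cannot be covered by the given subsets")
        cover.append(best)
        uncovered -= best
    return cover
```

The result may not be optimal, but it is computed in polynomial time with a provable bound on how far from optimal it can be, which is the typical shape of an answer to worst-case hardness.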

The theory also informs the study of sample complexity and PAC (Probably Approximately Correct) learning, where questions about how much data is needed to learn a concept class intersect directly with computational hardness. A concept class might be statistically learnable with enough data but computationally intractable to learn efficiently, a distinction that has real consequences for the design of practical ML systems. Cryptographic hardness assumptions, themselves rooted in complexity theory, also underpin privacy-preserving machine learning techniques like differential privacy and secure multiparty computation.
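The statistical side of this distinction has a clean closed form: for a finite hypothesis class H, any learner that outputs a hypothesis consistent with the training data needs at most on the order of (1/ε)(ln|H| + ln(1/δ)) samples to be probably approximately correct. The helper below is our own small wrapper around that standard PAC bound:

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Classic PAC sample-complexity bound for a finite hypothesis class:
    m >= (1/epsilon) * (ln|H| + ln(1/delta)) suffices for a consistent
    learner to achieve error <= epsilon with probability >= 1 - delta."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)
```

Note that the bound says nothing about the cost of *finding* a consistent hypothesis; that search can be NP-hard even when the required sample size is modest, which is exactly the statistical-versus-computational gap described above.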

Beyond worst-case analysis, average-case complexity and smoothed analysis have become increasingly relevant as practitioners observe that many NP-hard problems are routinely solved in practice. Understanding why algorithms like stochastic gradient descent work so well on non-convex loss surfaces — despite theoretical hardness results — remains an active area where complexity theory and deep learning intersect. Complexity theory thus serves both as a ceiling on what AI can achieve in principle and as a guide for building systems that work well within those limits.
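A toy version of this phenomenon: plain noisy gradient descent reliably lands in a good minimum of a non-convex function, even though nothing in worst-case theory guarantees it. This is a hypothetical one-dimensional example, not a real loss surface:

```python
import random

def noisy_gd(grad, x0, lr=0.01, steps=5000, noise=0.1, seed=0):
    """Minimal noisy gradient descent: the stochastic gradient is modeled
    as the true gradient plus zero-mean Gaussian noise."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x -= lr * (grad(x) + rng.gauss(0, noise))
    return x

# Non-convex f(x) = x^4 - 2x^2, with two global minima at x = +1 and x = -1.
grad_f = lambda x: 4 * x**3 - 4 * x
```

Despite the non-convexity, the iterate settles near one of the two minima; explaining when and why this benign behavior extends to high-dimensional neural network losses is the open question the paragraph above describes.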

Related

Linear Complexity

An algorithm whose runtime or memory usage scales directly with input size.

Generality: 796

Verifier Theory

A framework for validating solutions to computational problems within complexity classes.

Generality: 520

Complex Interaction

Non-linear, emergent behaviors arising from interconnected components within AI systems.

Generality: 694

AIT (Algorithmic Information Theory)

A framework quantifying data complexity via the shortest program that produces it.

Generality: 752

Quantum Computing

A computing paradigm using quantum mechanical phenomena to perform certain calculations exponentially faster than classical computers.

Generality: 792

Kolmogorov Complexity

The length of the shortest program that produces a given string as output.

Generality: 760