Whether a problem can be solved by an algorithm that always terminates with a yes-or-no answer.
Decidability is a foundational concept in computability theory that classifies problems by whether an algorithm can reliably solve them. A problem is decidable if there exists a procedure that, given any valid input, always halts and returns a correct yes-or-no answer in finite time. Problems that lack such a procedure are called undecidable. The distinction matters enormously in computer science because it defines the hard boundary between what machines can and cannot do in principle, regardless of available computing power.
The most famous undecidable problem is the Halting Problem, proven undecidable by Alan Turing in 1936: no general algorithm can determine, for an arbitrary program and input, whether that program will eventually stop or run forever. This result, alongside Alonzo Church's independent work using lambda calculus, established the Church-Turing thesis — the claim that any effectively computable function can be computed by a Turing machine. Together, these insights defined the theoretical ceiling of algorithmic computation and remain central to how researchers reason about what AI systems can and cannot accomplish.
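Turing's proof can be sketched as a short diagonal argument: assume a halting oracle exists, then build a program that does the opposite of whatever the oracle predicts about it. The sketch below is illustrative only; `make_diagonal` and the toy oracle `always_loops` are hypothetical names, and real Python functions are of course not Turing machines.

```python
def make_diagonal(halts):
    """Turing's diagonal construction: given any claimed halting
    oracle halts(f, x) -> bool, build a function d that the oracle
    must be wrong about when d is applied to itself."""
    def d(f):
        if halts(f, f):
            while True:       # oracle says f(f) halts -> loop forever
                pass
        return "halted"       # oracle says f(f) loops -> halt at once
    return d

# Refuting one concrete (toy) oracle that always answers "loops":
always_loops = lambda f, x: False
d = make_diagonal(always_loops)
print(d(d))  # prints "halted", contradicting the oracle's prediction
```

Whatever answer a candidate oracle gives about `d(d)`, the construction makes that answer wrong, so no correct general oracle can exist.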
In machine learning and AI, decidability surfaces in several important ways. Questions about whether a learning algorithm will converge, whether a neural network satisfies a given safety property, or whether a logical inference system can answer all queries within a formal language all touch on decidability. Formal verification of AI systems — a growing concern in safety-critical applications — frequently confronts undecidable problems, forcing researchers to work with approximations, restricted problem classes, or semi-decidable methods that may not terminate on all inputs.
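A common practical workaround is bounded search: run the computation under a step budget and report "halts", or "unknown" when the budget runs out, never "runs forever". The sketch below assumes a toy model of programs as transition functions (`state -> next state`, with `None` signalling termination); `halts_within` and `collatz` are illustrative names, and the Collatz process is a fitting example because whether it halts on every input is itself an open question.

```python
def halts_within(step, state, max_steps):
    """Bounded-search approximation of the undecidable halting
    question: simulate at most max_steps transitions.
    Returns True if the program terminates within the budget,
    or None (genuinely unknown) if the budget is exhausted."""
    for _ in range(max_steps):
        state = step(state)
        if state is None:     # program signalled termination
            return True
    return None               # inconclusive, not "runs forever"

def collatz(n):
    """One Collatz transition; terminate when we reach 1."""
    if n == 1:
        return None
    return n // 2 if n % 2 == 0 else 3 * n + 1

print(halts_within(collatz, 27, 1000))  # True: halts within budget
print(halts_within(collatz, 27, 10))    # None: budget too small
```

Note the asymmetry: a "yes" answer is definitive, but "unknown" never becomes "no". This is exactly the shape of a semi-decidable method, which may confirm a property but cannot refute it in general.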
Understanding decidability helps practitioners set realistic expectations about automated reasoning and model verification. When a problem is undecidable, no amount of engineering can produce a perfect general solution; the best achievable outcomes involve heuristics, bounded search, or probabilistic guarantees. This theoretical grounding is increasingly relevant as AI systems are deployed in high-stakes domains where correctness and reliability must be formally characterized rather than merely assumed.