A heuristic that never overestimates goal cost, guaranteeing optimal search solutions.
An admissible heuristic is a function used in search algorithms that estimates the cost of reaching a goal from a given state, with the strict requirement that this estimate never exceeds the true cost. This non-overestimation property is the defining constraint: the heuristic may underestimate the remaining cost or match it exactly, but it must never overestimate it. In practice, this means the algorithm is always working with a lower bound on the remaining cost, which is the key property that allows search algorithms like A* to guarantee they find the optimal solution.
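As a concrete check, admissibility can be verified on a small example by computing the exact remaining cost h*(n) for every node and confirming the heuristic never exceeds it. The graph, edge weights, and both candidate heuristics in this sketch are illustrative assumptions, not drawn from any particular application:

```python
from heapq import heappush, heappop

# Hypothetical weighted directed graph: GRAPH[u][v] is the cost of edge u -> v.
GRAPH = {
    "A": {"B": 2, "C": 5},
    "B": {"C": 2, "D": 4},
    "C": {"D": 1},
    "D": {},
}

def true_costs_to_goal(graph, goal):
    """Exact remaining cost h*(n) for every node, via Dijkstra on reversed edges."""
    rev = {n: {} for n in graph}
    for u, nbrs in graph.items():
        for v, w in nbrs.items():
            rev[v][u] = w
    dist = {goal: 0}
    pq = [(0, goal)]
    while pq:
        d, u = heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in rev[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heappush(pq, (dist[v], v))
    return dist

def is_admissible(h, graph, goal):
    """Admissible iff h(n) <= h*(n) for every node that can reach the goal."""
    hstar = true_costs_to_goal(graph, goal)
    return all(h[n] <= hstar[n] for n in hstar)

h_ok  = {"A": 4, "B": 3, "C": 1, "D": 0}  # never exceeds the true costs 5, 3, 1, 0
h_bad = {"A": 9, "B": 3, "C": 1, "D": 0}  # overestimates at A, so inadmissible
```

Here is_admissible(h_ok, GRAPH, "D") holds while is_admissible(h_bad, GRAPH, "D") fails, because h_bad claims a remaining cost of 9 at A when the cheapest path A -> B -> C -> D costs only 5.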
The mechanism behind admissibility becomes clear when examining how A* selects paths to explore. A* maintains a priority queue ordered by f(n) = g(n) + h(n), where g(n) is the known cost to reach node n and h(n) is the heuristic estimate from n to the goal. Because an admissible heuristic never inflates h(n), every node along an optimal path carries an f-value no greater than the optimal solution cost, so the algorithm cannot prematurely dismiss a path that might turn out to be optimal. When A* expands the goal node, it is therefore guaranteed that no cheaper path exists, a proof that depends entirely on the admissibility of h. Without this property, the algorithm might settle for a suboptimal solution.
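The selection mechanism described above can be sketched in a compact A* implementation. The grid world, unit step costs, and Manhattan heuristic below are assumptions chosen to keep the example self-contained:

```python
from heapq import heappush, heappop

def astar(start, goal, neighbors, h):
    """A*: expand nodes in order of f(n) = g(n) + h(n); h must be admissible."""
    g = {start: 0}
    parent = {start: None}
    pq = [(h(start), start)]  # priority is f(n) = g(n) + h(n)
    while pq:
        f, n = heappop(pq)
        if n == goal:  # with admissible h, popping the goal guarantees optimality
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return g[goal], path[::-1]
        for cost, m in neighbors(n):
            new_g = g[n] + cost
            if new_g < g.get(m, float("inf")):  # found a cheaper route to m
                g[m] = new_g
                parent[m] = n
                heappush(pq, (new_g + h(m), m))
    return None  # goal unreachable

# Illustrative setup: unit-cost moves on an empty 5x5 grid.
GOAL = (4, 4)

def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield 1, (nx, ny)

def manhattan(p):
    return abs(p[0] - GOAL[0]) + abs(p[1] - GOAL[1])

cost, path = astar((0, 0), GOAL, grid_neighbors, manhattan)
```

On this grid the returned cost is 8, the minimum number of unit moves from (0, 0) to (4, 4). Note that the search terminates when the goal is popped from the queue, not when it is first generated: terminating on generation would forfeit the optimality guarantee.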
A closely related concept is consistency, also called monotonicity, which requires that the heuristic estimate for any node is no greater than the cost of moving to a neighbor plus the heuristic estimate from that neighbor: h(n) ≤ c(n, n') + h(n') for every edge from n to n'. Consistency implies admissibility, and consistent heuristics additionally ensure that A* never needs to re-expand a node, improving efficiency. Common examples of admissible heuristics include straight-line (Euclidean) distance in geographic pathfinding, and Manhattan distance or the number of misplaced tiles in the 15-puzzle, all of which provably underestimate the true solution cost.
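On a small state space the consistency condition can be checked exhaustively. The sketch below, assuming a 4-connected grid with unit step costs and an arbitrarily chosen goal cell, tests h(n) ≤ c(n, n') + h(n') over every edge:

```python
from itertools import product

SIZE = 6
GOAL = (3, 3)  # arbitrary goal cell for illustration

def manhattan(p):
    return abs(p[0] - GOAL[0]) + abs(p[1] - GOAL[1])

def is_consistent(h):
    """Check h(n) <= c(n, n') + h(n') for every unit-cost edge in the grid."""
    for x, y in product(range(SIZE), repeat=2):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < SIZE and 0 <= ny < SIZE:
                if h((x, y)) > 1 + h((nx, ny)):  # step cost c(n, n') = 1
                    return False
    return True
```

Manhattan distance passes because each unit move changes the estimate by at most 1, while an inflated variant such as 3 * manhattan(p) fails: its estimate can drop by 3 across an edge of cost 1.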
Admissible heuristics matter because the quality of the heuristic directly governs search efficiency. A heuristic that is admissible but very loose — say, always returning zero — technically satisfies the requirement but forces the algorithm to explore nearly every node. Tighter admissible heuristics dramatically reduce the search space, making the difference between tractable and intractable problems in domains like robotics, automated planning, and game AI.
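The effect of heuristic tightness can be measured directly by counting node expansions. In this sketch, under the assumption of an empty 10x10 grid with unit step costs, the same A* loop runs once with the zero heuristic (which degenerates to Dijkstra's algorithm) and once with Manhattan distance:

```python
from heapq import heappush, heappop

SIZE = 10
START, GOAL = (0, 0), (9, 0)

def expansions(h):
    """Run A* from START to GOAL and return how many nodes it expanded."""
    g = {START: 0}
    pq = [(h(START), START)]
    closed = set()
    while pq:
        _, n = heappop(pq)
        if n in closed:
            continue  # stale queue entry
        closed.add(n)
        if n == GOAL:
            return len(closed)
        x, y = n
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            m = (x + dx, y + dy)
            if 0 <= m[0] < SIZE and 0 <= m[1] < SIZE:
                if g[n] + 1 < g.get(m, float("inf")):
                    g[m] = g[n] + 1
                    heappush(pq, (g[m] + h(m), m))
    return len(closed)

uninformed = expansions(lambda p: 0)  # admissible but maximally loose
informed = expansions(lambda p: abs(p[0] - GOAL[0]) + abs(p[1] - GOAL[1]))
```

With the zero heuristic the search floods outward in all directions, expanding most of the grid before reaching the goal; Manhattan distance steers it straight along the bottom row, so informed is far smaller than uninformed even though both return the same optimal path cost.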