Guided search methods that use domain knowledge to find solutions efficiently.
Heuristic search techniques are algorithms that use problem-specific knowledge—often called "rules of thumb"—to guide exploration through a search space more efficiently than exhaustive methods. Rather than evaluating every possible state, these techniques prioritize which paths or nodes to explore based on an estimated measure of their promise. This estimation is the heuristic function, and its quality directly determines how much computation can be saved. Classic examples include A* search, which balances the known cost to reach a node with an estimated cost to the goal, and greedy best-first search, which pursues whichever state looks most immediately promising regardless of the path taken to get there.
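To make the idea of a heuristic function concrete, here is a minimal example for pathfinding on a 4-connected grid (the grid setting and coordinate convention are illustrative, not from the text above). Manhattan distance never overestimates the number of unit-cost moves remaining, so it is an admissible heuristic for this problem:

```python
def manhattan(node, goal):
    """Admissible heuristic for 4-connected grid pathfinding with
    unit-cost moves: it never overestimates the true remaining cost."""
    return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

manhattan((0, 0), (3, 4))  # estimates that at least 7 moves remain
```

The estimate of 7 is exact here because no obstacles block the way; with obstacles, the true cost can only be higher, which is precisely what admissibility permits.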
The mechanics of heuristic search typically involve maintaining a priority queue of candidate states, ordered by some scoring function that incorporates the heuristic. A* search, for instance, scores each node as f(n) = g(n) + h(n), where g(n) is the actual cost from the start and h(n) is the heuristic estimate to the goal. When h(n) never overestimates the true cost—a property called admissibility—A* is guaranteed to find the optimal solution. Other methods make different trade-offs: iterative deepening A* (IDA*) preserves optimality while using far less memory, beam search keeps only the most promising candidates at each depth and so may miss the optimum, and local-search techniques such as simulated annealing give up optimality guarantees in exchange for fast practical performance on very large spaces.
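The priority-queue mechanics described above can be sketched as follows. This is a minimal A* implementation, assuming a `neighbors(node)` callback that yields `(successor, step_cost)` pairs and a heuristic `h(node)`; both names, and the grid example below, are illustrative choices rather than anything prescribed by the text:

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* search: pop the frontier node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start)]   # priority queue of (f, g, node)
    came_from = {start: None}           # parent links for path reconstruction
    g_cost = {start: 0}                 # best known cost-from-start per node
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            # Walk parent links back to the start to recover the path.
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1], g
        if g > g_cost[node]:
            continue  # stale queue entry; a cheaper route was found later
        for nbr, step in neighbors(node):
            new_g = g + step
            if nbr not in g_cost or new_g < g_cost[nbr]:
                g_cost[nbr] = new_g
                came_from[nbr] = node
                heapq.heappush(frontier, (new_g + h(nbr), new_g, nbr))
    return None, float("inf")  # goal unreachable

# Usage: unit-cost pathfinding on a 5x5 grid with a Manhattan heuristic.
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def grid_neighbors(node):
    x, y = node
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

path, cost = astar((0, 0), (4, 4), grid_neighbors,
                   lambda n: manhattan(n, (4, 4)))
# cost == 8: the Manhattan distance is exact on an obstacle-free grid.
```

Because Manhattan distance is admissible here, the returned cost of 8 is guaranteed optimal; swapping in a non-admissible heuristic would keep the code running but forfeit that guarantee.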
Heuristic search became foundational to AI because many real-world problems—route planning, game playing, scheduling, and constraint satisfaction—involve search spaces too vast for brute-force enumeration. In game AI, heuristics evaluate board positions to guide minimax or Monte Carlo tree search. In robotics and navigation, they power real-time pathfinding. In classical planning systems, heuristics derived automatically from problem structure enable solvers to tackle complex logistics and scheduling tasks.
In modern machine learning, heuristic search remains relevant in neural architecture search, hyperparameter optimization, and combinatorial optimization problems where learned heuristics—sometimes produced by reinforcement learning or graph neural networks—replace hand-crafted ones. The interplay between learned representations and search algorithms is an active research frontier, with methods like AlphaZero demonstrating that neural-guided heuristic search can achieve superhuman performance on tasks once thought to require deep human expertise.