Systematic exploration of a problem space to find goal-achieving solutions or action sequences.
Search is a foundational concept in AI that involves systematically exploring a space of possible states or actions to find a path, configuration, or solution that satisfies some goal condition. The problem space is typically represented as a graph or tree, where nodes correspond to states and edges represent transitions between them. Search algorithms must balance completeness (guaranteeing a solution if one exists), optimality (finding the best solution), and computational efficiency in terms of time and memory.
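The graph view above can be made concrete with a minimal sketch: states as dictionary keys, transitions as adjacency lists, and breadth-first search returning a path with the fewest transitions. The graph, node names, and `bfs` helper here are illustrative, not drawn from any particular library.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search over an adjacency-dict graph.

    Returns a path (list of nodes) from start to goal with the
    fewest edges, or None if the goal is unreachable. BFS is
    complete and optimal when all transitions cost the same.
    """
    frontier = deque([[start]])   # queue of partial paths
    visited = {start}             # states already expanded or enqueued
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

# Toy state space: two routes from A to D
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```

Because BFS expands states in order of path length, its memory use grows with the breadth of the frontier, which is the efficiency trade-off the paragraph above refers to.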
Search algorithms divide broadly into uninformed and informed categories. Uninformed methods like breadth-first search (BFS) and depth-first search (DFS) explore the space without domain-specific guidance, while informed methods use heuristics to prioritize promising directions. The A* algorithm, which combines path cost with an admissible heuristic estimate of remaining cost, remains one of the most widely used informed search techniques due to its optimality guarantees. Adversarial search, used in game-playing systems, extends these ideas with algorithms like minimax and alpha-beta pruning to handle environments with competing agents.
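A* can be sketched in a few lines with a priority queue ordered by f(n) = g(n) + h(n), where g is the cost so far and h is the heuristic estimate. The `neighbors` callback and grid example below are assumptions for illustration; the admissibility requirement on `heuristic` (never overestimate) is what gives A* its optimality guarantee.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """A* search. neighbors(n) yields (successor, step_cost) pairs;
    heuristic(n) must be admissible (never overestimate remaining cost).

    Returns (path, cost) or (None, inf) if no path exists.
    """
    # Entries are (f, g, node, path); the heap pops the lowest f first.
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}  # cheapest known cost to reach each state
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, cost in neighbors(node):
            new_g = g + cost
            if new_g < best_g.get(succ, float("inf")):
                best_g[succ] = new_g
                heapq.heappush(
                    frontier,
                    (new_g + heuristic(succ), new_g, succ, path + [succ]),
                )
    return None, float("inf")

# Hypothetical example: 5x5 grid, unit step costs, Manhattan-distance heuristic
def grid_neighbors(node):
    x, y = node
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        nx, ny = x + dx, y + dy
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

manhattan = lambda n: abs(n[0] - 4) + abs(n[1] - 4)
```

The Manhattan distance is admissible on a 4-connected grid because any path must cover at least that many unit steps, so the returned cost is guaranteed minimal.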
Search remains deeply relevant to modern machine learning and AI. Hyperparameter optimization, neural architecture search (NAS), and reinforcement learning all rely on search principles to navigate high-dimensional solution spaces. In natural language processing, beam search is a standard decoding strategy for sequence generation models, trading off exploration breadth against computational cost. As AI systems tackle increasingly complex planning and reasoning tasks, efficient search strategies—often combined with learned heuristics or value functions—continue to be essential tools for building capable, goal-directed systems.
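The breadth-versus-cost trade-off in beam search can be shown with a small decoding sketch. The `step_fn` callback below stands in for a sequence model's next-token log-probability distribution, and the `<s>`/`<eos>` tokens and toy probabilities are illustrative assumptions.

```python
def beam_search(step_fn, start, beam_width, max_len):
    """Beam search decoding sketch.

    step_fn(seq) returns (token, log_prob) candidates for the next token;
    only the beam_width highest-scoring partial sequences are kept at
    each step, so widening the beam explores more at higher cost.
    """
    beams = [([start], 0.0)]  # (sequence, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == "<eos>":        # finished hypotheses carry over
                candidates.append((seq, score))
                continue
            for token, logp in step_fn(seq):
                candidates.append((seq + [token], score + logp))
        # Prune to the top beam_width hypotheses by score.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
        if all(seq[-1] == "<eos>" for seq, _ in beams):
            break
    return beams

# Toy "model": fixed next-token log-probabilities keyed on the last token
toy_logprobs = {
    "<s>": [("a", -0.5), ("b", -0.9)],
    "a": [("<eos>", -0.1), ("b", -2.3)],
    "b": [("<eos>", 0.0)],
}
```

With `beam_width=1` this reduces to greedy decoding; the whole-sequence score lets beam search recover sequences whose first token was not the locally best choice.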