Optimization methods that evolve populations of candidate solutions through selection, crossover, and mutation.
Evolutionary algorithms (EAs) are a family of population-based optimization methods inspired by the mechanics of biological evolution. Rather than searching for solutions through gradient descent or exhaustive enumeration, EAs maintain a population of candidate solutions and iteratively improve them across generations. At each generation, individuals are evaluated according to a fitness function that measures how well they solve the problem at hand. The fittest individuals are selected to reproduce, passing their characteristics to the next generation.
The core operators that drive this process are selection, crossover, and mutation. Selection preferentially propagates higher-quality solutions, mimicking natural survival pressure. Crossover combines segments of two parent solutions to produce offspring that inherit traits from both, enabling the algorithm to exploit promising regions of the search space. Mutation introduces random perturbations to individual solutions, maintaining diversity and allowing the population to explore areas it might otherwise miss. Together, these operators balance exploration of new regions against exploitation of known good solutions.
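The generational loop and the three operators can be sketched as a minimal genetic algorithm. The OneMax objective (maximize the number of 1-bits), the tournament size, and the rate constants below are illustrative choices for the sketch, not part of any standard:

```python
import random

random.seed(0)

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 40
CX_PROB = 0.9                 # probability of applying crossover
MUT_PROB = 1.0 / GENOME_LEN   # per-gene bit-flip probability

def fitness(genome):
    # OneMax: count the 1-bits; the optimum is the all-ones genome.
    return sum(genome)

def tournament(pop, k=3):
    # Selection: the fittest of k randomly sampled individuals wins.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # One-point crossover: splice a prefix of one parent onto a suffix
    # of the other, producing two offspring.
    if random.random() < CX_PROB:
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:], b[:cut] + a[cut:]
    return a[:], b[:]

def mutate(genome):
    # Mutation: flip each bit independently with small probability.
    return [g ^ 1 if random.random() < MUT_PROB else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    next_gen = []
    while len(next_gen) < POP_SIZE:
        c1, c2 = crossover(tournament(population), tournament(population))
        next_gen += [mutate(c1), mutate(c2)]
    population = next_gen[:POP_SIZE]

best = max(population, key=fitness)
print(fitness(best))
```

The exploration/exploitation balance shows up directly in the constants: a larger tournament size or crossover rate pushes toward exploitation, while a higher mutation rate pushes toward exploration.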
EAs encompass several related paradigms, including genetic algorithms, evolution strategies, genetic programming, and differential evolution. Each variant differs in how it represents solutions, applies operators, and manages population dynamics. Genetic algorithms typically encode solutions as binary strings, while evolution strategies work directly with real-valued vectors and adapt their mutation step sizes over time. Genetic programming evolves tree-structured programs rather than fixed-length encodings, making it particularly suited to symbolic regression and automated program synthesis.
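The step-size adaptation characteristic of evolution strategies can be illustrated with a (1+1)-ES governed by Rechenberg's 1/5 success rule: if more than a fifth of recent mutations improve the parent, the step size grows; otherwise it shrinks. The sphere objective and the adaptation factors below are conventional textbook choices for such a sketch:

```python
import random

random.seed(1)

def sphere(x):
    # Objective to minimize: f(x) = sum of x_i^2, with optimum at the origin.
    return sum(v * v for v in x)

dim = 5
parent = [random.uniform(-5, 5) for _ in range(dim)]
sigma = 1.0       # mutation step size, adapted during the run
successes = 0

for t in range(1, 501):
    # Mutation: perturb every coordinate with Gaussian noise scaled by sigma.
    child = [v + sigma * random.gauss(0, 1) for v in parent]
    # (1+1) selection: the child replaces the parent only if it is no worse.
    if sphere(child) <= sphere(parent):
        parent = child
        successes += 1
    # 1/5 success rule: every 20 iterations, enlarge sigma if more than
    # a fifth of the mutations succeeded, otherwise shrink it.
    if t % 20 == 0:
        sigma *= 1.22 if successes > 4 else 0.82
        successes = 0

print(sphere(parent))
```

Unlike the fixed mutation rate of a classic genetic algorithm, the step size here tracks the local scale of the fitness landscape, shrinking as the search closes in on the optimum.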
In machine learning, evolutionary algorithms have found broad application in hyperparameter optimization, neural architecture search, and the training of neural network policies for reinforcement learning, an approach known as neuroevolution. Their key advantage is that they require no gradient information, making them applicable to discontinuous, noisy, or black-box objective functions where gradient-based methods fail. They are also naturally parallelizable, since candidate solutions in a population can be evaluated independently. While often slower to converge than gradient-based optimizers on smooth problems, EAs remain a powerful tool when the fitness landscape is complex, multimodal, or poorly understood.
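Because each candidate's fitness is independent of the others, a whole population can be scored concurrently. A minimal sketch using Python's standard concurrent.futures, with a hypothetical discontinuous objective standing in for an expensive black-box function:

```python
from concurrent.futures import ThreadPoolExecutor
import random

random.seed(2)

def fitness(x):
    # Hypothetical stand-in for an expensive black-box objective:
    # a step function counting coordinates near zero. It is
    # discontinuous, so it offers no useful gradient.
    return sum(int(abs(v) < 0.5) for v in x)

population = [[random.uniform(-2, 2) for _ in range(10)] for _ in range(8)]

# Evaluate every individual concurrently. Threads keep the sketch simple;
# for CPU-bound fitness functions a ProcessPoolExecutor would be the
# usual choice.
with ThreadPoolExecutor() as pool:
    scores = list(pool.map(fitness, population))

print(scores)
```

This independence is what makes EAs attractive for cluster-scale runs: the evaluation step, usually the dominant cost, scales out across workers with no change to the algorithm itself.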