Using evolutionary algorithms to optimize neural network architectures and weights.
Neuroevolution is a family of techniques that applies evolutionary algorithms to the design and training of artificial neural networks, treating network architectures and connection weights as genomes subject to selection, mutation, and recombination. Rather than relying on gradient-based optimization like backpropagation, neuroevolution maintains a population of candidate networks, evaluates each on a target task, and iteratively selects the best performers to produce successive generations of improved solutions. This population-based search can simultaneously optimize both the structural topology of a network and its numerical parameters, a capability that distinguishes it from conventional training methods.
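The population-based loop described above can be sketched in a few dozen lines. This is a minimal illustration, not any particular published algorithm: it assumes a fixed 2-4-1 feedforward topology (so only the weights evolve, not the structure), a toy XOR regression task as the fitness function, and simple truncation selection with Gaussian mutation and one elite carried over per generation.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(weights, x):
    # Tiny fixed-topology net: 2 -> 4 -> 1, with all weights and biases
    # flattened into a single 17-element genome vector.
    w1 = weights[:8].reshape(2, 4)
    b1 = weights[8:12]
    w2 = weights[12:16].reshape(4, 1)
    b2 = weights[16]
    h = np.tanh(x @ w1 + b1)
    return np.tanh(h @ w2 + b2).ravel()

# XOR as a toy task: fitness is negative mean squared error, so higher is better.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def fitness(weights):
    return -np.mean((forward(weights, X) - y) ** 2)

pop_size, n_params, n_gens = 50, 17, 200
population = rng.normal(0, 1, size=(pop_size, n_params))

for gen in range(n_gens):
    scores = np.array([fitness(ind) for ind in population])
    # Truncation selection: the top 20% of the population become parents.
    parents = population[np.argsort(scores)[-pop_size // 5:]]
    # Next generation: mutated copies of randomly chosen parents,
    # plus the unmodified best individual (elitism).
    children = parents[rng.integers(len(parents), size=pop_size - 1)]
    children = children + rng.normal(0, 0.1, size=children.shape)
    best = population[np.argmax(scores)]
    population = np.vstack([best, children])

best = max(population, key=fitness)
print(round(fitness(best), 4))
```

Because the elite is carried forward unchanged, the best fitness in the population never decreases from one generation to the next.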
The mechanics of neuroevolution vary across algorithms, but the core loop involves fitness evaluation, selection pressure, and genetic operators. Crossover combines structural or weight information from two parent networks, while mutation introduces random perturbations to weights, connections, or layer configurations. A landmark advance came with Kenneth Stanley and Risto Miikkulainen's NEAT algorithm (2002), which solved the competing conventions problem in network crossover by tracking gene history and allowing topological complexity to grow incrementally from minimal starting structures. Later work on NEAT variants, novelty search, and quality-diversity algorithms has extended these ideas considerably.
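NEAT's historical-marking idea can be illustrated compactly. In the sketch below, a genome is reduced to a dictionary mapping a global innovation number to a connection weight (node genes and enable/disable flags are omitted for brevity); the specific genomes and probabilities are made up for illustration. Matching genes are inherited randomly from either parent, while disjoint and excess genes come from the fitter parent, as in NEAT.

```python
import random

random.seed(1)

# Simplified NEAT-style genomes: connection genes keyed by a global
# innovation number, so homologous genes can be aligned during crossover.
parent_a = {1: 0.5, 2: -0.3, 3: 0.8, 5: 0.1}   # assume this parent is fitter
parent_b = {1: 0.4, 2: 0.2, 4: -0.6}

def crossover(fitter, other):
    """Matching genes inherited randomly; disjoint and excess genes
    taken from the fitter parent."""
    child = {}
    for innov, w in fitter.items():
        if innov in other and random.random() < 0.5:
            child[innov] = other[innov]   # matching gene from the other parent
        else:
            child[innov] = w              # matching, disjoint, or excess gene
    return child

def mutate(genome, p_perturb=0.8, sigma=0.2):
    """Perturb each weight with probability p_perturb; otherwise keep it."""
    return {innov: w + random.gauss(0, sigma) if random.random() < p_perturb else w
            for innov, w in genome.items()}

child = mutate(crossover(parent_a, parent_b))
print(sorted(child))   # -> [1, 2, 3, 5]: the fitter parent's gene structure
```

Aligning genes by innovation number is what lets two structurally different networks be recombined without the arbitrary neuron orderings that cause the competing conventions problem.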
Neuroevolution is especially valuable in reinforcement learning settings where reward signals are sparse, delayed, or non-differentiable, making gradient-based methods unreliable or inapplicable. It also offers a natural mechanism for architecture search, discovering unconventional network topologies that human designers or gradient-based neural architecture search might overlook. OpenAI and other research groups demonstrated in the late 2010s that simple evolution strategies could match or exceed gradient-based deep reinforcement learning algorithms on challenging continuous control benchmarks, renewing broad interest in the field.
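The evolution-strategy idea behind that line of work can be sketched as follows: rather than evolving discrete individuals, the algorithm perturbs a single parameter vector with Gaussian noise, evaluates the reward of each perturbation, and moves the parameters in the reward-weighted direction of the noise. The quadratic reward function here is a stand-in for an episode return; the hyperparameters are illustrative, not taken from any published configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(theta):
    # Stand-in for an RL episode return: higher is better,
    # maximized at theta = target.
    target = np.array([1.0, -2.0, 0.5])
    return -np.sum((theta - target) ** 2)

# Evolution strategy: estimate a search direction from the rewards of
# Gaussian-perturbed copies of the current parameter vector.
theta = np.zeros(3)
sigma, alpha, n_samples = 0.1, 0.01, 100

for step in range(300):
    eps = rng.normal(size=(n_samples, theta.size))
    rewards = np.array([reward(theta + sigma * e) for e in eps])
    # Standardize rewards (a simple form of fitness shaping) so the
    # update magnitude is insensitive to the reward scale.
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    theta += alpha / (n_samples * sigma) * eps.T @ rewards

print(np.round(theta, 2))
```

Note that only scalar rewards flow back to the optimizer, which is why this approach works when the objective is non-differentiable or the reward is delayed to the end of an episode.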
Beyond performance, neuroevolution offers practical advantages in parallelization, since fitness evaluations across a population are independent and can be distributed across many processors. It also avoids vanishing gradient problems and does not require differentiable loss functions, broadening the range of tasks it can address. As hardware and algorithmic efficiency improve, neuroevolution continues to serve as both a competitive optimization strategy and a tool for studying how complex neural structures can emerge through open-ended search processes.
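The parallelization argument follows directly from the structure of the loop: each individual's fitness depends only on its own genome, so the population can be scored concurrently with no coordination beyond collecting the results. The sketch below uses a thread pool purely for illustration; real systems typically distribute evaluations across process pools or clusters of machines, and the dummy fitness function stands in for running a network on the actual task.

```python
from concurrent.futures import ThreadPoolExecutor
import random

random.seed(0)

def fitness(genome):
    # Stand-in evaluation: in practice this would run the encoded
    # network on the target task and return its score.
    return -sum((g - 0.5) ** 2 for g in genome)

# Fitness evaluations are independent, so the whole population can be
# scored in parallel; map() preserves the population's ordering.
population = [[random.random() for _ in range(10)] for _ in range(64)]

with ThreadPoolExecutor(max_workers=8) as pool:
    scores = list(pool.map(fitness, population))

best = max(range(len(population)), key=lambda i: scores[i])
print(len(scores), round(scores[best], 3))
```

Because the result order matches the submission order, selection operators can index scores and genomes interchangeably regardless of how many workers ran the evaluations.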