The biological formation of new neurons, inspiring adaptive neural network architectures.
Neurogenesis refers to the biological process by which new neurons are generated from neural stem and progenitor cells. In the brain, this occurs most prominently in the hippocampus and the olfactory bulb, regions associated with memory consolidation, spatial navigation, and sensory processing. The process is regulated by a complex interplay of molecular signals, environmental inputs, physical activity, stress hormones, and aging, making it a dynamic and context-sensitive mechanism for neural adaptation.
In machine learning, neurogenesis serves as a conceptual inspiration for architectures and training algorithms that can dynamically grow their neural structures. Rather than fixing network topology before training, neurogenesis-inspired systems add new neurons or units in response to task demands, data complexity, or detected errors. This stands in contrast to conventional deep learning, where architecture is largely static. Algorithms such as progressive neural networks, dynamic node creation in feedforward networks, growing self-organizing maps, and growing neural gas models all draw on this biological metaphor to enable more flexible, continual learning.
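The core mechanic shared by these methods — adding a unit mid-training without disrupting what the network has already learned — can be illustrated with a minimal sketch. The class below is a hypothetical toy, not any specific published algorithm: a one-hidden-layer regressor that "births" a new hidden unit when training error stalls, initializing the new unit's outgoing weight to zero so its addition leaves the network's current output unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

class GrowingNet:
    """Toy regressor illustrating neurogenesis-style growth (illustrative sketch)."""

    def __init__(self, n_in, n_hidden=2):
        self.W = rng.normal(scale=0.5, size=(n_hidden, n_in))  # hidden weights
        self.v = rng.normal(scale=0.5, size=n_hidden)          # output weights

    def forward(self, X):
        self.H = np.tanh(X @ self.W.T)   # hidden activations
        return self.H @ self.v

    def grow(self):
        # Neurogenesis step: append one randomly initialized hidden unit.
        # Its outgoing weight starts at zero, so existing behavior is preserved.
        new_row = rng.normal(scale=0.5, size=(1, self.W.shape[1]))
        self.W = np.vstack([self.W, new_row])
        self.v = np.append(self.v, 0.0)

    def train_step(self, X, y, lr=0.05):
        err = self.forward(X) - y
        grad_v = self.H.T @ err / len(X)
        grad_W = ((err[:, None] * self.v) * (1 - self.H**2)).T @ X / len(X)
        self.v -= lr * grad_v
        self.W -= lr * grad_W
        return float(np.mean(err**2))

X = rng.normal(size=(256, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])

net = GrowingNet(n_in=2)
for step in range(2000):
    loss = net.train_step(X, y)
    # Grow only when progress has stalled and the net is still small —
    # both the schedule and the thresholds here are arbitrary choices.
    if step % 500 == 499 and loss > 0.05 and len(net.v) < 16:
        net.grow()
```

Zero-initializing the new unit's output weight is the key design choice: the unit contributes nothing at birth, but its gradient is nonzero, so training can recruit it where existing units fall short.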
The relevance of neurogenesis to AI became particularly salient in the context of continual learning and catastrophic forgetting. When a model must learn new tasks without losing previously acquired knowledge, the ability to allocate fresh computational units — analogous to new neurons — provides a structural solution that avoids overwriting existing representations. This makes neurogenesis-inspired mechanisms a promising tool for lifelong learning systems that must adapt to non-stationary data distributions over time.
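The structural argument — that new units sidestep forgetting because old parameters are never overwritten — can be made concrete with a small sketch in the spirit of progressive networks. The setup below is an illustrative assumption, not a faithful reimplementation: task A's hidden weights are frozen after training, and task B gets freshly allocated units whose head also reads the frozen features laterally. Because task A's parameters are never updated again, its predictions are preserved exactly.

```python
import numpy as np

rng = np.random.default_rng(1)

def features(X, W):
    """Hidden-layer activations for weight matrix W."""
    return np.tanh(X @ W.T)

# ---- Task A: random hidden features plus a trained linear head ----
X = rng.normal(size=(300, 4))
y_A = np.sin(X[:, 0])

W_A = rng.normal(scale=0.5, size=(8, 4))   # task-A hidden weights (frozen below)
H_A = features(X, W_A)
v_A = np.zeros(8)
for _ in range(1500):
    err = H_A @ v_A - y_A
    v_A -= 0.05 * H_A.T @ err / len(X)

pred_A = H_A @ v_A                         # snapshot of task-A behavior

# ---- Task B arrives: allocate fresh units, never touching task-A weights ----
y_B = np.cos(X[:, 1])
W_B = rng.normal(scale=0.5, size=(8, 4))   # newly "born" hidden units for task B
F = np.hstack([H_A, features(X, W_B)])     # lateral reuse of frozen features
u = np.zeros(16)
for _ in range(1500):
    err = F @ u - y_B
    u -= 0.05 * F.T @ err / len(X)

# Task-A predictions are bit-for-bit unchanged: nothing was overwritten.
assert np.allclose(features(X, W_A) @ v_A, pred_A)
```

The trade-off is capacity growth over time: each new task adds parameters, which is why practical lifelong-learning systems pair unit allocation with consolidation or pruning.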
Beyond architecture design, neurogenesis also informs thinking about neural plasticity more broadly, including how learning rates, regularization, and structural pruning interact with network growth. Research at the intersection of computational neuroscience and deep learning continues to explore how the principles governing biological neurogenesis — such as activity-dependent survival of new neurons — can be translated into practical inductive biases for more robust and adaptive AI systems.
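Activity-dependent survival has a natural computational analogue: units added during growth are retained only if the network actually comes to use them. The sketch below is a deliberately simplified assumption — it proxies "activity" by outgoing weight magnitude and uses an arbitrary threshold — but it shows the growth-then-pruning interaction described above.

```python
import numpy as np

rng = np.random.default_rng(2)

def prune_inactive(W, v, threshold=0.1):
    """Keep only hidden units whose outgoing weight magnitude meets the
    threshold — a crude stand-in for activity-dependent survival."""
    keep = np.abs(v) >= threshold
    return W[keep], v[keep]

W = rng.normal(size=(6, 3))                        # 6 hidden units, 3 inputs
v = np.array([0.9, 0.02, -0.5, 0.0, 1.3, -0.05])  # outgoing weights per unit

W2, v2 = prune_inactive(W, v)   # units 0, 2, and 4 survive
```

In a training loop, such a survival check would run periodically after growth phases, so that units that never acquire a useful role — like most adult-born neurons in biology — are culled rather than accumulating indefinitely.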