Neural networks that process information through discrete, precisely timed spikes, modeled on the electrical impulses of biological neurons.
A spiking neural network (SNN) is a class of artificial neural network designed to more closely replicate the behavior of biological neurons than conventional deep learning architectures. Unlike standard artificial neurons, which pass continuous-valued activations at every forward pass, spiking neurons remain silent until their accumulated input crosses a threshold, at which point they fire a discrete spike and reset. This event-driven communication means that computation is inherently sparse and asynchronous — neurons only do work when something meaningful happens — which mirrors the electrochemical signaling observed in biological brains.
The mechanics of an SNN rest on neuron models that track membrane potential over time. Popular formulations include the leaky integrate-and-fire (LIF) model, in which the membrane potential decays toward rest between inputs, and the more biologically detailed Izhikevich model, which reproduces a wider range of the firing patterns observed in cortical neurons. Training SNNs is non-trivial because the spike-generation step is non-differentiable, which blocks standard backpropagation. Researchers have addressed this with surrogate gradient methods, which replace the spike's undefined derivative with a smooth stand-in during the backward pass; with spike-timing-dependent plasticity (STDP); and with conversion techniques that translate pre-trained rate-coded networks into spiking equivalents.
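To make both ideas concrete, here is a minimal sketch in PyTorch of a discrete-time LIF update paired with a surrogate gradient. It is an illustration under stated assumptions, not a reference implementation: the names `SurrogateSpike` and `lif_step`, the decay factor `beta`, the threshold, and the fast-sigmoid slope are all illustrative choices.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside step in the forward pass; a smooth fast-sigmoid
    derivative stands in for its gradient in the backward pass."""

    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()    # the non-differentiable spike

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        slope = 25.0                           # illustrative steepness
        return grad_output / (slope * u.abs() + 1.0) ** 2

def lif_step(x, v, beta=0.9, threshold=1.0):
    """One discrete-time LIF update: leak, integrate, fire, reset."""
    v = beta * v + x                           # leaky integration
    spk = SurrogateSpike.apply(v - threshold)  # fire where v crosses threshold
    v = v - spk * threshold                    # reset by subtraction
    return spk, v

# Gradients flow through the surrogate even though the spike itself
# is a hard step, which is what makes backprop-style training possible.
x = torch.rand(5, requires_grad=True)
spk, v = lif_step(x, torch.zeros(5))
spk.sum().backward()
print(spk, x.grad)
```

The surrogate touches only the backward pass: inference still emits hard, sparse spikes, while training sees a smooth approximation it can differentiate through.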
SNNs matter for two distinct reasons. First, they offer a path to dramatically more energy-efficient inference: because most neurons are silent most of the time, neuromorphic hardware platforms such as Intel's Loihi and IBM's TrueNorth can execute SNN workloads with orders-of-magnitude lower power consumption than GPU-based inference. Second, SNNs are natural candidates for tasks with a strong temporal structure — event-based vision from dynamic vision sensors (DVS cameras), audio classification, and real-time control — where the timing of spikes itself encodes information that static architectures must approximate.
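To make the idea of timing-as-information concrete, the sketch below uses a simple time-to-first-spike (latency) code, one of several common encodings: stronger inputs fire earlier, so the spike's position in time carries the value. The function name `latency_encode` and the 100-step window are hypothetical choices for illustration.

```python
import numpy as np

def latency_encode(intensity, n_steps=100):
    """Time-to-first-spike coding: the stronger the input, the
    earlier its single spike, so spike *time* carries the value."""
    intensity = float(np.clip(intensity, 1e-6, 1.0))
    t = int((1.0 - intensity) * (n_steps - 1))  # bright -> early
    train = np.zeros(n_steps, dtype=np.uint8)
    train[t] = 1
    return train

print(np.argmax(latency_encode(0.9)))  # fires at step 9 (early)
print(np.argmax(latency_encode(0.1)))  # fires at step 89 (late)
```

Note that each input produces exactly one spike across the entire window; this extreme sparsity is also why event-driven hardware can stay mostly idle.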
Despite these advantages, SNNs have not yet matched the accuracy of conventional deep networks on standard benchmarks, and training at scale remains an active research challenge. Progress in surrogate gradients, hybrid architectures that blend spiking and non-spiking layers, and purpose-built neuromorphic chips is steadily narrowing the gap, making SNNs an increasingly practical option for edge AI applications where power budgets are tight.