
SPDE
Stochastic Partial Differential Equation
A class of differential equations that model the evolution of spatially distributed systems under random forcing, combining partial differential operators with stochastic processes to represent spatiotemporal uncertainty.
SPDEs (stochastic partial differential equations) are PDEs driven by stochastic terms (noise) that describe the dynamics of random fields over space and time; they extend SDEs to infinite-dimensional state spaces and are formulated to capture correlated, spatially structured uncertainty in continuum systems relevant to modeling, inference, and control.
In theory, SPDEs are framed either as evolution equations on infinite-dimensional Hilbert/Banach spaces (semigroup/mild-solution approaches) or via martingale-measure/weak formulations; they admit linear and nonlinear forms and require interpretation choices (Itô vs. Stratonovich, renormalization for singular nonlinearities). Key analytical tools include semigroup theory (Da Prato & Zabczyk), stochastic calculus in Hilbert spaces, energy and regularity estimates, and the theory of distributions for singular models (Martin Hairer's regularity structures). In AI and ML contexts, SPDEs serve multiple roles: they define expressive priors over functions and fields (linking to Gaussian process and random field models via SPDE representations), characterize continuum limits of discrete stochastic generative processes (e.g., diffusion-based generative modeling generalized to spatially structured data), underpin stochastic physics-informed neural networks for uncertainty-aware simulation, and provide principled models for spatiotemporal uncertainty quantification and Bayesian inverse problems constrained by PDEs. Numerically, SPDEs drive the development of discretization schemes, multilevel Monte Carlo, and probabilistic numerical methods that are increasingly integrated with ML-based surrogates and learned solvers.
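The discretization methods mentioned above can be illustrated with a minimal sketch: a 1D stochastic heat equation du = ν u_xx dt + σ dW(t, x), discretized with finite differences in space and the Euler–Maruyama scheme in time, where space–time white noise is approximated by independent Gaussians scaled per grid cell. All grid sizes and coefficients below are illustrative assumptions, not a production-quality scheme.

```python
import numpy as np

# Sketch: 1D stochastic heat equation  du = nu * u_xx dt + sigma * dW(t, x)
# on a periodic domain, with explicit finite differences in space and
# Euler-Maruyama in time. Space-time white noise is approximated per grid
# cell by a Gaussian increment of variance dt/dx.

rng = np.random.default_rng(0)

nx, nt = 128, 2000          # spatial grid points, time steps (illustrative)
L, T = 1.0, 0.1             # domain length, final time
nu, sigma = 0.01, 0.5       # diffusivity, noise amplitude
dx, dt = L / nx, T / nt     # note: nu * dt / dx**2 < 0.5 keeps the scheme stable

u = np.zeros(nx)            # zero initial condition
for _ in range(nt):
    # discrete Laplacian with periodic boundary conditions
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    # approximate space-time white-noise increment on each cell
    noise = rng.standard_normal(nx) * np.sqrt(dt / dx)
    u = u + nu * lap * dt + sigma * noise

print(u.shape, bool(np.all(np.isfinite(u))))
```

The noise scaling sqrt(dt/dx) is what distinguishes the SPDE discretization from an ordinary SDE solver: refining the spatial grid changes the per-cell noise variance, reflecting the fact that space–time white noise is a distribution rather than a pointwise-defined field; this is also why, in dimensions above one, such equations become singular and require the renormalization techniques mentioned above.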
First usages of stochastic PDEs trace to mid-20th-century extensions of stochastic calculus and PDE theory (1960s–1970s); rigorous, widely cited frameworks and textbooks emerged in the 1980s–1990s, and relevance to ML/AI surged in the 2010s–2020s as probabilistic numerics, Bayesian inverse problems, and diffusion-based generative models brought continuum stochastic models into modern learning pipelines.
Notable contributors include Kiyosi Itô (foundations of stochastic calculus), Jacques-Louis Lions and others in PDE theory, Boris L. Rozovskii and John B. Walsh (martingale-measure and foundational SPDE formulations), Giuseppe Da Prato and Jerzy Zabczyk (semigroup/infinite-dimensional frameworks), N. V. Krylov and Étienne Pardoux (existence and regularity theory), Martin Hairer (regularity structures for singular SPDEs), and applied/ML-focused figures such as Andrew Stuart (Bayesian inverse problems and PDE-constrained inference), plus researchers in diffusion-based generative modeling (e.g., Yang Song, Stefano Ermon, and collaborators) who connected stochastic dynamics to contemporary generative ML.
