TDL (Topological Deep Learning)

TDL
Topological Deep Learning

Integration of algebraic and computational topology with deep neural networks to encode, preserve, and exploit global shape and connectivity as features, layers, or regularizers.

TDL (Topological Deep Learning) applies tools from algebraic topology (most prominently persistent homology, the Mapper algorithm, and related topological summaries) to represent, analyze, constrain, or augment deep neural networks so that models capture global geometric and connectivity patterns that are invariant or robust to local perturbations. At an expert level, TDL leverages topological invariants (e.g., persistence diagrams, barcodes, Euler characteristics, Reeb graphs) as differentiable or proxy features and designs layers, pooling schemes, loss terms, or architectural priors that enforce or exploit desired topological properties. The approach is especially relevant for data modalities where global shape matters (graphs, point clouds, 3D surfaces, biological structures, and some image tasks), and it provides inductive biases complementary to those used in geometric deep learning. Theoretical underpinnings draw on stability results for persistent homology (ensuring robustness to noise), Morse-theoretic interpretations of data manifolds, and connections to equivariant representations; practical progress has come from algorithmic advances (efficient computation of persistence via Ripser, Dionysus, and GUDHI) and differentiable approximations or layers (e.g., PersLay, TopologyLayer) that allow end-to-end training. Trade-offs include the computational cost of large-scale persistence computations, challenges in exact differentiability (often requiring subgradient or smoothing strategies), and the need to design topology-aware architectures that interact effectively with standard ML (Machine Learning) regularizers.
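
To make the feature-extraction route concrete, the following is a minimal sketch (not part of the original entry, and assuming the ripser.py bindings and NumPy are available) that computes persistence diagrams for a point cloud and collapses them into a fixed-length summary vector; PersLay-style layers learn such vectorizations end-to-end rather than hard-coding them, but the input/output roles are analogous.

    import numpy as np
    from ripser import ripser  # fast Vietoris-Rips persistent homology

    def persistence_features(points: np.ndarray, maxdim: int = 1) -> np.ndarray:
        """Summarize H0..H_maxdim persistence diagrams as simple statistics."""
        dgms = ripser(points, maxdim=maxdim)["dgms"]
        feats = []
        for dgm in dgms:
            finite = dgm[np.isfinite(dgm[:, 1])]  # drop the infinite H0 bar
            lifetimes = finite[:, 1] - finite[:, 0] if len(finite) else np.zeros(1)
            feats += [lifetimes.sum(), lifetimes.max(), lifetimes.mean(), float(len(finite))]
        return np.asarray(feats, dtype=np.float32)

    # Usage: a noisy circle should yield one dominant H1 (loop) lifetime.
    theta = np.random.uniform(0.0, 2 * np.pi, 200)
    circle = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * np.random.randn(200, 2)
    print(persistence_features(circle))

The subgradient strategy mentioned above can likewise be illustrated in its simplest case: the finite 0-dimensional death times of a Vietoris-Rips filtration coincide with the edge lengths of a minimum spanning tree, so the pairing can be computed without gradients and the loss re-expressed through the differentiable distance matrix. This is a hedged sketch assuming PyTorch and SciPy are available; libraries such as TopologyLayer implement more general differentiable persistence.

    import torch
    from scipy.sparse.csgraph import minimum_spanning_tree

    def total_h0_persistence(points: torch.Tensor) -> torch.Tensor:
        """Sum of finite H0 death times of the Rips filtration (= MST edge lengths)."""
        dists = torch.cdist(points, points)                  # differentiable distances
        mst = minimum_spanning_tree(dists.detach().numpy())  # non-differentiable pairing
        rows, cols = (torch.as_tensor(ix, dtype=torch.long) for ix in mst.nonzero())
        return dists[rows, cols].sum()                       # gradients flow through dists

    # Usage as a topological regularizer: penalize scattered, many-component clouds.
    pts = torch.randn(64, 2, requires_grad=True)
    loss = total_h0_persistence(pts)
    loss.backward()
    print(loss.item(), pts.grad.shape)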

The first uses of topology in ML date to the early 2000s with the rise of topological data analysis (TDA); the explicit phrase and focused efforts around "Topological Deep Learning" emerged in the late 2010s (circa 2017–2019) and gained broader traction between 2019 and 2022 as differentiable TDA modules and geometric deep learning techniques matured and made practical integration feasible.

Key contributors include foundational TDA researchers (Gunnar Carlsson, Herbert Edelsbrunner, Afra Zomorodian), theorists of persistence and stability (Frédéric Chazal, Steve Oudot), algorithm and software authors (Ulrich Bauer for Ripser, Dmitriy Morozov for Dionysus, and the GUDHI development community), and ML/geometric-DL researchers who integrated topology with neural architectures (e.g., Michael Bronstein, Joan Bruna, Taco Cohen, and collaborators). Practitioners who advanced differentiable, topology-aware layers and applied pipelines include the authors of PersLay and related persistence-diagram vectorizations (Mathieu Carrière, Marco Cuturi, Steve Oudot, and collaborators) and of topology-aware layers such as the topological-signature layers of Hofer et al. and the TopologyLayer library, along with teams developing higher-level tooling (giotto-tda, GUDHI) that enabled wider adoption in AI and ML workflows.

Related