Neural networks that apply convolution-like operations to learn from graph-structured data.
Graph Convolutional Networks (GCNs) extend the core idea of convolutional neural networks to non-Euclidean, graph-structured data. Where traditional CNNs exploit the regular grid structure of images by sliding a filter across spatial neighborhoods, GCNs generalize this operation to irregular graphs by aggregating feature information from a node's local neighborhood. This is accomplished by multiplying node feature matrices with a normalized version of the graph's adjacency matrix, effectively allowing each node to collect and transform signals from its directly connected neighbors. Stacking multiple such layers enables information to propagate across increasingly distant parts of the graph, building rich, context-aware node representations.
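The aggregation step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a library implementation: the 4-node path graph and its features are hypothetical, self-loops are added so each node retains its own signal, and simple row normalization by degree turns the aggregation into a neighborhood average.

```python
import numpy as np

# Hypothetical 4-node undirected path graph: 0 -- 1 -- 2 -- 3
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# One scalar feature per node.
X = np.array([[1.0], [2.0], [3.0], [4.0]])

# Add self-loops so each node keeps its own signal, then
# row-normalize by degree to average over the neighborhood.
A_hat = A + np.eye(4)
D_inv = np.diag(1.0 / A_hat.sum(axis=1))

# Each row of H is the mean of that node's own and neighbors' features.
H = D_inv @ A_hat @ X
```

For node 0, which has one neighbor, this yields (1.0 + 2.0) / 2 = 1.5; for node 1, with two neighbors, (1.0 + 2.0 + 3.0) / 3 = 2.0, showing how each node's new representation blends its own feature with those of its directly connected neighbors.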
The practical mechanics of a GCN layer involve three steps: aggregating neighbor features, linearly transforming the result with learned weight matrices, and applying a nonlinear activation function. Normalization, typically by node degree, prevents nodes with many connections from dominating the aggregation. The operation also admits a spectral interpretation grounded in graph signal processing, which Kipf and Welling's 2016 simplification made computationally tractable by approximating expensive spectral graph convolutions with an efficient first-order localized filter. The result was a model that scales to large graphs while remaining straightforward to implement and train.
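The three steps above can be sketched as a single function, following the commonly used propagation rule H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W) with symmetric degree normalization. This is a NumPy sketch under stated assumptions: the 4-node graph, feature dimensions, and random weights are hypothetical, and ReLU stands in for the generic nonlinearity.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).

    A: (n, n) adjacency matrix, H: (n, f_in) node features,
    W: (f_in, f_out) learned weight matrix.
    """
    n = A.shape[0]
    A_hat = A + np.eye(n)                       # add self-loops
    d = A_hat.sum(axis=1)                       # degrees (incl. self-loop)
    D_inv_sqrt = np.diag(d ** -0.5)             # symmetric normalization
    agg = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H   # step 1: aggregate neighbors
    z = agg @ W                                 # step 2: linear transform
    return np.maximum(z, 0.0)                   # step 3: ReLU nonlinearity

# Usage on a hypothetical 4-node path graph, stacking two layers so that
# information propagates two hops across the graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H1 = gcn_layer(A, rng.normal(size=(4, 3)), rng.normal(size=(3, 8)))
H2 = gcn_layer(A, H1, rng.normal(size=(8, 2)))  # final shape: (4, 2)
```

Stacking the layer twice, as in the usage example, is what lets each node's final representation depend on its two-hop neighborhood.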
GCNs have become foundational in graph machine learning, achieving strong results on node classification, link prediction, and graph-level classification tasks. Their applications span molecular property prediction in drug discovery, fraud detection in financial networks, knowledge graph reasoning, and recommendation systems where users and items form a bipartite graph. GCNs also served as the conceptual springboard for a broader family of graph neural network architectures—including GraphSAGE, GAT, and GIN—each refining how neighborhood information is aggregated or weighted. Understanding GCNs remains essential for anyone working with relational or structured data.