Envisioning is an emerging technology research institute and advisory.

2011 — 2026


MLP (Multilayer Perceptron)

A fully connected feedforward neural network trained via backpropagation for classification and regression.

Year: 1986 · Generality: 838

A multilayer perceptron (MLP) is a class of feedforward artificial neural networks consisting of at least three layers of nodes: an input layer, one or more hidden layers, and an output layer. Every neuron in each layer is connected to every neuron in the adjacent layers — a topology called full connectivity — and each non-input neuron applies a nonlinear activation function (such as sigmoid, tanh, or ReLU) to a weighted sum of its inputs. This nonlinearity is what allows MLPs to model complex, nonlinear relationships that simpler linear models cannot capture.
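The layer structure described above can be sketched in a few lines of NumPy. This is an illustrative forward pass only, not a reference implementation; the layer sizes (3 inputs, 4 hidden units, 2 outputs) and random weight initialization are arbitrary assumptions.

```python
import numpy as np

def relu(x):
    # ReLU nonlinearity applied to each hidden unit's weighted sum
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Full connectivity: every input feeds every hidden unit, and so on.
W1 = rng.normal(size=(3, 4))   # input-to-hidden weights
b1 = np.zeros(4)               # hidden-layer biases
W2 = rng.normal(size=(4, 2))   # hidden-to-output weights
b2 = np.zeros(2)               # output-layer biases

x = np.array([0.5, -1.0, 2.0])   # one input vector
h = relu(x @ W1 + b1)            # hidden layer: nonlinear activation of weighted sums
y = h @ W2 + b2                  # output layer (left linear here)
print(y.shape)                   # -> (2,)
```

Without the `relu` call, the two matrix multiplications would collapse into a single linear map, which is why the nonlinearity is essential.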

Training an MLP relies on the backpropagation algorithm combined with gradient descent. During a forward pass, inputs propagate through the network to produce a prediction; the error between that prediction and the true label is then computed via a loss function. Backpropagation efficiently calculates the gradient of this loss with respect to every weight in the network by applying the chain rule layer by layer, and those gradients are used to update weights in the direction that reduces error. Repeated over many training examples, this process allows the network to learn rich internal representations of the data.
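The forward pass, backward pass, and weight update described above can be shown end to end on XOR, a classic task that a single linear perceptron cannot solve. This is a minimal sketch assuming sigmoid activations, a squared-error loss, and hand-picked hyperparameters (8 hidden units, learning rate 0.5); real training code would use an established framework.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(42)
W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))
lr = 0.5

for _ in range(10000):
    # Forward pass: propagate inputs to a prediction
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule applied layer by layer
    dp = (p - y) * p * (1 - p)            # gradient at the output layer
    dW2 = h.T @ dp; db2 = dp.sum(0, keepdims=True)
    dh = (dp @ W2.T) * h * (1 - h)        # gradient pushed back to the hidden layer
    dW1 = X.T @ dh; db1 = dh.sum(0, keepdims=True)
    # Gradient descent: step each weight in the error-reducing direction
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(np.round(p.ravel(), 2))  # trained predictions for targets [0, 1, 1, 0]
```

Each iteration repeats the forward pass, gradient computation, and update over the whole training set, which is how the hidden layer gradually learns an internal representation that makes XOR linearly separable at the output.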

MLPs became a cornerstone of practical machine learning after Rumelhart, Hinton, and Williams demonstrated the effectiveness of backpropagation in 1986, enabling networks with hidden layers to be trained reliably at scale for the first time. They remain widely used today as baseline models for tabular data and as components within larger architectures. While deep convolutional and transformer-based models have surpassed MLPs on tasks like image recognition and natural language processing, the MLP's simplicity, interpretability, and versatility keep it central to both research and production machine learning pipelines.

Related

  • Perceptron: A single-neuron linear classifier that learns binary decisions by adjusting weighted inputs. (Generality: 795)
  • Feedforward Neural Network: A neural network architecture where information flows strictly from input to output. (Generality: 838)
  • Neural Network: A layered system of interconnected nodes that learns patterns from data. (Generality: 947)
  • ANN (Artificial Neural Networks): Layered computational models that learn from data by adjusting weighted connections. (Generality: 928)
  • Backpropagation: The algorithm that trains neural networks by propagating error gradients backward through layers. (Generality: 922)
  • Forward Propagation: The process of passing input data through a neural network to produce output. (Generality: 838)