Negative Feedback

A self-correcting loop that drives system outputs back toward a desired target.

Year: 1986 · Generality: 792

Negative feedback is a control mechanism in which a system's output is measured, compared against a desired target, and the resulting error signal is used to push the output back toward that target. In machine learning, this principle is most visibly embodied in gradient descent: the loss function measures how far a model's predictions deviate from the ground truth, and the gradient of that loss is used to update weights in the direction that reduces the error. The "negative" in negative feedback refers to the fact that the corrective signal opposes the deviation rather than amplifying it, which is what produces stability rather than runaway divergence.
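
As a minimal sketch of this loop, the example below runs plain gradient descent on a one-dimensional squared-error loss; the target value, starting weight, and learning rate are illustrative choices, not taken from the text.

```python
# Gradient descent as a negative feedback loop (illustrative values).
target = 3.0            # desired output
w = 0.0                 # the "weight", which here directly produces the output
learning_rate = 0.1

for step in range(50):
    error = w - target              # measure the deviation from the target
    gradient = 2 * error            # gradient of the squared-error loss (w - target)^2
    w -= learning_rate * gradient   # corrective update opposes the deviation

print(round(w, 4))  # approaches 3.0: the error keeps shrinking
```

Because the update always points opposite the deviation, each step shrinks the error by a fixed factor instead of amplifying it.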

The mechanism appears throughout modern ML in forms that are not always labeled as feedback. Backpropagation is essentially a structured application of negative feedback across multiple layers of a neural network, propagating error signals backward so that each layer's parameters can be adjusted to reduce the overall loss. In reinforcement learning, the reward signal functions similarly: when an agent's action produces a worse-than-expected outcome, the negative feedback reduces the probability of that action being selected again in similar states, steering the policy toward higher-reward behavior over time.
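
To make the reinforcement learning case concrete, here is a small gradient-bandit sketch (arm reward means and hyperparameters are assumed for illustration): a reward below the running baseline yields a negative advantage, which lowers the chosen action's preference and hence its future selection probability.

```python
import math
import random

# Gradient-bandit policy update; reward means and step size are illustrative.
true_means = [1.0, 0.2]      # arm 0 pays more on average
preferences = [0.0, 0.0]     # action preferences behind a softmax policy
baseline = 0.0               # running average reward (the "expected" outcome)
lr = 0.1

for t in range(1, 2001):
    exp_prefs = [math.exp(h) for h in preferences]
    probs = [e / sum(exp_prefs) for e in exp_prefs]      # softmax over preferences
    action = random.choices([0, 1], weights=probs)[0]
    reward = random.gauss(true_means[action], 1.0)

    advantage = reward - baseline          # worse than expected => negative signal
    baseline += (reward - baseline) / t    # update the running baseline

    # Negative feedback: a negative advantage pushes the chosen action's
    # preference down, shifting probability toward better-performing actions.
    for a in (0, 1):
        indicator = 1.0 if a == action else 0.0
        preferences[a] += lr * advantage * (indicator - probs[a])

print([round(p, 3) for p in probs])  # most probability mass ends up on arm 0
```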

Negative feedback is also central to training stability. Techniques like batch normalization, learning rate scheduling, and adaptive optimizers such as Adam all serve, in part, to regulate the strength and direction of corrective updates so that the feedback loop converges rather than oscillates. Without properly tuned negative feedback, training can become unstable — gradients explode, weights diverge, and the model fails to learn. The challenge in deep learning is not just having a feedback signal but calibrating its magnitude so corrections are neither too aggressive nor too weak.
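
A toy comparison (assumed quadratic loss and step sizes) shows the calibration problem: a moderate learning rate lets the corrective loop converge, while an overly aggressive one overshoots the target further on every step.

```python
# Step-size calibration on the loss (w - target)^2; values are illustrative.
target = 3.0

def run(learning_rate, steps=20):
    w = 0.0
    for _ in range(steps):
        gradient = 2 * (w - target)
        w -= learning_rate * gradient   # each update multiplies the error by (1 - 2 * lr)
    return w

print(round(run(0.1), 4))   # |1 - 0.2| < 1: the error shrinks and w converges near 3.0
print(round(run(1.1), 4))   # |1 - 2.2| > 1: each correction overshoots more, so w diverges
```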

Understanding negative feedback as a unifying concept helps clarify why so many ML techniques share a common structure: measure error, compute a corrective signal, apply it to reduce future error, and repeat. This loop is the engine of supervised learning, the backbone of reinforcement learning, and a key design consideration in any adaptive system that must improve its behavior over time.
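
That shared structure can be written down directly; the sketch below is schematic (the function names are illustrative, not from the text) and plugs in the same squared-error setup used above.

```python
def feedback_loop(output, target, measure_error, correct, steps=100):
    """Generic negative feedback: measure the error, apply a correction, repeat."""
    for _ in range(steps):
        error = measure_error(output, target)
        output = correct(output, error)
    return output

# Example: scalar output, squared-error gradient as the corrective signal.
result = feedback_loop(
    output=0.0,
    target=3.0,
    measure_error=lambda out, tgt: out - tgt,
    correct=lambda out, err: out - 0.1 * (2 * err),
)
print(round(result, 4))  # approaches 3.0
```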

Related

Feedforward Neural Network

A neural network architecture where information flows strictly from input to output.

Generality: 838
Backpropagation

The algorithm that trains neural networks by propagating error gradients backward through layers.

Generality: 922
Neural Network

A layered system of interconnected nodes that learns patterns from data.

Generality: 947
Noise

Unwanted variation in data or signals that degrades machine learning model performance.

Generality: 794
Negative References

Techniques that suppress harmful, biased, or unethical outputs during AI text generation.

Generality: 337
Target

The correct output a model is trained to predict, serving as the learning signal.

Generality: 720