Envisioning is an emerging technology research institute and advisory.

2011 — 2026


RNN (Recurrent Neural Network)

Neural networks with feedback connections that process sequential data using internal memory.

Year: 1986 · Generality: 838

A Recurrent Neural Network (RNN) is a class of neural network architecture designed to handle sequential and time-series data by maintaining an internal hidden state that persists across time steps. Unlike feedforward networks, which process each input independently, RNNs feed their hidden state from one step forward into the next, effectively giving the network a form of short-term memory. This makes them naturally suited for tasks where context and ordering matter — such as language modeling, speech recognition, machine translation, and time-series forecasting.

At each time step, an RNN takes the current input and the previous hidden state, combines them through a learned weight matrix, and produces both an output and an updated hidden state. This recurrent connection creates a directed cycle through time, allowing information to theoretically persist across arbitrarily long sequences. In practice, however, standard RNNs struggle to retain information over long distances due to the vanishing gradient problem: as gradients are backpropagated through many time steps, they shrink exponentially, making it difficult to learn long-range dependencies. The complementary exploding gradient problem, where gradients grow uncontrollably, can also destabilize training.
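The per-step recurrence described above can be sketched in a few lines. This is a minimal illustrative implementation with hypothetical, randomly initialized weights (the names `W_xh`, `W_hh`, and the layer sizes are assumptions, not from the original), not a production model:

```python
import numpy as np

# Minimal vanilla RNN cell sketch. Weight matrices are randomly
# initialized here purely for illustration; in practice they are learned.
rng = np.random.default_rng(0)
input_size, hidden_size = 4, 8

W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (the recurrence)
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One time step: combine the current input with the previous hidden state."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Process a short sequence, carrying the hidden state forward each step.
sequence = rng.normal(size=(5, input_size))
h = np.zeros(hidden_size)
for x_t in sequence:
    h = rnn_step(x_t, h)  # h acts as the network's short-term memory

print(h.shape)  # the final hidden state summarizes the whole sequence
```

The repeated multiplication by `W_hh` inside this loop is also where the vanishing/exploding gradient problems originate: backpropagating through many steps multiplies gradients by (roughly) that matrix's Jacobian at every step.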

These limitations motivated the development of gated architectures. Long Short-Term Memory (LSTM) networks, introduced by Hochreiter and Schmidhuber in 1997, added explicit memory cells and learnable gates to control what information is stored, forgotten, or passed forward. Gated Recurrent Units (GRUs), proposed in 2014, offered a simpler variant with comparable performance. These extensions dramatically improved RNNs' ability to model long-range dependencies and became the dominant sequence modeling tools throughout the 2010s.
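The gating idea can be made concrete with a sketch of a single LSTM step. The weight names and sizes below are hypothetical and randomly initialized for illustration only; the point is the structure: three sigmoid gates regulating an explicit memory cell:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
input_size, hidden_size = 4, 8

# One weight matrix per gate, each acting on the concatenation [h_prev; x_t].
W_f, W_i, W_o, W_c = (
    rng.normal(scale=0.1, size=(hidden_size, hidden_size + input_size))
    for _ in range(4)
)

def lstm_step(x_t, h_prev, c_prev):
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(W_f @ z)          # forget gate: what to erase from the cell
    i = sigmoid(W_i @ z)          # input gate: what new information to store
    o = sigmoid(W_o @ z)          # output gate: what to expose as hidden state
    c_tilde = np.tanh(W_c @ z)    # candidate cell contents
    c = f * c_prev + i * c_tilde  # additive update eases gradient flow over time
    h = o * np.tanh(c)
    return h, c

h = c = np.zeros(hidden_size)
for x_t in rng.normal(size=(5, input_size)):
    h, c = lstm_step(x_t, h, c)

print(h.shape, c.shape)
```

Because the cell state `c` is updated additively rather than squashed through a recurrent matrix at every step, gradients can flow across many time steps without vanishing as quickly as in the vanilla RNN.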

While Transformer-based architectures have largely supplanted RNNs in natural language processing due to their parallelizability and superior handling of long contexts, RNNs remain relevant in settings with strict latency constraints, streaming data, or limited compute — such as embedded systems and real-time signal processing. Their sequential inductive bias also makes them a natural fit for certain scientific and engineering domains where data is inherently ordered and causal.

Related

LSTM (Long Short-Term Memory)
A recurrent neural network architecture that learns long-range dependencies in sequential data.
Generality: 838

Sequential Models
AI models that process ordered data by capturing dependencies across time or position.
Generality: 795

LNN (Liquid Neural Network)
A recurrent neural network that continuously adapts its internal state to process time-varying data.
Generality: 339

Sequence Model
A model that learns patterns and dependencies within ordered data sequences.
Generality: 840

Recursive Language Model
A language model that applies the same neural structure repeatedly to process hierarchical data.
Generality: 521

Gating Mechanism
A learned control system that selectively regulates information flow through a neural network.
Generality: 781