
Envisioning is an emerging technology research institute and advisory.


Research · Wintermute

In-Memory Computing Chips

Chips that compute directly in memory arrays, eliminating data transfer overhead

In-memory computing chips integrate processing logic directly within memory arrays, performing calculations where data is stored rather than moving data to separate processors. These systems use various memory technologies including SRAM, resistive RAM (ReRAM), or phase-change memory, combining memory cells with arithmetic units to execute operations in-place, dramatically reducing energy consumption and latency compared to traditional von Neumann architectures.
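The core idea — performing a multiply-accumulate where the weights are stored — can be illustrated with a small simulation. The sketch below models an idealized resistive crossbar: weights are programmed as cell conductances, input activations are applied as row voltages, and each column's summed current is the dot product (Ohm's law per cell, Kirchhoff's current law per column). All values and the differential-pair encoding are illustrative assumptions, not a description of any specific product.

```python
import numpy as np

# Idealized ReRAM crossbar: weights live in the array as conductances,
# and the matrix-vector multiply happens where the data is stored.
rng = np.random.default_rng(0)

weights = rng.uniform(-1.0, 1.0, size=(4, 3))   # logical weight matrix
g_max = 1e-4                                     # max cell conductance (S), illustrative

# Conductances cannot be negative, so each weight is encoded as a
# differential pair of two non-negative cells.
g_pos = np.clip(weights, 0, None) * g_max
g_neg = np.clip(-weights, 0, None) * g_max

voltages = np.array([0.2, -0.1, 0.4, 0.3])       # input activations as row voltages

# Column currents are the analog dot products; subtracting the pair
# recovers the sign of each weight.
currents = voltages @ g_pos - voltages @ g_neg

result = currents / g_max                        # rescale back to logical units
print(np.allclose(result, voltages @ weights))   # prints True
```

The multiply and the accumulate both happen inside the array in a single analog step, which is why no weight ever has to travel to a separate processor.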

This innovation addresses the fundamental inefficiency of traditional computer architectures, where moving data between processors and memory consumes significant energy and time. By computing where data resides, in-memory chips can achieve orders-of-magnitude improvements in energy efficiency for AI workloads, particularly the transformer models that underpin modern language processing. Companies such as Mythic and Syntiant, along with various research institutions, are developing these technologies, and some chips are already deployed in edge devices.
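The scale of the data-movement cost can be seen with a back-of-envelope calculation using rough 45 nm energy figures commonly cited from Horowitz's ISSCC 2014 keynote; exact values vary by process node and design, so treat the numbers below as order-of-magnitude assumptions.

```python
# Rough 45 nm energy figures (Horowitz, ISSCC 2014); illustrative only.
DRAM_ACCESS_PJ = 640.0   # fetch one 32-bit word from off-chip DRAM
FP32_MULT_PJ = 3.7       # one 32-bit floating-point multiply

# In a von Neumann design every weight fetched from DRAM pays the access
# energy on top of the arithmetic; computing in place avoids that fetch.
ratio = DRAM_ACCESS_PJ / FP32_MULT_PJ
print(f"Moving a word costs ~{ratio:.0f}x the multiply itself")
```

Under these assumptions the fetch dwarfs the arithmetic by more than two orders of magnitude, which is where the claimed efficiency gains of in-memory computing come from.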

The technology is particularly valuable for edge AI applications where power efficiency is critical, such as autonomous vehicles, robots, and satellites that must operate on limited power budgets. As transformer models become the foundation of modern AI, in-memory computing offers a pathway to deploying these powerful models in resource-constrained environments. However, the technology faces challenges including precision limitations, manufacturing variability, and the need for algorithms optimized for in-memory computation.
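The precision and variability challenges mentioned above can be made concrete with a toy experiment: modeling device-to-device variability as multiplicative Gaussian noise on each programmed weight and measuring how the analog result drifts from the exact digital answer. The noise model and magnitudes are illustrative assumptions, not measured device data.

```python
import numpy as np

# Toy model of analog precision limits: each stored weight deviates from
# its programmed value by multiplicative Gaussian noise (device variability).
rng = np.random.default_rng(42)
weights = rng.standard_normal((64, 64))
x = rng.standard_normal(64)
exact = x @ weights

for sigma in (0.0, 0.01, 0.05):
    noisy = weights * (1.0 + sigma * rng.standard_normal(weights.shape))
    err = np.linalg.norm(x @ noisy - exact) / np.linalg.norm(exact)
    print(f"variability {sigma:.0%}: relative error {err:.3f}")
```

The relative error grows roughly in proportion to the variability, which is why algorithms intended for analog in-memory hardware are often co-designed with noise-aware training or redundancy.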

TRL: 6/9 (Demonstrated)
Impact: 5/5
Investment: 5/5
Category: Hardware

Related Organizations

Mythic — United States · Startup · Developer · 98%
Develops Analog Matrix Processors (AMP) that perform compute-in-memory using flash memory cells.

EnCharge AI — United States · Startup · Developer · 95%
Developing in-memory computing technology for AI based on charge-based analog computation.

Untether AI — Canada · Startup · Developer · 95%
Creates "At-Memory" compute architectures that place processing elements directly adjacent to SRAM arrays.

Upmem — France · Startup · Developer · 95%
Develops Processing-in-Memory (PIM) solutions by integrating processors directly into DRAM chips.

Axelera AI — Netherlands · Startup · Developer · 92%
Designs the Metis AI platform based on in-memory computing for computer vision at the edge.

GSI Technology — United States · Company · Developer · 90%
Produces the Associative Processing Unit (APU), enabling compute-in-memory for search and AI.

Samsung Electronics — South Korea · Company · Developer · 90%
Global electronics leader; has demonstrated processing-in-memory technology with HBM-PIM.

SK Hynix — South Korea · Company · Developer · 90%
Developed GDDR6-AiM (Accelerator in Memory) technology for generative AI applications.

NeuroBlade — Israel · Startup · Developer · 88%
Building a SQL processing unit that computes directly inside memory to accelerate data analytics.

Syntiant — United States · Startup · Developer · 85%
Develops Neural Decision Processors with near-memory compute architectures for ultra-low power edge AI.


Connections

Analog In-Memory Compute Chips (Hardware)
Chips that compute directly in memory arrays, bypassing data transfer bottlenecks for AI workloads.
TRL 5/9 · Impact 4/5 · Investment 4/5

Memristor Crossbar Arrays (Hardware)
Programmable resistive grids that compute neural network operations directly in memory.
TRL 5/9 · Impact 4/5 · Investment 4/5

Edge Neuromorphic Processors (Hardware)
Brain-inspired chips running spiking neural networks at milliwatt power for always-on edge AI.
TRL 4/9 · Impact 4/5 · Investment 4/5

3D-Stacked Neuromorphic Architectures (Hardware)
Vertically stacked chips mimicking brain connectivity for spiking neural networks.
TRL 3/9 · Impact 5/5 · Investment 3/5

Reversible Computing Architectures (Hardware)
Logic circuits that run backwards to recover energy instead of dissipating it as heat.
TRL 3/9 · Impact 3/5 · Investment 3/5

Cryogenic AI Processors (Hardware)
AI chips cooled to cryogenic temperatures for ultra-fast, near-zero-power computation.
TRL 4/9 · Impact 4/5 · Investment 4/5

Book a research session

Bring this signal into a focused decision sprint with analyst-led framing and synthesis.