Envisioning is an emerging technology research institute and advisory.

In-Memory Computing Chips | Wintermute | Envisioning

In-Memory Computing Chips

Logic and memory co-located to remove von Neumann bottlenecks.

Related Organizations

Mythic — US · Startup — Developer — 98%
Develops Analog Matrix Processors (AMP) that perform compute-in-memory using flash memory cells.

EnCharge AI — US · Startup — Developer — 95%
Develops in-memory computing technology for AI based on charge-based analog computation.

Untether AI — CA · Startup — Developer — 95%
Creates 'At-Memory' compute architectures that place processing elements directly adjacent to SRAM arrays.

Upmem — FR · Startup — Developer — 95%
Develops Processing-in-Memory (PIM) solutions by integrating processors directly into DRAM chips.

Supporting Evidence

Evidence data is not available for this technology yet.

Axelera AI — NL · Startup — Developer — 92%
Designs the Metis AI platform based on in-memory computing for computer vision at the edge.

GSI Technology — US · Company — Developer — 90%
Produces the Associative Processing Unit (APU), enabling compute-in-memory for search and AI.

Samsung Electronics — KR · Company — Developer — 90%
Global electronics leader developing processing-in-memory (HBM-PIM) memory technology.

SK Hynix — KR · Company — Developer — 90%
Developed GDDR6-AiM (Accelerator in Memory) technology for generative AI applications.

NeuroBlade — IL · Startup — Developer — 88%
Building a SQL processing unit that computes directly inside memory to accelerate data analytics.

Syntiant — US · Startup — Developer — 85%
Develops Neural Decision Processors with near-memory compute architectures for ultra-low-power edge AI.

Developer
Connections

Analog In-Memory Compute Chips (Hardware)
Chips optimized for transformer attention paths and continual learning.
TRL 5/9 · Impact 4/5 · Investment 4/5

Memristor Crossbar Arrays (Hardware)
Resistive memory arrays executing MAC operations natively.
TRL 5/9 · Impact 4/5 · Investment 4/5

Edge Neuromorphic Processors (Hardware)
Ultra-low-power spiking chips for always-on embodied agents.
TRL 4/9 · Impact 4/5 · Investment 4/5

3D-Stacked Neuromorphic Architectures (Hardware)
Hardware supporting sparse, recurrent, and spiking behavior.
TRL 3/9 · Impact 5/5 · Investment 3/5

Reversible Computing Architectures (Hardware)
Low-entropy logic exploring energy-efficient AI pipelines.
TRL 3/9 · Impact 3/5 · Investment 3/5

Cryogenic AI Processors (Hardware)
Superconducting logic enabling ultra-low-power AI acceleration.
TRL 4/9 · Impact 4/5 · Investment 4/5

In-memory computing chips integrate processing logic directly within memory arrays, performing calculations where data is stored rather than moving it to a separate processor. These systems use memory technologies such as SRAM, resistive RAM (ReRAM), and phase-change memory, combining memory cells with arithmetic units to execute operations in place, dramatically reducing energy consumption and latency compared with traditional von Neumann architectures.
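The in-place multiply-accumulate idea can be sketched in plain Python. In a resistive crossbar, stored weights become cell conductances, inputs arrive as row voltages, and each column current is a dot product: Ohm's law performs the multiplies and Kirchhoff's current law performs the sums. All names below are illustrative, not any vendor's API.

```python
# Illustrative model of a crossbar compute-in-memory MAC.
# Weights live in the array as conductances G[i][j]; inputs are row
# voltages V[i]; the column current I[j] = sum_i V[i] * G[i][j] is a
# dot product computed where the data is stored.

def crossbar_mac(voltages, conductances):
    """One analog matrix-vector multiply: I_j = sum_i V_i * G_ij."""
    cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(cols)]

G = [[0.5, 1.0],
     [2.0, 0.25]]          # stored weights (conductances, arbitrary units)
V = [1.0, 2.0]             # input activations (row voltages)

print(crossbar_mac(V, G))  # -> [4.5, 1.5]
```

The point of the sketch is that the loop body never "fetches" G from elsewhere: the weights never move, only the small input and output vectors do.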

This innovation addresses the fundamental inefficiency of traditional computer architectures, where moving data between processors and memory consumes significant energy and time. By computing where data resides, in-memory chips can achieve orders-of-magnitude improvements in energy efficiency for AI workloads, particularly the transformer models used in language processing and other applications. Companies such as Mythic and Syntiant, along with various research institutions, are developing these technologies, and some chips are already deployed in edge devices.
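The "orders of magnitude" claim can be made concrete with a back-of-envelope comparison. The figures below are commonly cited ~45 nm process estimates (e.g. Horowitz, ISSCC 2014) and should be treated as rough orders of magnitude, not exact values:

```python
# Rough per-operation energy costs (picojoules), ~45 nm estimates.
# Treat these as order-of-magnitude figures, not datasheet values.
PJ_DRAM_READ_32BIT = 640.0   # fetch one 32-bit word from off-chip DRAM
PJ_MAC_8BIT        = 0.3     # one 8-bit multiply-accumulate on-chip

ratio = PJ_DRAM_READ_32BIT / PJ_MAC_8BIT
print(f"moving one operand costs ~{ratio:.0f}x the arithmetic it feeds")
```

This is why eliminating the memory-to-processor trip, rather than speeding up the arithmetic itself, is where in-memory designs find their headroom.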

The technology is particularly valuable for edge AI applications where power efficiency is critical, such as autonomous vehicles, robots, and satellites that must operate on limited power budgets. As transformer models become the foundation of modern AI, in-memory computing offers a pathway to deploying these powerful models in resource-constrained environments. However, the technology faces challenges including precision limitations, manufacturing variability, and the need for algorithms optimized for in-memory computation.
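The precision and manufacturing-variability challenges mentioned above can be illustrated by perturbing the ideal dot product with per-cell conductance spread. This is a hypothetical toy model (the `sigma` parameter and `noisy_mac` helper are inventions for illustration), not a characterization of any real device:

```python
import random

def noisy_mac(voltages, conductances, sigma=0.05, seed=0):
    """Dot products with multiplicative per-cell conductance variation.
    sigma models device-to-device manufacturing spread (illustrative)."""
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    cols = len(conductances[0])
    return [sum(v * row[j] * (1.0 + rng.gauss(0.0, sigma))
                for v, row in zip(voltages, conductances))
            for j in range(cols)]

V = [1.0, 2.0, 0.5]
G = [[0.5, 1.0], [2.0, 0.25], [1.0, 1.0]]

ideal = [sum(v * row[j] for v, row in zip(V, G)) for j in range(2)]
noisy = noisy_mac(V, G, sigma=0.05)
errors = [abs(a - b) / abs(a) for a, b in zip(ideal, noisy)]
print(ideal, noisy, errors)
```

Neural networks tolerate a few percent of this analog error well, but exact workloads do not, which is why algorithm-hardware co-design (quantization-aware training, calibration) recurs across the companies listed above.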

TRL 6/9 (Demonstrated) · Impact 5/5 · Investment 5/5 · Category: Hardware
