Processing-in-Memory (PIM)

SK Hynix is embedding compute logic directly inside DRAM chips, reducing the data movement bottleneck that wastes up to 60% of energy in conventional AI systems.

SK Hynix's AiM (Accelerator-in-Memory) technology embeds processing units within HBM and GDDR memory chips, allowing basic compute operations to happen where data already resides rather than shuttling it to a separate CPU or GPU. The first-generation AiM chips target AI inference workloads like recommendation systems and natural language processing.
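To make the idea concrete, the toy model below sketches a bank-parallel matrix-vector product, the core operation behind both recommendation and language-model inference: the weight matrix is partitioned across memory banks, each bank's local compute unit reduces its own slice, and only small partial results ever cross the bus. The bank count and layout here are illustrative assumptions, not SK Hynix's actual AiM microarchitecture.

```python
import numpy as np

# Functional sketch of a bank-parallel in-memory GEMV. BANKS is a
# hypothetical bank count; real AiM hardware details differ.
BANKS = 16

def pim_gemv(weights: np.ndarray, vector: np.ndarray) -> np.ndarray:
    """Partition the weight matrix row-wise across banks; each bank's
    MAC unit reduces its slice locally, so only per-bank partial
    results (not the full matrix) leave the memory chip."""
    bank_slices = np.array_split(weights, BANKS, axis=0)
    partials = [rows @ vector for rows in bank_slices]  # local MACs
    return np.concatenate(partials)

w = np.random.randn(1024, 1024)
x = np.random.randn(1024)
assert np.allclose(pim_gemv(w, x), w @ x)  # same math, less traffic
```

The arithmetic is identical to an ordinary matrix-vector multiply; the gain comes entirely from where it executes.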

The data movement problem is the dominant energy cost in modern computing: moving a byte of data from DRAM to a processor consumes 100-1000x more energy than the computation itself. PIM architectures attack this bottleneck directly by placing simple multiply-accumulate (MAC) units inside the memory array. Samsung is pursuing parallel PIM research with its HBM-PIM line, making South Korea the only country where two companies are simultaneously developing production-grade processing-in-memory.
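A back-of-envelope model makes the magnitude visible. The per-byte and per-MAC energy constants below are hypothetical round numbers chosen to fall inside the 100-1000x gap cited above, and the fraction of traffic PIM avoids is likewise an assumption:

```python
# Hypothetical energy constants (picojoules); illustrative only.
DRAM_READ_PJ_PER_BYTE = 500.0  # moving one byte from DRAM to the processor
MAC_PJ = 1.0                   # one multiply-accumulate executed locally

def conventional_pj(bytes_moved: int, macs: int) -> float:
    """Every operand crosses the memory bus before being computed on."""
    return bytes_moved * DRAM_READ_PJ_PER_BYTE + macs * MAC_PJ

def pim_pj(bytes_moved: int, macs: int, avoided: float = 0.9) -> float:
    """MAC units sit inside the array; `avoided` is the assumed
    fraction of off-chip traffic that never happens."""
    return bytes_moved * DRAM_READ_PJ_PER_BYTE * (1 - avoided) + macs * MAC_PJ

# One 4096x4096 int8 matrix-vector product, typical of an inference layer.
n = 4096
bytes_moved = macs = n * n  # the weight matrix dominates both terms

print(f"conventional: {conventional_pj(bytes_moved, macs) / 1e6:.0f} uJ")
print(f"PIM:          {pim_pj(bytes_moved, macs) / 1e6:.0f} uJ")
```

Even with compute energy held constant, cutting off-chip traffic shrinks the total by nearly an order of magnitude under these assumptions.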

PIM is unlikely to replace GPUs for large-scale AI training, but for inference at the edge and in data centers, it promises 2-5x energy efficiency improvements. As AI inference costs begin to dwarf training costs, PIM could become a critical technology for sustainable AI deployment.
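For a sense of what that range means over a model's lifetime, here is a quick worked example, assuming (hypothetically) that inference accounts for 80% of lifetime energy and taking 3x as a midpoint of the quoted 2-5x range:

```python
# Hypothetical lifetime energy split and PIM gain; both are assumptions.
train_share, infer_share = 0.2, 0.8
pim_gain = 3.0  # midpoint of the 2-5x range above

remaining = train_share + infer_share / pim_gain
print(f"lifetime energy with PIM inference: {remaining:.0%} of baseline")
```

Under those assumptions, total lifetime energy falls to roughly 47% of baseline, which is why inference-side efficiency matters so much for sustainable deployment.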

TRL: 6/9 (Demonstrated)
Impact: 3/5
Investment: 3/5
Category: Hardware
