Envisioning is an emerging technology research institute and advisory.

2011 — 2026


Lossy Compression

A compression method that reduces data size by permanently discarding less perceptible information.

Year: 1990 · Generality: 694

Lossy compression is a data encoding strategy that achieves significant file size reductions by selectively discarding information judged to be less critical to the end user's experience. Unlike lossless compression, which preserves every bit of original data, lossy methods accept a degree of irreversible quality degradation in exchange for dramatically smaller file sizes. This trade-off is especially practical for audio, image, and video data, where human perceptual systems are insensitive to certain frequencies, fine textures, or subtle color gradients — meaning their removal goes largely unnoticed.

The mechanics of lossy compression typically rely on transform coding, quantization, and perceptual modeling. In image compression (e.g., JPEG), the image is decomposed via a discrete cosine transform into frequency components, and high-frequency details that the human eye struggles to resolve are quantized aggressively or dropped entirely. In audio compression (e.g., MP3), psychoacoustic models identify sounds masked by louder simultaneous tones and remove them. The degree of compression — and thus quality loss — is usually tunable via a quality parameter, allowing practitioners to balance fidelity against storage or bandwidth constraints.
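The transform-then-quantize pipeline described above can be sketched in miniature. The following is an illustrative example (not any production codec): a naive 1-D DCT-II over an 8-sample block, with quantization — the step that actually discards information — done by rounding each coefficient to a multiple of a tunable step size, mirroring the quality parameter mentioned above.

```python
import math

def dct(block):
    # Naive DCT-II: express the signal as cosine-frequency coefficients.
    N = len(block)
    return [sum(x * math.cos(math.pi * (n + 0.5) * k / N)
                for n, x in enumerate(block))
            for k in range(N)]

def idct(coeffs):
    # Inverse transform (DCT-III), scaled to undo the forward DCT.
    N = len(coeffs)
    return [(coeffs[0] / 2 + sum(c * math.cos(math.pi * (n + 0.5) * k / N)
                                 for k, c in enumerate(coeffs) if k > 0)) * 2 / N
            for n in range(N)]

def lossy_roundtrip(block, step):
    # Quantization is the lossy step: rounding coefficients to multiples
    # of `step` irreversibly discards small (mostly high-frequency) detail.
    quantized = [round(c / step) * step for c in dct(block)]
    return idct(quantized)
```

A larger `step` yields fewer distinct coefficient values (better compressibility) at the cost of a blurrier reconstruction — the fidelity/size trade-off in one parameter.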

In machine learning, lossy compression appears in several important contexts. Large-scale training datasets consisting of images, audio, or video are almost universally stored in lossy formats, and the compression artifacts introduced can subtly influence model behavior, generalization, and robustness. More recently, lossy compression principles have been applied directly within ML pipelines: gradient compression during distributed training discards small gradient updates to reduce communication overhead, and neural network weight quantization can be viewed as a form of lossy compression that shrinks model size with minimal accuracy loss. Learned image compression — where neural networks are trained end-to-end to compress and reconstruct images — has emerged as a research frontier that outperforms classical codecs at equivalent bit rates.
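The gradient-compression idea mentioned above can be illustrated with a minimal top-k sparsification sketch (function names are illustrative, not from any particular framework): only the k largest-magnitude gradient entries are transmitted, and the rest are dropped — a deliberately lossy encoding that cuts communication cost in distributed training.

```python
def topk_sparsify(grad, k):
    # Keep only the k largest-magnitude entries; everything else is
    # discarded (lossy), so only k (index, value) pairs need be sent.
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    return {i: grad[i] for i in idx}

def densify(sparse_grad, n):
    # Reconstruct a dense gradient; dropped entries come back as zero.
    return [sparse_grad.get(i, 0.0) for i in range(n)]

# Example: a 5-element gradient reduced to its 2 dominant components.
grad = [0.01, -2.0, 0.03, 1.5, -0.02]
restored = densify(topk_sparsify(grad, k=2), len(grad))  # [0.0, -2.0, 0.0, 1.5, 0.0]
```

Practical schemes often accumulate the discarded residual locally and add it to the next step's gradient, so the "lost" information is deferred rather than thrown away entirely.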

Understanding lossy compression matters for ML practitioners because data quality, storage efficiency, and inference latency are deeply intertwined. Choosing compression settings carelessly can introduce systematic biases into training data or degrade the inputs seen at inference time, while thoughtful compression enables scalable deployment of models on resource-constrained devices.

Related

Model Compression
Techniques that shrink machine learning models while preserving predictive accuracy.
Generality: 795

Contextual Optical Compression
Compressing optical signals before digitization using task-aware, AI-optimized sensing strategies.
Generality: 111

Loss Optimization
Iteratively adjusting model parameters to minimize prediction error measured by a loss function.
Generality: 875

LAQ (Locally-Adaptive Quantization)
Quantization method that adjusts precision locally based on data characteristics for better efficiency.
Generality: 101

Loss Function
A mathematical measure of error that guides model training toward better predictions.
Generality: 909

Quantization
Reducing numerical precision of model weights and activations to shrink size and accelerate inference.
Generality: 794