Envisioning is an emerging technology research institute and advisory.


2011 — 2026


Interestingness

A measure of how novel, surprising, or valuable information is to a learner or system.

Year: 1999 · Generality: 520

In machine learning and data mining, interestingness refers to a family of metrics used to evaluate whether a discovered pattern, rule, or piece of information is worth surfacing to a user or system. Rather than treating all statistically valid findings as equally valuable, interestingness measures help prioritize outputs that are novel, unexpected, actionable, or otherwise meaningful. This is especially important in knowledge discovery tasks where the sheer volume of technically valid patterns far exceeds what any human analyst could usefully review.

Interestingness metrics generally fall into two broad categories: objective and subjective. Objective measures rely on statistical properties of the data itself — such as support, confidence, lift, or surprise — to score patterns independently of any particular user. Subjective measures, by contrast, incorporate user beliefs, goals, or prior knowledge, flagging patterns as interesting precisely when they contradict expectations or reveal something the user did not already know. In practice, effective systems often combine both, using statistical filters to prune the search space before applying user-aware scoring.
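The objective measures named above can be made concrete with a small sketch. Assuming a toy market-basket dataset and an association rule A → B, support, confidence, and lift reduce to simple frequency ratios (the data and function name here are illustrative, not from any particular library):

```python
# Minimal sketch of objective interestingness metrics for an
# association rule A -> B over a list of transaction sets.

def rule_metrics(transactions, antecedent, consequent):
    n = len(transactions)
    a = sum(1 for t in transactions if antecedent <= t)            # count(A)
    b = sum(1 for t in transactions if consequent <= t)            # count(B)
    ab = sum(1 for t in transactions if (antecedent | consequent) <= t)
    support = ab / n                                # P(A and B)
    confidence = ab / a if a else 0.0               # P(B | A)
    lift = confidence / (b / n) if b else 0.0       # P(B | A) / P(B)
    return support, confidence, lift

# Hypothetical toy transactions
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
]
s, c, l = rule_metrics(transactions, {"bread"}, {"milk"})
```

A lift near 1 means the rule tells us nothing beyond the base rate of B; lift well above 1 is one crude, user-independent signal that the pattern may be worth surfacing.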

The concept has found application across a wide range of ML subfields. In recommendation systems, interestingness-inspired diversity and serendipity metrics push against the tendency of collaborative filtering to produce obvious, redundant suggestions. In reinforcement learning, intrinsic motivation frameworks operationalize interestingness as a curiosity signal — rewarding agents for exploring states that are novel or hard to predict — enabling learning in sparse-reward environments. In computational creativity, interestingness guides generative models toward outputs that balance coherence with surprise, avoiding both random noise and tedious predictability.
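The curiosity signal described for reinforcement learning can be sketched as a forward model whose prediction error doubles as intrinsic reward. This is a toy linear model under illustrative names, not any specific framework's API; the key property is that repeated transitions become less rewarding as the model learns them:

```python
import numpy as np

class ForwardModel:
    """Toy forward model: intrinsic reward = error predicting next state."""

    def __init__(self, dim, lr=0.1):
        self.W = np.zeros((dim, dim))  # linear next-state predictor
        self.lr = lr

    def curiosity_reward(self, state, next_state):
        pred = self.W @ state
        error = next_state - pred
        reward = float(np.mean(error ** 2))  # high error => "interesting"
        # Online update: familiar transitions become less rewarding over time.
        self.W += self.lr * np.outer(error, state)
        return reward

rng = np.random.default_rng(0)
model = ForwardModel(dim=4)
s, s_next = rng.normal(size=4), rng.normal(size=4)
r1 = model.curiosity_reward(s, s_next)
r2 = model.curiosity_reward(s, s_next)  # same transition, now more familiar
```

After one update the same transition yields a smaller reward (`r2 < r1`), which is the mechanism that pushes the agent away from what it already predicts well and toward novel states.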

Despite its intuitive appeal, interestingness remains difficult to formalize universally. What is surprising to one user may be obvious to another, and metrics that work well in one domain often fail to transfer. This has driven ongoing research into adaptive and personalized interestingness measures, as well as theoretical work on connecting the concept to information-theoretic quantities like Kolmogorov complexity and prediction error. As AI systems are increasingly expected to surface insights rather than just process data, principled notions of interestingness are becoming more, not less, important.
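One of the information-theoretic quantities alluded to above has a one-line form: the surprisal of an event, −log₂ p, measures how unexpected the event is given its probability, and serves as a simple objective proxy for interestingness (rare events carry more information). A minimal illustration:

```python
import math

def surprisal(p):
    """Surprisal (self-information) of an event with probability p, in bits."""
    return -math.log2(p)

one_bit = surprisal(0.5)    # a fair coin flip carries exactly 1 bit
rare = surprisal(0.01)      # a 1-in-100 event carries ~6.64 bits
```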

Related

Surprise
A measure of how unexpected or novel an outcome is given a model's predictions.
Generality: 620

Surprisal
A measure of how unexpected an event is, based on its probability.
Generality: 620

Artificial Curiosity
An intrinsic motivation mechanism that drives AI agents to explore novel environments autonomously.
Generality: 592

Salience
A measure of how much certain features or regions stand out as important.
Generality: 694

Interpretability
The degree to which humans can understand why an AI system made a decision.
Generality: 800

Emergence
Complex behaviors arising from simple component interactions that no single component exhibits alone.
Generality: 752