A measure of how novel, surprising, or valuable information is to a learner or system.
In machine learning and data mining, interestingness refers to a family of metrics used to evaluate whether a discovered pattern, rule, or piece of information is worth surfacing to a user or system. Rather than treating all statistically valid findings as equally valuable, interestingness measures help prioritize outputs that are novel, unexpected, actionable, or otherwise meaningful. This is especially important in knowledge discovery tasks where the sheer volume of technically valid patterns far exceeds what any human analyst could usefully review.
Interestingness metrics generally fall into two broad categories: objective and subjective. Objective measures rely on statistical properties of the data itself — such as support, confidence, lift, or surprise — to score patterns independently of any particular user. Subjective measures, by contrast, incorporate user beliefs, goals, or prior knowledge, flagging patterns as interesting precisely when they contradict expectations or reveal something the user did not already know. In practice, effective systems often combine both, using statistical filters to prune the search space before applying user-aware scoring.
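As a concrete illustration of the objective side, the classic association-rule measures named above can be computed directly from co-occurrence counts. The following is a minimal sketch, assuming transactions are represented as Python sets; the function name and toy data are illustrative, not drawn from any particular library.

```python
def rule_metrics(transactions, antecedent, consequent):
    """Compute support, confidence, and lift for the rule antecedent -> consequent.

    transactions: list of sets of items
    antecedent, consequent: sets of items
    """
    n = len(transactions)
    count_a = sum(1 for t in transactions if antecedent <= t)
    count_c = sum(1 for t in transactions if consequent <= t)
    count_ac = sum(1 for t in transactions if (antecedent | consequent) <= t)

    support = count_ac / n                                 # P(A and C)
    confidence = count_ac / count_a if count_a else 0.0    # P(C | A)
    # Lift compares observed co-occurrence to what independence predicts:
    # lift > 1 means A and C appear together more often than chance.
    lift = confidence / (count_c / n) if count_c else 0.0
    return support, confidence, lift

# Toy example: does buying bread make butter more likely?
baskets = [{"bread", "butter"}, {"bread", "jam"},
           {"bread", "butter", "milk"}, {"milk"}]
print(rule_metrics(baskets, {"bread"}, {"butter"}))  # (0.5, 0.666..., 1.333...)
```

Note that these scores are computed from the data alone, with no reference to what a user believes, which is exactly what makes them objective measures; a subjective measure would additionally ask whether a lift of 1.33 contradicts anything the analyst expected.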
The concept has found application across a wide range of ML subfields. In recommendation systems, interestingness-inspired diversity and serendipity metrics push against the tendency of collaborative filtering to produce obvious, redundant suggestions. In reinforcement learning, intrinsic motivation frameworks operationalize interestingness as a curiosity signal — rewarding agents for exploring states that are novel or hard to predict — enabling learning in sparse-reward environments. In computational creativity, interestingness guides generative models toward outputs that balance coherence with surprise, avoiding both random noise and tedious predictability.
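To make the curiosity signal concrete, here is a minimal sketch of one common operationalization: a count-based novelty bonus added to the extrinsic reward. The class name, the bonus scale, and the 1/sqrt(n) decay are illustrative assumptions rather than a fixed standard; prediction-error variants replace the visit count with the error of a learned forward model.

```python
import math
from collections import defaultdict

class CountBasedCuriosity:
    """Intrinsic reward that decays as a state is revisited.

    Rare states earn a large bonus and familiar states almost none,
    so the agent is nudged toward novelty even when extrinsic reward
    is sparse.
    """
    def __init__(self, bonus_scale=1.0):
        self.bonus_scale = bonus_scale
        self.visit_counts = defaultdict(int)

    def reward(self, state, extrinsic_reward):
        self.visit_counts[state] += 1
        # Classic 1/sqrt(n) exploration bonus: shrinks with familiarity.
        intrinsic = self.bonus_scale / math.sqrt(self.visit_counts[state])
        return extrinsic_reward + intrinsic

curiosity = CountBasedCuriosity(bonus_scale=0.5)
print(curiosity.reward(state=(0, 0), extrinsic_reward=0.0))  # 0.5 on first visit
print(curiosity.reward(state=(0, 0), extrinsic_reward=0.0))  # ~0.354 on second visit
```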
Despite its intuitive appeal, interestingness remains difficult to formalize universally. What is surprising to one user may be obvious to another, and metrics that work well in one domain often fail to transfer. This has driven ongoing research into adaptive and personalized interestingness measures, as well as theoretical work on connecting the concept to information-theoretic quantities like Kolmogorov complexity and prediction error. As AI systems are increasingly expected to surface insights rather than just process data, principled notions of interestingness are becoming more, not less, important.
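One of the simplest information-theoretic proxies alluded to above is Shannon surprisal: an observation is scored by how improbable the current model found it, -log p(x). The sketch below assumes a toy categorical model built from frequency counts with add-one smoothing; it is a hypothetical illustration of the prediction-error view, not a full treatment of the open formalization problem.

```python
import math
from collections import Counter

def surprisal_scores(history, new_items):
    """Score each new item by -log2 p(item) under frequencies seen so far.

    High surprisal means low predicted probability, i.e. more
    'interesting' under the prediction-error view. Unseen items get
    a nonzero probability via add-one (Laplace) smoothing.
    """
    counts = Counter(history)
    vocab = set(history) | set(new_items)
    total = len(history) + len(vocab)  # add-one smoothing denominator
    return {
        item: -math.log2((counts[item] + 1) / total)
        for item in new_items
    }

past = ["sunny"] * 8 + ["rain"] * 2
print(surprisal_scores(past, ["sunny", "rain", "snow"]))
# 'snow' (never observed) scores highest; 'sunny' (common) scores lowest.
```

Even this toy model makes the personalization problem visible: swap in a different history and the same observation receives a different score, which is precisely why a fixed, universal interestingness measure has proved elusive.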