A training method that fine-tunes language models directly from human preference data.
Direct Preference Optimization (DPO) is a fine-tuning algorithm that aligns large language models with human preferences without requiring a separately trained reward model. Introduced in a 2023 paper by Rafailov et al., DPO reframes the reinforcement learning from human feedback (RLHF) pipeline as a straightforward supervised learning problem. Given a dataset of preference pairs — where human annotators have indicated which of two model responses they prefer — DPO directly optimizes the language model's policy to increase the likelihood of preferred responses relative to rejected ones, using a binary cross-entropy objective derived from the Bradley-Terry preference model.
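In the paper's notation, with $\pi_\theta$ the policy being trained, $\pi_{\mathrm{ref}}$ a frozen reference model, and $(x, y_w, y_l)$ a prompt paired with its preferred and rejected responses, the objective is

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
$$

where $\sigma$ is the logistic function and $\beta$ is a temperature that controls how strongly the policy is kept close to the reference.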
The key insight behind DPO is that the KL-constrained reward-maximization objective of RLHF has a closed-form optimal policy, which can be inverted to express the reward as a function of the policy's own log-probabilities relative to the reference model. Substituting this implicit reward into the Bradley-Terry model yields a loss that depends only on the policy, eliminating the need for an explicit reward model and the complex, often unstable reinforcement learning loop used in PPO-based RLHF. This makes DPO substantially simpler to implement, more computationally efficient, and easier to tune. The training signal comes entirely from the relative ranking of response pairs, and the reference model (typically the supervised fine-tuned model) serves as a regularizer that prevents the policy from drifting too far from coherent language generation.
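As a minimal sketch of how this objective looks in practice, the following PyTorch function computes the DPO loss from precomputed sequence log-probabilities; the function name, argument layout, and default β are illustrative rather than taken from any released implementation.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Each argument has shape (batch,): the summed per-token
    log-probabilities of the chosen / rejected responses under
    the policy or the frozen reference model."""
    # Implicit rewards: beta-scaled log-ratios of policy vs. reference.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Bradley-Terry preference probability as binary cross-entropy:
    # maximize log-sigmoid of the reward margin.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

In practice the reference log-probabilities are computed under torch.no_grad(), since the reference model receives no gradient updates, while the policy log-probabilities come from the forward pass being trained.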
DPO quickly became one of the dominant alignment techniques in the open-source LLM community because it achieves results competitive with PPO-based RLHF at a fraction of the engineering complexity. It has been used to train or fine-tune models across instruction-following, summarization, and dialogue tasks. Subsequent work has extended DPO with variants such as IPO, KTO, and SimPO, each addressing a specific limitation, such as overfitting to preference noise or the dependence on paired data. DPO represents a broader shift in alignment research toward stable, scalable, and reproducible training methods that can be applied with modest computational resources.