A formal model representing system behavior through states and state-changing transitions.
A transition system is a mathematical framework for modeling dynamic behavior by defining a set of states, a set of possible actions or events, and the transitions those actions induce between states. Formally, it consists of a state space, an initial state or set of initial states, and a transition relation specifying which successor states each state-action pair can lead to (a function when the system is deterministic, a relation when it is not). This abstraction is powerful precisely because it is general: deterministic finite automata, Markov decision processes, planning domains, and concurrent programs can all be expressed as transition systems, making the formalism a unifying language across many subfields of AI and computer science.
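The formal ingredients above can be written down directly. Below is a minimal sketch of a deterministic transition system in Python; the two-position robot domain, its state names, and its action names are illustrative assumptions, not part of any standard library.

```python
# A tiny deterministic transition system: a robot that can occupy one of
# two positions. States, actions, and names are illustrative assumptions.
states = {"left", "right"}
initial = "left"
actions = {"move_left", "move_right", "stay"}

# The transition relation, here a function encoded as a dict mapping
# (state, action) pairs to successor states.
transitions = {
    ("left", "move_right"): "right",
    ("left", "stay"): "left",
    ("right", "move_left"): "left",
    ("right", "stay"): "right",
}

def step(state, action):
    """Apply an action; pairs absent from the dict are inapplicable."""
    return transitions.get((state, action))

print(step("left", "move_right"))  # right
print(step("right", "move_right"))  # None (inapplicable)
```

A nondeterministic system would map each pair to a *set* of successors instead of a single state, which is why the general definition uses a relation rather than a function.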
In practice, transition systems serve as the backbone of automated planning and search. Classical planners, whether built on A* search or STRIPS-style representations, operate by exploring a transition system whose states encode world configurations and whose transitions correspond to applicable actions. Model checking tools use transition systems to exhaustively verify that a system satisfies temporal logic properties — for example, that a robot controller never enters an unsafe state. In reinforcement learning, the environment is typically modeled as a Markov decision process, which is a transition system augmented with reward signals and probability distributions over successor states.
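The exploration a planner performs can be sketched in a few lines. Here is a bare-bones breadth-first forward search over an explicit transition system; the one-dimensional corridor domain and its bounds are illustrative assumptions, standing in for the richer state encodings real planners use.

```python
from collections import deque

def find_plan(initial, goal, actions, step):
    """Breadth-first search for a sequence of actions reaching `goal`.

    `step(state, action)` returns the successor state, or None when the
    action is inapplicable. Returns a list of actions, or None if the
    goal is unreachable.
    """
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action in actions:
            nxt = step(state, action)
            if nxt is not None and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

# Illustrative domain: a corridor with positions 0..4.
def corridor_step(pos, action):
    nxt = pos + (1 if action == "right" else -1)
    return nxt if 0 <= nxt <= 4 else None

print(find_plan(0, 3, ["right", "left"], corridor_step))
# ['right', 'right', 'right']
```

A* follows the same skeleton but orders the frontier by cost-so-far plus a heuristic estimate, rather than first-in first-out.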
The formalism matters in AI because it provides a rigorous substrate for reasoning about correctness, reachability, and optimality. Questions like "can the agent reach the goal?" or "is this policy safe under all possible event sequences?" reduce to graph-theoretic or logical queries over the underlying transition system. This enables both theoretical analysis — proving guarantees about algorithm behavior — and practical tools like symbolic planners and formal verifiers that scale to large state spaces through techniques such as binary decision diagrams and heuristic search.
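The reduction of "is this state ever reachable?" to a graph query can be made concrete. The sketch below computes the set of states reachable from the initial state and checks a simple safety property in the style of an explicit-state model checker; the toy controller graph and the `fault` state are illustrative assumptions.

```python
def reachable(initial, successors):
    """Return the set of all states reachable from `initial` under the
    nondeterministic successor function `successors`."""
    seen, stack = {initial}, [initial]
    while stack:
        state = stack.pop()
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Illustrative controller: each state maps to its possible successors.
graph = {
    "idle": ["moving"],
    "moving": ["idle", "charging"],
    "charging": ["idle"],
    "fault": ["fault"],  # unsafe sink state
}
unsafe = {"fault"}

reach = reachable("idle", lambda s: graph.get(s, []))
print(unsafe.isdisjoint(reach))  # True: 'fault' cannot be reached from 'idle'
```

Symbolic model checkers perform the same fixpoint computation, but over sets of states encoded as binary decision diagrams rather than one explicit state at a time, which is what lets them scale to very large state spaces.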
Transition systems also connect naturally to learning. When the transition function is unknown, model-based reinforcement learning algorithms attempt to estimate it from interaction data, then plan within the learned model. This interplay between learned and specified transition systems is central to contemporary work on safe reinforcement learning, world models, and neurosymbolic AI, where the goal is to combine the expressiveness of learned representations with the verifiability of formal models.
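The first step of that model-based loop, estimating an unknown transition function from interaction data, can be sketched with empirical counts. The experience tuples below are an illustrative assumption; real agents would gather them by acting in the environment.

```python
from collections import Counter, defaultdict

def estimate_model(experience):
    """Estimate transition probabilities from (state, action, next_state)
    tuples via maximum-likelihood empirical counts."""
    counts = defaultdict(Counter)
    for state, action, nxt in experience:
        counts[(state, action)][nxt] += 1
    return {
        sa: {nxt: n / sum(ctr.values()) for nxt, n in ctr.items()}
        for sa, ctr in counts.items()
    }

# Illustrative interaction data: 'go' from s0 usually succeeds.
experience = [
    ("s0", "go", "s1"),
    ("s0", "go", "s1"),
    ("s0", "go", "s0"),  # the action occasionally fails in place
    ("s1", "go", "s2"),
]
model = estimate_model(experience)
print(model[("s0", "go")])  # roughly {'s1': 0.67, 's0': 0.33}
```

The agent can then plan against `model` exactly as it would against a hand-specified transition system, which is what makes the learned and specified cases interchangeable downstream.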