Computing optimal paths for agents or objects to reach goals under real-world constraints.
Trajectory generation refers to the computational process of designing a time-parameterized path that an agent, robot, or vehicle should follow to move from an initial state to a desired goal state. Unlike simple path planning, which concerns only geometric routes, trajectory generation explicitly accounts for the dynamics of motion — including velocity, acceleration, jerk, and timing — as well as physical constraints such as actuator limits, obstacle avoidance, energy budgets, and smoothness requirements. The output is typically a continuous, time-parameterized function giving position, and often velocity and acceleration as well.
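A classic concrete instance of this is a quintic polynomial trajectory, which has exactly enough coefficients to match position, velocity, and acceleration at both endpoints. The sketch below (function names and the NumPy-based formulation are illustrative choices, not from the source) solves for those coefficients and evaluates the resulting motion:

```python
import numpy as np

def quintic_trajectory(p0, pf, T, v0=0.0, vf=0.0, a0=0.0, af=0.0):
    """Coefficients c of p(t) = sum_i c[i] * t**i matching position,
    velocity, and acceleration boundary conditions at t = 0 and t = T."""
    A = np.array([
        [1, 0, 0,    0,      0,       0],       # p(0)  = p0
        [0, 1, 0,    0,      0,       0],       # p'(0) = v0
        [0, 0, 2,    0,      0,       0],       # p''(0) = a0
        [1, T, T**2, T**3,   T**4,    T**5],    # p(T)  = pf
        [0, 1, 2*T,  3*T**2, 4*T**3,  5*T**4],  # p'(T) = vf
        [0, 0, 2,    6*T,    12*T**2, 20*T**3], # p''(T) = af
    ], dtype=float)
    b = np.array([p0, v0, a0, pf, vf, af], dtype=float)
    return np.linalg.solve(A, b)

def evaluate(c, t):
    """Position, velocity, and acceleration of the trajectory at time t."""
    pos = np.array([t**i for i in range(6)])
    vel = np.array([0, 1, 2*t, 3*t**2, 4*t**3, 5*t**4])
    acc = np.array([0, 0, 2, 6*t, 12*t**2, 20*t**3])
    return c @ pos, c @ vel, c @ acc
```

Starting and ending at rest produces the smooth "ease-in, ease-out" profile typical of manipulator point-to-point moves; nonzero boundary velocities let segments be chained into longer trajectories.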
In practice, trajectory generation algorithms draw on techniques from optimal control theory, numerical optimization, and differential geometry. Common approaches include polynomial spline fitting, where smooth curves are constructed through a series of waypoints; sampling-based methods such as RRT* that explore the configuration space stochastically; and direct collocation or shooting methods that discretize the trajectory and solve a constrained optimization problem. In machine learning contexts, learned models — including neural networks and reinforcement learning agents — are increasingly used to generate or refine trajectories, particularly in settings where the environment is too complex for hand-crafted dynamics models.
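The direct-transcription idea can be sketched in a few lines for a simple case: discretize the trajectory into samples and solve a small equality-constrained quadratic program that minimizes total squared acceleration (approximated by second finite differences) while pinning the endpoints and any waypoints. The function name, the KKT-system formulation, and the choice of cost are illustrative assumptions, not a specific method from the source:

```python
import numpy as np

def min_accel_trajectory(x0, xf, N, waypoints=None):
    """Direct-transcription sketch: choose N+1 position samples x[0..N]
    minimizing ||D2 @ x||^2 (squared second differences, a discrete
    acceleration proxy) subject to pinned endpoints and optional
    {index: value} waypoints."""
    n = N + 1
    # second-difference operator: row i gives x[i] - 2*x[i+1] + x[i+2]
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i], D2[i, i + 1], D2[i, i + 2] = 1.0, -2.0, 1.0

    # equality constraints C @ x = d (endpoints plus waypoints)
    pins = {0: x0, N: xf}
    if waypoints:
        pins.update(waypoints)
    m = len(pins)
    C, d = np.zeros((m, n)), np.zeros(m)
    for r, (idx, val) in enumerate(sorted(pins.items())):
        C[r, idx] = 1.0
        d[r] = val

    # KKT system for min ||D2 x||^2 s.t. C x = d
    H = D2.T @ D2
    K = np.block([[H, C.T], [C, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([np.zeros(n), d]))
    return sol[:n]
```

With only endpoint constraints the optimum is a straight line (zero acceleration everywhere); adding an interior waypoint bends the path as little as the quadratic cost allows. Real collocation methods generalize this pattern to nonlinear dynamics and inequality constraints, handing the discretized problem to a nonlinear programming solver.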
Trajectory generation is foundational to a wide range of applied domains. In robotics, it governs the motion of manipulator arms on assembly lines, ensuring smooth, collision-free movements that respect joint torque limits. In autonomous vehicles, it determines how a car accelerates, steers, and brakes to navigate traffic safely. In aerospace, it underpins flight path optimization for fuel efficiency and safety. In animation and simulation, it produces physically plausible character motion. The rise of deep learning has introduced data-driven trajectory generation, where models learn motion priors from large datasets of demonstrated behavior, enabling more naturalistic and context-aware motion in complex, unstructured environments.
The intersection of trajectory generation with machine learning has grown substantially with advances in imitation learning, model-based reinforcement learning, and diffusion-based generative models applied to motion synthesis. These approaches allow systems to generalize across diverse scenarios rather than relying solely on hand-engineered constraints, making trajectory generation a vibrant and rapidly evolving area at the boundary of classical control and modern AI.