A finite sequence of instructions that solves a problem or performs a computation.
An algorithm is a finite, ordered set of well-defined instructions designed to solve a class of problems or perform a computation. In machine learning, algorithms are the engines that drive model training, optimization, and inference — governing everything from how a neural network updates its weights via backpropagation to how a decision tree selects the best feature to split on. They can be expressed in pseudocode, mathematical notation, or programming languages, and range from simple sorting routines to complex iterative procedures involving probabilistic reasoning and high-dimensional optimization.
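To make "well-defined instructions" concrete, here is a minimal sketch of the split-selection rule mentioned above, as a basic decision tree might implement it. The helper names (`gini`, `best_split`) and the exhaustive search over thresholds are illustrative assumptions, not any particular library's API.

```python
import numpy as np

def gini(labels):
    # Gini impurity: 1 minus the sum of squared class proportions.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    # Exhaustively score every (feature, threshold) pair and keep the
    # one with the lowest weighted Gini impurity of the two children.
    best_feature, best_threshold, best_score = None, None, gini(y)
    n = len(y)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            mask = X[:, j] <= t
            left, right = y[mask], y[~mask]
            if len(left) == 0 or len(right) == 0:
                continue  # degenerate split: nothing was separated
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best_score:
                best_feature, best_threshold, best_score = j, t, score
    return best_feature, best_threshold
```

Every piece of the definition is visible here: a finite set of steps, a well-defined decision rule (minimize weighted impurity), and a terminating loop over a bounded search space.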
In practice, ML algorithms operate by taking input data and applying a series of transformations or decision rules to produce an output — a prediction, a classification, a generated sample, or an optimized parameter set. Learning algorithms specifically adjust internal parameters by minimizing a loss function, often using gradient-based methods such as stochastic gradient descent. The choice of algorithm profoundly affects a model's accuracy, computational cost, scalability, and generalization ability, making algorithmic selection and design central concerns in applied machine learning.
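As a sketch of that loop, assuming a caller-supplied per-example gradient function (`grad_fn` is an illustrative name, not a standard API), stochastic gradient descent can be written in a few lines:

```python
import numpy as np

def sgd(grad_fn, params, data, lr=0.01, epochs=100):
    # Plain stochastic gradient descent: visit the examples in a fresh
    # random order each epoch and step against the per-example gradient.
    params = params.copy()
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        rng.shuffle(data)
        for example in data:
            params -= lr * grad_fn(params, example)
    return params

# Toy usage: fit y ~ w * x by minimizing squared error on three (x, y)
# pairs; the gradient of (w*x - y)^2 with respect to w is 2*x*(w*x - y).
data = np.array([[1.0, 2.0], [2.0, 4.1], [3.0, 5.9]])
grad = lambda w, ex: 2 * ex[0] * (w * ex[0] - ex[1])
w = sgd(grad, np.zeros(1), data, lr=0.05)
print(w)  # close to the least-squares solution, w ~ 2.0
```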
Algorithms are evaluated along several dimensions: correctness (does it produce the right answer?), time complexity (how does runtime scale with input size?), space complexity (how much memory does it require?), and convergence behavior (does it reliably reach a good solution?). In deep learning, these trade-offs become especially acute — optimizers such as Adam and RMSProp are often favored over plain stochastic gradient descent because they adapt per-parameter learning rates and tend to converge faster in practice, even though their theoretical guarantees are weaker.
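For reference, a single Adam update looks like the following sketch. The moment estimates and bias corrections follow the published Adam update rule; the function name `adam_step` and the argument layout are illustrative.

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient (m) and its square (v).
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)   # bias-correct the first moment (t starts at 1)
    v_hat = v / (1 - b2 ** t)   # bias-correct the second moment
    # Per-parameter step: a large accumulated squared gradient shrinks
    # that parameter's effective learning rate.
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```

Compared with the plain SGD step above, the running averages `m` and `v` give each parameter its own effective step size, which is one reason Adam often converges faster on poorly scaled problems.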
The centrality of algorithms to AI cannot be overstated. Advances in machine learning are often algorithmic breakthroughs: the backpropagation algorithm made deep networks trainable, the attention mechanism enabled transformers, and contrastive learning algorithms unlocked self-supervised representation learning. Understanding algorithms — their assumptions, limitations, and computational demands — is foundational to building, evaluating, and improving intelligent systems.