A behavioral model defining data structures by their operations, not their implementation.
An abstract data type (ADT) is a theoretical construct in computer science that defines a data structure entirely through its behavior — the set of operations it supports and the rules governing those operations — rather than through any specific implementation. This separation of interface from implementation allows developers and researchers to reason about data at a conceptual level, treating structures like stacks, queues, graphs, and trees as logical entities with well-defined contracts. In AI and machine learning, this abstraction is foundational: algorithms can be designed and analyzed independently of the underlying hardware or language-specific data representations.
ADTs work by specifying a type's possible values and the operations that can be performed on them, along with the expected behavior of those operations (often expressed as axioms or as preconditions and postconditions). For example, a stack ADT defines push, pop, and peek operations with a last-in, first-out (LIFO) guarantee, without dictating whether the implementation uses an array or a linked list. This modularity enables AI systems to swap out data structure implementations for performance tuning without altering the logic of the algorithms that depend on them: a critical property when optimizing machine learning pipelines or search algorithms.
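The stack example above can be sketched in Python. This is a minimal illustration, not a canonical implementation: the `Stack` protocol and both backing classes are hypothetical names chosen here to show that two different representations can satisfy the same behavioral contract.

```python
from typing import Any, List, Optional, Protocol, Tuple


class Stack(Protocol):
    """The stack ADT: only the operations and their LIFO behavior
    are specified; the storage layout is deliberately left open."""

    def push(self, item: Any) -> None: ...
    def pop(self) -> Any: ...
    def peek(self) -> Any: ...


class ArrayStack:
    """One possible implementation, backed by a Python list (dynamic array)."""

    def __init__(self) -> None:
        self._items: List[Any] = []

    def push(self, item: Any) -> None:
        self._items.append(item)

    def pop(self) -> Any:
        return self._items.pop()  # last pushed element comes off first

    def peek(self) -> Any:
        return self._items[-1]


class LinkedStack:
    """Equivalent behavior, backed by a singly linked list of (value, next) pairs."""

    def __init__(self) -> None:
        self._head: Optional[Tuple[Any, Any]] = None

    def push(self, item: Any) -> None:
        self._head = (item, self._head)  # new node points at old head

    def pop(self) -> Any:
        item, self._head = self._head
        return item

    def peek(self) -> Any:
        return self._head[0]


# Any algorithm written against the ADT works with either implementation:
for stack in (ArrayStack(), LinkedStack()):
    stack.push(1)
    stack.push(2)
    assert stack.peek() == 2 and stack.pop() == 2 and stack.pop() == 1
```

Because client code depends only on the contract, either class can be substituted for the other without touching the algorithms that use it, which is exactly the swap-for-performance property described above.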
In machine learning specifically, ADTs underpin the design of core components such as priority queues in beam search, graphs in knowledge representation, and tensors as generalized array types. Frameworks like TensorFlow and PyTorch implicitly rely on ADT principles by exposing high-level tensor operations while abstracting away memory layout and hardware-specific execution. This allows researchers to prototype models without worrying about low-level details, accelerating experimentation and reproducibility.
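As one concrete instance of this, beam search can be written entirely against the priority-queue ADT. The sketch below uses Python's `heapq` module as one possible backing for that ADT; the `expand` and `score` functions and the toy string-growing example are illustrative assumptions, not part of any particular framework.

```python
import heapq
from typing import Callable, List


def beam_search(
    start: str,
    expand: Callable[[str], List[str]],
    score: Callable[[str], float],
    width: int,
    steps: int,
) -> List[str]:
    """Keep the `width` highest-scoring candidates at each step.

    The algorithm relies only on the priority-queue contract
    ("give me the k best items"); heapq is one implementation of it.
    """
    beam = [start]
    for _ in range(steps):
        candidates = [child for parent in beam for child in expand(parent)]
        if not candidates:
            break
        # nlargest depends on the queue's behavior, not its memory layout
        beam = heapq.nlargest(width, candidates, key=score)
    return beam


# Toy search space: grow strings over {'a', 'b'}, preferring more 'a's.
children = lambda s: [s + "a", s + "b"]
count_a = lambda s: s.count("a")
best = beam_search("", children, count_a, width=2, steps=3)
```

Swapping `heapq` for a different priority-queue implementation (or pruning strategy) would leave `beam_search` itself unchanged, mirroring the interface/implementation split the paragraph describes.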
The concept became particularly influential in software engineering during the 1970s and 1980s, driven by work on formal specification and object-oriented design. Its relevance to AI lies in enabling clean architectural boundaries between data representation and algorithmic logic — a principle that scales from classical symbolic AI systems to modern deep learning frameworks. By enforcing well-defined interfaces, ADTs promote code reuse, testability, and the kind of modular design that is essential for building complex, maintainable AI systems.