An interaction technique in which users influence systems through abstract or intermediary operations rather than direct action.
Indirect manipulation is an interaction paradigm in which users achieve goals not by acting on objects or data directly, but through commands, scripts, agents, or other mediating abstractions that translate high-level intent into system behavior. This stands in contrast to direct manipulation — the touchstone of graphical user interfaces — where users drag, click, or gesture on visible representations of objects. In indirect manipulation, the user issues instructions at a remove, and the system interprets and executes them, often with considerable autonomy.
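The contrast can be made concrete in code. In this minimal sketch (all names are illustrative, not from any real toolkit), a hypothetical `Canvas` supports both styles: a `drag` method whose effect is immediate and one-to-one, and an `execute` method that parses a textual instruction and decides for itself how to carry out the user's intent.

```python
# Hedged sketch contrasting the two paradigms; names are illustrative.

class Canvas:
    def __init__(self):
        self.shapes = {}  # shape id -> (x, y) position

    # Direct manipulation: the user acts on the visible object itself,
    # e.g. by dragging it; the effect is immediate and one-to-one.
    def drag(self, shape_id, dx, dy):
        x, y = self.shapes[shape_id]
        self.shapes[shape_id] = (x + dx, y + dy)

    # Indirect manipulation: the user issues an instruction at a remove;
    # the system interprets it and decides how to act.
    def execute(self, command):
        verb, shape_id, *args = command.split()
        if verb == "move":
            self.drag(shape_id, float(args[0]), float(args[1]))
        elif verb == "center":
            # The system exercises autonomy in interpreting the intent.
            self.shapes[shape_id] = (0.0, 0.0)

canvas = Canvas()
canvas.shapes["circle"] = (3.0, 4.0)
canvas.drag("circle", 1.0, -1.0)   # direct: position moves exactly as dragged
canvas.execute("center circle")    # indirect: the system chooses the outcome
```

The `center` command is the telling case: the user never specifies coordinates, so the final position is the interpreter's choice, not the user's gesture.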
In AI and machine learning contexts, indirect manipulation becomes especially relevant when users configure model training pipelines, specify optimization objectives, or interact with intelligent agents through natural language or declarative interfaces. Rather than adjusting individual weights or parameters by hand, a practitioner might specify a loss function, a reward signal, or a prompt — and the learning system handles the rest. This abstraction is not merely a convenience; it is often a necessity, since the underlying computational processes operate at a scale and complexity that precludes direct, granular control.
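The "specify a loss function and let the system handle the rest" pattern can be sketched in a few lines. In this toy example (a generic finite-difference gradient descent loop, written here for illustration rather than taken from any library), the practitioner's entire input is the objective; every individual parameter update is decided by the optimizer.

```python
# Indirect manipulation sketch: the user declares *what* to minimize
# (a loss function); a generic optimizer decides *how* parameters change.
# All names here are illustrative, not from any particular library.

def minimize(loss, params, lr=0.1, steps=100, eps=1e-6):
    """Generic optimizer: gradient descent with finite-difference gradients.

    The user never adjusts individual parameters by hand; this loop
    translates the declared objective into concrete updates.
    """
    params = list(params)
    for _ in range(steps):
        grads = []
        for i in range(len(params)):
            bumped = params.copy()
            bumped[i] += eps
            grads.append((loss(bumped) - loss(params)) / eps)
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

# The practitioner's entire "instruction" is this objective:
# fit y = w*x + b to the points (0, 1) and (1, 3).
data = [(0.0, 1.0), (1.0, 3.0)]

def loss(params):
    w, b = params
    return sum((w * x + b - y) ** 2 for x, y in data)

w, b = minimize(loss, [0.0, 0.0], lr=0.2, steps=500)
# Converges toward w = 2, b = 1 -- values the user never set directly.
```

The design point mirrors the paragraph above: the mediating abstraction (the optimizer) is what makes control feasible at all, since updating each parameter by hand would not scale past a toy problem.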
The tradeoffs of indirect manipulation are significant. On the benefit side, it reduces cognitive load, enables automation, and allows non-experts to harness powerful systems without mastering their internals. Prompt engineering, hyperparameter search interfaces, and AutoML tools all exemplify indirect manipulation made accessible. On the cost side, the indirection can obscure cause and effect, making it harder for users to understand why a system behaves as it does or to correct unwanted outcomes — a concern that has grown more pressing as AI systems become more capable and opaque.
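Hyperparameter search interfaces illustrate the accessible side of this tradeoff. The sketch below, written in the spirit of such tools (the function names and the stand-in scoring function are assumptions for illustration, not any real AutoML API), shows the user's involvement ending at a declarative search-space description; an exhaustive grid search does the rest.

```python
# Hedged sketch of a declarative hyperparameter search: the user states
# the search space and a scoring function; the search routine decides
# which configurations to try. Names are illustrative.
import itertools

def grid_search(space, score):
    """Evaluate every combination in `space`; return the best config."""
    keys = list(space)
    best_cfg, best_score = None, float("-inf")
    for values in itertools.product(*(space[k] for k in keys)):
        cfg = dict(zip(keys, values))
        s = score(cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

# The user's direct involvement ends at this declaration:
space = {"lr": [0.01, 0.1, 1.0], "depth": [2, 4, 8]}

# A stand-in score; a real pipeline would train and validate a model here.
def score(cfg):
    return -abs(cfg["lr"] - 0.1) - abs(cfg["depth"] - 4) / 10

best, best_score = grid_search(space, score)
```

The cost side of the tradeoff is visible too: the user learns which configuration won, but not why, since the cause-and-effect chain runs through the opaque scoring function rather than through anything the user touched directly.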
As AI systems increasingly act as intermediaries themselves — executing multi-step tasks, browsing the web, or writing and running code on a user's behalf — indirect manipulation has become a defining feature of human-AI interaction. Designing interfaces that preserve user agency and interpretability while still leveraging the power of abstraction is one of the central challenges in contemporary HCI and AI alignment research.