A subordinate AI agent executing specific subtasks within a multi-agent system.
In multi-agent AI systems, a minion refers to a subordinate agent that operates under the direction of a higher-level orchestrator or controller agent. Rather than handling complex, open-ended reasoning, minions are typically specialized for narrow, well-defined tasks — such as web search, code execution, file I/O, or API calls. The orchestrator decomposes a large goal into discrete subtasks and delegates each to the appropriate minion, collecting and synthesizing their outputs to produce a final result. This division of labor mirrors hierarchical organizational structures and is a core architectural pattern in agentic AI frameworks.
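The decompose-delegate-synthesize loop described above can be sketched in a few lines. This is a minimal illustration, not any particular framework's API: the `Minion` and `Orchestrator` classes, and the toy search/math handlers, are all hypothetical.

```python
from typing import Callable

class Minion:
    """A subordinate agent specialized for one narrow, well-defined task."""
    def __init__(self, name: str, handler: Callable[[str], str]):
        self.name = name
        self.handler = handler

    def run(self, instruction: str) -> str:
        return self.handler(instruction)

class Orchestrator:
    """Delegates each subtask in a plan to the appropriate minion."""
    def __init__(self, minions: dict[str, Minion]):
        self.minions = minions

    def execute(self, plan: list[tuple[str, str]]) -> list[str]:
        # plan is a list of (minion_name, instruction) pairs; in a real
        # system a planner step would produce it from the user's goal.
        results = []
        for minion_name, instruction in plan:
            results.append(self.minions[minion_name].run(instruction))
        return results

# Toy minions standing in for real capabilities like web search or code exec.
search = Minion("search", lambda q: f"results for {q!r}")
math_m = Minion("math", lambda e: str(eval(e)))  # eval: demo only, never on untrusted input
orch = Orchestrator({"search": search, "math": math_m})
print(orch.execute([("search", "agentic AI"), ("math", "2 + 3")]))
```

A production orchestrator would also merge the collected results into a single answer; here they are simply returned as a list.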
Minions function by receiving instructions — often in natural language or structured prompts — from the orchestrating agent, executing their designated function, and returning results. They may themselves be language models, deterministic scripts, or hybrid systems. In LLM-based pipelines, a minion might be a smaller, cheaper model fine-tuned for a specific capability, while the orchestrator is a larger, more capable model responsible for planning and coordination. This asymmetry allows system designers to optimize cost and latency by routing only the tasks that require heavy reasoning to expensive frontier models.
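The cost/latency routing described above amounts to a dispatch decision in front of two model endpoints. In this sketch, `call_small_model` and `call_frontier_model` are hypothetical stand-ins for real API clients, and the boolean flag stands in for whatever heuristic or classifier decides task difficulty.

```python
def call_small_model(prompt: str) -> str:
    # Stand-in for a cheaper, specialized minion model.
    return f"[small] {prompt}"

def call_frontier_model(prompt: str) -> str:
    # Stand-in for the larger, more capable orchestrator-class model.
    return f"[frontier] {prompt}"

def route(task: str, requires_heavy_reasoning: bool) -> str:
    """Send only heavy-reasoning tasks to the expensive frontier model."""
    if requires_heavy_reasoning:
        return call_frontier_model(task)
    return call_small_model(task)

print(route("summarize this page", requires_heavy_reasoning=False))
print(route("design a multi-step migration plan", requires_heavy_reasoning=True))
```

Real routers typically replace the boolean with a learned or rule-based difficulty estimate, but the cost asymmetry they exploit is the same.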
The minion pattern matters because it enables scalable, modular AI systems that can tackle complex, multi-step problems beyond the capacity of any single model call. By isolating responsibilities, developers gain easier debugging, clearer accountability, and the ability to swap or upgrade individual components without redesigning the entire pipeline. Frameworks like LangChain, AutoGen, and CrewAI have popularized this pattern, offering abstractions for defining agent roles, communication protocols, and tool access. The concept is closely related to — and sometimes used interchangeably with — terms like "worker agent," "sub-agent," or "tool agent."
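The swap-without-redesign property comes from putting every minion behind a common interface. A minimal sketch using Python's structural typing (the `MinionProtocol` name and both backends are illustrative, not from any of the frameworks named above):

```python
from typing import Protocol

class MinionProtocol(Protocol):
    """Anything with run(instruction) -> str can serve as a minion."""
    def run(self, instruction: str) -> str: ...

class RegexSearchMinion:
    def run(self, instruction: str) -> str:
        return f"regex search: {instruction}"

class LLMSearchMinion:
    def run(self, instruction: str) -> str:
        return f"llm search: {instruction}"

def delegate(minion: MinionProtocol, instruction: str) -> str:
    # Orchestrator-side code is identical regardless of which backend
    # implements the protocol, so components can be upgraded in place.
    return minion.run(instruction)

print(delegate(RegexSearchMinion(), "find TODOs"))
print(delegate(LLMSearchMinion(), "find TODOs"))  # drop-in replacement
```

This is the same isolation of responsibilities that the agent frameworks formalize with their role and tool abstractions.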
As agentic AI systems grow more sophisticated, the minion architecture raises important considerations around trust, safety, and control. A compromised or misbehaving minion can propagate errors or malicious outputs upstream, making robust validation between agent layers essential. Researchers studying multi-agent coordination increasingly focus on how orchestrators should verify minion outputs, handle failures gracefully, and maintain alignment with the original user intent throughout complex, long-horizon task execution.
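One common shape for the inter-layer validation described above is a check-and-retry wrapper around each delegation. Everything here is a hypothetical sketch: `validate_output` stands in for whatever schema checks, sandboxed tests, or judge-model calls a real system would use.

```python
MAX_RETRIES = 2  # illustrative retry budget

def validate_output(output: str) -> bool:
    # Placeholder validator: reject empty or flagged results. Real systems
    # might verify schemas, run tests, or consult a separate judge model.
    return bool(output) and "UNSAFE" not in output

def delegate_with_validation(minion, instruction: str) -> str:
    """Run a minion, but refuse to pass unvalidated output upstream."""
    for attempt in range(MAX_RETRIES + 1):
        output = minion(instruction)
        if validate_output(output):
            return output
    raise RuntimeError(
        f"minion output failed validation after {MAX_RETRIES + 1} attempts"
    )

# Usage with a toy minion function.
print(delegate_with_validation(lambda s: s.upper(), "check this"))
```

Failing loudly at the boundary, as the `RuntimeError` does here, is what keeps a misbehaving minion's output from propagating silently into the orchestrator's final result.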