Agentic systems gain their power not from a single monolithic prompt, but from composing language models into structured workflows. Anthropic’s research on building effective agents identifies five core workflow patterns that serve as fundamental building blocks for these systems; together with the closely related reflection pattern, they form six building blocks. Each addresses a distinct class of problems and can be combined with others to tackle increasingly complex tasks.
The six patterns are:
- Prompt Chaining — Decompose a task into a sequence of LLM calls, where each step processes the output of the previous one, with optional gate checks between steps.
- Routing — Classify an input and direct it to a specialized handler or prompt, allowing each branch to be optimized independently.
- Parallelization — Run multiple LLM calls simultaneously, either by splitting a task into independent subtasks (sectioning) or by running the same task multiple times for consensus (voting).
- Orchestrator-Workers — A central LLM dynamically breaks down a task, delegates subtasks to worker LLMs, and synthesizes the results.
- Evaluator-Optimizer — One LLM generates output while another evaluates it, looping until a quality threshold is met.
- Reflection — An LLM reviews its own output to identify errors and improvements, either through self-reflection or via a separate evaluator model.
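The evaluator-optimizer and reflection patterns share the same loop shape: generate, critique, regenerate until the output passes. A minimal runnable sketch of that loop, with `generate` and `evaluate` stubbed in place of real model calls (all names here are illustrative, not any framework's API):

```python
def generate(prompt: str, feedback: str = "") -> str:
    # Stub generator: a real system would call an LLM here. The stub
    # simply marks its draft as revised once it receives feedback.
    return prompt + (" [revised per feedback]" if feedback else "")

def evaluate(draft: str) -> tuple[bool, str]:
    # Stub evaluator: a real system would use a second LLM call or a
    # rubric. The stub only accepts drafts that have been revised.
    if "[revised" in draft:
        return True, ""
    return False, "too terse; expand the draft"

def refine(prompt: str, max_rounds: int = 3) -> str:
    feedback = ""
    draft = prompt
    for _ in range(max_rounds):
        draft = generate(prompt, feedback)
        passed, feedback = evaluate(draft)
        if passed:
            return draft
    return draft  # best effort once the round budget is exhausted

print(refine("Write release notes for v2.1"))
```

The `max_rounds` cap matters in practice: without it, a disagreeing generator/evaluator pair loops forever.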
These patterns are ordered roughly by complexity. Start with the simplest pattern that solves your problem — many tasks that seem to require complex agent architectures can be handled well by prompt chaining or routing alone. Move to more sophisticated patterns only when the task genuinely demands dynamic decision-making, iterative refinement, or multi-step autonomy.
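To see how little machinery the simplest pattern needs, here is a hedged sketch of prompt chaining with a gate check; `call_llm` is a stand-in for whatever model client you actually use, stubbed so the flow runs end to end:

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real model call; the stub echoes the instruction line.
    return f"<output for: {prompt.splitlines()[0]}>"

def chained_summary(document: str) -> str:
    # Step 1: extract the key facts from the document.
    facts = call_llm(f"List the key facts in this document:\n{document}")
    # Gate check: fail fast if the intermediate step produced nothing usable,
    # rather than feeding garbage into step 2.
    if not facts.strip():
        raise ValueError("gate check failed: no facts extracted")
    # Step 2: the second call consumes the first call's output.
    return call_llm(f"Write a one-paragraph summary of these facts:\n{facts}")

print(chained_summary("Q3 revenue rose 12%; churn fell to 3%."))
```

Each step is an ordinary function call, so the chain is easy to test, log, and debug step by step.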
Additional Patterns
Beyond Anthropic’s six primitives, the industry has converged on a set of additional architectural patterns that show up repeatedly across frameworks (LangGraph, OpenAI Agents SDK, Google ADK, Microsoft AutoGen) and research.
- Handoffs — Explicit agent-to-agent delegation with a clear context transfer contract.
- Human-in-the-Loop — Structured pause points for review, approval, and controlled resumption.
- Plan-and-Execute — Separate planning from execution; treat the plan as a first-class artifact.
- Iterative Refinement — Generator/critic/refiner loops that converge on higher-quality output.
- Multi-Agent Debate — Multiple peer agents collaborate (or argue) in a shared conversation.
- Adaptive Orchestration — Learned or RL-trained orchestrators that adapt policies to task state.
- Workflow Search — Automatically discover and optimize workflows via search (MCTS, evolutionary).
- Composite Patterns — Practical guidance for composing patterns without creating brittle systems.
- Autonomous Task Generation — Agents maintain and prioritize their own task queues.
- Collaborative Scaling — Multi-agent networks at scale (DAGs, topologies, scaling laws).
- Latent Communication — Agents exchange compressed latent representations instead of natural language.
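Handoffs, the first of these, can be illustrated with a small registry-based sketch. The `Handoff` dataclass, the `agent` decorator, and the agent names below are invented for illustration and are not any SDK's API; the point is the explicit context-transfer contract:

```python
from dataclasses import dataclass, field

@dataclass
class Handoff:
    target: str                                   # name of the receiving agent
    task: str                                     # what the receiver should do
    context: dict = field(default_factory=dict)   # explicit context transfer

AGENTS = {}  # registry: agent name -> handler function

def agent(name):
    def register(fn):
        AGENTS[name] = fn
        return fn
    return register

@agent("triage")
def triage(task, context):
    # Triage delegates refund requests; everything else it answers itself.
    if "refund" in task:
        return Handoff("billing", task, {"priority": "high"})
    return f"triage answered: {task}"

@agent("billing")
def billing(task, context):
    return f"billing handled: {task} (priority={context.get('priority')})"

def run(task, start="triage"):
    result = AGENTS[start](task, {})
    # Follow handoffs until an agent returns a plain result.
    while isinstance(result, Handoff):
        result = AGENTS[result.target](result.task, result.context)
    return result

print(run("refund my order #123"))
```

Making the handoff a value (rather than a direct function call) keeps the delegation inspectable: you can log it, veto it, or route it through a human-in-the-loop gate before it executes.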