Once you move beyond a single augmented LLM call, there are two broad architectural approaches for combining LLMs with tools and other LLM calls: workflows and agents. Understanding the distinction — and the tradeoff it represents — is essential for making sound design decisions.
## Workflows: Orchestration Through Code
In a workflow, LLMs and tools are chained together through predefined code paths. A developer writes the orchestration logic: which model gets called when, what data flows between steps, and how errors are handled. The LLM operates within fixed boundaries at each stage.
Examples of workflow patterns include:
- Prompt chaining. One LLM call generates output that feeds into the next, with validation gates between steps.
- Routing. A classifier examines the input and dispatches it to a specialized handler.
- Parallelization. Multiple LLM calls run simultaneously on sub-tasks, and their results are aggregated programmatically.
- Orchestrator-workers. A central LLM breaks a task into pieces, delegates each piece, and synthesizes the results.
- Evaluator-optimizer. One LLM generates a response while another evaluates and requests refinements in a loop.
The defining characteristic is that the developer controls the flow. The LLM is a powerful component within a system the developer has designed.
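The prompt-chaining pattern above can be sketched in a few lines. This is a minimal illustration, not a real client: `call_llm` is a stand-in for whatever model API you use, stubbed here so the sketch runs on its own, and the validation rule is invented for the example.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real model call; returns canned text so the
    # sketch is self-contained.
    if prompt.startswith("Outline:"):
        return "1. Intro\n2. Body\n3. Conclusion"
    return f"Draft based on: {prompt}"

def validate_outline(outline: str) -> bool:
    # Validation gate between steps: reject outlines with fewer than
    # two sections instead of passing bad output downstream.
    return len(outline.splitlines()) >= 2

def write_article(topic: str) -> str:
    # Step 1: one LLM call generates an outline.
    outline = call_llm(f"Outline: {topic}")
    # The developer-written gate decides whether the chain continues.
    if not validate_outline(outline):
        raise ValueError("Outline failed validation")
    # Step 2: the outline feeds the next call. The flow is fixed in code.
    return call_llm(f"Expand this outline into an article:\n{outline}")

draft = write_article("agent architectures")
```

Note that the control flow lives entirely in `write_article`: the model produces content, but the developer's code decides what happens next.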
## Agents: Orchestration Through the Model
In an agent, the LLM itself decides what to do next. It operates in a loop: observe the current state, choose an action (often a tool call), observe the result, and decide whether to continue or stop. The developer provides the tools and the environment, but the model controls the flow.
Agents are typically built around a core loop:
- The model receives a goal and the current context.
- It decides which tool to call (or whether to respond directly).
- It observes the tool’s output.
- It repeats until it determines the task is complete or it should escalate to a human.
The developer’s role shifts from writing orchestration logic to designing good tool interfaces, crafting clear system prompts, and building in appropriate guardrails.
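The core loop above can be sketched as follows. Everything here is illustrative: `choose_action` stands in for a model call that returns either a tool invocation or a final answer, the tool names are invented, and the step cap plays the role of a guardrail.

```python
# Toy tool registry; the calculator uses eval() only because this is a sketch.
TOOLS = {
    "search": lambda q: f"results for {q}",
    "calculator": lambda expr: str(eval(expr)),
}

def choose_action(goal, history):
    # Placeholder policy: a real model would decide from the goal plus
    # the accumulated observations in `history`.
    if not history:
        return ("tool", "calculator", "2 + 2")
    return ("final", f"Answer to {goal!r}: {history[-1]}")

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):           # guardrail: bounded iterations
        action = choose_action(goal, history)
        if action[0] == "final":         # the model decides it is done
            return action[1]
        _, name, arg = action
        history.append(TOOLS[name](arg))  # observe the tool's output
    return "Escalating to a human: step budget exhausted."

result = run_agent("what is 2 + 2?")
```

Contrast this with the workflow sketch: here no developer code dictates the sequence of steps. The loop simply asks the model what to do next until it stops or hits the step budget.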
## The Core Tradeoff: Predictability vs. Flexibility
This distinction comes down to one fundamental tradeoff:
| Dimension | Workflows | Agents |
|---|---|---|
| Control | Developer directs the process | Model directs the process |
| Predictability | High — behavior follows known code paths | Lower — behavior depends on model decisions |
| Consistency | Easier to guarantee uniform outputs | Outputs vary with the model’s reasoning |
| Flexibility | Limited to anticipated scenarios | Can adapt to novel situations |
| Debugging | Straightforward — trace the code | Harder — must inspect the model’s reasoning |
| Cost | Predictable token usage | Variable and potentially higher |
Workflows excel when you can anticipate the task structure in advance. Agents excel when you cannot.
## When to Choose Each
Favor workflows when:
- The task has a well-defined structure that can be decomposed into clear steps.
- Consistency and predictability matter more than adaptability.
- You need tight control over latency, cost, or output format.
- Failure modes must be well understood and recoverable.
Favor agents when:
- The problem is open-ended and the required steps cannot be predicted in advance.
- The task demands flexible decision-making across a large or dynamic tool space.
- You can tolerate higher cost and latency in exchange for the ability to handle novel situations.
In practice, many production systems blend both approaches: workflows provide structure where the task is predictable, while agent-like loops handle the portions that require dynamic reasoning.
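A blended design might look like the following sketch: a fixed routing workflow dispatches predictable requests to deterministic handlers and falls back to an agent-style loop only for open-ended ones. All names and the rule-based classifier are illustrative stand-ins.

```python
def classify(request: str) -> str:
    # Workflow side: a cheap classifier (rule-based here, but it could
    # be an LLM call) picks a route.
    if request.startswith("refund"):
        return "refund"
    return "open_ended"

def handle_refund(request: str) -> str:
    # Deterministic path: predictable cost, latency, and output format.
    return "Refund processed."

def agent_loop(request: str, max_steps: int = 3) -> str:
    # Agent side: a model would pick tools each turn; stubbed here.
    return f"Agent resolved: {request}"

def handle(request: str) -> str:
    # The outer structure is a workflow; only one branch is agentic.
    route = classify(request)
    if route == "refund":
        return handle_refund(request)
    return agent_loop(request)
```

The structure keeps the common case cheap and predictable while reserving the agent's flexibility (and its cost) for the requests that actually need it.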