Prompt Chaining¶
Prompt chaining breaks a complex task into a sequence of smaller steps. Each step (LLM call) builds on the output of the previous one, with intermediate gates that validate quality before proceeding. If an output fails validation, the process can exit early to prevent error propagation.
Think of it as an assembly line where each station contributes a specific part to the final product.
Why It Matters¶
- Increased accuracy — Each LLM call focuses on a specific goal, reducing errors and improving overall quality
- Modularity — You can inspect and debug intermediate steps, making the workflow easier to adapt and refine
- Enhanced reliability — Programmatic gates catch errors early, increasing confidence in the final output
- Efficiency trade-off — This workflow trades some latency (each step is a separate model call) for more robust, higher-quality outputs
Key Components¶
| Component | Purpose | Example |
|---|---|---|
| LLM Call 1 | Handles the initial step and produces the first output | Generate a document outline based on user input |
| Gate | Validates the output of the previous call; continues or exits | Check if the outline contains all necessary sections |
| LLM Call 2 | Processes the validated output from the previous step | Expand the approved outline into a detailed draft |
| LLM Call 3 | Finalizes the task by refining or transforming the output | Translate the draft or format it for publication |
| Exit | Ends the workflow early if validation fails | Stop if the outline doesn't meet quality standards |
| Output | Delivers the final product after all steps complete successfully | A polished, translated document ready for publication |
When to Use It¶
- Multi-step processes — Tasks that naturally decompose into sequential steps (generate, validate, refine)
- Tasks requiring validation — When intermediate outputs need quality checks before proceeding
- Complex workflows — When different steps benefit from different prompts, models, or configurations
Example: Generating Marketing Materials¶
A company needs product launch copy translated into multiple languages with consistent tone:
- LLM Call 1 — Generate initial marketing copy from product details and target audience
- Gate — Validate against tone guide and campaign goals
- LLM Call 2 — Translate the validated copy into multiple languages
- Gate — Ensure translations retain tone and key messages
- LLM Call 3 — Format translated copy for each platform (email, social media, website)
- Output — Finalized, multilingual marketing materials ready for deployment
How to Implement¶
- Decompose the task — Identify the logical subtasks that make up the overall goal
- Define validation criteria — Establish clear checkpoints (gates) to evaluate outputs after each step
- Connect steps programmatically — Design the workflow so outputs from one step feed into the next
- Test and refine — Ensure each step performs as intended and adjust based on intermediate results
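These steps can be generalized into a small reusable runner. The sketch below assumes each step is a `(prompt template, gate)` pair and that an `llm` callable is injected, which makes each stage easy to test in isolation with a stub; none of these names come from a specific library.

```python
from typing import Callable, Optional

# A step is a prompt template (with one "{}" slot) plus its validation gate.
Step = tuple[str, Callable[[str], bool]]

def run_chain(steps: list[Step], initial: str,
              llm: Callable[[str], str]) -> Optional[str]:
    # Feed each step's output into the next, checking the gate between steps.
    current = initial
    for template, gate in steps:
        current = llm(template.format(current))
        if not gate(current):
            return None  # exit early on a failed gate
    return current
```

For testing, `llm` can be any function from string to string, e.g. `run_chain(steps, "topic", str.upper)`, which makes it easy to verify the wiring before swapping in a real model client.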
Based on Building Effective Agents by Anthropic.
Related¶
- Workflow Architecture Patterns Overview
- Augmented LLM — the foundation this pattern builds on
- Routing — another structured workflow for branching paths
- Evaluator-Optimizer — iterative refinement with feedback loops
- Build > Design Your AI Workflow