LLM Prompt Chaining Cheat Sheet

Compose prompts, agents, and tool calls for complex workflows

Last Updated: November 21, 2025

Chain Types

Type         Goal
Sequential   Feed previous LLM answers into the next step as context (sketch below).
Agents       Call tools from prompts to retrieve external data or take actions.
Map-Reduce   Chunk long docs, summarize each chunk, then synthesize (sketch below).
ReAct        Interleave reasoning and actions to browse the web or call APIs.
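A minimal sequential-chain sketch, assuming the legacy LangChain 0.0.x API (LLMChain plus SimpleSequentialChain) and an OpenAI key in the environment; the prompts and variable names are illustrative:

    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain, SimpleSequentialChain

    llm = OpenAI(temperature=0)

    # Step 1: draft an outline for a topic.
    outline_prompt = PromptTemplate(
        input_variables=["topic"],
        template="Write a short outline about {topic}.",
    )
    outline_chain = LLMChain(llm=llm, prompt=outline_prompt)

    # Step 2: expand whatever step 1 produced.
    expand_prompt = PromptTemplate(
        input_variables=["outline"],
        template="Expand this outline into one paragraph:\n{outline}",
    )
    expand_chain = LLMChain(llm=llm, prompt=expand_prompt)

    # SimpleSequentialChain pipes each step's output into the next step's input.
    pipeline = SimpleSequentialChain(chains=[outline_chain, expand_chain])
    print(pipeline.run("prompt chaining"))

And a map-reduce summarization sketch under the same legacy-API assumption; the input file name is made up, and load_summarize_chain handles both the per-chunk map step and the final synthesis:

    from langchain.llms import OpenAI
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain.docstore.document import Document
    from langchain.chains.summarize import load_summarize_chain

    llm = OpenAI(temperature=0)
    long_text = open("report.txt").read()  # hypothetical long document

    # Map: split into chunks so each one fits in the context window.
    splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)
    docs = [Document(page_content=chunk) for chunk in splitter.split_text(long_text)]

    # Reduce: per-chunk summaries are combined into one final summary.
    chain = load_summarize_chain(llm, chain_type="map_reduce")
    print(chain.run(docs))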

Chain Snippets

chain.run({'question': q})
Run a LangChain chain with structured inputs.
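A hedged expansion of this snippet, assuming the legacy LangChain LLMChain API; the prompt, question, and context are illustrative. With more than one input variable, run() takes a dict keyed by the template's variables and returns the model's text output:

    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain

    prompt = PromptTemplate(
        input_variables=["question", "context"],
        template="Answer {question} using only this context:\n{context}",
    )
    chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)

    q = "What is prompt chaining?"
    context = "Prompt chaining composes several LLM calls into one workflow."
    answer = chain.run({"question": q, "context": context})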
agent_executor.run('search the docs')
Let an agent call a tool before answering.
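A hedged sketch of the agent pattern, assuming the legacy LangChain agents API and a SerpAPI key; the tool choice and query are illustrative. ZERO_SHOT_REACT_DESCRIPTION is the classic ReAct-style agent that interleaves reasoning steps with tool calls:

    from langchain.llms import OpenAI
    from langchain.agents import initialize_agent, load_tools, AgentType

    llm = OpenAI(temperature=0)
    tools = load_tools(["serpapi"], llm=llm)  # web search; needs SERPAPI_API_KEY

    # initialize_agent returns an AgentExecutor that loops: reason -> call tool -> observe.
    agent_executor = initialize_agent(
        tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
    )
    agent_executor.run("search the docs for how map-reduce summarization works")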
prompt_template.format(doc_summary=summary)
Fill in a prompt template to build the next prompt.
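For example, assuming the legacy PromptTemplate API (the summary text here is made up), format() returns a plain string that can be fed straight into the next LLM call:

    from langchain.prompts import PromptTemplate

    prompt_template = PromptTemplate(
        input_variables=["doc_summary"],
        template="Given this summary:\n{doc_summary}\n\nList three follow-up questions.",
    )

    summary = "The report argues that prompt chaining improves answer quality."
    next_prompt = prompt_template.format(doc_summary=summary)  # ready for the next step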
llm.generate([[human_message]])
Batch chat messages to the model; temperature is set when the model is constructed.
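A hedged example of batched generation, assuming the legacy LangChain chat-model API; the model settings and message are illustrative. Note that temperature is configured on the model, not per generate() call:

    from langchain.chat_models import ChatOpenAI
    from langchain.schema import HumanMessage

    chat = ChatOpenAI(temperature=0.2)
    human_message = HumanMessage(content="Summarize prompt chaining in one sentence.")

    # generate() takes a list of message lists (one inner list per conversation)
    # and returns an LLMResult.
    result = chat.generate([[human_message]])
    print(result.generations[0][0].text)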

Summary

Prompt chaining lets you build multi-step reasoning flows; keep tool calls concise and capture state so you can rerun failed chains.

💡 Pro Tip: Log intermediate outputs so you can replay chains when answers drift after prompt tweaks.
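A minimal sketch of that logging pattern, reusing the chains from the sequential sketch above (outline_chain, expand_chain); the trace format and file name are made up:

    import json

    steps = []  # one entry per chain step: name, inputs, output

    def run_step(name, chain, inputs):
        # Run a single chain step and record it so the run can be replayed later.
        output = chain.run(inputs)
        steps.append({"step": name, "inputs": inputs, "output": output})
        return output

    outline = run_step("outline", outline_chain, {"topic": "prompt chaining"})
    paragraph = run_step("expand", expand_chain, {"outline": outline})

    # Persist the trace; replay or resume a failed chain from this file.
    with open("chain_trace.json", "w") as f:
        json.dump(steps, f, indent=2)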