Last Updated: November 21, 2025
# LangChain

Building LLM-powered applications
## Core Concepts

| Concept | Description |
|---|---|
| LLM | Language model wrapper |
| Chain | Sequence of operations |
| Prompt Template | Reusable prompts |
| Agent | Autonomous LLM with tools |
| Memory | Conversation history |
| Vector Store | Embeddings database |
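To make the "Vector Store" row concrete: a vector store indexes embedding vectors and retrieves the texts whose vectors are most similar to a query vector. Here is a minimal sketch in plain Python using cosine similarity; `ToyVectorStore` and the hand-made two-dimensional vectors are illustrative stand-ins, not LangChain APIs (real stores use high-dimensional embeddings from a model).

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product of a and b divided by their magnitudes
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class ToyVectorStore:
    """Stores (vector, text) pairs and retrieves the most similar texts."""

    def __init__(self):
        self.items = []  # list of (vector, text) pairs

    def add(self, vector, text):
        self.items.append((vector, text))

    def similarity_search(self, query_vec, k=1):
        # Rank stored items by similarity to the query, highest first
        ranked = sorted(self.items, key=lambda it: cosine(it[0], query_vec), reverse=True)
        return [text for _, text in ranked[:k]]
```

Document QA chains such as RetrievalQA build on exactly this retrieve-by-similarity step, then feed the retrieved texts to the LLM as context.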
## Basic Usage

```python
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.agents import load_tools, initialize_agent

# Simple LLM call
llm = OpenAI(temperature=0.7)
response = llm("What is the capital of France?")

# Chain with prompt template
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
result = chain.run("colorful socks")

# Agent with tools (the "serpapi" search tool requires a SerpAPI key)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description")
agent.run("What is the weather in NYC?")
```
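Conceptually, the template-plus-chain pattern above is just string formatting followed by a model call. A minimal sketch with a stand-in model (`fake_llm` is a hypothetical placeholder, not a LangChain API, so this runs without any API key):

```python
def make_prompt(template, **kwargs):
    # Fill the template's {placeholders}, mirroring what a prompt template does
    return template.format(**kwargs)

def fake_llm(prompt):
    # Stand-in for a real model call; echoes the prompt it received
    return f"echo: {prompt}"

def run_chain(llm, template, **kwargs):
    # A "chain" in miniature: format the prompt, then call the model
    return llm(make_prompt(template, **kwargs))
```

Swapping `fake_llm` for a real model client is what LangChain's wrappers handle for you.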
## Common Chains

| Chain | Description |
|---|---|
| LLMChain | Simple LLM call with a prompt |
| SequentialChain | Multiple chains run in sequence |
| RetrievalQA | Question answering over documents |
| ConversationChain | Chat with memory |
| MapReduceChain | Process large documents in chunks |
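The SequentialChain idea, stripped of framework details, is plain function composition: each step's output becomes the next step's input. A sketch (`sequential_chain` is an illustrative helper, not the LangChain class):

```python
def sequential_chain(*steps):
    """Compose callables left to right: the output of one feeds the next."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run
```

In LangChain, each step would be a chain; here any callables work, which is enough to show the data flow.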
## Best Practices

- Use prompt templates for consistent, reusable prompts
- Add memory for conversational apps
- Use vector stores for question answering over documents
- Implement error handling and retries for API calls
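For the last point, a common pattern is retrying transient API failures with exponential backoff. A generic sketch (`call_with_retry` is not a LangChain helper; in practice you would also catch only the provider's transient error types rather than bare `Exception`):

```python
import time

def call_with_retry(fn, retries=3, base_delay=1.0):
    """Call fn(), retrying on failure with exponentially growing delays."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```

Wrap the model or agent call in a zero-argument function (e.g. `lambda: chain.run("colorful socks")`) and pass it to `call_with_retry`.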
## Quick Reference

LangChain simplifies building applications with LLMs: wrap a model, template the prompt, and chain the steps.