89. Hands-on ~ Create Your First Simple Graph
The passage explains the basics of building a simple LangGraph workflow
with core.py.
Key points
- core.py provides access to common LangGraph and LangChain classes like HumanMessage, AIMessage, BaseMessage, ChatOpenAI, TypedDict, and graph tools such as StateGraph, START, and END.
- A StateGraph is a graph where nodes share and update a common state.
- Nodes are functions that read the state and return updated values.
- Edges define the execution flow between nodes.
Example workflow
- Define a shared state with TypedDict:
  - input: str
  - output: str
  - step: int
- Create a node function, such as process, that:
  - copies input to output
  - increments step
- Build the graph with StateGraph(SimpleState).
- Add the node with graph.add_node("process", process).
- Connect the flow: START -> process -> END
- Compile the graph with graph.compile().
- Run it using app.invoke(…) with initial state values.
Result
The example shows how the state changes during execution:
- input stays the same
- output becomes the input value
- step increases by 1
Extra notes
- LangSmith tracing can be disabled if it causes issues.
- The graph can also be visualized as a diagram for easier understanding.
Overall
This is a basic template for creating LangGraph applications: define state, write node functions, connect them with edges, compile, and run.
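That template can be sketched without the library itself. Below is a dependency-free Python sketch of the same flow; the real code would use langgraph's StateGraph, START, and END, with run_graph standing in for what add_node, add_edge, compile, and invoke do together:

```python
from typing import Callable, TypedDict

# State shared by all nodes, mirroring the SimpleState described above.
class SimpleState(TypedDict):
    input: str
    output: str
    step: int

# A node reads the state and returns only the fields it updates.
def process(state: SimpleState) -> dict:
    return {"output": state["input"], "step": state["step"] + 1}

# Stand-in for StateGraph/compile/invoke: run nodes in edge order,
# merging each node's returned dict into the shared state.
def run_graph(nodes: list[Callable], state: SimpleState) -> SimpleState:
    for node in nodes:
        state = {**state, **node(state)}
    return state

result = run_graph([process], {"input": "hello", "output": "", "step": 0})
print(result)  # {'input': 'hello', 'output': 'hello', 'step': 1}
```

The key habit to notice: a node never mutates the state in place, it returns the fields it changed, and the runner merges them.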
90. Hands-on ~ Understanding Reducers and Accumulating State
The passage explains how LangGraph state can accumulate values instead of overwriting them by using reducers.
Main idea
A workflow’s state is the source of truth, so it should preserve all important information as the graph runs.
Example shown
A new state class, AccumulatingState, is created with two fields:
- messages: a list of strings using the add reducer, so new items are appended
- count: an integer using the add reducer, so values are summed
Graph behavior
Two steps are defined and connected in a graph:
- step one:
  - adds "step one executed" to messages
  - adds 1 to count
- step two:
  - adds "step two executed" to messages
  - adds 1 to count

The graph runs in order:
start -> step one -> step two -> end
Result
Starting from:
- messages = ["initial message"]
- count = 0

The final state becomes:
- messages = ["initial message", "step one executed", "step two executed"]
- count = 2
Key takeaway
Reducers tell LangGraph not to replace state values, but to combine new values with old ones. This preserves context across nodes and is essential for correct workflow behavior.
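The reducer idea can be shown in a few lines of plain Python. This sketch hardcodes operator.add where real LangGraph reads the reducer out of the Annotated metadata on each field:

```python
import operator
from typing import Annotated, TypedDict

# Annotated attaches a reducer to each field; LangGraph reads this
# metadata, while apply_update below simply hardcodes operator.add.
class AccumulatingState(TypedDict):
    messages: Annotated[list, operator.add]
    count: Annotated[int, operator.add]

def step_one(state: dict) -> dict:
    return {"messages": ["step one executed"], "count": 1}

def step_two(state: dict) -> dict:
    return {"messages": ["step two executed"], "count": 1}

# Combine each update with the old value instead of overwriting it.
def apply_update(state: dict, update: dict) -> dict:
    new = dict(state)
    for key, value in update.items():
        new[key] = operator.add(new[key], value)
    return new

state = {"messages": ["initial message"], "count": 0}
for node in (step_one, step_two):
    state = apply_update(state, node(state))
print(state)  # messages holds all three entries, count is 2
```

operator.add works for both fields because it appends lists and sums integers, which is exactly the accumulation behavior described above.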
91. Hands-on ~ Message State - The Chat Pattern
This passage explains LangGraph’s message state pattern, which is especially important because many LangGraph apps are chat-based.
Main idea
Instead of creating a custom state structure, you can use
add_messages so that a messages field automatically accumulates
conversation history rather than being overwritten.
Example shown
- Define a MessageState with:
  - messages: Annotated[list[BaseMessage], add_messages]
- Create a chat node that:
  - takes the current messages
  - sends them to an LLM
  - appends the model response back into messages
Graph setup
The graph is built with:
- START -> chat_node -> END

Then it is invoked with a human message like:
- “Say hello in Tagalog”

The result contains both:
- the original human message
- the AI reply, e.g. “Kamusta”
Why it matters
The same message objects (HumanMessage, AIMessage, etc.) are used
across LangGraph and LangChain, which makes it easy to:
- pass prompts to models
- manage chat history
- build multi-node agent workflows without format conversion
Key takeaway
The combination of message state + an LLM node is a core pattern for building chat agents in LangGraph, because it preserves conversation history across the graph.
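The pattern can be sketched offline. The message classes and fake_llm below are minimal stand-ins (the real ones are LangChain's HumanMessage / AIMessage and a ChatOpenAI call); add_messages here just appends, which is the behavior the real reducer provides for new messages:

```python
from dataclasses import dataclass

# Minimal stand-ins for LangChain's HumanMessage / AIMessage classes,
# so the sketch runs without the library installed.
@dataclass
class HumanMessage:
    content: str

@dataclass
class AIMessage:
    content: str

# Stand-in for the add_messages reducer: append instead of overwrite.
def add_messages(history: list, new: list) -> list:
    return history + new

def fake_llm(messages: list) -> AIMessage:
    # Real code would call ChatOpenAI(...).invoke(messages) here.
    return AIMessage(content="Kamusta")

# The chat node: send the history to the LLM, return the reply.
def chat_node(state: dict) -> dict:
    return {"messages": [fake_llm(state["messages"])]}

state = {"messages": [HumanMessage(content="Say hello in Tagalog")]}
state["messages"] = add_messages(state["messages"], chat_node(state)["messages"])
# state["messages"] now holds the human message and the AI reply
```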
92. Hands-on ~ Multi-Node Pipelines - Chaining LLM Calls
Agent handoffs in LangGraph: why they matter
The post explains that a single chatbot agent can’t reliably handle every customer request, especially in production systems. To solve this, LangGraph uses a handoff pattern where a triage agent routes each request to the right specialist, such as:
- Billing for charge and refund issues
- Support for bugs and troubleshooting
- Sales for upgrades and pricing
- Direct response when no escalation is needed
Why handoffs are useful
Handoffs improve:
- response accuracy
- customer satisfaction
- latency
- operational cost
They also reduce unnecessary LLM calls by allowing triage to answer simple questions directly.
Shared state and structured routing
The system uses shared state fields like:
- messages
- current_agent
- handoff_reason
- context_summary
The triage agent makes routing decisions using structured output rather than free-form text, typically returning values like:
- sales
- support
- billing
- stay
- end
This keeps routing predictable and reliable.
System design
The architecture includes:
- a triage agent that decides where the request goes
- specialist agents for sales, support, and billing
- a routing function that sends the flow based on current_agent
Each specialist receives the context summary so it does not start from scratch.
Handoff vs supervisor pattern
The post contrasts:
- Handoff pattern: triage sends the request once and the specialist handles it
- Supervisor pattern: tasks often loop back to a central coordinator
It notes that both can be combined in real systems.
Main takeaway
The handoff pattern is a practical production design that mirrors real organizations. It helps route users efficiently, save cost, and improve experience, while LangGraph provides the flexibility to build these workflows cleanly.
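The routing function at the heart of this design is small. Here is a sketch (the state keys and route labels mirror the ones listed above; the fallback-to-"end" behavior is an assumption, not taken from the source):

```python
# Sketch of the routing function; the labels mirror the structured-output
# values (sales, support, billing, stay, end) the triage agent returns.
ROUTES = {"sales", "support", "billing", "stay", "end"}

def route_by_agent(state: dict) -> str:
    """Choose the next node from the shared current_agent field."""
    agent = state.get("current_agent", "end")
    return agent if agent in ROUTES else "end"

state = {
    "messages": ["I was double-charged last month"],
    "current_agent": "billing",
    "handoff_reason": "possible duplicate charge",
    "context_summary": "customer reports a duplicate charge",
}
print(route_by_agent(state))  # billing
```

Because triage emits one of a fixed set of labels rather than free text, the router is a plain lookup and stays predictable.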
93. Exercise ~ Build Your First Node
The passage explains how to build a simple LangGraph workflow that:
- accepts a topic
- uses Node 1 to generate three questions about that topic
- uses Node 2 to answer one of those questions (the first one)
- returns both the questions and the answer in the final state
Main steps described
- Define the state
  - Create a TypedDict with:
    - topic
    - questions
    - answer
- Initialize the LLM
- Create node functions
  - generate_questions: generates three questions from the topic
  - answer_question: answers the first generated question
- Build the graph
  - add the nodes
  - connect them with edges
  - set the entry point
  - compile the graph
- Run the graph
  - test it with a topic like “The future of renewable energy”
Expected output
The graph should return:
- the original topic
- three questions
- one answer
Key takeaway
A LangGraph workflow is built from:
- state
- nodes
- edges
Understanding those three parts lets you create more advanced workflows later.
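A skeleton solution for the exercise, with a stubbed LLM so it runs offline (fake_llm and its canned replies are placeholders for a real chat-model call; the node names follow the ones listed above):

```python
from typing import TypedDict

class QAState(TypedDict):
    topic: str
    questions: list
    answer: str

# Stub LLM so the sketch runs offline; real code would call a chat model.
def fake_llm(prompt: str) -> str:
    if "three questions" in prompt:
        return "Q1?\nQ2?\nQ3?"
    return "A plausible answer."

def generate_questions(state: QAState) -> dict:
    text = fake_llm(f"Write three questions about: {state['topic']}")
    return {"questions": text.splitlines()}

def answer_question(state: QAState) -> dict:
    # Answer only the first generated question, as the exercise asks.
    return {"answer": fake_llm(f"Answer this: {state['questions'][0]}")}

state: QAState = {"topic": "The future of renewable energy",
                  "questions": [], "answer": ""}
for node in (generate_questions, answer_question):
    state = {**state, **node(state)}
print(state["questions"])  # ['Q1?', 'Q2?', 'Q3?']
```

The final state keeps the topic, all three questions, and the answer, matching the expected output above.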
94. Hands-on ~ Full LangGraph Step-by-Step Workflow
This walkthrough shows how to build a simple conversation graph with a stateful flow:
- Define a ConversationState containing:
  - messages for chat history
  - sentiment for the latest sentiment label
  - response_count to track replies
- Create two graph nodes:
  - analyze_sentiment: looks at the latest user message and classifies it as positive, negative, or neutral
  - generate_response: uses that sentiment to choose a response style and generate an AI reply
- Connect the nodes with edges so execution flows:
  - analyze_sentiment → generate_response
- Compile the graph into an executable app and test it with example messages.
Main behavior
- Positive input gets an enthusiastic reply
- Negative input gets an empathetic reply
- Neutral input gets a helpful reply
Overall purpose
The example demonstrates the basics of:
- state management
- node definition
- graph wiring
- sequential graph execution
It’s a simple introduction to building conversation workflows with a graph-based structure.
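The two-node flow can be sketched without the library. A keyword rule stands in for the LLM classifier, and the keyword lists and style labels are illustrative assumptions, not taken from the source:

```python
# Dependency-free sketch of the analyze -> respond pipeline above.
# A real version would classify and reply with an LLM.
def analyze_sentiment(state: dict) -> dict:
    text = state["messages"][-1].lower()
    if any(w in text for w in ("great", "love", "awesome")):
        return {"sentiment": "positive"}
    if any(w in text for w in ("bad", "hate", "broken")):
        return {"sentiment": "negative"}
    return {"sentiment": "neutral"}

STYLES = {"positive": "enthusiastic", "negative": "empathetic",
          "neutral": "helpful"}

def generate_response(state: dict) -> dict:
    style = STYLES[state["sentiment"]]
    return {"messages": state["messages"] + [f"[{style}] reply"],
            "response_count": state["response_count"] + 1}

state = {"messages": ["I love this product!"], "sentiment": "",
         "response_count": 0}
for node in (analyze_sentiment, generate_response):  # edge order
    state = {**state, **node(state)}
print(state["sentiment"])  # positive
```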
96. Hands-on ~ Basic Routing with Literal Routing Types
This example shows how to use conditional edges in a StateGraph to
route inputs dynamically based on their type.
Summary
- Define shared state
  - A RouterState holds:
    - query: the user input
    - query_type: the classified category
    - response: the final output
- Create processing nodes
  - classify_query: uses an LLM to label the query as question, command, or statement
  - handle_question: answers the question
  - handle_command: returns a command-style response
  - handle_statement: acknowledges the statement
- Route based on classification
  - route_by_type checks query_type
  - It returns one of three fixed branches using Literal:
    - "question"
    - "command"
    - "statement"
- Build the graph
  - Add nodes for classification and each handler
  - Set classify as the entry point
  - Use add_conditional_edges to send the flow to the correct handler
  - Each handler ends at END
- Run the app
  - Example inputs are processed differently depending on whether they are questions, commands, or statements
Core idea
The key pattern is:
classify → route conditionally → handle appropriately
This makes the graph flexible, modular, and easy to extend with more branches later.
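The pattern fits in a short sketch. Literal pins the router's return value to the three branch names used as edge targets; the punctuation heuristic in classify_query is a stand-in for the LLM classifier:

```python
from typing import Literal

# The Literal return type documents the only branches the router can take,
# matching the edge targets registered with add_conditional_edges.
def route_by_type(state: dict) -> Literal["question", "command", "statement"]:
    return state["query_type"]

def classify_query(state: dict) -> dict:
    # A real version would ask an LLM; punctuation stands in here.
    query = state["query"]
    if query.endswith("?"):
        return {"query_type": "question"}
    if query.endswith("!"):
        return {"query_type": "command"}
    return {"query_type": "statement"}

handlers = {
    "question": lambda s: {"response": "Answering: " + s["query"]},
    "command": lambda s: {"response": "Executing: " + s["query"]},
    "statement": lambda s: {"response": "Noted: " + s["query"]},
}

state = {"query": "What is LangGraph?", "query_type": "", "response": ""}
state = {**state, **classify_query(state)}
state = {**state, **handlers[route_by_type(state)](state)}
print(state["response"])  # Answering: What is LangGraph?
```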
97. Hands-on ~ Conditional Looping
This passage explains a graph workflow that evaluates content quality, optionally improves it, and loops until it is good enough or a maximum number of iterations is reached.
Main components
- State fields: content, quality_score, feedback, final, and iteration
- evaluate_quality: sends the content to an LLM and gets a quality score from 1 to 10; defaults to 5 if there is an error
- improve_content: uses the LLM to improve the content and increments iteration
- finalize_content: outputs the final content plus feedback about whether it was approved
Loop logic
A conditional route decides what happens after evaluation:
- If quality >= 7 → finalize
- If iteration >= 3 → finalize, because the iteration limit is reached
- Otherwise → improve
Graph structure
- Start → evaluate
- evaluate routes conditionally to:
  - improve
  - finalize
- improve loops back to evaluate
- finalize ends the graph
Example run
The graph is initialized with:
- content: "AI is cool"
- quality: 0
- feedback: ""
- final: ""
- iteration: 0
The workflow repeatedly evaluates and improves the content until it either:
- achieves a good enough quality score, or
- hits the maximum number of iterations.
The example ends with feedback like:
approved after 1 iteration with a score of 7
Overall idea
This is a conditional loop in a graph, where content is repeatedly evaluated and improved until it passes the quality threshold or reaches the iteration cap.
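The loop logic can be sketched deterministically. A length-based score stands in for the LLM's 1-10 rating so the loop terminates offline; the thresholds match the routing rules above:

```python
# Sketch of the evaluate -> route -> improve cycle with the thresholds
# described above. Scores are stubbed so the run is deterministic.
QUALITY_THRESHOLD = 7
MAX_ITERATIONS = 3

def evaluate_quality(state: dict) -> dict:
    # Real code asks an LLM for a 1-10 score; content length stands in.
    return {"quality_score": min(10, len(state["content"]) // 5)}

def improve_content(state: dict) -> dict:
    return {"content": state["content"] + " (improved)",
            "iteration": state["iteration"] + 1}

def route(state: dict) -> str:
    if state["quality_score"] >= QUALITY_THRESHOLD:
        return "finalize"          # good enough
    if state["iteration"] >= MAX_ITERATIONS:
        return "finalize"          # iteration cap reached
    return "improve"               # loop back

state = {"content": "AI is cool", "quality_score": 0,
         "feedback": "", "final": "", "iteration": 0}
while True:
    state = {**state, **evaluate_quality(state)}
    if route(state) == "finalize":
        break
    state = {**state, **improve_content(state)}
state["final"] = state["content"]
```

With this stub the content grows each pass until the score clears the threshold, which is the same stop-or-loop behavior the graph's conditional edge implements.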
98. Hands-on ~ Multipath Routing
The passage explains multi-path routing in LangGraph, where a task is directed down one of several routes based on its properties.
Main idea
- Analyze the task first using an LLM-based node.
- The analysis node determines:
  - Urgency: urgent or normal
  - Complexity: complex or simple
- These labels are then used by a routing function to choose one of four paths:
  - urgent_complex
  - urgent_simple
  - normal_complex
  - normal_simple
Handlers
Each route has a corresponding handler:
- Urgent + Complex → senior team
- Urgent + Simple → quick response
- Normal + Complex → specialist
- Normal + Simple → standard path
Graph structure
The graph is built by:
- adding the analysis node
- adding the four handler nodes
- setting the analysis node as the start
- using a conditional edge to route tasks based on urgency and complexity
- connecting each handler to the end node
Example outcomes
The passage gives examples such as:
- “Server is down, need immediate fix” → urgent, complex → senior team
- “Update the documentation for the API” → normal, simple → standard path
- “Fix the typo on the homepage” → urgent, simple → quick response
Key takeaway
Once conditional routing is understood, it can be used to build workflows of nearly any complexity. The core mechanics stay the same; only the number of branches grows.
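The four-way route reduces to combining the two labels into one key, a sketch of the routing function described above (the handler descriptions mirror the table; the state keys are assumed names):

```python
# Combine the two analysis labels into one of the four route names.
def route_task(state: dict) -> str:
    return f"{state['urgency']}_{state['complexity']}"

# One handler per route, matching the mapping described above.
HANDLERS = {
    "urgent_complex": "senior team",
    "urgent_simple": "quick response",
    "normal_complex": "specialist",
    "normal_simple": "standard path",
}

state = {"task": "Server is down, need immediate fix",
         "urgency": "urgent", "complexity": "complex"}
print(HANDLERS[route_task(state)])  # senior team
```

Adding a third label (say, a department) just multiplies the route names; the mechanics stay identical.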
99. Hands-on ~ Cycles and Loops - Self-Correcting Code Writer
The passage explains how LangGraph supports cycles and loops, enabling agents to retry and improve instead of following only a straight-line workflow.
Main example: self-correcting code generator
A graph is built where the agent:
- Generates code
- Validates the code
- If validation fails, loops back and tries again
- Stops when the code works or when a maximum iteration limit is reached
Key components
- State (CodeGenState) includes:
  - task
  - code
  - errors
  - iteration
  - max_iterations
  - success
- Errors use a reducer (operator.add) so new errors are appended, giving the agent memory of past failures.
- Iteration limits prevent infinite loops.
Nodes
- generate_code: uses the LLM to produce or fix code based on prior errors.
- validate_code: checks code using real Python compilation/execution, not the model’s judgment.
- should_continue: decides whether to stop, loop again, or finalize.
- finalize: clean exit node.
Workflow
The graph runs as:
generate → validate → (generate or finalize) → end
Demonstration
- In a simple factorial task, the model succeeds on the first try.
- With stricter validation, the loop runs multiple times and may stop at the max iteration limit if it still fails.
Core lesson
The important pattern is:
- LLM for generation
- deterministic validation for checking
- graph logic for controlled retries
This is what makes LangGraph useful for building more intelligent, self-correcting agents.
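The deterministic half of the loop is worth seeing concretely. A minimal validate_code can use Python's built-in compile(), so pass/fail comes from the interpreter, not the model (the error-list return mirrors the operator.add reducer described above):

```python
# Deterministic validation: let Python's compiler, not the LLM,
# decide whether the generated code is syntactically valid.
def validate_code(state: dict) -> dict:
    try:
        compile(state["code"], "<generated>", "exec")
        return {"success": True}
    except SyntaxError as err:
        # Returned as a list so the graph's operator.add reducer
        # appends it to the accumulated error history.
        return {"success": False, "errors": [str(err)]}

good = validate_code(
    {"code": "def f(n):\n    return 1 if n <= 1 else n * f(n - 1)"})
bad = validate_code({"code": "def f(n) return n"})
print(good["success"], bad["success"])  # True False
```

Real implementations often go further and execute the code against test cases, but even a compile check gives the loop an objective failure signal to feed back into generate_code.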
100. Hands-on ~ Iterative Research Agent with Loops and Cycles
The demo describes an iterative research agent that repeatedly researches a topic, generates deeper follow-up questions, and continues until a maximum depth is reached, then synthesizes all findings into a final summary.
Core workflow
- Research the initial topic.
- Generate a deeper question from the latest findings.
- Repeat research using that new question.
- Use a router/condition to decide whether to continue or stop.
- Synthesize all collected findings into one final response.
State design
The agent’s state tracks:
- topic
- findings (accumulated with a reducer)
- questions
- iteration
- max_depth
- summary
Nodes in the graph
- Research node: searches the topic or the latest generated question.
- Generate questions node: creates a deeper follow-up question and updates iteration.
- Synthesizer node: combines all findings into a final summary.
Control flow
A should_continue function checks whether the current iteration
has reached max_depth:
- If not, it loops back to research
- If yes, it routes to synthesize
Example run
With:
- topic: "quantum computing applications"
- max_depth = 2

the graph:
- researches the topic,
- generates a deeper question,
- researches again,
- then synthesizes the results after reaching depth 2.
Main takeaway
The demo shows how a graph of collaborating nodes can incrementally explore a topic, refine questions, and produce a final synthesized answer in a controlled loop.
102. Hands-on ~ Human Input - Interrupt for Approval
This example explains how to build a human-in-the-loop workflow in
LangGraph using a checkpointer (MemorySaver) so the graph can
pause, accept human feedback, and then resume.
Workflow pattern
- Interrupt
- Review
- Modify
- Resume
Main pieces
- State fields:
  - request: original user request
  - draft: LLM-generated draft
  - approved: human approval boolean
  - feedback: human revision notes
  - final: final output
- Nodes:
  - create_draft: generates a draft from the request
  - wait_for_approval: pause point before human review
  - finalize: either keeps the draft or revises it based on approval/feedback
Graph flow
start -> draft -> approval -> finalize -> end
Execution process
- Run the graph with a unique thread_id.
- Use get_state(config) to inspect the paused state.
- Use update_state(…) to add human approval or feedback.
- Resume with invoke(None, config) to continue from the pause point.
Key idea
The checkpointer saves intermediate state, so LangGraph can stop before approval and later continue exactly where it left off.
Purpose
This pattern is useful when you want a human to review and possibly revise AI-generated output before it is finalized.
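The pause/update/resume cycle can be sketched with a plain dict of snapshots. In real LangGraph the checkpointer (MemorySaver) plays the role of this `checkpoints` dict, and invoke/get_state/update_state do what the three functions below do by hand:

```python
# Dependency-free sketch of interrupt -> review -> modify -> resume.
checkpoints: dict = {}  # thread_id -> paused state snapshot

def create_draft(state: dict) -> dict:
    # Real code would ask an LLM to draft from state["request"].
    return {"draft": f"Draft for: {state['request']}"}

def run_until_approval(state: dict, thread_id: str) -> None:
    state = {**state, **create_draft(state)}
    checkpoints[thread_id] = state          # pause: snapshot saved

def update_state(thread_id: str, updates: dict) -> None:
    checkpoints[thread_id].update(updates)  # human edits the paused state

def resume(thread_id: str) -> dict:
    state = checkpoints[thread_id]          # continue from the snapshot
    state["final"] = (state["draft"] if state["approved"]
                      else f"{state['draft']} (revised: {state['feedback']})")
    return state

run_until_approval({"request": "welcome email", "approved": False,
                    "feedback": "", "final": ""}, thread_id="t1")
update_state("t1", {"approved": True})
result = resume("t1")
print(result["final"])  # Draft for: welcome email
```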
103. Full Human in the Loop Workflow
The passage explains an iterative human review workflow using LangGraph, where a document can go through multiple review and revision cycles before final approval.
Key points:
- State includes:
  - document
  - review_comments
  - revision_count
  - status
Workflow nodes:
-
submit for review: marks the document as waiting for human review
-
apply feedback: uses reviewer comments to revise the document and increments the revision count
-
route after review: sends the flow to either:
-
finalizeif approved -
apply_feedbackif changes are still needed
-
-
finalize: ends the workflow cleanly
-
-
Graph behavior:
-
The flow loops through review and revision repeatedly:
-
submit → review decision → apply feedback → submit → … → done
-
-
A checkpointer is used for memory
-
An interrupt before submit pauses the workflow so a human can inspect and update the state
-
-
Process example:
-
Initial document is submitted
-
Human adds comments like “add more technical depth” and “include examples”
-
Status is set to
needs_revision -
Workflow resumes, revises the document, and pauses again
-
Eventually status is set to
approved -
Workflow finalizes and returns the completed document
-
Main takeaway:
This pattern enables a controlled human-in-the-loop loop where AI can revise content repeatedly based on feedback until a human approves it.
104. Hands-on ~ Checkpointing Deep Dive
Checkpointing makes graph-based apps stateful instead of stateless. It saves a snapshot of the graph’s state after each node, which enables:
- conversation memory across runs
- recovery after crashes or failures
- human-in-the-loop pause and resume
- replay, rollback, and branching conversations
The example shows how to define chat state with a messages field using
add_messages, build a simple StateGraph, and compile it with a
checkpointer.
Two checkpointing backends are demonstrated:
- MemorySaver for in-memory testing
- SqliteSaver for durable persistence across restarts
With a thread ID in the config, the app can retrieve previous state, continue a conversation, and inspect state history. SQLite persistence lets the same conversation be recovered after the app restarts.
Checkpoint history is useful for debugging, auditing, and rewinding. Branching is also possible by copying a prior state into new thread IDs, allowing independent paths like alternative planning or what-if exploration.
Overall, checkpointing turns temporary execution state into durable, inspectable, reusable memory.
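The core of a checkpointer is small enough to sketch. This stand-in keeps per-thread snapshot history like MemorySaver does conceptually (the class and method names here are illustrative, not the real langgraph API):

```python
import copy

# Sketch of a checkpointer: snapshot the state after each node under a
# thread id; the history enables replay, rollback, and branching.
class MemoryCheckpointer:
    def __init__(self):
        self.history: dict = {}  # thread_id -> list of snapshots

    def save(self, thread_id: str, state: dict) -> None:
        self.history.setdefault(thread_id, []).append(copy.deepcopy(state))

    def latest(self, thread_id: str) -> dict:
        return copy.deepcopy(self.history[thread_id][-1])

    def branch(self, src: str, index: int, new_thread: str) -> None:
        # Copy a prior snapshot into a fresh thread for what-if paths.
        self.history[new_thread] = [copy.deepcopy(self.history[src][index])]

saver = MemoryCheckpointer()
saver.save("t1", {"messages": ["hi"]})
saver.save("t1", {"messages": ["hi", "hello!"]})
saver.branch("t1", 0, "t2")  # branch from the first snapshot
print(saver.latest("t2"))    # {'messages': ['hi']}
```

Swapping the in-memory dict for a SQLite table is what turns this into durable persistence across restarts.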
105. Checkpoint Internals Deep Dive
The passage explains how a graph checkpoint stores and exposes execution state.
Main points
- A simple two-node graph is built with:
  - messages: a list with an add reducer, so new messages append
  - step: a string
- The nodes are:
  - analyze: sends messages to the LLM and sets step = "analyze"
  - summarize: summarizes the current state
Inspecting checkpoints
- After running the graph, get_state() shows:
  - current state values
  - current step
  - messages
  - next node to run
- get_state_history() shows the full checkpoint trail, including:
  - step number
  - source
  - writes
  - message count
  - checkpoint ID
  - timestamps
What a checkpoint contains
A checkpoint includes:
- State values — actual data like messages and step
- Next nodes — what will run next
- State config — thread ID and checkpoint ID
- Parent config — link to the previous checkpoint
- Metadata — source, step, and execution info
- Timestamp — when it was created
Why checkpoints matter
- They are saved:
  - before execution starts
  - after each node runs
  - at interrupt points
- This enables pausing and resuming, especially for human-in-the-loop workflows.
Mental model
Think of checkpoints as a linked list of snapshots:
- each checkpoint records the graph’s state at a moment in time
- each one points to its parent
- the latest checkpoint represents the current state
Overall, the text shows how checkpoints preserve both current state and full execution history.
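The linked-list mental model maps directly onto a record type. This sketch bundles the six pieces listed above into a dataclass; the field names are illustrative, not langgraph's internal schema:

```python
import time
import uuid
from dataclasses import dataclass, field
from typing import Optional

# Sketch of a checkpoint record: state values, next nodes, thread and
# checkpoint ids, a parent pointer, metadata, and a timestamp.
@dataclass
class Checkpoint:
    values: dict
    next_nodes: tuple
    thread_id: str
    parent_id: Optional[str]
    metadata: dict = field(default_factory=dict)
    id: str = field(default_factory=lambda: str(uuid.uuid4()))
    ts: float = field(default_factory=time.time)

root = Checkpoint(values={"messages": [], "step": ""},
                  next_nodes=("analyze",), thread_id="t1", parent_id=None)
child = Checkpoint(values={"messages": ["hi"], "step": "analyze"},
                   next_nodes=("summarize",), thread_id="t1",
                   parent_id=root.id, metadata={"source": "loop", "step": 1})
# child.parent_id == root.id: the snapshots form a linked list,
# and the newest checkpoint is always the current state.
```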