89. Hands-on ~ Create Your First Simple Graph

The passage explains the basics of building a simple LangGraph workflow with core.py.

Key points

  • core.py provides access to common LangGraph and LangChain classes like HumanMessage, AIMessage, BaseMessage, ChatOpenAI, TypedDict, and graph tools such as StateGraph, START, and END.

  • A StateGraph is a graph where nodes share and update a common state.

  • Nodes are functions that read the state and return updated values.

  • Edges define the execution flow between nodes.

Example workflow

  1. Define a shared state with TypedDict:

    • input: str

    • output: str

    • step: int

  2. Create a node function, such as process, that:

    • copies input to output

    • increments step

  3. Build the graph with StateGraph(SimpleState).

  4. Add the node with graph.add_node("process", process).

  5. Connect the flow:

    • START -> process -> END

  6. Compile the graph with graph.compile().

  7. Run it using app.invoke(...) with initial state values.
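
A minimal sketch of the full example in code (the initial input value here is an arbitrary placeholder):

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class SimpleState(TypedDict):
    input: str
    output: str
    step: int

def process(state: SimpleState) -> dict:
    # Copy input to output and increment the step counter
    return {"output": state["input"], "step": state["step"] + 1}

graph = StateGraph(SimpleState)
graph.add_node("process", process)
graph.add_edge(START, "process")
graph.add_edge("process", END)
app = graph.compile()

result = app.invoke({"input": "hello", "output": "", "step": 0})
print(result)  # {'input': 'hello', 'output': 'hello', 'step': 1}
```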

Result

The example shows how the state changes during execution:

  • input stays the same

  • output becomes the input value

  • step increases by 1

Extra notes

  • LangSmith tracing can be disabled if it causes issues.

  • The graph can also be visualized as a diagram for easier understanding.

Overall

This is a basic template for creating LangGraph applications: define state, write node functions, connect them with edges, compile, and run.

90. Hands-on ~ Understanding Reducers and Accumulating State

The passage explains how LangGraph state can accumulate values instead of overwriting them by using reducers.

Main idea

A workflow’s state is the source of truth, so it should preserve all important information as the graph runs.

Example shown

A new state class, AccumulatingState, is created with two fields:

  • messages: a list of strings using the add reducer, so new items are appended

  • count: an integer using the add reducer, so values are summed

Graph behavior

Two steps are defined and connected in a graph:

  1. step one

    • adds "step one executed" to messages

    • adds 1 to count

  2. step two

    • adds "step two executed" to messages

    • adds 1 to count

The graph runs in order:

  • start

  • step one

  • step two

  • end

Result

Starting from:

  • messages = ["initial message"]

  • count = 0

The final state becomes:

  • messages = ["initial message", "step one executed", "step two executed"]

  • count = 2
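
In code, the example looks roughly like this, assuming the add reducer is operator.add:

```python
import operator
from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, START, END

class AccumulatingState(TypedDict):
    messages: Annotated[list[str], operator.add]  # new items are appended
    count: Annotated[int, operator.add]           # new values are summed

def step_one(state: AccumulatingState) -> dict:
    return {"messages": ["step one executed"], "count": 1}

def step_two(state: AccumulatingState) -> dict:
    return {"messages": ["step two executed"], "count": 1}

graph = StateGraph(AccumulatingState)
graph.add_node("step_one", step_one)
graph.add_node("step_two", step_two)
graph.add_edge(START, "step_one")
graph.add_edge("step_one", "step_two")
graph.add_edge("step_two", END)
app = graph.compile()

result = app.invoke({"messages": ["initial message"], "count": 0})
# result["messages"] == ["initial message", "step one executed", "step two executed"]
# result["count"] == 2
```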

Key takeaway

Reducers tell LangGraph not to replace state values, but to combine new values with old ones. This preserves context across nodes and is essential for correct workflow behavior.

91. Hands-on ~ Message State - The Chat Pattern

This passage explains LangGraph’s message state pattern, which is especially important because many LangGraph apps are chat-based.

Main idea

Instead of creating a custom state structure, you can use add_messages so that a messages field automatically accumulates conversation history rather than being overwritten.

Example shown

  • Define a MessageState with:

    • messages: Annotated[list[BaseMessage], add_messages]

  • Create a chat node that:

    • takes the current messages

    • sends them to an LLM

    • appends the model response back into messages

Graph setup

The graph is built with:

  • START -> chat_node -> END

Then it is invoked with a human message like:

  • “Say hello in Tagalog”

The result contains both:

  • the original human message

  • the AI reply, e.g. “Kamusta”
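
A minimal sketch of the pattern (the model name is an assumption; any chat model would work):

```python
from typing import Annotated, TypedDict
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class MessageState(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]

llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model

def chat_node(state: MessageState) -> dict:
    # add_messages appends the reply to the history instead of replacing it
    return {"messages": [llm.invoke(state["messages"])]}

graph = StateGraph(MessageState)
graph.add_node("chat_node", chat_node)
graph.add_edge(START, "chat_node")
graph.add_edge("chat_node", END)
app = graph.compile()

result = app.invoke({"messages": [HumanMessage(content="Say hello in Tagalog")]})
print(result["messages"][-1].content)  # e.g. "Kamusta!"
```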

Why it matters

The same message objects (HumanMessage, AIMessage, etc.) are used across LangGraph and LangChain, which makes it easy to:

  • pass prompts to models

  • manage chat history

  • build multi-node agent workflows without format conversion

Key takeaway

The combination of message state + an LLM node is a core pattern for building chat agents in LangGraph, because it preserves conversation history across the graph.

92. Hands-on ~ Multi-Node Pipelines - Chaining LLM Calls

Agent handoffs in LangGraph: why they matter

The post explains that a single chatbot agent can’t reliably handle every customer request, especially in production systems. To solve this, LangGraph uses a handoff pattern where a triage agent routes each request to the right specialist, such as:

  • Billing for charge and refund issues

  • Support for bugs and troubleshooting

  • Sales for upgrades and pricing

  • Direct response when no escalation is needed

Why handoffs are useful

Handoffs improve:

  • response accuracy

  • customer satisfaction

They also reduce:

  • latency

  • operational cost

  • unnecessary LLM calls, since triage can answer simple questions directly

Shared state and structured routing

The system uses shared state fields like:

  • messages

  • current_agent

  • handoff_reason

  • context_summary

The triage agent makes routing decisions using structured output rather than free-form text, typically returning values like:

  • sales

  • support

  • billing

  • stay

  • end

This keeps routing predictable and reliable.

System design

The architecture includes:

  • a triage agent that decides where the request goes

  • specialist agents for sales, support, and billing

  • a routing function that sends the flow based on current_agent

Each specialist receives the context summary so it does not start from scratch.
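
A minimal skeleton of this architecture, with the triage decision and the specialist logic stubbed out so the wiring stays visible (node names and stub values are illustrative assumptions):

```python
from typing import Annotated, Literal, TypedDict
from langchain_core.messages import BaseMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class HandoffState(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]
    current_agent: str
    handoff_reason: str
    context_summary: str

def triage(state: HandoffState) -> dict:
    # The real triage node asks an LLM for a structured routing decision;
    # stubbed here so the wiring stays visible
    return {"current_agent": "support",
            "handoff_reason": "bug report",
            "context_summary": "user reports a crash on login"}

def make_specialist(name: str):
    def specialist(state: HandoffState) -> dict:
        # A real specialist would read context_summary and call its own LLM
        return {"messages": []}
    return specialist

def route_by_agent(state: HandoffState) -> Literal["sales", "support", "billing", "end"]:
    agent = state["current_agent"]
    return agent if agent in ("sales", "support", "billing") else "end"

graph = StateGraph(HandoffState)
graph.add_node("triage", triage)
for name in ("sales", "support", "billing"):
    graph.add_node(name, make_specialist(name))
graph.add_edge(START, "triage")
graph.add_conditional_edges("triage", route_by_agent,
    {"sales": "sales", "support": "support", "billing": "billing", "end": END})
for name in ("sales", "support", "billing"):
    graph.add_edge(name, END)
app = graph.compile()
```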

Handoff vs supervisor pattern

The post contrasts:

  • Handoff pattern: triage sends the request once and the specialist handles it

  • Supervisor pattern: tasks often loop back to a central coordinator

It notes that both can be combined in real systems.

Main takeaway

The handoff pattern is a practical production design that mirrors real organizations. It helps route users efficiently, save cost, and improve experience, while LangGraph provides the flexibility to build these workflows cleanly.

93. Exercise ~ Build Your First Node

The passage explains how to build a simple LangGraph workflow that:

  • accepts a topic

  • uses Node 1 to generate three questions about that topic

  • uses Node 2 to answer one of those questions (the first one)

  • returns both the questions and the answer in the final state

Main steps described

  1. Define the state

    • Create a TypedDict with:

      • topic

      • questions

      • answer

  2. Initialize the LLM

  3. Create node functions

    • generate_questions: generates three questions from the topic

    • answer_question: answers the first generated question

  4. Build the graph

    • add the nodes

    • connect them with edges

    • set the entry point

    • compile the graph

  5. Run the graph

    • test it with a topic like “The future of renewable energy”
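
One possible solution sketch (state and node names follow the exercise; the prompts and model name are assumptions):

```python
from typing import TypedDict
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END

class QAState(TypedDict):
    topic: str
    questions: str
    answer: str

llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model

def generate_questions(state: QAState) -> dict:
    reply = llm.invoke(f"Write three questions about: {state['topic']}")
    return {"questions": reply.content}

def answer_question(state: QAState) -> dict:
    first = state["questions"].strip().split("\n")[0]  # take the first question
    reply = llm.invoke(f"Answer this question: {first}")
    return {"answer": reply.content}

graph = StateGraph(QAState)
graph.add_node("generate_questions", generate_questions)
graph.add_node("answer_question", answer_question)
graph.add_edge(START, "generate_questions")  # entry point
graph.add_edge("generate_questions", "answer_question")
graph.add_edge("answer_question", END)
app = graph.compile()

result = app.invoke({"topic": "The future of renewable energy",
                     "questions": "", "answer": ""})
```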

Expected output

The graph should return:

  • the original topic

  • three questions

  • one answer

Key takeaway

A LangGraph workflow is built from:

  • state

  • nodes

  • edges

Understanding those three parts lets you create more advanced workflows later.

94. Hands-on ~ Full LangGraph Step-by-Step Workflow

This walkthrough shows how to build a simple conversation graph with a stateful flow:

  • Define a ConversationState containing:

    • messages for chat history

    • sentiment for the latest sentiment label

    • response_count to track replies

  • Create two graph nodes:

    1. analyze_sentiment: looks at the latest user message and classifies it as positive, negative, or neutral

    2. generate_response: uses that sentiment to choose a response style and generate an AI reply

  • Connect the nodes with edges so execution flows:

    • analyze_sentiment → generate_response

  • Compile the graph into an executable app and test it with example messages.
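
A sketch of the whole workflow (prompts and model name are assumptions):

```python
from typing import Annotated, TypedDict
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class ConversationState(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]
    sentiment: str
    response_count: int

llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model

def analyze_sentiment(state: ConversationState) -> dict:
    latest = state["messages"][-1].content
    label = llm.invoke(
        f"Classify the sentiment as positive, negative, or neutral (one word): {latest}"
    ).content.strip().lower()
    return {"sentiment": label}

def generate_response(state: ConversationState) -> dict:
    styles = {"positive": "enthusiastic", "negative": "empathetic"}
    style = styles.get(state["sentiment"], "helpful")  # neutral falls back to helpful
    reply = llm.invoke(
        f"Reply to this with a tone that is {style}: {state['messages'][-1].content}"
    )
    return {"messages": [reply], "response_count": state["response_count"] + 1}

graph = StateGraph(ConversationState)
graph.add_node("analyze_sentiment", analyze_sentiment)
graph.add_node("generate_response", generate_response)
graph.add_edge(START, "analyze_sentiment")
graph.add_edge("analyze_sentiment", "generate_response")
graph.add_edge("generate_response", END)
app = graph.compile()

result = app.invoke({"messages": [HumanMessage(content="I love this product!")],
                     "sentiment": "", "response_count": 0})
```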

Main behavior

  • Positive input gets an enthusiastic reply

  • Negative input gets an empathetic reply

  • Neutral input gets a helpful reply

Overall purpose

The example demonstrates the basics of:

  • state management

  • node definition

  • graph wiring

  • sequential graph execution

It’s a simple introduction to building conversation workflows with a graph-based structure.

96. Hands-on ~ Basic Routing with Literal Routing Types

This example shows how to use conditional edges in a StateGraph to route inputs dynamically based on their type.

Summary

  1. Define shared state

    • A RouterState holds:

      • query: the user input

      • query_type: the classified category

      • response: the final output

  2. Create processing nodes

    • classify_query: uses an LLM to label the query as question, command, or statement

    • handle_question: answers the question

    • handle_command: returns a command-style response

    • handle_statement: acknowledges the statement

  3. Route based on classification

    • route_by_type checks query_type

    • It returns one of three fixed branches using Literal:

      • "question"

      • "command"

      • "statement"

  4. Build the graph

    • Add nodes for classification and each handler

    • Set classify as the entry point

    • Use add_conditional_edges to send the flow to the correct handler

    • Each handler ends at END

  5. Run the app

    • Example inputs are processed differently depending on whether they are questions, commands, or statements
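
A sketch of the routing graph (the classification prompt and model name are assumptions):

```python
from typing import Literal, TypedDict
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END

class RouterState(TypedDict):
    query: str
    query_type: str
    response: str

llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model

def classify_query(state: RouterState) -> dict:
    label = llm.invoke(
        "Classify as exactly one of: question, command, statement.\n"
        f"Input: {state['query']}"
    ).content.strip().lower()
    return {"query_type": label}

def handle_question(state: RouterState) -> dict:
    return {"response": llm.invoke(state["query"]).content}

def handle_command(state: RouterState) -> dict:
    return {"response": f"Executing command: {state['query']}"}

def handle_statement(state: RouterState) -> dict:
    return {"response": f"Noted: {state['query']}"}

def route_by_type(state: RouterState) -> Literal["question", "command", "statement"]:
    # Literal restricts the router to the three known branches
    qt = state["query_type"]
    return qt if qt in ("question", "command", "statement") else "statement"

graph = StateGraph(RouterState)
graph.add_node("classify", classify_query)
graph.add_node("handle_question", handle_question)
graph.add_node("handle_command", handle_command)
graph.add_node("handle_statement", handle_statement)
graph.add_edge(START, "classify")
graph.add_conditional_edges("classify", route_by_type, {
    "question": "handle_question",
    "command": "handle_command",
    "statement": "handle_statement",
})
for node in ("handle_question", "handle_command", "handle_statement"):
    graph.add_edge(node, END)
app = graph.compile()
```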

Core idea

The key pattern is:

classify → route conditionally → handle appropriately

This makes the graph flexible, modular, and easy to extend with more branches later.

97. Hands-on ~ Conditional Looping

This passage explains a graph workflow that evaluates content quality, optionally improves it, and loops until it is good enough or a maximum number of iterations is reached.

Main components

  • State fields: content, quality_score, feedback, final, and iteration

  • evaluate_quality: sends the content to an LLM and gets a quality score from 1 to 10; defaults to 5 if there is an error

  • improve_content: uses the LLM to improve the content and increments iteration

  • finalize_content: outputs the final content plus feedback about whether it was approved

Loop logic

A conditional route decides what happens after evaluation:

  • If quality_score >= 7 → finalize

  • If iteration >= 3 → finalize because the iteration limit is reached

  • Otherwise → improve

Graph structure

  • Start → evaluate

  • evaluate routes conditionally to:

    • improve

    • finalize

  • improve loops back to evaluate

  • finalize ends the graph
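
A sketch of the loop using the thresholds above (prompts and model name are assumptions):

```python
from typing import Literal, TypedDict
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END

class ContentState(TypedDict):
    content: str
    quality_score: int
    feedback: str
    final: str
    iteration: int

llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model

def evaluate_quality(state: ContentState) -> dict:
    reply = llm.invoke(f"Rate this content 1-10, reply with a number only:\n{state['content']}")
    try:
        score = int(reply.content.strip())
    except ValueError:
        score = 5  # default if the model replies with something unparseable
    return {"quality_score": score}

def improve_content(state: ContentState) -> dict:
    better = llm.invoke(f"Improve this content:\n{state['content']}")
    return {"content": better.content, "iteration": state["iteration"] + 1}

def finalize_content(state: ContentState) -> dict:
    note = "approved" if state["quality_score"] >= 7 else "iteration limit reached"
    return {"final": state["content"],
            "feedback": f"{note} with a score of {state['quality_score']}"}

def route_after_evaluation(state: ContentState) -> Literal["improve", "finalize"]:
    if state["quality_score"] >= 7 or state["iteration"] >= 3:
        return "finalize"
    return "improve"

graph = StateGraph(ContentState)
graph.add_node("evaluate", evaluate_quality)
graph.add_node("improve", improve_content)
graph.add_node("finalize", finalize_content)
graph.add_edge(START, "evaluate")
graph.add_conditional_edges("evaluate", route_after_evaluation,
                            {"improve": "improve", "finalize": "finalize"})
graph.add_edge("improve", "evaluate")  # the loop back
graph.add_edge("finalize", END)
app = graph.compile()
```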

Example run

The graph is initialized with:

  • content: "AI is cool"

  • quality_score: 0

  • feedback: ""

  • final: ""

  • iteration: 0

The workflow repeatedly evaluates and improves the content until it either:

  • achieves a good enough quality score, or

  • hits the maximum number of iterations.

The example ends with feedback like:

approved after 1 iteration with a score of 7

Overall idea

This is a conditional loop in a graph, where content is repeatedly evaluated and improved until it passes the quality threshold or reaches the iteration cap.

98. Hands-on ~ Multipath Routing

The passage explains multi-path routing in LangGraph, where a task is directed down one of several routes based on its properties.

Main idea

  1. Analyze the task first using an LLM-based node.

  2. The analysis node determines:

    • Urgency: urgent or normal

    • Complexity: complex or simple

  3. These labels are then used by a routing function to choose one of four paths:

    • urgent_complex

    • urgent_simple

    • normal_complex

    • normal_simple

Handlers

Each route has a corresponding handler:

  • Urgent + Complex → senior team

  • Urgent + Simple → quick response

  • Normal + Complex → specialist

  • Normal + Simple → standard path

Graph structure

The graph is built by:

  • adding the analysis node

  • adding the four handler nodes

  • setting the analysis node as the start

  • using a conditional edge to route tasks based on urgency and complexity

  • connecting each handler to the end node
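
A sketch of the four-way routing; the analysis node is stubbed with a keyword check here, whereas the real version asks an LLM for the labels:

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class TaskState(TypedDict):
    task: str
    urgency: str      # "urgent" or "normal"
    complexity: str   # "complex" or "simple"
    response: str

def analyze_task(state: TaskState) -> dict:
    # Keyword stub standing in for the LLM-based analysis node
    text = state["task"].lower()
    urgency = "urgent" if ("immediate" in text or "down" in text) else "normal"
    complexity = "complex" if "server" in text else "simple"
    return {"urgency": urgency, "complexity": complexity}

def route_task(state: TaskState) -> str:
    # Combine the two labels into one of four route keys
    return f"{state['urgency']}_{state['complexity']}"

def make_handler(label: str):
    def handler(state: TaskState) -> dict:
        return {"response": f"Handled via the {label} path"}
    return handler

routes = ["urgent_complex", "urgent_simple", "normal_complex", "normal_simple"]
graph = StateGraph(TaskState)
graph.add_node("analyze", analyze_task)
for r in routes:
    graph.add_node(r, make_handler(r))
graph.add_edge(START, "analyze")
graph.add_conditional_edges("analyze", route_task, {r: r for r in routes})
for r in routes:
    graph.add_edge(r, END)
app = graph.compile()

result = app.invoke({"task": "Server is down, need immediate fix",
                     "urgency": "", "complexity": "", "response": ""})
# result["response"] -> "Handled via the urgent_complex path"
```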

Example outcomes

The passage gives examples such as:

  • “Server is down, need immediate fix” → urgent, complex → senior team

  • “Update the documentation for the API” → normal, simple → standard path

  • “Fix the typo on the homepage” → urgent, simple → quick response

Key takeaway

Once conditional routing is understood, it can be used to build workflows of nearly any complexity. The core mechanics stay the same; only the number of branches grows.

99. Hands-on ~ Cycles and Loops - Self-Correcting Code Writer

The passage explains how LangGraph supports cycles and loops, enabling agents to retry and improve instead of following only a straight-line workflow.

Main example: self-correcting code generator

A graph is built where the agent:

  1. Generates code

  2. Validates the code

  3. If validation fails, loops back and tries again

  4. Stops when the code works or when a maximum iteration limit is reached

Key components

  • State (CodeGenState) includes:

    • task

    • code

    • errors

    • iteration

    • max_iterations

    • success

  • Errors use a reducer (operator.add) so new errors are appended, giving the agent memory of past failures.

  • Iteration limits prevent infinite loops.

Nodes

  • generate_code: uses the LLM to produce or fix code based on prior errors.

  • validate_code: checks code using real Python compilation/execution, not the model’s judgment.

  • should_continue: decides whether to stop, loop again, or finalize.

  • finalize: clean exit node.

Workflow

The graph runs as:

generate → validate → (generate or finalize) → end
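
A sketch of the loop, using Python's built-in compile() as the deterministic check (prompt wording and fence stripping are assumptions):

```python
import operator
from typing import Annotated, Literal, TypedDict
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END

class CodeGenState(TypedDict):
    task: str
    code: str
    errors: Annotated[list[str], operator.add]  # appended: memory of past failures
    iteration: int
    max_iterations: int
    success: bool

llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model

def generate_code(state: CodeGenState) -> dict:
    prompt = f"Write Python code for: {state['task']}. Reply with code only."
    if state["errors"]:
        prompt += f"\nThe previous attempt failed with: {state['errors'][-1]}"
    code = llm.invoke(prompt).content
    code = code.replace("```python", "").replace("```", "").strip()  # crude fence stripping
    return {"code": code, "iteration": state["iteration"] + 1}

def validate_code(state: CodeGenState) -> dict:
    # Deterministic check: actually compile the code instead of asking the model
    try:
        compile(state["code"], "<generated>", "exec")
        return {"success": True}
    except SyntaxError as exc:
        return {"success": False, "errors": [str(exc)]}

def should_continue(state: CodeGenState) -> Literal["generate", "finalize"]:
    if state["success"] or state["iteration"] >= state["max_iterations"]:
        return "finalize"
    return "generate"

def finalize(state: CodeGenState) -> dict:
    return {}  # clean exit node

graph = StateGraph(CodeGenState)
graph.add_node("generate", generate_code)
graph.add_node("validate", validate_code)
graph.add_node("finalize", finalize)
graph.add_edge(START, "generate")
graph.add_edge("generate", "validate")
graph.add_conditional_edges("validate", should_continue,
                            {"generate": "generate", "finalize": "finalize"})
graph.add_edge("finalize", END)
app = graph.compile()
```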

Demonstration

  • In a simple factorial task, the model succeeds on the first try.

  • With stricter validation, the loop runs multiple times and may stop at the max iteration limit if it still fails.

Core lesson

The important pattern is:

  • LLM for generation

  • deterministic validation for checking

  • graph logic for controlled retries

This is what makes LangGraph useful for building more intelligent, self-correcting agents.

100. Hands-on ~ Iterative Research Agent with Loops and Cycles

The demo describes an iterative research agent that repeatedly researches a topic, generates deeper follow-up questions, and continues until a maximum depth is reached, then synthesizes all findings into a final summary.

Core workflow

  1. Research the initial topic.

  2. Generate a deeper question from the latest findings.

  3. Repeat research using that new question.

  4. Use a router/condition to decide whether to continue or stop.

  5. Synthesize all collected findings into one final response.

State design

The agent’s state tracks:

  • topic

  • findings (accumulated with a reducer)

  • questions

  • iteration

  • max_depth

  • summary

Nodes in the graph

  • Research node: searches the topic or the latest generated question.

  • Generate questions node: creates a deeper follow-up question and updates iteration.

  • Synthesizer node: combines all findings into a final summary.

Control flow

A should_continue function checks whether the current iteration has reached max_depth:

  • If not, it loops back to research

  • If yes, it routes to synthesize
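
A sketch of the loop, with a plain LLM call standing in for the actual research/search step (prompts are assumptions):

```python
import operator
from typing import Annotated, Literal, TypedDict
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END

class ResearchState(TypedDict):
    topic: str
    findings: Annotated[list[str], operator.add]   # accumulated across loops
    questions: Annotated[list[str], operator.add]
    iteration: int
    max_depth: int
    summary: str

llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model, standing in for real search

def research(state: ResearchState) -> dict:
    query = state["questions"][-1] if state["questions"] else state["topic"]
    reply = llm.invoke(f"Summarize what is known about: {query}")
    return {"findings": [reply.content]}

def generate_question(state: ResearchState) -> dict:
    reply = llm.invoke(f"Ask one deeper follow-up question about:\n{state['findings'][-1]}")
    return {"questions": [reply.content], "iteration": state["iteration"] + 1}

def synthesize(state: ResearchState) -> dict:
    reply = llm.invoke("Synthesize these findings into one answer:\n"
                       + "\n".join(state["findings"]))
    return {"summary": reply.content}

def should_continue(state: ResearchState) -> Literal["research", "synthesize"]:
    return "synthesize" if state["iteration"] >= state["max_depth"] else "research"

graph = StateGraph(ResearchState)
graph.add_node("research", research)
graph.add_node("generate_question", generate_question)
graph.add_node("synthesize", synthesize)
graph.add_edge(START, "research")
graph.add_edge("research", "generate_question")
graph.add_conditional_edges("generate_question", should_continue,
                            {"research": "research", "synthesize": "synthesize"})
graph.add_edge("synthesize", END)
app = graph.compile()

result = app.invoke({"topic": "quantum computing applications", "findings": [],
                     "questions": [], "iteration": 0, "max_depth": 2, "summary": ""})
```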

Example run

With:

  • topic: "quantum computing applications"

  • max_depth = 2

the graph:

  • researches the topic,

  • generates a deeper question,

  • researches again,

  • then synthesizes the results after reaching depth 2.

Main takeaway

The demo shows how a graph of collaborating nodes can incrementally explore a topic, refine questions, and produce a final synthesized answer in a controlled loop.

102. Hands-on ~ Human Input - Interrupt for Approval

This example explains how to build a human-in-the-loop workflow in LangGraph using a checkpointer (MemorySaver) so the graph can pause, accept human feedback, and then resume.

Workflow pattern

  1. Interrupt

  2. Review

  3. Modify

  4. Resume

Main pieces

  • State fields:

    • request: original user request

    • draft: LLM-generated draft

    • approved: human approval boolean

    • feedback: human revision notes

    • final: final output

  • Nodes:

    • create_draft: generates a draft from the request

    • wait_for_approval: pause point before human review

    • finalize: either keeps the draft or revises it based on approval/feedback

Graph flow

start -> draft -> approval -> finalize -> end

Execution process

  1. Run the graph with a unique thread_id.

  2. Use get_state(config) to inspect the paused state.

  3. Use update_state(...) to add human approval or feedback.

  4. Resume with invoke(None, config) to continue from the pause point.
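
A sketch of the pause-and-resume flow, assuming the pause is implemented with interrupt_before on the approval node:

```python
from typing import TypedDict
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END

class ApprovalState(TypedDict):
    request: str
    draft: str
    approved: bool
    feedback: str
    final: str

llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model

def create_draft(state: ApprovalState) -> dict:
    return {"draft": llm.invoke(f"Draft a response to: {state['request']}").content}

def wait_for_approval(state: ApprovalState) -> dict:
    return {}  # placeholder node; execution is interrupted before it runs

def finalize(state: ApprovalState) -> dict:
    if state["approved"]:
        return {"final": state["draft"]}
    reply = llm.invoke(f"Revise this draft. Feedback: {state['feedback']}\n\n{state['draft']}")
    return {"final": reply.content}

graph = StateGraph(ApprovalState)
graph.add_node("draft", create_draft)
graph.add_node("approval", wait_for_approval)
graph.add_node("finalize", finalize)
graph.add_edge(START, "draft")
graph.add_edge("draft", "approval")
graph.add_edge("approval", "finalize")
graph.add_edge("finalize", END)
app = graph.compile(checkpointer=MemorySaver(), interrupt_before=["approval"])

config = {"configurable": {"thread_id": "review-1"}}
app.invoke({"request": "Write a launch email", "draft": "",
            "approved": False, "feedback": "", "final": ""}, config)
print(app.get_state(config).values["draft"])  # inspect the paused draft
app.update_state(config, {"approved": True})  # record the human decision
result = app.invoke(None, config)             # resume from the pause point
```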

Key idea

The checkpointer saves intermediate state, so LangGraph can stop before approval and later continue exactly where it left off.

Purpose

This pattern is useful when you want a human to review and possibly revise AI-generated output before it is finalized.

103. Full Human in the Loop Workflow

The passage explains an iterative human review workflow using LangGraph, where a document can go through multiple review and revision cycles before final approval.

Key points:

  • State includes:

    • document

    • review_comments

    • revision_count

    • status

  • Workflow nodes:

    • submit for review: marks the document as waiting for human review

    • apply feedback: uses reviewer comments to revise the document and increments the revision count

    • route after review: sends the flow to either:

      • finalize if approved

      • apply_feedback if changes are still needed

    • finalize: ends the workflow cleanly

  • Graph behavior:

    • The flow loops through review and revision repeatedly:

      • submit → review decision → apply feedback → submit → ... → done

    • A checkpointer is used for memory

    • An interrupt before submit pauses the workflow so a human can inspect and update the state
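
A sketch of the review cycle, assuming the pause is implemented with interrupt_before on the submit node and with the revision logic stubbed:

```python
from typing import Literal, TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END

class ReviewState(TypedDict):
    document: str
    review_comments: list[str]
    revision_count: int
    status: str  # "", "waiting_review", "needs_revision", or "approved"

def submit_for_review(state: ReviewState) -> dict:
    # Don't clobber a decision the human recorded while the graph was paused
    if state["status"] == "approved":
        return {}
    return {"status": "waiting_review"}

def apply_feedback(state: ReviewState) -> dict:
    # A real node would send the comments to an LLM; stubbed here
    revised = state["document"] + "\n[revised per: " + "; ".join(state["review_comments"]) + "]"
    return {"document": revised, "revision_count": state["revision_count"] + 1}

def finalize(state: ReviewState) -> dict:
    return {"status": "done"}

def route_after_review(state: ReviewState) -> Literal["apply_feedback", "finalize"]:
    return "finalize" if state["status"] == "approved" else "apply_feedback"

graph = StateGraph(ReviewState)
graph.add_node("submit", submit_for_review)
graph.add_node("apply_feedback", apply_feedback)
graph.add_node("finalize", finalize)
graph.add_edge(START, "submit")
graph.add_conditional_edges("submit", route_after_review,
                            {"apply_feedback": "apply_feedback", "finalize": "finalize"})
graph.add_edge("apply_feedback", "submit")  # loop back for another cycle
graph.add_edge("finalize", END)

# Pause before each submission so a human can set status and comments
app = graph.compile(checkpointer=MemorySaver(), interrupt_before=["submit"])
```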

  • Process example:

    1. Initial document is submitted

    2. Human adds comments like “add more technical depth” and “include examples”

    3. Status is set to needs_revision

    4. Workflow resumes, revises the document, and pauses again

    5. Eventually status is set to approved

    6. Workflow finalizes and returns the completed document

Main takeaway:

This pattern enables a controlled human-in-the-loop cycle where the AI revises content repeatedly based on feedback until a human approves it.

104. Hands-on ~ Checkpointing Deep Dive

Checkpointing makes graph-based apps stateful instead of stateless. It saves a snapshot of the graph’s state after each node, which enables:

  • conversation memory across runs

  • recovery after crashes or failures

  • human-in-the-loop pause and resume

  • replay, rollback, and branching conversations

The example shows how to define chat state with a messages field using add_messages, build a simple StateGraph, and compile it with a checkpointer.

Two checkpointing backends are demonstrated:

  • MemorySaver for in-memory testing

  • SqliteSaver for durable persistence across restarts

With a thread ID in the config, the app can retrieve previous state, continue a conversation, and inspect state history. SQLite persistence lets the same conversation be recovered after the app restarts.
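
A sketch of the durable variant with SqliteSaver (model name and database path are assumptions):

```python
import sqlite3
from typing import Annotated, TypedDict
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.sqlite import SqliteSaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class ChatState(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]

llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model

def chat(state: ChatState) -> dict:
    return {"messages": [llm.invoke(state["messages"])]}

graph = StateGraph(ChatState)
graph.add_node("chat", chat)
graph.add_edge(START, "chat")
graph.add_edge("chat", END)

# SqliteSaver persists checkpoints to disk; MemorySaver would keep them
# only for the lifetime of the process
conn = sqlite3.connect("checkpoints.db", check_same_thread=False)
app = graph.compile(checkpointer=SqliteSaver(conn))

config = {"configurable": {"thread_id": "conversation-1"}}
app.invoke({"messages": [HumanMessage(content="Hi, I'm Alice")]}, config)
# Even after a restart, the same thread_id picks the conversation back up
app.invoke({"messages": [HumanMessage(content="What's my name?")]}, config)
```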

Checkpoint history is useful for debugging, auditing, and rewinding. Branching is also possible by copying a prior state into new thread IDs, allowing independent paths like alternative planning or what-if exploration.

Overall, checkpointing turns temporary execution state into durable, inspectable, reusable memory.

105. Checkpoint Internals Deep Dive

The passage explains how a graph checkpoint stores and exposes execution state.

Main points

  • A simple two-node graph is built with:

    • messages: a list with an add reducer, so new messages append

    • step: a string

  • The nodes are:

    • analyze: sends messages to the LLM and sets step = "analyze"

    • summarize: summarizes the current state

Inspecting checkpoints

  • After running the graph, get_state() shows:

    • current state values

    • current step

    • messages

    • next node to run

  • get_state_history() shows the full checkpoint trail, including:

    • step number

    • source

    • writes

    • message count

    • checkpoint ID

    • timestamps
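
A sketch of these inspection calls; it assumes an app already compiled with a checkpointer and invoked under this config, e.g. the one from the previous section:

```python
# Reuses `app` from a graph compiled with a checkpointer
config = {"configurable": {"thread_id": "demo-1"}}

snapshot = app.get_state(config)
print(snapshot.values)   # current state values, e.g. messages and step
print(snapshot.next)     # next node(s) to run; empty when finished

for ckpt in app.get_state_history(config):
    print(
        ckpt.metadata.get("step"),                     # step number
        ckpt.metadata.get("source"),                   # e.g. "input" or "loop"
        ckpt.metadata.get("writes"),                   # what the node wrote
        len(ckpt.values.get("messages", [])),          # message count
        ckpt.config["configurable"]["checkpoint_id"],  # checkpoint ID
        ckpt.created_at,                               # timestamp
    )
```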

What a checkpoint contains

A checkpoint includes:

  1. State values — actual data like messages and step

  2. Next nodes — what will run next

  3. State config — thread ID and checkpoint ID

  4. Parent config — link to the previous checkpoint

  5. Metadata — source, step, and execution info

  6. Timestamp — when it was created

Why checkpoints matter

  • They are saved:

    • before execution starts

    • after each node runs

    • at interrupt points

  • This enables pausing and resuming, especially for human-in-the-loop workflows.

Mental model

Think of checkpoints as a linked list of snapshots:

  • each checkpoint records the graph’s state at a moment in time

  • each one points to its parent

  • the latest checkpoint represents the current state

Overall, the text shows how checkpoints preserve both current state and full execution history.