6. API Setup and Verification (OpenAI and Anthropic)
- Create two API keys: one from OpenAI (Account → Settings → API Keys → Create new secret key; copy and save it) and one from Anthropic (Console/Platform → Manage → API Keys → Create secret key).
- Set up a new project directory (e.g., Lang course) and initialize it with the uv package manager (`uv init`). Create and activate a virtual environment (`uv venv`, then `source .venv/bin/activate` on Mac; Windows differs).
- Install dependencies via `uv add`: `langchain`, `langchain-core`, `langgraph`, `langchain-openai`, `langchain-anthropic`, and `python-dotenv`.
- Create a `.env` file and store `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` there.
- In `main.py`, load environment variables with `load_dotenv()`, import the needed LangChain/LangGraph classes, and print package versions (using `importlib.metadata` to fetch versions).
- Verify everything works by instantiating `ChatOpenAI` (e.g., `gpt-4o-mini`) and `ChatAnthropic` (e.g., Claude), invoking each with a simple prompt (like "setup complete in one word"), and confirming both return responses, which indicates keys, dependencies, and the development environment are correctly set up.

Alternatively, the same dependencies can be installed with pip:

```shell
pip install -U langchain langchain-core langgraph langchain-openai langchain-anthropic
```
8. LangChain Core Concepts - LCEL and Runnable Chains - Hands-on
- The file demonstrates LangChain core concepts: LCEL (LangChain Expression Language) and runnables, using a simple "basic chain" example.
- Imports added:
  - `ChatPromptTemplate` from `langchain_core.prompts`, to create a prompt template with runtime variable interpolation (e.g., `{question}`).
  - `StrOutputParser` from `langchain_core.output_parsers`, to parse the model output into a plain string.
  - `ChatOpenAI` is used for the model (no OpenAI/Anthropic base client imports are needed for the demo).
- A function `demo_basic_chain()` is created to build and run the chain:
  - Prompt component: a template like "You are a helpful assistant, answer in one sentence: {question}".
  - Model component: `ChatOpenAI(model="gpt-4o-mini", temperature=0.7)`.
  - Parser component: `StrOutputParser()`.
- The components are composed with the pipe operator (`|`), the LCEL syntax: `chain = prompt | model | parser`, meaning prompt → model → parser.
- The chain is executed via the runnable interface using `.invoke()`: `chain.invoke({"question": "What is LangChain?"})`. The input key must match the template variable name (`question`); multiple variables would all be provided in the same input dict.
- The result is printed (a one-sentence answer), and the chain can be returned from the function.
- The next planned step is a batch execution demo to run the chain over multiple inputs.
9. LCEL - Batch Execution Demo
- Demonstrates batch execution with an LCEL/LangChain runnable chain over multiple inputs.
- Builds a chain using:
  - a `ChatPromptTemplate` like "Translate to French the text {text}"
  - a model (`gpt-4o-mini`)
  - an output parser (instantiated, then piped)
  - all composed with the pipe operator: `prompt | model | parser`
- Prepares `inputs` as a list of dictionaries, each using the same key as the prompt variable (`text`), e.g.:
  - `{"text": "hello, how are you?"}`
  - `{"text": "what is your name?"}`
  - `{"text": "where is the nearest restaurant?"}`
- Runs all items at once via the runnable's `.batch(inputs)` method (not "invoke batch").
- Iterates through `zip(inputs, results)` to print each English input alongside its French translation.
- Concludes that runnable chains expose methods like `batch` to efficiently process many inputs in one call, returning a corresponding list of results.
10. Demo Stream Realtime Output with LCEL
The text explains how to add real-time text streaming to a LangChain workflow. It defines a new "demo streaming" function that:
- Builds a chat prompt (e.g., "write a haiku about {topic}").
- Creates a chat model (using `ChatOpenAI`, but any compatible model works).
- Adds an output parser and composes them into a chain.
- Streams results by iterating over `chain.stream({"topic": "nature"})`, printing each returned chunk immediately (with `flush=True`) to mimic live output like ChatGPT.
It then runs the function to show the haiku appearing quickly in streamed chunks, noting how easy streaming is with LangChain and encouraging experimentation.
11. Demo Schema Inspection
The text explains a LangChain demo function for inspecting a chain’s input and output schemas.
- It builds a standard chain, composed inline: prompt template → chat model (e.g., `gpt-4o-mini` with a set temperature) → output parser.
- It then shows how to query the chain's schemas:
  - Use `chain.input_schema` and `chain.output_schema`, and retrieve their JSON structure via `model_json_schema()` (noting that an older schema access method is deprecated).
- Running the function prints the schemas:
  - Input schema: an object with required field(s) like `text` (type `string`), reflecting the prompt template's expected inputs.
  - Output schema: simplified to a `string`, reflecting the string output parser's result.
The main point: LangChain lets you see the “bones” of what a runnable accepts and returns, helping understand how components compose internally without manual inspection.
16. New Model Initiation and Available Models in 2026
- The video explains a newer, "universal" way to initialize chat models in LangChain: using `init_chat_model` (imported from `langchain.chat_models`) instead of directly instantiating provider-specific classes like `ChatOpenAI`.
- The older approach (e.g., importing a provider wrapper and creating `ChatOpenAI(...)` after loading env vars with `load_dotenv`) still works, but `init_chat_model` is recommended going forward for a unified setup across providers.
- With `init_chat_model`, you pass parameters such as the model name/provider (e.g., GPT variants or Claude), `temperature`, and `max_tokens`, and it exposes additional configurable fields.
- The video also lists example "available models" circa 2026 and their typical use cases:
  - OpenAI: GPT-4o (balanced/flagship), GPT-4o mini (fast/cheap), GPT-5.2 (latest, very large ~400K context).
  - Anthropic: Claude Opus 4.5 (deep reasoning), Claude Sonnet 4.5 (balanced), Claude Haiku 3.5 (fast/cheap).
  - Ollama / local open-source: Llama 3.2, Mistral (free/local testing).
19. Working with LLMs in LangChain - Multi Providers Configuration
The passage explains how to build a "working with LLMs" demo file in LangChain v1 that showcases several core ideas:
- Configuring chat models with `init_chat_model`, so you can easily swap providers like OpenAI and Anthropic.
- Calling models with `invoke` to get responses, such as answering "What's the capital of France?" with "Paris."
- Comparing multiple models by storing them in a dictionary and looping through them with the same prompt.
- Using message objects like `SystemMessage` and `HumanMessage` from `langchain_core.messages` to better structure conversation context.
- Understanding message metadata, since messages contain more than just text: role, content, and extra fields like response metadata and token usage.
- Creating multi-turn conversations by appending the model's AI response and follow-up human messages to the same message list.
- Using system prompts for behavior control, like instructing the model to act like a pirate.
Overall, the goal is to demonstrate a full overview of working with LLMs in LangChain, including provider setup, model swapping, message-based conversation flow, and maintaining conversational context for multi-turn interactions.
22. Prompt Templates and Messages - Deep Dive
Here’s a concise summary of the content:
- Chat prompt templates are reusable "cookie-cutter" structures for prompts. They contain placeholders/variables that get filled in at runtime, such as:
  - `adjective = funny`
  - `topic = cats`
  - Result: "Tell me a funny joke about cats."
- Templates improve reusability and organization because you define the structure once and reuse it with different values.
- Multi-message templates separate prompts into different message roles, such as:
  - System message: sets the AI's behavior/persona (e.g., "You are a helpful tutor. Always be encouraging.")
  - Human message: contains the user's input/question (e.g., "Explain recursion.")
- This structure helps ground the model by defining role, tone, and behavior in the system message while still allowing flexible user input.
- The different message types are:
  - System: behavior/persona
  - Human: user query/task
  - AI: previous model responses
  - Tool: outputs/results from tools, APIs, or databases
- The conversation flow is typically system → human → AI → human → AI, and so on.
- Few-shot prompting is introduced as a way to teach the model by example: by giving a few examples, the AI learns the pattern and can apply it to new inputs.
- Prompt composition means building complex prompts from reusable parts:
  - Example: combine a system prompt ("You are a role…") with a user prompt (the actual question).
  - This makes prompts modular, reusable, and easier to maintain.
- Overall, the key benefits of these approaches are reusability, modularity, consistency, and easier maintenance.
- The section ends by preparing to demonstrate these ideas in code.
23. Hands-on ~ Prompt Messages
The passage explains several LangChain chat prompt and message concepts:
- ChatPromptTemplate from template strings
  - You create a chat prompt template from a string with variables in curly braces.
  - Example: `"Tell me a {adjective} joke about {topic}"`
  - Calling `format_messages()` replaces the variables and returns a list containing a `HumanMessage`, e.g. "Tell me a funny joke about chickens".
- Multi-message chat prompt templates
  - You can build prompts from a list of messages using `from_messages()`.
  - These can include system and human messages. Example:
    - System: "You are a helpful system that translates input language to output language."
    - Human: "Translate the following text: {text}"
  - After formatting, the messages are passed to the model with `invoke()` to get a response, such as translating text into French.
- Message types
  - LangChain supports several message classes: `HumanMessage`, `AIMessage`, `SystemMessage`, `ToolMessage`, and `ChatMessage`.
  - These can all be combined in a list and sent as context to the model.
- Few-shot chat message prompt templates
  - You can provide example input-output pairs, such as:
    - input: happy → output: sad
    - input: tall → output: short
  - A few-shot prompt template uses these examples to guide the model.
  - Example prompt: "Give the opposite of each word."
  - When invoked with `happy`, the model returns `sad`.
- Reusable prompt components
  - Prompts can be built by combining reusable parts like a system prompt and a user/human prompt.
  - Formatting the combined prompt yields separate system and human messages.
Overall, the passage is a practical walkthrough showing how LangChain constructs, formats, and sends chat prompts and messages to LLMs.
24. Hands-on ~ Prompt Templates Code Run Through and Testing
Here is a concise, all-in-one walkthrough of core LangChain prompt fundamentals. It combines several concepts into one place so you can see how they work together in practice.
Main topics covered
- Basic chat prompt templates
  - Shows how to create a simple `ChatPromptTemplate` with variables like `text` and `language`.
  - Uses `format_messages()` to fill in the template and send the prompt to the model.
- Multi-message templates
  - Demonstrates building prompts with multiple message roles: `SystemMessage` and `HumanMessage`.
  - Used to create conversational-style prompts with explicit roles.
- Message types demo
  - A small example showing how to work with the different message objects manually.
- Message placeholders
  - Introduces `MessagesPlaceholder`, a new concept for dynamic conversation history.
  - Lets you inject prior chat history into a prompt using a `history` variable.
  - An example shows a simulated conversation where the model can infer the user's name from prior messages.
- Few-shot prompting
  - Demonstrates using examples in prompts to guide model behavior.
  - Includes example templates and a wrapper that assembles the full few-shot prompt.
- Prompt composition
  - Shows how to reuse prompt parts, such as a reusable system prompt and a task prompt.
  - Combines them into different prompt combinations for different personas or tasks.
Execution examples shown
The file runs several demos to prove the concepts work:
- Simple template demo
- Multi-message / role-based prompt demo
- Message placeholder demo with chat history
- Few-shot example demo
- Prompt composition demo
Core takeaway
This file is meant to reinforce the building blocks of LangChain: templates, message roles, chat history placeholders, few-shot examples, and prompt reuse/composition.
It’s a fundamentals-focused reference showing how to construct and combine prompts in LangChain v1.
25. Hands-on ~ Why Output Parsers and Structured Outputs
Output parsers convert raw LLM string responses into structured, usable data.
Key points:
- LLMs often return plain strings, which are hard to work with directly.
- Parsers help extract structured data like JSON, lists, and objects.
- This makes downstream processing easier, supports error handling, and improves type safety.
Types mentioned:
- String output parser: extracts clean text from an AI message, useful when you only need plain text.
- JSON output parser: converts JSON-like model output into a Python `dict`, so fields can be accessed directly.
- Pydantic output parser: uses a schema to validate and structure outputs, ensuring types and fields are correct.
Why Pydantic parsers are useful:
- They enforce schema validation.
- They provide type hints and autocomplete support.
- They handle errors more gracefully.
- They are better suited for production apps with complex or nested data.
Modern approach:
- In newer LangChain versions, structured output is simpler and the recommended path.
- Instead of manually setting up parsers, format instructions, and prompt templates, you can often use a one-line structured output method with a schema.
Example:
- A movie review schema can extract:
  - title: Inception
  - rating: 9
  - summary: "Inception is a masterpiece"
Overall, output parsers act as the bridge between LLM output and application logic.
26. Hands-on ~ Output Parsers and Structured Outputs
The demo compares four output parsing approaches in LangChain:
- `StrOutputParser`
  - The simplest parser.
  - Takes LLM text output and returns a plain string.
  - Example: prompt the model to write a short poem about nature, chain it through the parser, and the result is a string.
- `JsonOutputParser`
  - Returns structured JSON.
  - Example: prompt for a person description like "25-year-old developer named Alex," and the parser outputs fields such as `name` and `age` in JSON format.
- `PydanticOutputParser`
  - The recommended structured parsing approach before the newer built-in structured output.
  - You define a Pydantic `BaseModel` with fields like `name`, `age`, and `occupation`.
  - The parser generates formatting instructions from the model schema and uses them in the prompt.
  - The LLM output is then parsed into a Pydantic object.
- Structured output (`with_structured_output`)
  - The simplest and most convenient approach.
  - You define a schema class, such as a `MovieReview` model with fields like `title`, `review`, and `rating`.
  - Then you bind that schema directly to the LLM using `with_structured_output`.
  - When invoked with a review text, the model returns a structured object automatically.
Main takeaway
The progression is:
- String output → plain text
- JSON output → structured JSON
- Pydantic output → schema-based structured parsing
- Structured output → easiest and most practical, since the schema is bound directly to the model
Overall, the point is that structured output makes LLM responses more predictable and easier to work with because the returned data matches a defined schema.
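A `with_structured_output` sketch (the `MovieReview` field descriptions are illustrative; the live call is guarded on an API key):

```python
import os

from pydantic import BaseModel, Field


class MovieReview(BaseModel):
    """Schema that gets bound directly to the model."""
    title: str = Field(description="Movie title")
    rating: int = Field(description="Rating from 1 to 10")
    summary: str = Field(description="One-sentence summary")


if os.environ.get("OPENAI_API_KEY"):
    from langchain_openai import ChatOpenAI
    model = ChatOpenAI(model="gpt-4o-mini")
    structured = model.with_structured_output(MovieReview)
    review = structured.invoke(
        "Review this: Inception is a masterpiece of layered dream logic. 9/10.")
    print(review.title, review.rating, review.summary)  # a MovieReview object
```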
30. Project 1 - Final Touches - LangSmith
This section introduces LangSmith and shows how to use it to add logging, observability, and tracing to a LangChain-based Q&A bot.
Main points
- LangSmith is a platform for monitoring AI apps using production data.
- It provides detailed tracing, trend metrics, evaluations, prompt engineering support, and deployment support.
- The focus here is on tracing and observability.
Setup steps
- Create a LangSmith account.
- Go to Settings and generate an API key.
- Configure the project in code using environment variables like:
  - `LANGSMITH_TRACING=true`
  - an optional project name
Q&A bot implementation
- Builds a production-ready question-answering bot with structured output.
- Uses `ChatPromptTemplate`, `ChatOpenAI`, Pydantic's `BaseModel`, and typing tools like `List` and `Optional`.
- Defines a structured response schema with: `answer`, `confidence`, `reasoning`, `follow_up_questions`, and `sources_needed`.
Bot design
- A `SmartBot` class is created with:
  - a configurable model name
  - a low temperature for more consistent responses
- A prompt template gives the model system and human messages.
- The prompt and model are chained together into a runnable pipeline.
Error handling and tracing
- The `ask` function:
  - processes questions
  - returns structured responses
  - falls back to a safe error response if something fails
- LangSmith tracing is added using the `@traceable` decorator and the LangSmith client.
- Batch processing is also traced.
Demo and debugging
- The bot is tested with regular Q&A, error cases, and batch requests.
- A `client.flush()` call ensures traces are properly sent to LangSmith and avoids ingestion errors.
What LangSmith shows
For each run, LangSmith lets you inspect:
- the question input
- the structured answer
- confidence and reasoning
- follow-up questions
- whether sources are needed
- latency
- token usage
- cost
- metadata
- success/failure status
This section demonstrates how to build a structured, production-ready Q&A bot and use LangSmith to monitor it effectively in real time. It also sets up the foundation for later topics like chains, RAG, and memory.