Get started
Introduction
LangChain is a framework for creating applications that use language models. It focuses on applications that are context-aware, capable of reasoning, and connectable to various sources of context.
The framework is composed of:
- LangChain Libraries: Available in Python and JavaScript; these include interfaces, integrations, and built-in components for creating chains and agents.
- LangChain Templates: A set of reference architectures for different tasks that can be deployed easily.
- LangServe: A tool for deploying LangChain chains as REST APIs.
- LangSmith: A platform that assists developers in debugging, testing, evaluating, and monitoring LangChain-based chains.
The framework offers the following benefits:
- Components: Modular tools that can be used with language models, whether within the LangChain framework or independently.
- Off-the-shelf chains: Pre-built configurations designed to perform higher-level tasks, allowing for quick setup and customization.
The LangChain libraries consist of several packages, including langchain-core, langchain-community, and langchain itself, which contain the core abstractions, the third-party integrations, and the cognitive architecture components, respectively.
To get started with LangChain, the documentation provides guidance on installation, a Quickstart guide, and security best practices. The documentation is focused on the Python library of LangChain, with a separate link provided for the JavaScript version.
LangChain also introduces the LangChain Expression Language (LCEL), which is a declarative method to compose chains that can scale from simple to complex applications without code changes.
Furthermore, LangChain provides interfaces and integrations for modules such as Model I/O, Retrieval, and Agents, which cover interacting with language models, accessing application-specific data, and letting chains choose which tools to use given high-level directives.
The ecosystem includes use cases with walkthroughs for common applications like question answering and chatbots, integrations with other tools, guides for best practices, an API reference for detailed documentation, a developer’s guide, and a community section for engagement with other developers and contributors.
Quickstart
The Quickstart outlines the process of setting up and using the OpenAI API with LangChain for various tasks, including creating chatbots and retrieval systems in Python. Here is a summary of the steps and concepts discussed:
- Installation and Initialization:
  - Install the OpenAI Python package.
  - Obtain an API key and set it as an environment variable or pass it directly to the ChatOpenAI class upon instantiation.
  - Initialize the ChatOpenAI model and use it to interact with the OpenAI API.
- Using Prompts and Output Parsers:
  - Create prompt templates to guide the language model’s responses, ensuring they match the tone of a technical documentation writer.
  - Combine prompt templates with the language model to form a chain.
  - Parse the chat model’s output into a string using StrOutputParser; a combined sketch of these first two steps appears after this list.
- Retrieval Chain:
  - Set up a retrieval system to provide the language model with relevant context for answering questions.
  - Load documents, index them into a vector store using an embedding model, and expose the store as a retriever.
  - Create a retrieval chain that takes a question, retrieves relevant documents, and generates an answer from the language model (sketched below).
- Conversation Retrieval Chain:
  - Adapt the retrieval system to handle conversations and follow-up questions.
  - Implement a retriever that considers the entire conversation history to generate a search query.
  - Update the language model chain to use the retriever and provide coherent answers in a chatbot context (sketched below).
- Agent:
  - Build an agent that decides the steps to take when interacting with a user.
  - Set up tools for the agent to use, such as a retriever and a search tool.
  - Deploy the agent using predefined prompts and an executor to handle user inputs and provide responses (sketched below).
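The first two steps might look like the following minimal sketch. Import paths vary across LangChain versions, the system prompt is illustrative, and an OPENAI_API_KEY environment variable is assumed to be set.

```python
# Sketch of initialization, prompting, and output parsing
# (assumed import paths; they vary by LangChain version).
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

llm = ChatOpenAI()  # reads OPENAI_API_KEY from the environment

# Prompt template that frames the model as a technical documentation writer.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a world class technical documentation writer."),
    ("user", "{input}"),
])

# The | operator composes prompt -> model -> parser into one chain.
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"input": "How can LangSmith help with testing?"}))
```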
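The retrieval chain step could be sketched as follows, assuming the faiss-cpu package for the vector store and the create_stuff_documents_chain and create_retrieval_chain helpers available in recent LangChain releases; the indexed texts are placeholders.

```python
# Sketch of a retrieval chain (placeholder documents; assumes faiss-cpu).
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import ChatPromptTemplate
from langchain.vectorstores import FAISS

# Embed and index a few documents into an in-memory vector store.
vectorstore = FAISS.from_texts(
    ["LangSmith lets you visualize test results.",
     "LangServe deploys chains as REST APIs."],
    embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the provided context:\n\n"
    "{context}\n\nQuestion: {input}"
)

# The document chain stuffs retrieved documents into {context}; the retrieval
# chain first fetches documents for {input}, then runs the document chain.
document_chain = create_stuff_documents_chain(ChatOpenAI(), prompt)
retrieval_chain = create_retrieval_chain(retriever, document_chain)

response = retrieval_chain.invoke({"input": "How can LangSmith help with testing?"})
print(response["answer"])
```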
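The conversational variant can reuse the retriever above. In this sketch, the create_history_aware_retriever helper (again assumed available in recent releases) condenses the chat history into a standalone search query; the prompts and sample history are illustrative.

```python
# Sketch of a conversation retrieval chain; reuses `retriever` from above.
from langchain.chains import create_history_aware_retriever, create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.schema import AIMessage, HumanMessage

llm = ChatOpenAI()

# Ask the model to turn the whole conversation into one search query.
search_prompt = ChatPromptTemplate.from_messages([
    MessagesPlaceholder(variable_name="chat_history"),
    ("user", "{input}"),
    ("user", "Given the above conversation, generate a search query to look up "
             "information relevant to the conversation."),
])
history_aware_retriever = create_history_aware_retriever(llm, retriever, search_prompt)

# Answer prompt that also sees the chat history.
answer_prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer the user's questions based on the below context:\n\n{context}"),
    MessagesPlaceholder(variable_name="chat_history"),
    ("user", "{input}"),
])
document_chain = create_stuff_documents_chain(llm, answer_prompt)
conversational_chain = create_retrieval_chain(history_aware_retriever, document_chain)

chat_history = [
    HumanMessage(content="Can LangSmith help test my LLM applications?"),
    AIMessage(content="Yes!"),
]
response = conversational_chain.invoke({"chat_history": chat_history, "input": "Tell me how."})
print(response["answer"])
```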
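Finally, a sketch of the agent step. It assumes the langchainhub package for the predefined prompt and a TAVILY_API_KEY for the search tool, and reuses the retriever from the retrieval sketch; the tool name and description are illustrative.

```python
# Sketch of an agent with a retriever tool and a web search tool.
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain.chat_models import ChatOpenAI
from langchain.tools.retriever import create_retriever_tool
from langchain_community.tools.tavily_search import TavilySearchResults

tools = [
    create_retriever_tool(
        retriever,  # from the retrieval sketch above
        "langsmith_search",
        "Search for information about LangSmith.",
    ),
    TavilySearchResults(),  # needs TAVILY_API_KEY
]

# A predefined agent prompt, and an executor that runs the tool-calling loop.
prompt = hub.pull("hwchase17/openai-functions-agent")
agent = create_openai_functions_agent(ChatOpenAI(), tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

print(agent_executor.invoke({"input": "How can LangSmith help with testing?"}))
```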
Throughout the Quickstart, the reader is encouraged to refer to the LangChain documentation for a deeper understanding of the concepts and for more advanced configurations. The examples demonstrate how to build a system that answers questions with context, handles conversations, and serves as a functional chatbot using the OpenAI API together with additional tools and libraries.
LangChain Expression Language
LCEL is a declarative way to construct complex workflows by chaining together basic components, and chains built with it support features like streaming, parallelism, and logging.
The first example demonstrates how to generate a joke on a specified topic by chaining a prompt template, a language model, and an output parser. This is done using the | operator, akin to a Unix pipe, to pass the output of one component as the input to the next. The process involves creating a prompt from the user’s topic, invoking a model (such as ChatOpenAI) to generate a response based on the prompt, and then parsing the model’s output into a string.
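A minimal sketch of that joke chain, assuming an OpenAI API key is configured (import paths vary by LangChain version):

```python
# Sketch of the LCEL joke chain: prompt | model | parser.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
model = ChatOpenAI()
output_parser = StrOutputParser()

# The | operator pipes each component's output into the next.
chain = prompt | model | output_parser

print(chain.invoke({"topic": "ice cream"}))
```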
In the second example, a more complex chain is built for retrieval-augmented generation (RAG). This involves setting up an in-memory document store with relevant texts, using a retriever to fetch documents based on a query, and then constructing a prompt that combines the retrieved documents (as context) with the user’s question. This prompt is then fed into a language model to generate a relevant answer, which is finally parsed into a string. The RunnableParallel and RunnablePassthrough components are used to manage the parallel inputs (context and question) for the prompt template.
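A sketch of that RAG chain; here DocArrayInMemorySearch (from the docarray package) stands in for the in-memory document store, and the indexed texts are placeholders:

```python
# Sketch of an LCEL RAG chain with parallel context/question inputs.
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnableParallel, RunnablePassthrough
from langchain.vectorstores import DocArrayInMemorySearch

vectorstore = DocArrayInMemorySearch.from_texts(
    ["harrison worked at kensho", "bears like to eat honey"],
    embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context:\n"
    "{context}\n\nQuestion: {question}"
)

# RunnableParallel fans the input out: the retriever fills {context} while
# RunnablePassthrough forwards the raw question unchanged.
setup = RunnableParallel(context=retriever, question=RunnablePassthrough())
chain = setup | prompt | ChatOpenAI() | StrOutputParser()

print(chain.invoke("where did harrison work?"))
```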
Both examples illustrate how LCEL can streamline the process of creating sophisticated workflows by integrating different components and managing the flow of data between them.
Modules
Model I/O
Concepts
Within its Model I/O module, LangChain provides tools to easily work with different types of language models, construct input prompts, and handle outputs. It supports two primary types of models: LLMs for text completion tasks, like GPT-3, and Chat Models for conversational interactions, like GPT-4 or Anthropic’s Claude 2. Each model type has distinct input/output formats and may require different prompting strategies.
LLMs accept string prompts and return string completions, while Chat Models take a list of messages as input and output an AI-generated message. Messages in Chat Models have a role (e.g., Human, AI, System, Function, or Tool) and content, with additional parameters for provider-specific information.
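A small sketch of the two interfaces, with illustrative prompts and an assumed OpenAI API key:

```python
# Contrast the two model interfaces.
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.schema import HumanMessage, SystemMessage

# LLM: string prompt in, string completion out.
llm = OpenAI()
text_completion = llm.invoke("Say hello in French: ")

# Chat model: a list of role-tagged messages in, an AI message out.
chat = ChatOpenAI()
ai_message = chat.invoke([
    SystemMessage(content="You are a helpful translator."),
    HumanMessage(content="Say hello in French."),
])
print(text_completion, ai_message.content)
```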
LangChain provides utilities for creating prompts, known as Prompt Templates, which help transform user inputs into the right format for the model. Different templates are available for various message types, supporting interoperability between LLMs and Chat Models.
Output Parsers are used to transform the raw output from models into a more usable form, such as strings or structured data. This includes simple string output parsers and specialized parsers for functions or actions determined by the model output.
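As a sketch of templates and parsers working together, a prompt template can embed a parser’s format instructions so the model’s comma-separated reply parses into a Python list (CommaSeparatedListOutputParser is one such built-in parser):

```python
# Prompt template feeding a structured output parser.
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import CommaSeparatedListOutputParser
from langchain.prompts import ChatPromptTemplate

parser = CommaSeparatedListOutputParser()

# Bake the parser's format instructions into the prompt.
prompt = ChatPromptTemplate.from_template(
    "List five {subject}.\n{format_instructions}"
).partial(format_instructions=parser.get_format_instructions())

chain = prompt | ChatOpenAI() | parser
print(chain.invoke({"subject": "colors"}))  # e.g. ["red", "blue", ...]
```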
The framework emphasizes the need for tailored prompts for different models due to their unique characteristics and the importance of considering the best practices for each model type when developing applications.
More
Templates
The Templates page at python.langchain.com catalogs templates for different purposes, including chatbots, data extraction, retrieval augmentation, summarization, and safety measures. Here’s a summary of each category:
Popular Templates: The most widely used templates, covering chatbot construction, structured data extraction, and agents built with OpenAI functions, local tooling, or specific platforms like Anthropic and You.com.
Advanced Retrieval: Templates focusing on sophisticated retrieval techniques, such as reranking, iterative search, parent document retrieval, semi-structured data retrieval, and temporal data retrieval.
Advanced Retrieval - Query Transformation: Methods that transform user queries to enhance retrieval, including hypothetical document embeddings, query rewrites, "step-back" questioning, RAG-Fusion, and multi-query retrieval.
Advanced Retrieval - Query Construction: Techniques for constructing queries in domain-specific languages from natural language, enabling interaction with structured databases through query languages such as Elasticsearch queries or Cypher statements.
OSS Models: Templates utilizing open-source models for privacy-centric applications, including local retrieval, SQL question answering with various implementations, and chatbot building.
Extraction: Templates designed to structure data extraction following a user-defined schema, leveraging OpenAI and Anthropic functions, or extracting specific data types like biotech plate data.
Summarization and Tagging: Templates for summarizing or categorizing documents, such as using Anthropic’s Claude 2 for document summarization.
Agents: Templates to create chatbots capable of taking actions, employing OpenAI function calling or platforms like Anthropic and You.com.
Safety and Evaluation: Templates for moderating or evaluating LLM outputs, including guardrail implementations and feedback systems for chatbot responses.