Getting Started with LangChain
LangChain is one of the most popular open-source libraries for AI Engineers. Its goal is to abstract away the complexity of building AI software, provide easy-to-use building blocks, and facilitate switching between AI service providers.
In this chapter, we will introduce LangChain by building a simple LLM-powered assistant. We'll provide examples for both OpenAI's `gpt-4o-mini` and Meta's `llama3.2` via Ollama!
⚠️ We will be using OpenAI for this example, which allows us to run everything via API. If you would like to use Ollama instead, please see the Ollama version of this example.
Initializing OpenAI's `gpt-4o-mini`
We start by initializing our LLM. We will use OpenAI's `gpt-4o-mini` model. If you need an API key, you can get one from OpenAI's website.
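A minimal initialization might look like the following; the `temperature` setting here is our own choice, not something the example requires:

```python
import os
from getpass import getpass

from langchain_openai import ChatOpenAI

# Load the API key from the environment, or prompt for it interactively
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY") or getpass(
    "Enter your OpenAI API key: "
)

# temperature=0.0 keeps outputs focused and repeatable; raise it for more creative text
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.0)
```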
We will take an article draft and use LangChain to generate various useful items around this article. We'll be creating:
- An article title.
- An article description.
- Structured output to get feedback on our article similar to what we might get from a human editor.
- A thumbnail/hero image for our article.
Here we input our article to start with. Currently, this uses an article from the Aurelio AI learning page.
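For instance (the text below is only a placeholder; paste in the full article draft you want to work with):

```python
# Placeholder article text; substitute your own draft here
article = """
LangChain is a popular open-source framework for building applications
powered by large language models. It provides building blocks for
prompting, chaining, memory, agents, and more...
"""
```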
Preparing our Prompts
LangChain has several prompt classes and methods for organizing or constructing our prompts. We will cover these in more detail in later examples, but for now, we'll cover the essentials we need here.
Prompts for chat agents are, at a minimum, broken up into three components:
- System prompt: This provides instructions to our LLM on how it must behave, what its objective is, etc.
- User prompt: This is the written input from the user.
- AI prompt: This is the AI-generated output. When representing a conversation, previous generations will be inserted back into the next prompt and become part of the broader chat history.
LangChain provides us with templates for each of these prompt types. We can insert different inputs into the templates and modify the prompt based on the provided inputs.
Let's initialize our system and user prompt first:
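A sketch of the two templates might look like this; the exact prompt wording is illustrative, not prescriptive:

```python
from langchain_core.prompts import (
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

# System prompt: fixed instructions that define the assistant's behavior
system_prompt = SystemMessagePromptTemplate.from_template(
    "You are an AI assistant that helps users write articles."
)

# User prompt: parameterized on the `article` input variable
user_prompt = HumanMessagePromptTemplate.from_template(
    """You are tasked with creating a title for the article below:

---

{article}

---

Respond with the title only, no other text."""
)
```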
We can display what our formatted human prompt would look like after inserting a value into the `article` parameter:
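For example, with a dummy value:

```python
# Format the template into a concrete HumanMessage and print it
print(user_prompt.format(article="TEST STRING"))
```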
Now that we have our system and user prompts, we can merge them into our full chat prompt using the `ChatPromptTemplate`:
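Assuming the `system_prompt` and `user_prompt` objects defined above, this is as simple as:

```python
from langchain_core.prompts import ChatPromptTemplate

# Combine the system and user message templates into one chat prompt
first_prompt = ChatPromptTemplate.from_messages([system_prompt, user_prompt])
```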
By default, the `ChatPromptTemplate` will read the `input_variables` from each of the prompt templates inserted and allow us to use those input variables when formatting the full chat prompt template:
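For example, again with a dummy value:

```python
# Format the full chat prompt; the output is a single string of all messages
print(first_prompt.format(article="TEST STRING"))
```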
`ChatPromptTemplate` also prefixes each individual message with its role, e.g., `System:`, `Human:`, or `AI:`.
We can combine our `first_prompt` template and the `llm` object we defined earlier to create a simple LLM chain. This chain will perform the steps prompt formatting > LLM generation > get output.
We'll be using LangChain Expression Language (LCEL) to construct our chain. This syntax can look strange, but we will cover it in detail later in the course. For now, all we need to know is that we define our inputs with the first dictionary segment (i.e., `{"article": lambda x: x["article"]}`). Then, we use the pipe operator (`|`) to say that we are feeding the output from the left of the pipe as input into the right of the pipe.
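Under those assumptions, a sketch of the title chain could look like this; extracting `.content` from the returned `AIMessage` via a trailing dictionary is one common pattern:

```python
chain_one = (
    {"article": lambda x: x["article"]}       # map chain input onto the prompt variable
    | first_prompt                            # format the chat prompt
    | llm                                     # generate a response (an AIMessage)
    | {"article_title": lambda x: x.content}  # extract the text content
)
```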
This first chain will create our article title, which we can run with `invoke`:
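For example (the `first_output` variable name is our own):

```python
first_output = chain_one.invoke({"article": article})
article_title = first_output["article_title"]
print(article_title)
```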
We have our `article_title`. To continue, our next step is to summarize the article using both the `article` and newly generated `article_title` values, from which we will output a new `summary` variable:
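A sketch of this second step, reusing the `system_prompt` from earlier (again, the prompt wording is illustrative):

```python
second_user_prompt = HumanMessagePromptTemplate.from_template(
    """You are tasked with writing a short description for the article below,
titled "{article_title}":

---

{article}

---

Respond with the description only, no other text."""
)
second_prompt = ChatPromptTemplate.from_messages([system_prompt, second_user_prompt])

# This chain takes two inputs: the article text and the generated title
chain_two = (
    {
        "article": lambda x: x["article"],
        "article_title": lambda x: x["article_title"],
    }
    | second_prompt
    | llm
    | {"summary": lambda x: x.content}
)

second_output = chain_two.invoke({"article": article, "article_title": article_title})
print(second_output["summary"])
```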
The third step will consume our first `article` variable and provide several output fields, focusing on helping the user improve a part of their writing. Because we are generating multiple fields, we will use structured outputs to keep them aligned with our requirements.
We create a Pydantic object describing the output format we need. This format description is then passed to our model using the `with_structured_output` method:
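A sketch, with field names of our own choosing:

```python
from pydantic import BaseModel, Field

class Paragraph(BaseModel):
    """Structured feedback on one paragraph of the article."""

    original_paragraph: str = Field(description="The original paragraph")
    edited_paragraph: str = Field(description="An improved version of the paragraph")
    feedback: str = Field(description="Constructive feedback explaining the edits")

# The LLM's output will now be parsed into a `Paragraph` object
structured_llm = llm.with_structured_output(Paragraph)
```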
Now we put all of this together in another chain:
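Assuming the objects above, the third chain might look like this; note that the structured model returns a `Paragraph` object rather than an `AIMessage`:

```python
third_user_prompt = HumanMessagePromptTemplate.from_template(
    """You are tasked with choosing one paragraph of the article below and
improving it:

---

{article}

---"""
)
third_prompt = ChatPromptTemplate.from_messages([system_prompt, third_user_prompt])

chain_three = (
    {"article": lambda x: x["article"]}
    | third_prompt
    | structured_llm
)

paragraph_feedback = chain_three.invoke({"article": article})
print(paragraph_feedback.feedback)
```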
Now, we want this article to look appealing, so we need to generate an image based on our article! However, the prompt for the article image cannot be over 1,000 characters, so it has to be short in case we want to add anything in, such as a `style`, later on.
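A sketch of such a prompt, asking the LLM to stay well under that limit (the 500-character figure is our own margin of safety):

```python
# A single-message prompt for producing a short image-generation prompt
image_prompt = ChatPromptTemplate.from_template(
    """Generate a prompt, in fewer than 500 characters, for an image that
would suit the following article:

{article}"""
)
```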
The `generate_and_display` function will generate and display the article image once we have the image prompt from our LLM.
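One possible implementation, calling OpenAI's image API directly; we assume the `dall-e-2` model here, which is consistent with the 1,000-character prompt limit mentioned above, and use `requests` and `PIL` for fetching and display:

```python
from io import BytesIO

import requests
from openai import OpenAI
from PIL import Image

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_and_display(image_prompt_text: str) -> None:
    """Generate an image from a text prompt and open it for viewing."""
    response = client.images.generate(
        model="dall-e-2",
        prompt=image_prompt_text,
        n=1,
        size="1024x1024",
    )
    image_url = response.data[0].url
    image = Image.open(BytesIO(requests.get(image_url).content))
    image.show()  # in a notebook, use display(image) instead
```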
We have all of our image generation components ready; we chain them together again with LCEL:
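Assuming the pieces above, the image chain might look like this; plain Python callables are coerced into runnables when piped:

```python
from langchain_core.runnables import RunnableLambda

image_chain = (
    {"article": lambda x: x["article"]}
    | image_prompt
    | llm
    | (lambda x: x.content)                 # extract the generated image prompt text
    | RunnableLambda(generate_and_display)  # generate and show the image
)
```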
And now, we `invoke` our final chain:
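```python
image_chain.invoke({"article": article})
```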
With that, we've built LLM chains that can help us build and write articles. We've understood a few of the basics of LangChain, introduced LangChain Expression Language (LCEL), and built a multi-modal article-helper pipeline.
