Updated on January 31, 2025

Getting Started with LangChain

AI Engineering

LangChain is one of the most popular open-source libraries for AI Engineers. Its goal is to abstract away the complexity of building AI software, provide easy-to-use building blocks, and facilitate switching between AI service providers.

In this chapter, we will introduce LangChain by building a simple LLM-powered assistant. We'll provide examples for both OpenAI's gpt-4o-mini and Meta's llama3.2 via Ollama!


⚠️ We will be using OpenAI for this example, which allows us to run everything via API. If you would like to use Ollama instead, please see the Ollama version of this example.


Initializing OpenAI's gpt-4o-mini

We start by initializing our LLM. We will use OpenAI's gpt-4o-mini model; if you need an API key, you can get one from OpenAI's website.

python
import os
from getpass import getpass

os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY") or getpass(
"Enter OpenAI API Key: "
)

openai_model = "gpt-4o-mini"
python
from langchain_openai import ChatOpenAI

# For normal accurate responses
llm = ChatOpenAI(temperature=0.0, model=openai_model)

# For unique creative responses
creative_llm = ChatOpenAI(temperature=0.9, model=openai_model)
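
As an optional sanity check (not part of the original walkthrough), we can invoke the model directly to confirm the API key works; the exact response wording will vary between runs:

python
# quick test call; requires OPENAI_API_KEY to be set
msg = llm.invoke("Say hello in five words or fewer.")
print(msg.content)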

We will take an article draft and use LangChain to generate various useful items around this article. We'll be creating:

  1. An article title.
  2. An article description.
  3. Structured output to get feedback on our article similar to what we might get from a human editor.
  4. A thumbnail/hero image for our article.

Here we input the article we will be working with. Currently, this uses an article from the Aurelio AI learning page.

python
article = """
\
We believe AI's short- to mid-term future belongs to agents and that the long-term future
of *AGI* may evolve from agentic systems. Our definition of agents covers any
neuro-symbolic system in which we merge neural AI (such as an LLM) with semi-traditional
software.

With agents, we allow LLMs to integrate with code — allowing AI to search the web,
perform math, and essentially integrate into anything we can build with code. It should
be clear the scope of use cases is phenomenal where AI can integrate with the broader
world of software.

In this introduction to AI agents, we will cover the essential concepts that make them
what they are and why that will make them the core of real-world AI in the years to
come.

---

## Neuro-Symbolic Systems

Neuro-symbolic systems consist of both neural and symbolic computation, where:

- Neural refers to LLMs, embedding models, or other neural network-based models.
- Symbolic refers to logic containing symbolic logic, such as code.

Both neural and symbolic AI originate from the early philosophical approaches to AI:
connectionism (now neural) and symbolism. Symbolic AI is the more traditional AI.
Diehard symbolists believed they could achieve true AGI via written rules, ontologies,
and other logical functions.

The other camp were the connectionists. Connectionism emerged in 1943 with a theoretical
neural circuit but truly kicked off with Rosenblatt's perceptron paper in 1958 [1][2].
Both of these approaches to AI are fascinating but deserve more time than we can give
them here, so we will leave further exploration of these concepts for a future chapter.

Most important to us is understanding where symbolic logic outperforms neural-based
compute and vice-versa.

| Neural | Symbolic |
| --- | --- |
| Flexible, learned logic that can cover a huge range of potential scenarios. | Mostly hand-written rules which can be very granular and fine-tuned but hard to scale. |
| Hard to interpret why a neural system does what it does. Very difficult or even impossible to predict behavior. | Rules are written and can be understood. When unsure why a particular output was produced, we can look at the rules / logic to understand. |
| Requires huge amounts of data and compute to train state-of-the-art neural models, making it hard to add new abilities or update with new information. | Code is relatively cheap to write, it can be updated with new features easily, and the latest information can often be added almost instantaneously. |
| When trained on broad datasets can often lack performance when exposed to unique scenarios that are not well represented in the training data. | Easily customized to unique scenarios. |
| Struggles with complex computations such as mathematical operations. | Perform complex computations very quickly and accurately. |

Pure neural architectures struggle with many seemingly simple tasks. For example, an LLM
*cannot* provide an accurate answer if we ask it for today's date.

Retrieval Augmented Generation (RAG) is commonly used to provide LLMs with up-to-date
knowledge on a particular subject or access to proprietary knowledge.

### Giving LLMs Superpowers

By 2020, it was becoming clear that neural AI systems could not perform tasks symbolic
systems typically excelled in, such as arithmetic, accessing structured DB data, or
making API calls. These tasks require discrete input parameters that allow us to process
them reliably according to strict written logic.

In 2022, researchers at AI21 developed Jurassic-X, an LLM-based "neuro-symbolic
architecture." Neuro-symbolic refers to merging the "neural computation" of large
language models (LLMs) with more traditional (i.e. symbolic) computation of code.

Jurassic-X used the Modular Reasoning, Knowledge, and Language (MRKL) system [3]. The
researchers developed MRKL to solve the limitations of LLMs, namely:

- Lack of up-to-date knowledge, whether that is the latest in AI or something as simple
as today's date.
- Lack of proprietary knowledge, such as internal company docs or your calendar
bookings.
- Lack of reasoning, i.e. the inability to perform operations that traditional software
is good at, like running complex mathematical operations.
- Lack of ability to generalize. Back in 2022, most LLMs had to be fine-tuned to perform
well in a specific domain. This problem is still present today but far less prominent as
the SotA models generalize much better and, in the case of MRKL, are able to use tools
relatively well (although we could certainly take the MRKL solution to improve tool use
performance even today).

MRKL represents one of the earliest forms of what we would now call an agent; it is an
LLM (neural computation) paired with executable code (symbolic computation).

## ReAct and Tools

There is a misconception in the broader industry that an AI agent is an LLM contained
within some looping logic that can generate inputs for and execute code functions. This
definition of agents originates from the huge popularity of the ReAct agent framework
and the adoption of a similar structure with function/tool calling by LLM providers such
as OpenAI, Anthropic, and Ollama.

![ReAct agent flow with the Reasoning-Action loop [4]. When the action chosen specifies
to use a normal tool, the tool is used and the observation returned for another
iteration through the Reasoning-Action loop. To return a final answer to the user the
LLM must choose action "answer" and provide the natural language response, finishing
the loop.](/images/posts/ai-agents/ai-agents-00.png)

<small>ReAct agent flow with the Reasoning-Action loop [4]. When the action chosen
specifies to use a normal tool, the tool is used and the observation returned for
another iteration through the Reasoning-Action loop. To return a final answer to the
user the LLM must choose action "answer" and provide the natural language response,
finishing the loop.</small>

Our "neuro-symbolic" definition is much broader but certainly does include ReAct agents
and LLMs paired with tools. This agent type is the most common for now, so it's worth
understanding the basic concept behind it.

The **Re**ason **Act**ion (ReAct) method encourages LLMs to generate iterative
*reasoning* and *action* steps. During *reasoning,* the LLM describes what steps are to
be taken to answer the user's query. Then, the LLM generates an *action,* which we parse
into an input to some executable code, which we typically describe as a tool/function
call.

![ReAct method. Each iteration includes a Reasoning step followed by an Action (tool
call) step. The Observation is the output from the previous tool call. During the final
iteration the agent calls the answer tool, meaning we generate the final answer for the
user.](/images/posts/ai-agents/ai-agents-01.png)

<small>ReAct method. Each iteration includes a Reasoning step followed by an Action
(tool call) step. The Observation is the output from the previous tool call. During the
final iteration the agent calls the answer tool, meaning we generate the final answer
for the user.</small>

Following the reason and action steps, our action tool call returns an observation. The
logic returns the observation to the LLM, which is then used to generate subsequent
reasoning and action steps.

The ReAct loop continues until the LLM has enough information to answer the original
input. Once the LLM reaches this state, it calls a special *answer* action with the
generated answer for the user.

## Not only LLMs and Tool Calls

LLMs paired with tool calling are powerful but far from the only approach to building
agents. Using the definition of neuro-symbolic, we cover architectures such as:

- Multi-agent workflows that involve multiple LLM-tool (or other agent structure)
combinations.
- More deterministic workflows where we may have set neural model-tool paths that may
fork or merge as the use case requires.
- Embedding models that can detect user intents and decide tool use or LLM selection
directly in vector space.

These are just a few high-level examples of alternative agent structures. Far from being
designed for niche use cases, we find these alternative options to frequently perform
better than the more common ReAct or Tool agents. We will cover all of these examples
and more in future chapters.

---

Agents are fundamental to the future of AI, but that doesn't mean we should expect that
future to come from agents in their most popular form today. ReAct and Tool agents are
great and handle many simple use cases well, but the scope of agents is much broader,
and we believe thinking beyond ReAct and Tools is key to building future AI.

---

## References

[1] The curious case of Connectionism (2019) [https://www.degruyter.com/document/doi/10.1515/opphil-2019-0018/html](https://www.degruyter.com/document/doi/10.1515/opphil-2019-0018/html)

[2] F. Rosenblatt, [The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain](https://www.ling.upenn.edu/courses/cogs501/Rosenblatt1958.pdf) (1958), Psychological Review

[3] E. Karpas et al. [MRKL Systems: A Modular, Neuro-Symbolic Architecture That Combines Large Language Models, External Knowledge Sources and Discrete Reasoning](https://arxiv.org/abs/2205.00445) (2022), AI21 Labs
"""

Preparing our Prompts

LangChain has several prompt classes and methods for organizing or constructing our prompts. We will cover these in more detail in later examples, but for now, we'll cover the essentials we need here.

Prompts for chat agents are, at a minimum, broken up into three components:

  • System prompt: This provides instructions to our LLM on how it must behave, what its objective is, etc.

  • User prompt: This is written input from the user.

  • AI prompt: This is the AI-generated output. When representing a conversation, previous generations will be inserted back into the next prompt and become part of the broader chat history.

text
You are a helpful AI assistant, you will do XYZ. | SYSTEM PROMPT

User: Hi, what is the capital of Australia? | USER PROMPT
AI: It is Canberra | AI PROMPT
User: When is the best time to visit? | USER PROMPT
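
Under the hood, LangChain represents each of these roles as a message object. As a minimal illustration (this is just the toy conversation above, not part of our article pipeline), the same exchange could be written as:

python
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage

# the same conversation expressed as LangChain message objects (illustrative only)
messages = [
    SystemMessage(content="You are a helpful AI assistant, you will do XYZ."),
    HumanMessage(content="Hi, what is the capital of Australia?"),
    AIMessage(content="It is Canberra"),
    HumanMessage(content="When is the best time to visit?"),
]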

LangChain provides us with templates for each of these prompt types. We can insert different inputs into the templates and modify the prompt based on the provided inputs.

Let's initialize our system and user prompt first:

python
from langchain.prompts import SystemMessagePromptTemplate, HumanMessagePromptTemplate

# Defining the system prompt (how the AI should act)
system_prompt = SystemMessagePromptTemplate.from_template(
    "You are an AI assistant that helps generate article titles."
)

# the user prompt is provided by the user, in this case however the only dynamic
# input is the article
user_prompt = HumanMessagePromptTemplate.from_template(
    """You are tasked with creating a name for an article.
The article is here for you to examine {article}

The name should be based on the context of the article.
Be creative, but make sure the names are clear, catchy,
and relevant to the theme of the article.

Only output the article name, no other explanation or
text can be provided.""",
    input_variables=["article"]
)

We can display what our formatted human prompt would look like after inserting a value into the article parameter:

python
user_prompt.format(article="TEST STRING")
python
HumanMessage(content='You are tasked with creating a name for an article.\nThe article is here for you to examine TEST STRING\n\nThe name should be based on the context of the article.\nBe creative, but make sure the names are clear, catchy,\nand relevant to the theme of the article.\n\nOnly output the article name, no other explanation or\ntext can be provided.', additional_kwargs={}, response_metadata={})

Now that we have our system and user prompts, we can merge them into our full chat prompt using the ChatPromptTemplate:

python
from langchain.prompts import ChatPromptTemplate

first_prompt = ChatPromptTemplate.from_messages([system_prompt, user_prompt])

By default, the ChatPromptTemplate will read the input_variables from each of the prompt templates inserted and allow us to use those input variables when formatting the full chat prompt template:

python
print(first_prompt.format(article="TEST STRING"))
text
System: You are an AI assistant that helps generate article titles.
Human: You are tasked with creating a name for an article.
The article is here for you to examine TEST STRING

The name should be based on the context of the article.
Be creative, but make sure the names are clear, catchy,
and relevant to the theme of the article.

Only output the article name, no other explanation or
text can be provided.

ChatPromptTemplate also prefixes each individual message with its role, e.g., System:, Human:, or AI:.
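
If you would rather inspect the underlying message objects than the role-prefixed string, format_messages returns them directly (a small aside; the rest of this example uses the template inside a chain):

python
# returns a list of SystemMessage / HumanMessage objects rather than a single string
first_prompt.format_messages(article="TEST STRING")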

We can combine our first_prompt template and the llm object we defined earlier to create a simple LLM chain. This chain will perform the steps prompt formatting > llm generation > get output.

We'll be using LangChain Expression Language (LCEL) to construct our chain. This syntax can look strange, but we will cover it in detail later in the course. For now, all we need to know is that we define our inputs with the first dictionary segment (i.e., {"article": lambda x: x["article"]}), and that the pipe operator (|) feeds the output from the left of the pipe into the input of whatever is on the right of the pipe.
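
To make the pipe syntax a little more concrete, here is a tiny standalone sketch (purely illustrative, not part of the article pipeline) showing how | composes runnables, using plain Python functions wrapped in RunnableLambda:

python
from langchain_core.runnables import RunnableLambda

# two toy steps; the output of the first is piped into the second
add_five = RunnableLambda(lambda x: x + 5)
double = RunnableLambda(lambda x: x * 2)

toy_chain = add_five | double
toy_chain.invoke(3)  # (3 + 5) * 2 = 16

With that intuition in place, here is our actual title-generation chain: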

python
chain_one = (
    {"article": lambda x: x["article"]}
    | first_prompt
    | creative_llm
    | {"article_title": lambda x: x.content}
)

This first chain will create our article title, which we can run with invoke:

python
article_title_msg = chain_one.invoke({"article": article})
article_title_msg
python
{'article_title': '"Unlocking the Future: The Rise of Neuro-Symbolic AI Agents"'}

We have our article_title. To continue, our next step is to summarize the article using both the article and newly generated article_title values, from which we will output a new summary variable:

python
second_user_prompt = HumanMessagePromptTemplate.from_template(
    """You are tasked with creating a description for
the article. The article is here for you to examine:

---

{article}

---

Here is the article title '{article_title}'.

Output the SEO friendly article description. Do not output
anything other than the description.""",
    input_variables=["article", "article_title"]
)

second_prompt = ChatPromptTemplate.from_messages([
    system_prompt,
    second_user_prompt
])
python
chain_two = (
    {
        "article": lambda x: x["article"],
        "article_title": lambda x: x["article_title"]
    }
    | second_prompt
    | llm
    | {"summary": lambda x: x.content}
)
python
article_description_msg = chain_two.invoke({
    "article": article,
    "article_title": article_title_msg["article_title"]
})
article_description_msg
python
{'summary': 'Explore the transformative potential of neuro-symbolic AI agents in our comprehensive introduction. This article delves into the integration of neural and symbolic systems, highlighting how these agents enhance the capabilities of large language models (LLMs) by enabling them to perform complex tasks, access real-time data, and reason effectively. Discover the significance of agents in shaping the future of artificial intelligence, their diverse applications, and the innovative architectures that go beyond traditional frameworks. Join us on this journey to understand why neuro-symbolic systems are poised to become the cornerstone of real-world AI solutions.'}

The third step will consume our original article variable and produce several output fields, focused on helping the user improve a part of their writing. Because we want the LLM to return multiple fields, we use structured outputs to keep the generated fields aligned with our requirements.

python
third_user_prompt = HumanMessagePromptTemplate.from_template(
    """You are tasked with creating a new paragraph for the
article. The article is here for you to examine:

---

{article}

---

Choose one paragraph to review and edit. During your edit,
ensure you provide constructive feedback to the user so they
can learn where to improve their own writing.""",
    input_variables=["article"]
)

# prompt template 3: creating a new paragraph for the article
third_prompt = ChatPromptTemplate.from_messages([
    system_prompt,
    third_user_prompt
])

We create a Pydantic object describing the output format we need. This format description is then passed to our model using the with_structured_output method:

python
from pydantic import BaseModel, Field

class Paragraph(BaseModel):
    original_paragraph: str = Field(description="The original paragraph")
    edited_paragraph: str = Field(description="The improved edited paragraph")
    feedback: str = Field(description=(
        "Constructive feedback on the original paragraph"
    ))

structured_llm = creative_llm.with_structured_output(Paragraph)
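
If you want to sanity-check the structured output on its own first (an optional step, not part of the original flow), you can invoke the structured LLM directly and confirm it returns a Paragraph instance:

python
# invoking the structured LLM directly returns a Paragraph object, not a raw message
test_paragraph = structured_llm.invoke(
    "Review and edit this paragraph: AI agents are useful for many different tasks."
)
type(test_paragraph)  # Paragraph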

Now we put all of this together in another chain:

python
# chain 3: input: article / outputs: original_paragraph, edited_paragraph, feedback
chain_three = (
    {"article": lambda x: x["article"]}
    | third_prompt
    | structured_llm
    | {
        "original_paragraph": lambda x: x.original_paragraph,
        "edited_paragraph": lambda x: x.edited_paragraph,
        "feedback": lambda x: x.feedback
    }
)
python
out = chain_three.invoke({"article": article})
out
python
{'original_paragraph': 'Neuro-symbolic systems consist of both neural and symbolic computation, where:\n\n- Neural refers to LLMs, embedding models, or other neural network-based models.\n- Symbolic refers to logic containing symbolic logic, such as code.\n\nBoth neural and symbolic AI originate from the early philosophical approaches to AI: connectionism (now neural) and symbolism. Symbolic AI is the more traditional AI. Diehard symbolists believed they could achieve true AGI via written rules, ontologies, and other logical functions.',
'edited_paragraph': 'Neuro-symbolic systems uniquely combine neural and symbolic computation, encompassing both aspects:\n\n- "Neural" pertains to Large Language Models (LLMs), embedding models, and other neural network frameworks.\n- "Symbolic" signifies the logic and structures inherent in traditional programming and symbolic representation.\n\nBoth forms of AI trace their origins back to foundational philosophical schools: connectionism, which has evolved into today\'s neural AI, and symbolism, the cornerstone of conventional AI. Proponents of symbolic AI, often referred to as diehard symbolists, firmly believe that achieving true AGI is possible through meticulously defined rules, ontologies, and logical functions.',
'feedback': "Your paragraph provides a solid introduction to neuro-symbolic systems, but it could be more concise and engaging. Consider restructuring some sentences for clarity and flow. For example, the distinction between 'neural' and 'symbolic' could be emphasized without repetition. Using quotes around terms like 'Neural' and 'Symbolic' enhances readability, and adding examples can make the concepts more accessible to readers unfamiliar with the terms. Overall, aim for tighter organization and more variation in sentence structure to maintain reader interest."}

Now, we want this article to look appealing, so we need to generate an image based on our article! However, the prompt for the article image cannot be over 1000 characters, so the generated prompt has to be short enough to leave room for anything else, such as style instructions, that we might want to add later on.

python
from langchain_community.utilities.dalle_image_generator import DallEAPIWrapper
from langchain_core.prompts import PromptTemplate

image_prompt = PromptTemplate(
    input_variables=["article"],
    template=(
        "Generate a prompt with less than 500 characters to generate an image "
        "based on the following article: {article}"
    )
)
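
Because of that character limit, we could optionally add a small defensive truncation step before the prompt reaches the image model (a hedged sketch; the exact limit depends on which DALL-E model you use, and this guard is not part of the original chain):

python
from langchain_core.runnables import RunnableLambda

# hypothetical guard: hard-truncate the generated image prompt to 1000 characters
truncate_prompt = RunnableLambda(lambda text: text[:1000])

If used, this step would sit in the image chain just before the image generation step.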

The generate_and_display_image function will generate and display the article image once we have the prompt from our image prompt.

python
from skimage import io
import matplotlib.pyplot as plt
from langchain_core.runnables import RunnableLambda

def generate_and_display_image(image_prompt):
    # generate the image from the prompt using DALL-E
    image_url = DallEAPIWrapper().run(image_prompt)
    image_data = io.imread(image_url)

    # display the generated image
    plt.imshow(image_data)
    plt.axis('off')
    plt.show()

# we wrap this in a RunnableLambda for use with LCEL
image_gen_runnable = RunnableLambda(generate_and_display_image)

We have all of our image generation components ready; we chain them together again with LCEL:

python
# chain 4: input: article / output: generated and displayed article image
chain_four = (
    {"article": lambda x: x["article"]}
    | image_prompt
    | llm
    | (lambda x: x.content)
    | image_gen_runnable
)

And now, we invoke our final chain:

python
chain_four.invoke({"article": article})

[Generated article thumbnail image]

With that, we've built LLM chains that can help us write and polish articles. We've covered a few of the basics of LangChain, introduced LangChain Expression Language (LCEL), and built a multi-modal article-helper pipeline.