In this chapter, we will introduce LangChain's Agents, adding the ability to use tools
such as search and calculators to complete tasks that normal LLMs cannot fulfil. We will
be using OpenAI's gpt-4o-mini.
Introduction to Tools
Tools augment our LLMs with code execution. A tool is simply a function formatted so
that our agent can understand how to use it and then execute it. Let's start by creating
a few simple tools.
We can use the @tool decorator to create an LLM-compatible tool from a standard Python
function — this function should include a few things for optimal performance:
A docstring describing what the tool does and when it should be used. Our LLM/agent
will read this and use it to decide when and how to use the tool.
Clear parameter names that ideally tell the LLM what each parameter is. If the
parameter names aren't clear, we ensure the docstring explains what the parameter is for
and how to use it.
Both parameter and return type annotations.
python
from langchain_core.tools import tool

@tool
def add(x: float, y: float) -> float:
    """Add 'x' and 'y'."""
    return x + y

@tool
def multiply(x: float, y: float) -> float:
    """Multiply 'x' and 'y'."""
    return x * y

@tool
def exponentiate(x: float, y: float) -> float:
    """Raise 'x' to the power of 'y'."""
    return x ** y

@tool
def subtract(x: float, y: float) -> float:
    """Subtract 'x' from 'y'."""
    return y - x
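One convention worth flagging: per its docstring, subtract takes 'x' away from 'y', so it returns y - x rather than x - y. Stripped of the decorator, the behaviour is:

```python
def subtract(x: float, y: float) -> float:
    """Subtract 'x' from 'y' — note the return value is y - x."""
    return y - x

subtract(x=3, y=10)  # 7: three subtracted from ten
```

Argument order matters here, so the docstring needs to spell the convention out for the LLM.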
With the @tool decorator, we transform each function into a StructuredTool object.
We can inspect the tool name, description, and arg schema:
python
print(f"{add.name=}\n{add.description=}")
text
add.name='add'
add.description="Add 'x' and 'y'."
python
add.args_schema.model_json_schema()
python
{
    'description': "Add 'x' and 'y'.",
    'properties': {
        'x': {'title': 'X', 'type': 'number'},
        'y': {'title': 'Y', 'type': 'number'}
    },
    'required': ['x', 'y'],
    'title': 'add',
    'type': 'object'
}
python
exponentiate.args_schema.model_json_schema()
python
{
    'description': "Raise 'x' to the power of 'y'.",
    'properties': {
        'x': {'title': 'X', 'type': 'number'},
        'y': {'title': 'Y', 'type': 'number'}
    },
    'required': ['x', 'y'],
    'title': 'exponentiate',
    'type': 'object'
}
To invoke the tool, we take the JSON string output by our LLM, parse it into a
dictionary, and then feed the key-value pairs into our tool function as kwargs, similar
to the below:
python
import json
llm_output_string = "{\"x\": 5, \"y\": 2}" # this is the output from the LLM
llm_output_dict = json.loads(llm_output_string) # load as dictionary
llm_output_dict
text
{'x': 5, 'y': 2}
We then pass this dictionary into the tool function as kwargs (keyword arguments) as
indicated by the ** operator — the ** operator unpacks key-value pairs into keyword
arguments.
python
exponentiate.func(**llm_output_dict)
text
25
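The same parse-and-unpack pattern works with any plain Python function; a minimal end-to-end sketch without LangChain:

```python
import json

def exponentiate_fn(x: float, y: float) -> float:
    """Plain-function equivalent of the exponentiate tool."""
    return x ** y

llm_output_string = "{\"x\": 5, \"y\": 2}"  # simulated LLM tool-call output
args = json.loads(llm_output_string)        # parse the JSON string into a dict
result = exponentiate_fn(**args)            # unpack the dict as keyword arguments
result  # 25
```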
We've covered the basics of tools and how they work. Now, let's move on to creating the
agent itself.
Creating an Agent
We're going to construct a simple tool-calling agent using
LangChain Expression Language (LCEL). We will cover LCEL in more depth in the
next chapter; for now, all we need
to know is that we construct our agent using syntax and components like so:
text
agent = (
    <input parameters, including chat history and user query>
    | <prompt>
    | <LLM with tools>
)
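The | syntax works like a pipeline: each component's output becomes the next component's input. As a rough mental model only — the Step class below is a hypothetical stand-in, not the real LCEL runnables:

```python
class Step:
    """Toy stand-in for an LCEL runnable: wraps a function and supports |."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # chaining builds a new Step that feeds our output into the next step
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# toy "prompt" and "LLM" stages chained with |
prompt_step = Step(lambda q: f"User asked: {q}")
llm_step = Step(lambda p: f"[answer to '{p}']")
chain = prompt_step | llm_step
chain.invoke("what is 2 + 2?")
```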
We need this agent to remember previous interactions within the conversation. To do
that, we will use the ChatPromptTemplate with a system message, a placeholder for our
chat history, a placeholder for the user query, and a placeholder for the agent
scratchpad.
The agent scratchpad is where the agent writes its notes as it works through multiple
internal thought and tool-use steps to produce a final output for the user.
python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
When creating an agent, we need to add conversational memory so that it remembers
previous interactions. We'll be using the older ConversationBufferMemory class rather
than the newer RunnableWithMessageHistory — the reason being that we will also be
using the older create_tool_calling_agent function and AgentExecutor class.
⚠️ In the agent executor chapter,
we will use the newer RunnableWithMessageHistory class to build a custom
AgentExecutor.
python
from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory(
    memory_key="chat_history",  # must align with MessagesPlaceholder variable_name
    return_messages=True  # to return Message objects
)
text
LangChainDeprecationWarning: Please see the migration guide at:
Now we will initialize our agent. For that, we need:
llm: as already defined
tools: to be defined (just a list of our previously defined tools)
prompt: as already defined
memory: as already defined
python
from langchain.agents import create_tool_calling_agent
tools = [add, subtract, multiply, exponentiate]
agent = create_tool_calling_agent(
    llm=llm, tools=tools, prompt=prompt
)
Our agent is only one component of our agent execution loop. So, when calling the
agent.invoke method, our LLM will generate a single response and proceed no further.
The invoke method will not run any tools, and no further iterations will be performed.
We can see this by asking a query that should trigger a tool call:
python
agent.invoke({
    "input": "what is 10.7 multiplied by 7.68?",
    "chat_history": memory.chat_memory.messages,
    "intermediate_steps": []  # agent will append its internal steps here
})
Here, we can see the LLM has decided that we should call the multiply tool with the
input {"x": 10.7, "y": 7.68}. However, this method will not execute the tool function
itself. For that, we need an agent execution loop to handle the logic of iterating
through generation and tool-calling steps.
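Conceptually, that loop alternates between asking the LLM what to do and running the chosen tool until the LLM produces a final answer. A toy illustration of the idea — plain Python, not the real AgentExecutor:

```python
def toy_agent_step(messages):
    """Stand-in for the LLM: emits a tool call first, then a final answer."""
    if not any(role == "tool" for role, _ in messages):
        return {"tool": "multiply", "args": {"x": 10.7, "y": 7.68}}
    return {"final": f"The product is {messages[-1][1]}"}

tools = {"multiply": lambda x, y: x * y}

def toy_executor(query):
    messages = [("user", query)]
    while True:
        action = toy_agent_step(messages)
        if "final" in action:           # LLM is done — return its answer
            return action["final"]
        observation = tools[action["tool"]](**action["args"])
        messages.append(("tool", observation))  # scratchpad entry for next pass

toy_executor("what is 10.7 multiplied by 7.68?")
```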
We use the AgentExecutor class to handle the execution loop:
python
from langchain.agents import AgentExecutor
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    memory=memory,
    verbose=True
)
Now, let's try the same query with the executor. Note that the intermediate_steps
parameter we added before is no longer needed — the executor creates this parameter
internally.
python
agent_executor.invoke({
    "input": "what is 10.7 multiplied by 7.68?",
    "chat_history": memory.chat_memory.messages,
})
text
> Entering new AgentExecutor chain...

Invoking: `multiply` with `{'x': 10.7, 'y': 7.68}`

82.17599999999999

10.7 multiplied by 7.68 is approximately 82.18.

> Finished chain.
python
{
    'input': 'what is 10.7 multiplied by 7.68?',
    'chat_history': [
        HumanMessage(content='what is 10.7 multiplied by 7.68?', additional_kwargs={}, response_metadata={}),
        AIMessage(content='10.7 multiplied by 7.68 is approximately 82.18.', additional_kwargs={}, response_metadata={})
    ],
    'output': '10.7 multiplied by 7.68 is approximately 82.18.'
}
We can see that the executor invoked our multiply tool, producing the observation of
82.175999.... The executor then provided this observation to our LLM, which generated
a final response of:
text
10.7 multiplied by 7.68 is approximately 82.18.
Our LLM generated this final response based on the original query and the tool output
(i.e. the observation). We can confirm that this answer is accurate:
python
10.7*7.68
text
82.17599999999999
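The long tail of digits is ordinary binary floating-point behaviour; the LLM's "approximately 82.18" is just this value rounded to two decimal places:

```python
product = 10.7 * 7.68
print(product)            # the raw float, with floating-point noise
print(round(product, 2))  # 82.18, matching the agent's final answer
```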
Let's test our agent with some memory and tool use. First, we tell it our name; then, we
will perform a few tool calls to see if the agent can recall our name.
First, give the agent our name:
python
agent_executor.invoke({
    "input": "My name is Josh",
    "chat_history": memory.chat_memory.messages
})
text
> Entering new AgentExecutor chain...
Nice to meet you, Josh! How can I assist you today?
> Finished chain.
python
{
    'input': 'My name is Josh',
    'chat_history': [
        HumanMessage(content='what is 10.7 multiplied by 7.68?', additional_kwargs={}, response_metadata={}),
        AIMessage(content='10.7 multiplied by 7.68 is approximately 82.18.', additional_kwargs={}, response_metadata={}),
        HumanMessage(content='My name is Josh', additional_kwargs={}, response_metadata={}),
        AIMessage(content='Nice to meet you, Josh! How can I assist you today?', additional_kwargs={}, response_metadata={})
    ],
    'output': 'Nice to meet you, Josh! How can I assist you today?'
}
Now let's try and get the agent to perform multiple tool calls within a single execution
loop:
python
agent_executor.invoke({
    "input": "What is nine plus 10, minus 4 * 2, to the power of 3",
    "chat_history": memory.chat_memory.messages
})
text
> Entering new AgentExecutor chain...

Invoking: `add` with `{'x': 9, 'y': 10}`

19.0

Invoking: `multiply` with `{'x': 4, 'y': 2}`

8.0

Invoking: `exponentiate` with `{'x': 2, 'y': 3}`

8.0

Invoking: `subtract` with `{'x': 19, 'y': 8}`

-11.0

The result of \( 9 + 10 - 4 \times 2^3 \) is \(-11\).

> Finished chain.
python
{
    'input': 'What is nine plus 10, minus 4 * 2, to the power of 3',
    'chat_history': [
        HumanMessage(content='what is 10.7 multiplied by 7.68?', additional_kwargs={}, response_metadata={}),
        AIMessage(content='10.7 multiplied by 7.68 is approximately 82.18.', additional_kwargs={}, response_metadata={}),
        HumanMessage(content='My name is Josh', additional_kwargs={}, response_metadata={}),
        AIMessage(content='Nice to meet you, Josh! How can I assist you today?', additional_kwargs={}, response_metadata={}),
        HumanMessage(content='What is nine plus 10, minus 4 * 2, to the power of 3', additional_kwargs={}, response_metadata={}),
        AIMessage(content='The result of \\( 9 + 10 - 4 \\times 2^3 \\) is \\(-11\\).', additional_kwargs={}, response_metadata={})
    ],
    'output': 'The result of \\( 9 + 10 - 4 \\times 2^3 \\) is \\(-11\\).'
}
The agent's -11 depends on how it parsed this ambiguous query. Let's compare it against
a strict Python reading, where the power of 3 applies to the whole (4 * 2) term:
python
9+10-(4*2)**3
text
-493
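The mismatch between the agent's -11 and this -493 comes from two things visible in the trace: where the power of 3 is applied, and the subtract tool's y - x convention given the call `subtract` with `{'x': 19, 'y': 8}`. Reconstructing both in plain Python:

```python
def subtract(x: float, y: float) -> float:
    """Subtract 'x' from 'y', as defined earlier (returns y - x)."""
    return y - x

# the agent's tool calls: add(9, 10) -> 19, exponentiate(2, 3) -> 8,
# then subtract(x=19, y=8) -> 8 - 19 = -11
agent_answer = subtract(x=9 + 10, y=2 ** 3)

# the strict reading used in our Python check: power applied to (4 * 2)
strict_reading = 9 + 10 - (4 * 2) ** 3  # 19 - 512 = -493
```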
The two readings disagree, a useful reminder that natural-language math queries are
ambiguous and worth phrasing precisely. Now, let's see if the agent can still recall our
name:
python
agent_executor.invoke({
    "input": "What is my name",
    "chat_history": memory.chat_memory.messages
})
text
> Entering new AgentExecutor chain...
Your name is Josh.
> Finished chain.
python
{
    'input': 'What is my name',
    'chat_history': [
        HumanMessage(content='what is 10.7 multiplied by 7.68?', additional_kwargs={}, response_metadata={}),
        AIMessage(content='10.7 multiplied by 7.68 is approximately 82.18.', additional_kwargs={}, response_metadata={}),
        HumanMessage(content='My name is Josh', additional_kwargs={}, response_metadata={}),
        AIMessage(content='Nice to meet you, Josh! How can I assist you today?', additional_kwargs={}, response_metadata={}),
        HumanMessage(content='What is nine plus 10, minus 4 * 2, to the power of 3', additional_kwargs={}, response_metadata={}),
        AIMessage(content='The result of \\( 9 + 10 - 4 \\times 2^3 \\) is \\(-11\\).', additional_kwargs={}, response_metadata={}),
        HumanMessage(content='What is my name', additional_kwargs={}, response_metadata={}),
        AIMessage(content='Your name is Josh.', additional_kwargs={}, response_metadata={})
    ],
    'output': 'Your name is Josh.'
}
The agent has successfully recalled our name. Let's move on to another agent example.
SerpAPI Weather Agent
In this example, we'll use the same agent and executor setup as before, but we'll add
the SerpAPI service to allow our agent to search
the web for information.
To use this tool, you need an API key. The free plan allows you to make up to 100
searches per month.
These custom tools will read your IP address to estimate your location, get the current
date and time, and then send this information to SerpAPI to find the weather in your
area.
python
import requests
from datetime import datetime

@tool
def get_location_from_ip():
    """Get the geographical location based on the IP address."""
    # note: one option (assumed here) is the free ipinfo.io lookup service
    data = requests.get("https://ipinfo.io/json").json()
    latitude, longitude = data["loc"].split(",")
    return f"Latitude: {latitude}, Longitude: {longitude}"
We can create our prompt template, skipping the chat_history since we will only send a
single message, meaning the agent will not be conversational. If preferred, we can add
it back using a MessagesPlaceholder.
python
prompt = ChatPromptTemplate.from_messages([
    ("system", "you're a helpful assistant"),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])
Now we create our full tools list, our agent, and the agent_executor: