Until 2021, to use an AI model for a specific use-case we would need to fine-tune the model weights themselves. That would require huge amounts of training data and significant compute to fine-tune any reasonably performing model.
Instruction fine-tuned Large Language Models (LLMs) changed this fundamental rule of applying AI models to new use-cases. Rather than needing to either train a model from scratch or fine-tune an existing model, these new LLMs could adapt incredibly well to a new problem or use-case with nothing more than a prompt change.
Prompts allow us to completely change the functionality of an AI pipeline. Through natural language we simply tell our LLM what it needs to do, and with the right AI pipeline and prompting, it often works.
LangChain naturally has many functionalities geared towards helping us build our prompts. We can build very dynamic prompting pipelines that modify the structure and content of what we feed into our LLM based on essentially any parameter we like. In this example, we'll explore the essentials of prompting in LangChain and apply them in a demo Retrieval Augmented Generation (RAG) pipeline.
⚠️ We will be using OpenAI for this example allowing us to run everything via API. If you would like to use Ollama instead, please see the Ollama version of this example.
Basic Prompting
We'll start by looking at the various parts of our prompt. For RAG use-cases we'll typically have three core components, although this is very use-case dependent and can vary significantly. Nonetheless, for RAG we will typically see:
Rules for our LLM: this part of the prompt sets up the behavior of our LLM, i.e. how it should approach responding to user queries, and provides as much information as possible about what we want it to do. We typically place this within the system prompt of a chat LLM.
Context: this part is RAG-specific. The context refers to some external information that we may have retrieved from a web search, database query, or often a vector database. This external information is the Retrieval Augmentation part of RAG. For chat LLMs we'll typically place this inside the chat messages between the assistant and user.
Question: this is the input from our user. In the vast majority of cases the question/query/user input will always be provided to the LLM (and typically through a user message). However, the format and location of this being provided often changes.
Answer: this is the answer from our assistant, again this is very typical and we'd expect this with every use-case.
The below is an example of how a RAG prompt may look:
text
Answer the question based on the context below,      }
if you cannot answer the question using the          }--->  (Rules)
provided information answer with "I don't know"      }

Context: Aurelio AI is an AI lab and studio focused  }
on the fields of Natural Language Processing (NLP)   }
and information retrieval using modern tooling       }--->  (Context)
such as Large Language Models (LLMs),                }
vector databases, and LangChain.                     }

Question: Does Aurelio AI do anything related to LangChain?  }--->  (Question)

Answer:  }--->  (Answer)
Here we can see how the AI will approach our question. The rules give it a formula to follow: if the context contains the answer, it should use the context to answer the question; if not, it should respond with "I don't know". The context and the question are then passed into the prompt much like parameters are passed into a function. We define the rules and the context placeholder in our prompt like so:
python
prompt = """
Answer the user's query based on the context below.
If you cannot answer the question using the
provided information answer with "I don't know".
Context: {context}
"""
LangChain uses a ChatPromptTemplate object to format the various prompt types into a single list which will be passed to our LLM:
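The template construction itself isn't shown here; based on the messages structure displayed a little further below, it likely looks something like the following sketch (the ("system", ...) and ("user", ...) roles are inferred from that output):
python
from langchain.prompts import ChatPromptTemplate

# system message carries the rules and context, user message carries the query
prompt_template = ChatPromptTemplate.from_messages([
    ("system", prompt),
    ("user", "{query}"),
])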
When we call the template it will expect us to provide two variables: the context and the query. Both of these variables are pulled from the strings we wrote, as LangChain interprets curly-bracket syntax (i.e. {context} and {query}) as indicating a dynamic variable that we expect to be inserted at query time. We can see that these variables have been picked up by our template object by viewing its input_variables attribute:
python
prompt_template.input_variables
text
['context', 'query']
We can also view the structure of the messages (currently prompt templates) that the ChatPromptTemplate will construct by viewing the messages attribute:
python
prompt_template.messages
python
[
SystemMessagePromptTemplate(
prompt=PromptTemplate(
input_variables=['context'],
input_types={},
partial_variables={},
template=(
'\nAnswer the user\'s query based on the context below.\n'
'If you cannot answer the question using the provided information answer '
'with "I don\'t know".\n\nContext: {context}\n'
)
),
additional_kwargs={}
),
HumanMessagePromptTemplate(
prompt=PromptTemplate(
input_variables=['query'],
input_types={},
partial_variables={},
template='{query}'
),
additional_kwargs={}
)
]
From this, we can see that each tuple provided when using ChatPromptTemplate.from_messages becomes an individual prompt template itself. Within each of these tuples, the first value defines the role of the message, which is typically system, human, or ai. Using these tuples is shorthand for the following, more explicit code:
python
from langchain.prompts import SystemMessagePromptTemplate, HumanMessagePromptTemplate
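
# (sketch, assuming the same `prompt` string defined earlier) the explicit
# equivalent of the ("system", ...) and ("user", ...) tuples used above
prompt_template = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(prompt),
    HumanMessagePromptTemplate.from_template("{query}"),
])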
Here we define our LLM, and because we're using it for a question-answering use-case we want its answers to be as grounded in reality as possible. To do that, we of course prompt it not to make up any information via the If you cannot answer the question using the provided information answer with "I don't know" line, but we also use the model's temperature setting.
The temperature parameter controls the randomness of the LLM's output. A temperature of 0.0 makes an LLM's output more deterministic, which in theory should lead to a lower likelihood of hallucination.
Now, the question here may be: why would we ever not use temperature=0.0? The answer is that sometimes a little randomness can be useful. Randomness tends to translate to text that feels more human and creative, so if we'd like an LLM to help us write an article or even a poem, that lack of determinism becomes a feature rather than a bug.
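The LLM definition itself isn't shown here; a minimal sketch, assuming the langchain-openai integration and the gpt-4o-mini model that appears in the response metadata further below, would be:
python
from langchain_openai import ChatOpenAI

# temperature=0.0 keeps the output as deterministic as possible
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.0)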
For now, we'll stick with our more deterministic LLM. We'll set up the pipeline to consume two variables when it is called, query and context, feed them into our chat prompt template, and then invoke our LLM with the formatted messages.
Although that sounds complicated, all we're doing is connecting our prompt_template and llm. We do this with LangChain Expression Language (LCEL), which uses the | operator to connect each component.
python
pipeline = prompt_template | llm
Now let's define a query and some relevant context and invoke our pipeline.
python
context = """Aurelio AI is an AI company developing tooling for AI
engineers. Their focus is on language AI with the team having strong
expertise in building AI agents and a strong background in
information retrieval.
The company is behind several open source frameworks, most notably
Semantic Router and Semantic Chunkers. They also have an AI
Platform providing engineers with tooling to help them build with
AI. Finally, the team also provides development services to other
organizations to help them bring their AI tech to market.
Aurelio AI became LangChain Experts in September 2024 after a long
track record of delivering AI solutions built with the LangChain
ecosystem."""
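The query definition and invocation aren't shown above; a sketch, where the exact query wording is an assumption, might be:
python
# the exact query wording here is an assumption; any question about the
# company will work
query = "What does Aurelio AI do?"

out = pipeline.invoke({"query": query, "context": context})
out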
Our output is an AIMessage object, which contains the content of the message alongside other metadata:
python
AIMessage(
content=(
'Aurelio AI is an AI company that develops tooling for AI engineers, focusing '
'on language AI. They have expertise in building AI agents and information '
'retrieval. The company is known for several open source frameworks, including '
'Semantic Router and Semantic Chunkers, and offers an AI Platform that '
'provides engineers with tools to build with AI. Additionally, they provide '
'development services to help other organizations bring their AI technology to '
'market.'
),
additional_kwargs={'refusal': None},
response_metadata={
'token_usage': {
'completion_tokens': 81,
'prompt_tokens': 184,
'total_tokens': 265,
'completion_tokens_details': {
'accepted_prediction_tokens': 0,
'audio_tokens': 0,
'reasoning_tokens': 0,
'rejected_prediction_tokens': 0
},
'prompt_tokens_details': {
'audio_tokens': 0,
'cached_tokens': 0
}
},
'model_name': 'gpt-4o-mini-2024-07-18',
'system_fingerprint': 'fp_bba3c8e70b',
'finish_reason': 'stop',
'logprobs': None
},
id='run-a5814d6c-dfe5-432f-b17f-dc57b9b761e9-0',
usage_metadata={
'input_tokens': 184,
'output_tokens': 81,
'total_tokens': 265,
'input_token_details': {
'audio': 0,
'cache_read': 0
},
'output_token_details': {
'audio': 0,
'reasoning': 0
}
}
)
Our LLM pipeline is able to consume the information from the context and use it to answer the user's query. Of course, we would not usually be feeding both the question and the context into an LLM manually. Typically, the context would be retrieved from a vector database, via web search, or from elsewhere. We will cover this use-case in full and build a functional RAG pipeline in a future chapter.
Few Shot Prompting
Many State-of-the-Art (SotA) LLMs are incredible at instruction following, meaning that it requires much less effort to get the intended output or behavior from these models than it does from older or smaller LLMs.
We're using a one billion parameter LLM; that's where the :1b part of llama3.2:1b-instruct-fp16 comes from. Now, that may sound like a lot, but in the world of LLMs this is a tiny model. Because of its size it can be run efficiently and even used easily on a lot of consumer hardware. However, it is also less capable than larger models like gpt-4o or claude-3.5-sonnet.
Using our tiny LLM does mean we need to put in a little extra work to get it to generate what we'd like. Before creating an example, let's first see how to use LangChain's few-shot prompting objects. We will provide multiple examples and feed them in as sequential human and ai messages, so we set up the template like this:
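A minimal sketch of that setup, assuming each example is a dict with "input" and "output" keys (the format used by the examples later in this section):
python
from langchain.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate

# each example is rendered as a human message followed by an ai message
example_prompt = ChatPromptTemplate.from_messages([
    ("human", "{input}"),
    ("ai", "{output}"),
])

# placeholder examples purely to illustrate the structure
examples = [
    {"input": "Here is query #1", "output": "Here is the answer #1"},
    {"input": "Here is query #2", "output": "Here is the answer #2"},
]

few_shot_prompt = FewShotChatMessagePromptTemplate(
    example_prompt=example_prompt,
    examples=examples,
)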
Using this we can provide different sets of examples or even different individual example_prompt templates to the FewShotChatMessagePromptTemplate object to build our prompt structure. Let's try a real example where we might use few-shot prompting.
Few-Shot Example
Using our tiny LLM limits its ability, so when asking for specific behaviors or structured outputs it can struggle. For example, we'll ask the LLM to summarize the key points about Aurelio AI using markdown and bullet points. Let's see what happens.
python
new_system_prompt = """
Answer the user's query based on the context below.
If you cannot answer the question using the
provided information answer with "I don't know".
Always answer in markdown format. When doing so please
provide headers, short summaries, follow with bullet
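
# (assumed step, not shown in the original) rebuild the chat template and
# pipeline so the new system prompt is actually used
prompt_template = ChatPromptTemplate.from_messages([
    ("system", new_system_prompt),
    ("user", "{query}"),
])
pipeline = prompt_template | llm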
out = pipeline.invoke({"query": query, "context": context}).content
print(out)
text
# Overview of Aurelio AI
Aurelio AI is an AI company that specializes in developing tools and solutions for AI
engineers, particularly in the realm of language AI.
## Key Focus Areas
- **Language AI**: Expertise in building AI agents and information retrieval systems.
- **Open Source Frameworks**: Development of frameworks like Semantic Router and
Semantic Chunkers.
- **AI Platform**: Provides tooling for engineers to facilitate AI development.
- **Development Services**: Assists organizations in bringing their AI technologies to
market.
## Achievements
- Became LangChain Experts in September 2024, showcasing their proficiency in the
LangChain ecosystem.
In conclusion, Aurelio AI is dedicated to empowering AI engineers through innovative
tools, frameworks, and expert services in the field of language AI.
We can display our markdown nicely with IPython like so:
python
from IPython.display import display, Markdown
display(Markdown(out))
This is good but not quite the format we wanted. We could try improving our initial prompting instructions, but when this doesn't work we can move on to our few-shot prompting. We want to build something like this:
text
Answer the user's query based on the context below, }
if you cannot answer the question using the }
provided information answer with "I don't know" }
}---> (Rules)
Always answer in markdown format. When doing so please }
provide headers, short summaries, follow with bullet }
points, then conclude. Here are some examples: }
User: Can you explain gravity? }
AI: ## Gravity }
}
Gravity is one of the fundamental forces in the universe. }
}
### Discovery }---> (Example 1)
}
* Gravity was first discovered by... }
}
**To conclude**, Gravity is a fascinating topic and has been... }
}
User: What is the capital of France? }
AI: ## France }
}
The capital of France is Paris. }
}---> (Example 2)
### Origins }
}
* The name Paris comes from the... }
}
**To conclude**, Paris is highly regarded as one of the... }
Context: {context} }---> (Context)
We have already defined our example_prompt, so now we just need to define examples of a user asking a question and the LLM answering in the exact markdown format we need.
python
examples = [
{
"input": "Can you explain gravity?",
"output": (
"## Gravity\n\n"
"Gravity is one of the fundamental forces in the universe.\n\n"
"### Discovery\n\n"
"* Gravity was first discovered by Sir Isaac Newton in the late 17th "
"century.\n"
"* It was said that Newton theorized about gravity after seeing an apple "
"fall from a tree.\n\n"
"### In General Relativity\n\n"
"* Gravity is described as the curvature of spacetime.\n"
"* The more massive an object is, the more it curves spacetime.\n"
"* This curvature is what causes objects to fall towards each other.\n\n"
"### Gravitons\n\n"
"* Gravitons are hypothetical particles that mediate the force of gravity.\n"
"* They have not yet been detected.\n\n"
"**To conclude**, Gravity is a fascinating topic and has been studied "
"extensively since the time of Newton.\n\n"
)
},
{
"input": "What is the capital of France?",
"output": (
"## France\n\n"
"The capital of France is Paris.\n\n"
"### Origins\n\n"
"* The name Paris comes from the Latin word \"Parisini\" which referred to "
"a Celtic people living in the area.\n"
"* The Romans named the city Lutetia, which means \"the place where the "
"river turns\".\n"
"* The city was renamed Paris in the 3rd century BC by the Celtic-speaking "
"Parisii tribe.\n\n"
"**To conclude**, Paris is highly regarded as one of the most beautiful "
"cities in the world and is one of the world's greatest cultural and "
"economic centres.\n\n"
)
}
]
We feed these into our FewShotChatMessagePromptTemplate object:
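That wiring isn't shown here; a minimal sketch, reusing the example_prompt and new_system_prompt defined above and slotting the examples between the system prompt and the user query, might look like this:
python
few_shot_prompt = FewShotChatMessagePromptTemplate(
    example_prompt=example_prompt,
    examples=examples,
)

# place the worked examples between the system prompt and the user query
prompt_template = ChatPromptTemplate.from_messages([
    ("system", new_system_prompt),
    few_shot_prompt,
    ("user", "{query}"),
])
pipeline = prompt_template | llm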
python
out = pipeline.invoke({"query": query, "context": context}).content
out
text
## Aurelio AI Overview
Aurelio AI is an AI company focused on developing tools and services for AI engineers,
particularly in the realm of language AI.
### Key Areas of Focus
- **Language AI**: Specializes in creating AI solutions that understand and generate
human language.
- **Open Source Frameworks**: Developed notable frameworks like Semantic Router and
Semantic Chunkers.
- **AI Platform**: Provides engineers with tools to facilitate AI development.
- **Development Services**: Offers services to help organizations bring their AI
technologies to market.
### Expertise
- The team has strong expertise in building AI agents.
- They possess a solid background in information retrieval.
### Recognition
- Became LangChain Experts in September 2024, showcasing their proficiency in the
LangChain ecosystem.
**To conclude**, Aurelio AI is dedicated to empowering AI engineers through innovative
tools, frameworks, and development services in the language AI space.
We can see that by adding a few examples to our prompt, i.e. few-shot prompting, we can get much more control over the exact structure of our LLM response. As the size of our LLMs increases, their ability to follow instructions becomes much greater and they tend to require less explicit prompting than we have shown here. However, even for SotA models like gpt-4o, few-shot prompting is still a valid technique that can be used if the LLM is struggling to follow our intended instructions.
Chain of Thought Prompting
We'll take a look at one more commonly used prompting technique called chain of thought (CoT). CoT is a technique that encourages the LLM to think through the problem step by step before providing an answer. The idea being that by breaking down the problem into smaller steps, the LLM is more likely to arrive at the correct answer and we are less likely to see hallucinations.
To implement CoT we don't need any specific LangChain objects, instead we are simply modifying how we instruct our LLM within the system prompt. We will ask the LLM to list the problems that need to be solved, to solve each problem individually, and then to arrive at the final answer.
Let's first test our LLM without CoT prompting.
python
no_cot_system_prompt = """
Be a helpful assistant and answer the user's question.
You MUST answer the question directly without any other
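
# (assumed step, not shown in the original) build a pipeline around this
# system prompt; the keystroke question below passes only a query, no context
prompt_template = ChatPromptTemplate.from_messages([
    ("system", no_cot_system_prompt),
    ("user", "{query}"),
])
pipeline = prompt_template | llm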
Nowadays most LLMs are trained to use CoT prompting by default, so we actually need to instruct it not to do so for this example which is why we added "You MUST answer the question directly without any other text or explanation." to our system prompt.
python
query = (
    "How many keystrokes are needed to type the numbers from 1 to 500?"
)
text
The total number of keystrokes needed to type the numbers from 1 to 500 is 1,500.
The actual answer is 1392, but the LLM without CoT just hallucinates and gives us a guess. Now, we can add explicit CoT prompting to our system prompt to see if we can get a better result.
python
# Define the chain-of-thought prompt template
cot_system_prompt = """
Be a helpful assistant and answer the user's question.
To answer the question, you must:
- List systematically and in precise detail all
subproblems that need to be solved to answer the
question.
- Solve each sub problem INDIVIDUALLY and in sequence.
- Finally, use everything you have worked through to
arrive at the final answer.
"""
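
# (assumed steps, not shown in the original) rebuild the pipeline with the
# CoT system prompt and ask the same keystroke question
prompt_template = ChatPromptTemplate.from_messages([
    ("system", cot_system_prompt),
    ("user", "{query}"),
])
pipeline = prompt_template | llm

result = pipeline.invoke({"query": query}).content
display(Markdown(result))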
text
...
Thus, the total number of keystrokes needed to type the numbers from 1 to 500 is
**1392**.
Now we get a much better result! Our LLM provides us with a final answer of 1392 which is correct. Finally, as mentioned most LLMs are now trained to use CoT prompting by default. So let's see what happens if we don't explicitly tell the LLM to use CoT.
python
system_prompt = """
Be a helpful assistant and answer the user's question.
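
# (assumed step) rebuild the pipeline with the plain system prompt
prompt_template = ChatPromptTemplate.from_messages([
    ("system", system_prompt),
    ("user", "{query}"),
])
pipeline = prompt_template | llm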
result = pipeline.invoke({"query": query}).content
display(Markdown(result))
text
To calculate the total number of keystrokes needed to type the numbers from 1 to 500,
we can break it down by the number of digits in the numbers.
1. **Numbers from 1 to 9**:
- There are 9 numbers (1 to 9).
- Each number has 1 digit.
- Total keystrokes = 9 * 1 = 9.
2. **Numbers from 10 to 99**:
- There are 90 numbers (10 to 99).
- Each number has 2 digits.
- Total keystrokes = 90 * 2 = 180.
3. **Numbers from 100 to 499**:
- There are 400 numbers (100 to 499).
- Each number has 3 digits.
- Total keystrokes = 400 * 3 = 1200.
4. **Number 500**:
- There is 1 number (500).
- It has 3 digits.
- Total keystrokes = 1 * 3 = 3.
Now, we can sum all the keystrokes:
- From 1 to 9: 9 keystrokes
- From 10 to 99: 180 keystrokes
- From 100 to 499: 1200 keystrokes
- From 500: 3 keystrokes
Total keystrokes = 9 + 180 + 1200 + 3 = 1392.
Therefore, the total number of keystrokes needed to type the numbers from 1 to 500 is
**1392**.
We almost get the exact same result. The formatting isn't quite as nice but the CoT behavior is clearly there, and the LLM produces the correct final answer!
CoT is useful not only for simple question-answering like this, but is also a fundamental component of many agentic systems, which will often use CoT steps paired with tool use to solve very complex problems; this is what we see in OpenAI's current flagship model o1. We'll see later in the course how we can do this ourselves.
That concludes our intro to prompting and prompt templates in LangChain. We'll continue to build on this in the following LangChain articles covering conversational agents, tool use, and more.