Guidelines
Guidelines are the most powerful customization feature within Parlant. So although they are quite simple in principle, there is a lot to say about them.
What Are Guidelines?
Guidelines allow us to shape an agent's behavior in two key scenarios: when out-of-the-box responses don't meet our expectations, or when we simply want to ensure consistent behavior across all interactions.
For example, an AI Travel Agent's out-of-the-box response to a customer wanting to book a room may be, "Sure, I can help you book a room. When will you be staying?"
If we wanted the response to be more energetic and appreciative of the customer's business, yet we liked how it immediately leads with a question, we could add guidelines to that effect, making our agent consistently answer along the lines of, "I'm so happy you chose our hotel! Let's get you the best room for your needs, right away. When will you be staying?"
The Structure of Guidelines
In Parlant, each guideline is composed of two parts: the condition and the action. The action is the actual instruction part of the guideline; for example, "Offer a discount." The condition is the part that specifies when the action should take place; for example, "It is a holiday." In this example, the guideline would be:
- Condition: It is a holiday
- Action: Offer a discount
When speaking informally about guidelines, we often describe them in when/then form: When <CONDITION>, Then <ACTION>, or in this case, When it is a holiday, Then offer a discount.
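The Python client you'll see later on this page mirrors this two-part structure directly with a GuidelineContent type. Here's a minimal sketch using the holiday example above (the import is the same one used in the full example further down):

from parlant.client import GuidelineContent

# The holiday example, expressed as the condition/action pair the client sends to the server
holiday_discount = GuidelineContent(
    condition="It is a holiday",
    action="Offer a discount",
)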
How Parlant Uses Guidelines
LLMs are a magnificent creation, built on the principle of statistical attention in text. Yet this attention span is finite—and when it comes to following instructions, they need some support.
Behind the scenes, Parlant ensures that agent responses are precisely guided by dynamically managing the LLM's context. Before each response, Parlant loads only the guidelines relevant to the conversation's current state. This real-time context management maximizes the alignment of each response with the intended behavior.
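To make the idea concrete, here's a conceptual sketch of that selection step. This is an illustration only, not Parlant's actual implementation; the predicate that decides whether a condition currently applies is assumed to be supplied by the engine (in practice, an LLM-based check against the conversation state):

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Guideline:
    condition: str
    action: str

def select_relevant(
    guidelines: List[Guideline],
    condition_applies: Callable[[str], bool],  # hypothetical predicate over the current conversation state
) -> List[Guideline]:
    # Only guidelines whose conditions hold right now are loaded into the prompt,
    # keeping the LLM's limited attention focused on what matters for this response.
    return [g for g in guidelines if condition_applies(g.condition)]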
Another important technique Parlant employs is supervising the agent's output before it reaches the customer, to verify as closely as possible that guidelines were adhered to correctly. To achieve this, researchers working on Parlant have devised a prompting mechanism called Conformal Guidance. This page will be updated with a link to the research paper when it's out. For now, you can find more information on this under Coherence.
Managing Guidelines
Parlant is built to make guideline management as simple as possible.
Normally, guidelines are added when business experts request behavioral changes in the agent. In such cases, developers can use Parlant to make those changes and iterate quickly and reliably with business experts.
Here's a practical example:
When Sales requests: "The agent should first ask about customer pain points before discussing our solution," implementing this takes just a minute by adding the following guideline:
- Condition: The customer has yet to specify their current pain points
- Action: Seek to understand their pain points before talking about our solution
Once added, Parlant automatically ensures this guideline is followed consistently across all conversations.
Here's an example of how to add a guideline:
- CLI
- Python
- TypeScript
$ parlant guideline create \
--agent-id AGENT_ID \
--condition CONDITION \
--action ACTION
from parlant.client import (
    GuidelineContent,
    GuidelinePayload,
    ParlantClient,
    Payload,
)

client = ParlantClient(base_url=SERVER_ADDRESS)

# Start evaluating the guideline's impact
evaluation = client.evaluations.create(
    agent_id=AGENT_ID,
    payloads=[
        Payload(
            kind="guideline",
            guideline=GuidelinePayload(
                content=GuidelineContent(
                    condition=CONDITION,
                    action=ACTION,
                ),
                operation="add",
                coherence_check=True,
                connection_proposition=True,
            )
        )
    ],
)

# Wait for the evaluation to complete and get the invoice
invoices = client.evaluations.retrieve(
    evaluation.id,
    wait_for_completion=60,  # Wait up to 60 seconds
).invoices

# Only continue if the guideline addition was approved
if all(invoice.approved for invoice in invoices):
    client.guidelines.create(AGENT_ID, invoices)
else:
    print("Guideline was not approved:")
    print(invoices)
import { ParlantClient } from 'parlant-client';

const client = new ParlantClient({ environment: SERVER_ADDRESS });

// Start evaluating the guideline's impact
const evaluation = await client.evaluations.create({
    agentId: AGENT_ID,
    payloads: [{
        kind: "guideline",
        guideline: {
            content: {
                condition: CONDITION,
                action: ACTION,
            },
            operation: "add",
            coherenceCheck: true,
            connectionProposition: true,
        },
    }],
});

// Wait for the evaluation to complete and get the invoice
const { invoices } = await client.evaluations.retrieve(evaluation.id, {
    waitForCompletion: 60, // Wait up to 60 seconds
});

// Only continue if the guideline addition was approved
if (invoices.every(invoice => invoice.approved)) {
    await client.guidelines.create(AGENT_ID, { invoices });
} else {
    console.error("Guideline was not approved:");
    console.dir(invoices, { depth: null });
}
The examples above start the creation process with something called an evaluation, which has a bit of complexity to it. Let's now take a look at what that is and why it's there.
Maintaining Coherence
When managing guidelines in natural language, contradictions can easily creep in—not because of poor planning, but simply because we're human. We express our intentions in natural language, and sometimes what seems clear to us might have subtle ambiguities.
For instance, adding a guideline like "When user has items in cart, Then lead them to check out with express shipping ASAP" could conflict with an existing guideline, "When user is a first-time visitor with items in cart, Then build trust by highlighting our return policy and inviting them to ask questions." While both guidelines make perfect sense on their own, they create an unclear priority for cart interactions with new customers. This is exactly the kind of nuanced overlap that's hard to spot when writing guidelines, especially as your set grows.
This is where Parlant's evaluations come in. They help catch these inconsistencies before they affect your agent's behavior, and in fact an evaluation is a prerequisite whenever we add guidelines (though it can be explicitly skipped if you're sure you know what you're doing). To learn how the evaluation system works, how to control its nuances, and how Parlant ensures that guidelines are coherent and correctly followed in real time, please refer to the Coherence section.
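To see how this plays out in code, here's a sketch that evaluates the express-shipping guideline before adding it, using the same Python client calls shown earlier (SERVER_ADDRESS and AGENT_ID are placeholders, as before). If the coherence check detects a contradiction with an existing guideline, the returned invoice comes back unapproved, so you can review the conflict before deciding how to proceed:

from parlant.client import (
    GuidelineContent,
    GuidelinePayload,
    ParlantClient,
    Payload,
)

client = ParlantClient(base_url=SERVER_ADDRESS)

# Evaluate the new guideline against the agent's existing ones
evaluation = client.evaluations.create(
    agent_id=AGENT_ID,
    payloads=[
        Payload(
            kind="guideline",
            guideline=GuidelinePayload(
                content=GuidelineContent(
                    condition="The customer has items in their cart",
                    action="Lead them to check out with express shipping ASAP",
                ),
                operation="add",
                coherence_check=True,  # check for contradictions with existing guidelines
                connection_proposition=True,
            ),
        )
    ],
)

invoices = client.evaluations.retrieve(
    evaluation.id,
    wait_for_completion=60,  # Wait up to 60 seconds
).invoices

# An unapproved invoice signals a potential incoherence worth reviewing
for invoice in invoices:
    if not invoice.approved:
        print("Potential incoherence detected:", invoice)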
How Many Guidelines Can You Have?
While Parlant can theoretically handle thousands of guidelines, current LLM technology imposes practical constraints. Today's LLMs—still relatively slow, limited, and expensive—mean that more guidelines translate to higher costs, longer response times, and potential accuracy trade-offs. We're constantly improving Parlant as LLM capabilities advance, but working within these limitations is key.
Our current recommendation: stay under 200 total guidelines. This might seem modest, but don't underestimate the impact—even 20 thoughtfully designed guidelines can transform agent behavior in practical applications. It's about precision, not volume.
For deeper insights into our strategy and Parlant's design principles, including our industry and technology forecasts, explore our Design Philosophy.
Formulating Guidelines
Think of an LLM as a highly knowledgeable stranger who's just walked into your business. They might have years of general experience, but they don't know your specific context, preferences, or way of doing things. Yet, this stranger is eager to help and will always try to respond—even when uncertain.
This is where guidelines come in. They're your way of channeling this endless enthusiasm and broad knowledge into focused, appropriate responses. But crafting effective guidelines is a bit of an art—just like with people.
The Art of Guidance
Consider a customer service scenario. As a very naive example, we might be tempted to write a guideline like:
- Condition: Customer is unhappy
- Action: Make them feel better
While well-intentioned, this is an example of a guideline that is just too vague. The LLM might interpret this in countless ways, from offering discounts it can't actually provide to making jokes that might be inappropriate for your brand. Instead, consider:
- Condition: Customer expresses dissatisfaction with our service
- Action: Acknowledge their frustration specifically, express sincere empathy, and ask for details about their experience so we can address it properly.
Notice how this guideline is both specific and bounded.
Context is Key
Guidelines have two crucial parts: Condition and Action. Think of Condition as the "when" and Action as the "what." Both require attentive crafting:
Too broad:
- Condition: Customer asks about products
- Action: Recommend something they might like
Just right:
- Condition: Customer asks about product recommendations without specifying preferences
- Action: Ask about their specific needs, previous experience with similar products, and any particular features they're looking for before making recommendations
Finding the Right Balance
In principle, we're looking for guidelines that are "just right"—neither over nor under specified. Consider these iterations for a technical support agent:
Too vague:
- Condition: Customer has a technical problem
- Action: Help them fix it
Too rigid:
- Condition: Customer reports an error message
- Action: First ask for their operating system version, then their browser version, then their last system update date
Just right:
- Condition: Customer reports difficulty accessing our platform
- Action: Express understanding of their situation, ask for key details about their setup (OS and browser), and check if they've tried some concrete troubleshooting steps
Remember, the LLM will usually take your guidelines quite literally. If you tell it to "always suggest premium features," it might do so even when talking to a customer who's complaining about pricing. Always consider the broader context and potential edge cases when formulating your guidelines.
If in doubt, err on the side of vagueness. The goal isn't to script every possible interaction but to provide clear, contextual guidance that shapes the LLM's natural abilities into reliable, appropriate responses for your specific use case.