
Build modern, enterprise-grade GenAI interactions—while keeping your sanity.

Parlant is the open-source conversation modeling engine for controlled, compliant, and purposeful GenAI conversations.



Tame the chaos of generation.

LLMs aren't great at handling many instructions at once, due to task attention drift. This quickly becomes a showstopper for production.

Parlant solves this with a condition-based instruction management system, so that your instructions (including tool calls) are only triggered and fed into the underlying model when the expected conditions are met.

Dynamically providing only the relevant instructions at each point dramatically improves your LLM's focus, reducing the risk of confusion and inconsistency even as the conversation progresses.

Using a sophisticated evaluation engine, Parlant understands when an action has already been performed and doesn't need to be repeated. It knows when the context around a previously applied action has shifted and the action needs to be re-applied. It distinguishes between different parts of an action and tracks them independently. It even understands when a condition depends on the agent's next intention, which it actively predicts.

This is a game-changing approach for agentic Conversational AI, as it lets you confidently scale your agent's complexity as your needs grow.

import parlant.sdk as p
from textwrap import dedent

async def start_conversation_server():
    async with p.Server() as server:
        agent = await server.create_agent(
            name="Otto Carmen", description="You work at a car dealership"
        )

        sell_car_journey = await agent.create_journey(
            title="Sell a car",
            conditions=[
                "The customer wants to buy a new car",
                "The customer expressed general interest in new cars",
            ],
            description=dedent("""\
                Proactively help the customer decide what new car to get.

                1. First vibe and seek to clarify their situation and needs.
                   Chit-chat about their current car, what they like or don't,
                   as well as whether they have children or pets.
                2. Ask them about budget and preferences.
                3. Once needs are clarified, recommend relevant categories
                   or specific models for consideration.
                4. Once a choice is made, ask them to confirm the order"""),
        )

        await sell_car_journey.create_guideline(
            condition="the customer doesn't know what car they want",
            action="tell them that the current on-sale car is popular now",
            tools=[get_on_sale_car],  # Tool defined further down this page
        )
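To run this snippet end to end, the coroutine can be launched with Python's standard asyncio entry point. This is just a minimal sketch of the script's entry point, not Parlant-specific API:

import asyncio

if __name__ == "__main__":
    # Start the Parlant server and keep it running until shutdown
    asyncio.run(start_conversation_server())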

Take control of your agent.

Instead of relying on the LLM to guess how you want the conversation to go, Parlant lets you dictate and enforce the exact behavior you want.

Define semantic relationships between guidelines, like prioritization, dependency, entailment, and more. This is useful when you want to ensure that certain guidelines are always followed first, that some guidelines are only triggered when specific other ones are active, or to ensure that certain instructions are mutually exclusive.

@p.tool
async def human_handoff_to_sales(context: p.ToolContext) -> p.ToolResult:
    await notify_sales(context.customer_id, context.session_id)

    return p.ToolResult(
        data="Session handed off to sales team",
        control={"mode": "manual"},
    )

offer_on_sale_car = await journey.create_guideline(
    condition="the customer indicates they're on a budget",
    action="offer them a car that is on sale",
    tools=[get_on_sale_cars],
)

transfer_to_sales = await journey.create_guideline(
    condition="the customer clearly stated they wish to buy a specific car",
    action="transfer them to the sales team",
    tools=[human_handoff_to_sales],
)

await transfer_to_sales.prioritize_over(offer_on_sale_car)

Trust your outputs.

The outputs of LLMs are adaptive and fluid, but often impossible to predict and subtly inaccurate. This makes them unfit for message generation in high-stakes, customer-facing use cases.

Service conversations, however, are repetitive, and you can take advantage of this to avoid even subtle output hallucinations: skip generated messages altogether, or confine generation to specific, controlled parts of the message.

Parlant supports utterance templates. Here's how it works: the LLM drafts a fluid message, and Parlant matches and renders a corresponding utterance. Templates are in Jinja2 format and are fed with both contextual and tool-driven data.

This means your customer sees a message that is approved, correct, and appropriate. Want to get closer to the true fluidity of the underlying LLM? Iteratively grow your utterance bank with time and experimentation.

@p.tool
async def get_on_sale_car(context: p.ToolContext) -> p.ToolResult:
    car_model = "Tesla Model Y"

    return p.ToolResult(
        data=car_model,  # Feed the fluid draft
        utterance_fields={"car_on_sale": car_model},  # Feed the template
    )

await agent.create_utterance(
    # Use tool-driven field data to fill in the template
    template=dedent("""\
        The {{car_on_sale}} is currently on sale!

        Want to hear more about it?"""),
)

await agent.create_utterance(
    # Access built-in contextual fields using the std. prefix
    template=dedent("""\
        Hey {{std.customer.name}}!

        What can I help you with today?"""),
)

await agent.create_utterance(
    # Use generative fields to infer details
    # from context in a controlled manner
    template="Sorry to hear about {{generative.customer_issue}}.",
)
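For intuition about the substitution step itself, here is a minimal standalone sketch using the jinja2 library directly, outside of Parlant; the field value is a hypothetical placeholder standing in for tool-driven data:

from jinja2 import Template

# Render an utterance template with a tool-driven field value
# (the value here is purely illustrative)
utterance = Template("The {{ car_on_sale }} is currently on sale!")
print(utterance.render(car_on_sale="Tesla Model Y"))
# Output: The Tesla Model Y is currently on sale!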


Who Uses Parlant?

Parlant is used to deliver complex conversational agents that reliably follow your protocols in use cases such as:

Regulated financial services

Legal assistance

Brand-sensitive customer service

Healthcare communications

Government and civil services

Personal advocacy and representation



Have Questions? Let's Talk!

Whether you're exploring or already building, we're happy to chat about how Parlant can help your use case.

Contact Team


Contributing

Join the open-source movement to get conversational GenAI agents under control!

Explore