Getting Started with Parlant

Parlant is an open-source Agentic Behavior Modeling Engine for LLM agents, built to help developers quickly create customer-engaging, business-aligned conversational agents with control, clarity, and confidence.

It gives you all the structure you need to build customer-facing agents that behave exactly as your business requires:

  • Journeys: Define clear customer journeys and how your agent should respond at each step.

  • Behavioral Guidelines: Easily craft agent behavior; Parlant will match the relevant elements contextually.

  • Tool Use: Attach external APIs, data fetchers, or backend services to specific interaction events.

  • Domain Adaptation: Teach your agent domain-specific terminology and craft personalized responses.

  • Canned Responses: Use response templates to eliminate hallucinations and guarantee style consistency.

  • Explainability: Understand why and when each guideline was matched and followed.
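
To give you a feel for how these pieces fit together in code, here is a rough sketch that pairs a guideline with a tool. It leans on the @p.tool decorator, p.ToolContext, p.ToolResult, and the tools= parameter of create_guideline, which are covered in later sections of the docs; the inventory tool itself is purely illustrative.

# sketch.py: a rough preview (the inventory tool is illustrative only)

import asyncio
import parlant.sdk as p

@p.tool
async def check_inventory(context: p.ToolContext, model: str) -> p.ToolResult:
    # Stand-in for a real lookup against your dealership's inventory system.
    return p.ToolResult(data={"model": model, "in_stock": True})

async def main():
    async with p.Server() as server:
        agent = await server.create_agent(
            name="Otto Carmen",
            description="You work at a car dealership",
        )
        await agent.create_guideline(
            condition="the customer asks whether a specific car model is in stock",
            action="check the inventory and answer based on the result",
            tools=[check_inventory],
        )

asyncio.run(main())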

Installation

Parlant is available on both GitHub and PyPI and works on multiple platforms (Windows, Mac, and Linux).

Please note that Python 3.10 or later is required for Parlant to run properly.

pip install parlant
Development Branch

If you're feeling adventurous and want to try out new features, you can also install the latest development version directly from GitHub.

pip install git+https://github.com/emcie-co/parlant@develop

Creating Your First Agent

Once installed, you can use the following code to spin up an initial sample agent. You'll flesh out its behavior later.

# main.py

import asyncio
import parlant.sdk as p

async def main():
    async with p.Server() as server:
        agent = await server.create_agent(
            name="Otto Carmen",
            description="You work at a car dealership",
        )

asyncio.run(main())
ASYNC/AWAIT?

You'll notice Parlant follows the asynchronous programming paradigm with async and await. This is a powerful feature of Python that lets you write code that handles many tasks at once, allowing your agent to serve more concurrent requests in production.

If you're new to async programming, check out the official Python documentation for a quick introduction.
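
If the pattern is new to you, here is a tiny, self-contained illustration (plain Python, nothing Parlant-specific) of why it matters: two slow tasks run concurrently, so the whole thing takes about one second rather than two.

import asyncio

async def slow_task(name: str, seconds: float) -> str:
    await asyncio.sleep(seconds)  # stands in for a slow network or LLM call
    return f"{name} done"

async def main():
    # asyncio.gather runs both coroutines concurrently.
    results = await asyncio.gather(slow_task("a", 1), slow_task("b", 1))
    print(results)  # ['a done', 'b done'] after roughly 1 second

asyncio.run(main())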

Parlant uses OpenAI as the default NLP provider, so you need to ensure you have OPENAI_API_KEY set in your environment.

Then, run the program!

export OPENAI_API_KEY="<YOUR_API_KEY>"
python main.py
Using Other LLM Providers

Parlant supports multiple LLM providers by default, accessible via the p.NLPServices class. You can also add your own provider by implementing the p.NLPService interface, which you can learn how to do in the Custom NLP Models section.

To use one of the built-in providers, you can specify it when creating the server. For example:

async with p.Server(nlp_service=p.NLPServices.cerebras) as server:
    ...

Note that you may need to install an additional "extra" package for some providers. For example, to use the Cerebras NLP service:

pip install parlant[cerebras]

That said, Parlant has been observed to work best with OpenAI and Anthropic models, as they are highly consistent at generating high-quality completions with valid JSON schemas, so we recommend using one of those if you're just starting out.
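
For example, switching to Anthropic would look roughly like this. This is a sketch under two assumptions you should verify against the NLPServices reference: that the enum exposes an anthropic entry alongside cerebras, and that the corresponding extra is named parlant[anthropic].

# Assumed: pip install parlant[anthropic], with ANTHROPIC_API_KEY set in your environment.
async with p.Server(nlp_service=p.NLPServices.anthropic) as server:
    ...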

Testing Your Agent

To test your installation, head over to http://localhost:8800 and start a new session with the agent.

[Post-installation demo]

Creating Your First Guideline

Guidelines are the core of Parlant's behavior model. They allow you to define how your agent should respond to specific user inputs or conditions. Parlant cleverly manages guideline context for you, so you can add as many guidelines as you need without worrying about context overload or other scale issues.

# main.py

import asyncio
import parlant.sdk as p

async def main():
    async with p.Server() as server:
        agent = await server.create_agent(
            name="Otto Carmen",
            description="You work at a car dealership",
        )

        ##############################
        ##   Add the following:    ##
        ##############################
        await agent.create_guideline(
            # This is when the guideline will be triggered
            condition="the customer greets you",
            # This is what the guideline instructs the agent to do
            action="offer a refreshing drink",
        )

asyncio.run(main())

Now re-run the program:

python main.py

Refresh http://localhost:8800, start a new session, and greet the agent. You should expect to be offered a drink!

Using the Official React Widget

If your frontend project is built with React, the fastest and easiest way to start is to use the official Parlant React widget to integrate with the server.

First, install the widget package:

npm install parlant-chat-react

Then, here's a basic code example to get started:

import React from 'react';
import ParlantChatbox from 'parlant-chat-react';

function App() {
  return (
    <div>
      <h1>My Application</h1>
      <ParlantChatbox
        server="PARLANT_SERVER_URL"
        agentId="AGENT_ID"
      />
    </div>
  );
}

export default App;

For more documentation and customization, see the GitHub repo: https://github.com/emcie-co/parlant-chat-react.

Installing Client SDK(s)

To create a custom frontend app that interacts with the Parlant server, we recommend installing our native client SDKs. We currently support Python and TypeScript (which also works with JavaScript).

Python

pip install parlant-client

TypeScript/JavaScript

npm install parlant-client
tip

You can review our tutorial on integrating a custom frontend here: Custom Frontend Integration.

Client SDKs for other languages are coming soon. In the meantime, you can use the REST API directly.
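
To give a sense of the Python client, here is a minimal sketch of sending a customer message and listing the agent's reply events. The class and method names (ParlantClient, sessions.create, sessions.create_event, sessions.list_events) follow the client SDK's typical usage but may differ between versions, so treat this as an outline and rely on the Custom Frontend Integration tutorial for the authoritative API.

from parlant.client import ParlantClient

# Point the client at your running Parlant server.
client = ParlantClient(base_url="http://localhost:8800")

# Open a session with your agent (use the agent's ID from the web UI or server logs).
session = client.sessions.create(agent_id="AGENT_ID")

# Post a customer message to the session...
event = client.sessions.create_event(
    session_id=session.id,
    kind="message",
    source="customer",
    message="Hi, do you have any electric cars in stock?",
)

# ...then list the message events that arrive after it to read the agent's reply.
replies = client.sessions.list_events(
    session_id=session.id,
    min_offset=event.offset,
    kinds="message",
)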