
Tools & APIs in Parlant

Parlant provides a principled approach to tool usage, tightly integrated with the guideline system. The tool-calling implementation is built from the ground up for two main reasons: to provide a stable abstraction layer across different LLM versions and providers, and to enable seamless integration with guideline-based behavior control.

In addition, tools are run in sandboxed mode—in a different process—to protect the Parlant server from crashing or malfunctioning because of logic or security issues in tool code.

Getting Started with Tool Services

The best way to get started is to clone the tool service starter repo:

$ git clone https://github.com/emcie-co/parlant-tool-service-starter

Inside the starter repo, run the following commands to install and run the tool service:

$ poetry install
$ poetry run python parlant_tool_service_starter/service.py

Now head over to http://localhost:8089/docs and try out the Call Tool endpoint, providing the following arguments:

  • name: get_random_number
  • body:
    {
      "agent_id": "string",
      "session_id": "string",
      "arguments": {}
    }

You should see a new random number appear in the tool result every time you call it.
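If you'd rather exercise the endpoint from a script than from the Swagger UI, a sketch along the following lines works. Note that the endpoint path used here is an assumption for illustration; verify the exact route in the OpenAPI docs at http://localhost:8089/docs.

```python
import json
import urllib.request

# The request body mirrors the arguments shown above.
BODY = {
    "agent_id": "string",
    "session_id": "string",
    "arguments": {},
}

def call_tool(name: str) -> dict:
    # The path below is an assumption -- check the actual route
    # in the OpenAPI docs served at http://localhost:8089/docs.
    req = urllib.request.Request(
        f"http://localhost:8089/tools/{name}/calls",
        data=json.dumps(BODY).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Requires the tool service from the previous step to be running.
    print(call_tool("get_random_number"))
```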

Reviewing the Tool Service's Code

Here's what you'll find under parlant_tool_service_starter/service.py:

import asyncio
from random import random

from parlant.sdk import (
    PluginServer,
    ToolContext,
    ToolResult,
    tool,
)

PORT = 8089


@tool
async def get_random_number(context: ToolContext) -> ToolResult:
    return ToolResult(data=int(random() * 1000))


TOOLS = [
    get_random_number,
]


async def main() -> None:
    async with PluginServer(tools=TOOLS, port=PORT):
        pass


if __name__ == "__main__":
    asyncio.run(main())

There are two important parts to this code snippet.

  1. The main function, which starts something called a PluginServer. A plugin server, as its name suggests, is a part of Parlant's SDK which lets you plug into your Parlant server. In this case, we use it to expose tools, which means it can function as a tool service for agents to use.
  2. The @tool function definition. Let's dive into that a bit more in depth for just a minute.

Tool Functions

  1. Function name: Here the function is defined as async def get_random_number. The function can be named however you want, but it's recommended to prioritize readability when choosing a name, since Parlant agents will try to understand what it does based on its name.
  2. Sync/Async: Here the function is defined as async which allows you to run other async functions without blocking the execution. However, it's optional, so feel free to remove the async keyword if you don't need it.
  3. ToolContext: This gives you access to the agent, session, and customer ID for the current execution context. You can use this, for example, to fetch data based on the specific customer you're interacting with.
  4. In addition, ToolContext allows you to emit intermediate messages. This is useful if your tool takes a long time to run, and you want the agent to say something like, "Just a second, I'm checking something." You can emit messages and status updates at any point throughout your tool call's execution.
  5. ToolResult: Here you can return both the actual data that the agent sees and takes into consideration when replying, as well as metadata which can be inspected on the client side. metadata is great for when you want to provide additional information on the data returned; for example, listing the sources from which some answer was generated, to display as footnotes on the frontend. We will release a guide on this soon!
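To make the data/metadata split concrete, here's a sketch of a hypothetical `query_docs` tool. Plain dicts stand in for `parlant.sdk.ToolResult` so the shape is easy to see; in a real service you would return `ToolResult(data=..., metadata=...)`, and the lookup logic would be a real retrieval engine.

```python
# Toy documentation index; in practice this would be a retrieval engine.
DOCS = {
    "pricing": ("Pro plan costs $20/month.", "docs/pricing.md"),
    "limits": ("Free tier allows 100 calls/day.", "docs/limits.md"),
}

def query_docs(user_query: str) -> dict:
    # Naive keyword lookup standing in for real retrieval.
    for topic, (answer, source) in DOCS.items():
        if topic in user_query.lower():
            return {
                "data": answer,                     # what the agent reasons over
                "metadata": {"sources": [source]},  # inspected client-side, e.g. footnotes
            }
    return {"data": "No matching documentation found.", "metadata": {"sources": []}}
```

The agent only ever sees `data`; `sources` in `metadata` is there for your frontend to render as footnotes.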

Registering the Service with Parlant

The next step is to give Parlant agents access to your server. Assuming you already have a parlant-server running, go ahead and run the following command in your shell:

$ parlant service create \
--name my-service \
--kind sdk \
--url http://localhost:8089

Understanding Tool Usage

In Parlant, tools are always associated with specific guidelines. This means a tool only executes when its associated guideline becomes relevant to the conversation. This design creates a clear chain of intent: guidelines determine when and why tools are used, rather than leaving it to the LLM's judgment.

In addition, business logic (encoded in tools) can be developed and maintained independently of presentation (or user-interface) concerns, i.e., the conversational behavior controlled by the guidelines themselves. Developers work out the logic in code, with full control, offering these tools in Parlant's "tool shed" for business experts to then utilize in their guidelines.

As an analogy, you can think of Guidelines and Tools like Widgets and Event Handlers in Graphical UI frameworks. A GUI Button has an onClick handler which we can associate with some API function to say, "When this button is clicked, run this function." In the same way, in Parlant—which is essentially a Conversational UI framework—the Guideline is like the Button, the Tool is like the API function, and the association connects the two to say, "When this guideline is applied, run this tool."

Here's a concrete example to illustrate these concepts:

  • Condition: The user asks about service features
  • Action: Understand their intent and consult documentation to answer
  • Tool Associations: [query_docs(user_query)]

Here, the documentation query tool only runs after the guideline instructs that we should be consulting documentation for this user interaction.

Associating Between a Guideline and a Tool

To allow a guideline to run a tool, here's what you do:

$ parlant guideline tool-enable \
--agent-id AGENT_ID \
--id GUIDELINE_ID \
--service SERVICE_NAME \
--tool TOOL_NAME

Dynamic Behavior with Tool Outputs

Tool outputs can influence which guidelines become relevant, creating dynamic, data-driven behavior that remains under guideline control. Here's how this works in practice:

Consider a banking agent handling transfers. When a user requests a transfer, a guideline with the condition the user wants to make a transfer activates the get_user_account_balance() tool to check available funds. This tool returns the current balance, which can then trigger additional guidelines based on its value.

For instance, if the balance is below $500, we could have a low-balance guideline activate, instructing the agent to say something like: "I see your balance is getting low. Are you sure you want to proceed with this transfer? This transaction might put you at risk of overdraft fees."

This interplay between tools and guidelines ensures that every tool execution has clear context and intent, while tool outputs can trigger precisely defined behavioral responses—all within the guideline framework.
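The low-balance check above is exactly the kind of threshold logic that belongs in tool code rather than in prose. A minimal sketch, where the $500 threshold and names are illustrative:

```python
LOW_BALANCE_THRESHOLD = 500  # illustrative threshold from the example above

def get_user_account_balance(account: dict) -> dict:
    """Stand-in for the balance tool; returns data the agent reasons over."""
    balance = account["balance"]
    return {
        "balance": balance,
        # Returning an explicit flag lets a low-balance guideline key off
        # the tool output instead of re-deriving the comparison in prose.
        "low_balance": balance < LOW_BALANCE_THRESHOLD,
    }
```

The guideline's condition can then reference the flag ("the user's balance is low") while the comparison itself stays deterministic.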

Best Practices

A Note on Natural Language Programming

While LLMs excel at conversational tasks, they struggle with complex logical operations and multi-step planning. Recent research in LLM architectures shows that even advanced models have difficulty with consistent logical reasoning and sequential decision-making. The "planning problem" in LLMs—breaking down complex tasks into ordered steps—remains a significant challenge, as highlighted in papers exploring Chain-of-Thought and related techniques.

Given these limitations, Parlant takes a pragmatic approach: keep logic in code, conversation in guidelines. Instead of embedding business logic in natural language guidelines, Parlant encourages a clean separation between conversational behavior and underlying business operations.

Consider your tools as your business logic API, and guidelines as your conversational interface. This separation creates cleaner, more maintainable, and more reliable systems.

Examples

1. E-commerce Product Recommendations:

DON'T
  • Guideline Action: If user mentions sports, check their purchase history. If they bought running gear, recommend premium shoes. If they're new, suggest starter kit.
  • Tool Associations: [get_product_catalog()]

This puts complex business logic in the guideline, relying on the LLM to handle multiple conditions consistently. Instead:

DO
  • Guideline Action: Offer personalized recommendations
  • Tool Associations: [get_personalized_recommendations(user_context)]

Keep the logic in your recommendation engine, where it belongs.
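To make the point concrete, here's a toy version of what `get_personalized_recommendations` might encapsulate: the same branching the DON'T guideline tried to express in prose, now deterministic in code. All names and rules here are illustrative.

```python
def get_personalized_recommendations(user_context: dict) -> list[str]:
    # The branching the DON'T guideline spelled out in natural language,
    # expressed deterministically (rules are illustrative).
    history = user_context.get("purchase_history", [])
    if not history:
        return ["starter-kit"]          # new customer
    if any("running" in item for item in history):
        return ["premium-running-shoes"]  # bought running gear before
    return ["popular-sports-gear"]
```

The LLM never has to evaluate these conditions; it just presents whatever the tool returns.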

2. Financial Advisory:

DON'T
  • Guideline Action: Check account balance and recent transactions. If spending exceeds 80% of usual pattern, suggest budget review. If investment returns are down, recommend portfolio adjustment.
  • Tool Associations: [get_account_data()]

Financial analysis logic shouldn't rely on LLM interpretation.

DO
  • Guideline Action: Get personalized financial insights
  • Tool Associations: [get_financial_insights(account_id)]