Featured Posts
New in Parlant 2.1: Utterance Templates
Utterance Templates give you precise control over your Parlant agent's responses. By restricting outputs to a predefined set of responses, you ensure your agent communicates with a consistent tone, style, and accuracy, aligning with your brand voice and service protocols while eliminating the risk of unwanted or hallucinated outputs, even subtle ones.
April 30, 2025

Why Generic RAG Frameworks Can't Catch On
In the market for generic RAG frameworks, providers are fighting over who can deliver 67% accuracy versus 65%. And when you run an off-the-shelf RAG framework on your own use case, it will end up closer to 50% accuracy. Is this the best the industry can do?
April 24, 2025

Are Autoregressive LLMs Really Doomed? (A Commentary on Yann LeCun's Recent Keynote)
A commentary on Yann LeCun's keynote at the AI Action Summit, along with some supplementary explanations of how LLMs work under the hood
February 9, 2025

What Is Autoregression in LLMs?
A peek under the hood into how LLMs work
February 9, 2025

Rethinking How We Build Customer-Facing AI Agents
A deep dive into today's prevalent methodologies, the challenges that come with each of them, and where the future may lie.
December 9, 2024
