Burr is a stateful AI decision engine that lets developers build structured, interactive AI workflows efficiently. In this article, we will:
- Explore Burr's stateful AI workflow model
- Build an AI-powered chatbot using Burr
- Deploy the chatbot with structured transitions and state updates
- Compare Burr with other AI orchestration tools

By the end, you'll have a fully functional AI chatbot that remembers past interactions, generates responses dynamically, and operates in a structured, traceable manner.

Introduction
AI is changing rapidly, and decision-making AI applications have become increasingly important. From chatbots to AI agents and automation systems, the need for AI that makes good choices is growing. However, traditional AI systems do not always work well for this: they are either rule-based, which makes them inflexible, or they rely on ML models that do not remember context.
That is where Burr comes in. Burr is an open-source Python library for building decision-making applications as stateful graphs, which means every step the AI takes can be tracked and controlled. It lets developers manage AI state, execute actions, and monitor the decisions the AI makes. In this article, we look at how Burr works, why it is a better fit for this kind of problem than other AI frameworks, and how to use it in real-world applications.
Why Does AI Struggle with Decision-Making?
Before we look at Burr, let's first see where traditional AI falls short when it comes to making decisions:
- Rule-based AI is too rigid – it follows strict rules and can't adapt when something new happens.
- ML models forget things – they make predictions, but they don't remember what happened before. AI needs memory to make better decisions.
- Existing AI lacks control – when you build AI agents, they often have no way to track their decisions properly.
Burr fixes these problems by making AI stateful and traceable, so every decision is stored and can be changed when needed.
What Makes Burr Different?
Main Features
- Graph-Based AI Structure
  - Instead of just following rules, Burr uses stateful graphs to connect actions and decisions.
  - Every step in the decision process is a node, which the AI can follow based on context.
- Immutable State
  - The AI's state is never modified in place; every action returns a new state, which makes it easy to debug and inspect past decisions (see the short sketch after this list).
  - This ensures that every AI action is predictable and traceable.
- AI That Can Be Monitored
  - Unlike chatbots and AI models that don't track their own decisions, Burr lets you monitor each action the AI takes.
  - This is useful for businesses and enterprises where mistakes need to be found and fixed quickly.
- Works with Other AI Libraries
  - Burr does not replace ML or LLM models; it adds a layer of decision control on top of them.
  - You can use it with LangChain, the OpenAI API, Dagster, and more to build complex AI systems.
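To make the immutable-state idea concrete, here is a minimal sketch. It assumes the `State` class from `burr.core` (shown later in this article) accepts an initial dictionary of values; `update` and `append` are the same methods the chatbot actions below rely on, and both return new `State` objects rather than mutating the original:

```python
from burr.core import State

# Start from an initial state holding a counter and an empty history list.
state_v1 = State({"count": 0, "history": []})

# update() returns a NEW State; state_v1 itself is left untouched.
state_v2 = state_v1.update(count=1)

# append() also returns a new State, adding an item to a list-valued field.
state_v3 = state_v2.append(history={"event": "incremented"})

print(state_v1["count"])    # 0 -- the original state is unchanged
print(state_v3["count"])    # 1
print(state_v3["history"])  # [{'event': 'incremented'}]
```

Because every action produces a fresh state object, earlier states stay available for debugging and replay.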
How to Use Burr
Installing Burr
Before starting, install Burr with pip:
```bash
pip install burr
```
Check that it installed properly:
```python
import burr
print("Burr installed correctly!")
```
Understanding `burr` and `burr.core` Imports in Python
When you import Burr, you get access to different modules and classes that help in building AI-powered workflows. Let's break down the output of a few introspection calls step by step.
`import burr` and `print(dir(burr))`
When you run:
```python
import burr
print(dir(burr))
```
you get:
```python
['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__',
 '__spec__', 'common', 'core', 'integrations', 'lifecycle', 'system', 'telemetry', 'visibility']
```
Breaking it Down
This shows that Burr is more than just `core`; it also includes various high-level modules:
Component | Description |
---|---|
common | Utility functions used across different Burr modules. |
core | The foundational module containing state management, graph execution, and application building (as seen earlier). |
integrations | Handles integrations with external services and libraries (e.g., databases, APIs). |
lifecycle | Manages execution flows, error handling, and state transitions in applications. |
system | Contains system-level functions, such as logging and environment detection. |
telemetry | Used for tracking application performance, logging execution stats, and debugging. |
visibility | Provides insights into application execution, helping with observability and debugging. |
Key Takeaway:
- `burr` is a high-level package that organizes various components (`core`, `integrations`, `system`, etc.).
- It includes performance tracking (`telemetry`) and external service integration features.
`import burr.core` and `print(dir(burr.core))`
When you run:
```python
import burr.core
print(dir(burr.core))
```
you get:
```python
['Action', 'Application', 'ApplicationBuilder', 'ApplicationContext', 'ApplicationGraph',
 'Condition', 'Graph', 'GraphBuilder', 'Result', 'State', '__all__', '__builtins__', '__cached__',
 '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'action',
 'application', 'default', 'expr', 'graph', 'persistence', 'serde', 'state', 'typing', 'validation', 'when']
```
Breaking it Down
This means `burr.core` contains several key classes and functions that define how Burr works. Here's what the main components do:
Component | Description |
---|---|
Action | Represents an operation that modifies or interacts with the system state. |
Application | The main structure of a Burr app that connects actions and workflows. |
ApplicationBuilder | A helper class to construct an Application easily. |
ApplicationContext | Provides the execution environment for a running Burr application. |
ApplicationGraph | Represents the structure of actions and their dependencies in the workflow. |
Condition | Used to define conditions in workflows, such as triggers for actions. |
Graph | Represents the dependencies between actions in a structured manner. |
GraphBuilder | Helps in constructing a Graph dynamically. |
Result | Stores and manages the outcome of action executions. |
State | Stores and tracks shared data throughout the execution of the application. |
action | A decorator used to define stateful operations in Burr. |
graph | Contains utilities to manage dependencies between actions. |
persistence | Manages state persistence, ensuring workflows can store and recall data. |
serde | Handles serialization and deserialization of data objects. |
validation | Provides validation tools to ensure correct data inputs. |
when | A utility for defining conditional logic in Burr workflows. |
Key Takeaway:
- `burr.core` provides the low-level components for defining actions, managing state, and structuring AI workflows.
- It contains fundamental building blocks like `Action`, `Application`, and `State`.
Summary
Command | What It Shows | Purpose |
---|---|---|
`dir(burr)` | Modules like `core`, `integrations`, `lifecycle`, `telemetry` | Shows high-level modules for full application support |
`dir(burr.core)` | Core classes like `Action`, `Application`, `Graph`, `State` | Shows low-level components for workflow management |
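Since `__all__` appears in the `dir(burr.core)` output above, you can also print just the names the module declares as its public API:

```python
import burr.core

# __all__ lists the names burr.core intends to export publicly.
print(burr.core.__all__)
```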
Creating an AI Chatbot with Burr
Now, let’s build a fully interactive AI chatbot using Burr’s stateful decision-making capabilities.
Step 1: Implementing AI Response Logic
We define a mock AI response function that simulates a language model's response:
```python
from burr.core import action, State, ApplicationBuilder

# Mock LLM function for AI responses (replace this with a real model call)
def generate_ai_response(chat_history):
    """Simulate an AI-generated response based on chat history."""
    return "I am an AI Assistant. How can I assist you today?"
```
This function will be used to generate AI responses dynamically based on previous interactions stored in the chatbot’s state.
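If you want real model output instead of the canned string, one option is to swap in a call to an LLM client. The sketch below is only an illustration: it assumes the `openai` Python SDK (v1+) is installed, that an `OPENAI_API_KEY` environment variable is set, and that the `gpt-4o-mini` model name is available to your account; adapt it to whatever client you actually use.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_ai_response(chat_history):
    """Generate a reply from an LLM using the accumulated chat history."""
    # The chat history in this article uses {"role": ..., "content": ...} dicts,
    # which is the same shape the Chat Completions API expects. Note that the
    # article stores AI replies under the "system" role, whereas the OpenAI API
    # conventionally uses "assistant" for model replies.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=chat_history or [{"role": "user", "content": "Hello"}],
    )
    return completion.choices[0].message.content
```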
Step 2: Capturing User Input & Updating Chat History
Now, we define an action to capture user input and store it in the chat history:
```python
@action(reads=[], writes=["user_input", "chat_history"])
def capture_input(state: State, user_input: str) -> State:
    """Captures user input and updates chat history."""
    chat_item = {"role": "user", "content": user_input}
    return state.update(user_input=user_input).append(chat_history=chat_item)
```
What This Does:
- Reads user input dynamically
- Updates the state with the latest message
- Stores the conversation history for future AI responses
Step 3: AI Responds Based on Chat History
The chatbot reads past interactions and generates an AI response:
```python
@action(reads=["chat_history"], writes=["response", "chat_history"])
def respond(state: State) -> State:
    """AI generates a response based on chat history and updates state."""
    response = generate_ai_response(state["chat_history"])
    chat_item = {"role": "system", "content": response}
    return state.update(response=response).append(chat_history=chat_item)
```
How It Works:
- Reads past chat messages
- Generates the AI response dynamically
- Stores the AI response in the chat history
Step 4: Structuring the Chatbot with Burr's Application Builder
We define the chatbot's execution flow using Burr's structured state management:
```python
# Build the Burr application
app = (
    ApplicationBuilder()
    .with_actions(capture_input, respond)  # Register actions
    .with_transitions(
        ("capture_input", "respond"),   # User input triggers the AI response
        ("respond", "capture_input"),   # Loops back for continuous interaction
    )
    .with_state(chat_history=[])        # Initialize with an empty chat history
    .with_entrypoint("capture_input")   # Start from user input
    .build()
)
```
What This Setup Does:
- Captures user input → AI generates a response → loops back
- Maintains an ongoing conversation
- Uses state transitions to guide the decision flow
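Transitions do not have to be unconditional. The `dir(burr.core)` listing above includes `when`, `expr`, and `default`, which can be used to branch the flow. As a hedged sketch (assuming the common `(from, to, condition)` transition form, and using a hypothetical `farewell` action you would define yourself), the conversation could exit when the user types "quit":

```python
from burr.core import ApplicationBuilder, default, when

# Hypothetical variant of the chatbot flow with a conditional exit.
# `farewell` is assumed to be another @action you define, e.g. one that
# writes a goodbye message to the state.
app = (
    ApplicationBuilder()
    .with_actions(capture_input, respond, farewell)
    .with_transitions(
        ("capture_input", "farewell", when(user_input="quit")),  # branch on a state value
        ("capture_input", "respond", default),                   # otherwise, respond
        ("respond", "capture_input"),                            # keep looping
    )
    .with_state(chat_history=[])
    .with_entrypoint("capture_input")
    .build()
)
```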
Step 5: Running the AI Chatbot
To test the chatbot, run:
```python
# Run the chatbot with an example input
*_, state = app.run(halt_after=["respond"], inputs={"user_input": "Hello, AI!"})

# Print the AI response
print("AI Response:", state["response"])
```
Expected Output:
```
AI Response: I am an AI Assistant. How can I assist you today?
```
The chatbot now remembers past interactions and responds dynamically!
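To carry on a multi-turn conversation in the terminal, you can keep calling `run` on the same application object; this is a minimal sketch that reuses the `*_, state = app.run(...)` pattern shown above (the "quit" convention is just an assumption for the example):

```python
# Simple REPL-style loop: the Burr app keeps the chat history in its state,
# so each call to run() sees the full conversation so far.
while True:
    user_message = input("You: ")
    if user_message.strip().lower() == "quit":
        break
    *_, state = app.run(
        halt_after=["respond"],
        inputs={"user_input": user_message},
    )
    print("AI:", state["response"])
```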
Deploying the AI Chatbot as an API
Want to integrate this chatbot with a web app? Let's deploy it using Flask. Note that the Burr application built above is already named `app`, so the Flask instance gets a different name to avoid shadowing it:
```python
from flask import Flask, request, jsonify

flask_app = Flask(__name__)  # named flask_app so it doesn't clash with the Burr app

@flask_app.route("/chat", methods=["POST"])
def chat():
    """Handle user messages via the API."""
    data = request.json
    # Run the Burr application (the `app` object built earlier) for one turn.
    *_, state = app.run(halt_after=["respond"], inputs={"user_input": data["message"]})
    return jsonify({"response": state["response"]})

if __name__ == "__main__":
    flask_app.run(debug=True)
```
Now you can send user messages to the `/chat` endpoint, and the AI will respond.
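As a quick smoke test (assuming the Flask development server's default address of `http://127.0.0.1:5000`), you can post a message to the endpoint with the standard library:

```python
import json
from urllib.request import Request, urlopen

# POST a message to the locally running Flask server and print the reply.
payload = json.dumps({"message": "Hello, AI!"}).encode("utf-8")
req = Request(
    "http://127.0.0.1:5000/chat",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    print(json.loads(resp.read().decode("utf-8")))
```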
Comparing Burr with Other AI Frameworks
How does Burr stack up against LangChain, the OpenAI API, and Rasa? The key dimensions to compare are graph-based AI, stateful AI memory, agentic AI support, API integration, and workflow automation.
Why Choose Burr?
- Tracks AI decisions better than the OpenAI API alone
- More structured workflows than LangChain
- Handles stateful AI execution for enterprise automation
Final Thoughts: Why Burr Could Be the Future of AI Decision-Making
Burr is a strong choice for AI-driven applications that require structured decision-making, state tracking, and automation.
Why Developers Love Burr:
- Stateful AI with structured decision flows
- Seamless deployment for real-world use cases
- Easy API integration for web and chatbot applications
- Well suited to AI automation in finance, healthcare, and industrial IoT
Start building your AI-powered decision system today! GitHub – Burr
Would you like a guide on integrating Burr with GPT models? Let me know!