45 changes: 45 additions & 0 deletions python/samples/agentchat_streamlit_team/README.md
# Streamlit Team Chat Example (Stop-and-Resume Pattern)

A Streamlit app demonstrating the **stop-and-resume** pattern for interactive multi-agent teams.

Unlike the [Chainlit example](../agentchat_chainlit/), which uses `UserProxyAgent` with a blocking input function, this example uses `MaxMessageTermination(max_messages=2)` to run the assistant for exactly one turn (the task message plus one assistant reply), then return control to the Streamlit UI. This avoids blocking Streamlit's script thread inside `UserProxyAgent.input_func`.

## When to Use This Pattern

- Web UIs built with Streamlit, Gradio, or similar frameworks
- Any scenario where you can't block the main thread waiting for user input
- When you need to save/resume team state across page reruns

## Setup

```bash
pip install -r requirements.txt
```

Set your API key:
```bash
export OPENAI_API_KEY=sk-...
```

## Run

```bash
streamlit run app.py
```

## How It Works

1. User sends a message via `st.chat_input`
2. The team runs for **one turn** and stops (`max_messages=2` counts the task message plus one assistant reply)
3. The assistant's response is displayed
4. The team state is preserved in `st.session_state` so the conversation continues
5. When the user sends another message, the team resumes from where it left off
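The steps above can be sketched framework-agnostically. This is a minimal stand-in (the `OneTurnTeam` class is hypothetical, not part of AutoGen) showing the core idea: each `run` call does one exchange and stops, while the history object survives between calls, just as the real team survives in `st.session_state`:

```python
import asyncio
from dataclasses import dataclass, field


# Hypothetical stand-in for a team that stops after one exchange but keeps
# its message history, mimicking the stop-and-resume pattern.
@dataclass
class OneTurnTeam:
    history: list = field(default_factory=list)  # survives between runs

    async def run(self, task: str) -> str:
        self.history.append(("user", task))
        reply = f"echo: {task}"  # a real team would call the model here
        self.history.append(("assistant", reply))
        return reply  # control returns to the UI after one turn


async def demo():
    team = OneTurnTeam()  # in Streamlit: st.session_state["team"]
    await team.run("hello")
    await team.run("hello again")  # resumes with prior history intact
    return team.history


history = asyncio.run(demo())
print(len(history))  # 4: two user turns, two assistant turns
```

The key design point is that nothing blocks waiting for input: control always returns to the caller, and the caller decides when the next turn happens.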

## Alternative: UserProxyAgent with Chainlit

If you need real-time human-in-the-loop interaction (where the agent asks questions mid-conversation), see the [Chainlit example](../agentchat_chainlit/), which supports `UserProxyAgent` natively.

## Related

- [Human-in-the-loop tutorial](https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/human-in-the-loop.html)
- [Termination conditions](https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/termination.html)
80 changes: 80 additions & 0 deletions python/samples/agentchat_streamlit_team/app.py
# Copyright (c) Microsoft. All rights reserved.
# Streamlit example: Team with AssistantAgent using stop-and-resume pattern.

import asyncio

import streamlit as st
import yaml
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_core.models import ChatCompletionClient


def init_session():
    """Initialize Streamlit session state."""
    if "messages" not in st.session_state:
        st.session_state["messages"] = []
    if "model_client" not in st.session_state:
        with open("model_config.yaml", "r") as f:
            config = yaml.safe_load(f)
        st.session_state["model_client"] = ChatCompletionClient.load_component(config)
    if "team" not in st.session_state:
        assistant = AssistantAgent(
            name="assistant",
            model_client=st.session_state["model_client"],
            system_message="You are a helpful AI assistant. Keep responses concise.",
        )
        # max_messages counts both the task message and agent replies,
        # so 2 means: one user task + one assistant reply per run.
        termination = MaxMessageTermination(max_messages=2)
        st.session_state["team"] = RoundRobinGroupChat(
            [assistant],
            termination_condition=termination,
        )


async def run_team(task: str):
    """Run the team for one turn and return the result."""
    result = await st.session_state["team"].run(task=task)
    return result


def main():
    st.set_page_config(page_title="Team Chat (Stop-and-Resume)", page_icon="🤖")
    st.title("Team Chat: Stop-and-Resume Pattern 🤖")

    init_session()

    # Display chat history
    for msg in st.session_state["messages"]:
        with st.chat_message(msg["role"]):
            st.markdown(msg["content"])

    prompt = st.chat_input("Type a message...")
    if prompt:
        # Show the user message immediately
        st.session_state["messages"].append({"role": "user", "content": prompt})
        with st.chat_message("user"):
            st.markdown(prompt)

        # Run the team for one turn on a fresh event loop
        # (Streamlit reruns the whole script, so no loop persists between turns)
        loop = asyncio.new_event_loop()
        try:
            result = loop.run_until_complete(run_team(prompt))
        finally:
            loop.close()

        # Display the assistant response
        for msg in result.messages:
            if hasattr(msg, "content") and msg.source == "assistant":
                content = msg.content
                if content and content != prompt:
                    st.session_state["messages"].append(
                        {"role": "assistant", "content": content}
                    )
                    with st.chat_message("assistant"):
                        st.markdown(content)
                    break  # Only show the first assistant response


if __name__ == "__main__":
    main()
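Step 4 of the README ("team state is preserved in `st.session_state`") only survives script reruns, not a full browser reload. AgentChat teams also expose async `save_state()`/`load_state()` methods that return a JSON-serializable mapping, which could be persisted externally. A minimal sketch of that idea, using a hypothetical `MockTeam` stand-in so it runs without AutoGen installed:

```python
import asyncio
import json


# Hypothetical stand-in mirroring the save_state()/load_state() shape of an
# AgentChat team; a real team's state would include the conversation history.
class MockTeam:
    def __init__(self):
        self.state = {"messages": []}

    async def run(self, task: str) -> None:
        self.state["messages"].append(task)

    async def save_state(self) -> dict:
        return self.state

    async def load_state(self, state: dict) -> None:
        self.state = state


async def demo():
    team = MockTeam()
    await team.run("hello")
    blob = json.dumps(await team.save_state())  # e.g. write to a file or DB

    restored = MockTeam()
    await restored.load_state(json.loads(blob))  # resume after a full reload
    return restored.state["messages"]


messages = asyncio.run(demo())
print(messages)  # ['hello']
```

This keeps the stop-and-resume pattern intact while making the conversation durable beyond a single browser session.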
3 changes: 3 additions & 0 deletions python/samples/agentchat_streamlit_team/model_config.yaml
provider: autogen_ext.models.openai.OpenAIChatCompletionClient
config:
  model: gpt-4o-mini
  # The API key is read from the environment: export OPENAI_API_KEY=sk-...
4 changes: 4 additions & 0 deletions python/samples/agentchat_streamlit_team/requirements.txt
autogen-agentchat>=0.4
autogen-ext[openai]>=0.4
streamlit>=1.30
pyyaml