Commit 3b2a720

committed: Merge commit '33831cbb799c48737f1674494629d82aa3ca9ba6' into aamirj/agentframeworksample
2 parents 937a155 + 33831cb commit 3b2a720

File tree: 6 files changed, +811 −5 lines
Lines changed: 273 additions & 0 deletions
# Agent Framework vs ChatPrompt Comparison

This document compares the implementation differences between the agent-framework integration and the ChatPrompt approach in the Microsoft Teams Python SDK.

## Overview

Both approaches provide AI capabilities for Microsoft Teams bots, but with different programming models and abstractions:

- **agent-framework** (`main.py`): Uses the standalone agent-framework library with a simpler, more intuitive API
- **ChatPrompt** (`chat-prompt.py`): Uses the built-in microsoft.teams.ai ChatPrompt with more explicit configuration

## Key Differences

### 1. Setup & Imports

#### Agent Framework
```python
from agent_framework import ChatAgent, ChatMessageStore, MCPStreamableHTTPTool
from agent_framework.azure import AzureOpenAIChatClient
```

#### ChatPrompt
```python
from microsoft.teams.ai import ChatPrompt, Function, ListMemory
from microsoft.teams.openai import OpenAICompletionsAIModel
from microsoft.teams.mcpplugin import McpClientPlugin

# Requires explicit model initialization
model = OpenAICompletionsAIModel()
```

**Key Difference**: The agent framework auto-initializes its client, while ChatPrompt requires explicit model creation.

---

### 2. Basic Message Handling

#### Agent Framework
```python
agent = ChatAgent(
    chat_client=AzureOpenAIChatClient(),
    instructions="You are a friendly but hilarious pirate robot.",
)
result = await agent.run(text)
await ctx.reply(result.text)
```

#### ChatPrompt
```python
prompt = ChatPrompt(model)
chat_result = await prompt.send(
    input=text,
    instructions="You are a friendly but hilarious pirate robot.",
)
if chat_result.response.content:
    message = MessageActivityInput(text=chat_result.response.content).add_ai_generated()
    await ctx.send(message)
```

**Key Differences**:
- Agent framework: `agent.run()` returns a result with a `.text` property
- ChatPrompt: `prompt.send()` returns a result with a `.response.content` property
- ChatPrompt requires manual construction of `MessageActivityInput` with the AI-generated marker

---

### 3. Function/Tool Calling

#### Agent Framework
```python
def get_weather(
    location: Annotated[str, Field(description="The location to get the weather for.")],
) -> str:
    """Get the weather for a given location."""
    return f"The weather in {location} is sunny"

agent = ChatAgent(
    chat_client=AzureOpenAIChatClient(),
    instructions="...",
    tools=[get_weather, get_menu_specials],  # Pass functions directly
)
```

#### ChatPrompt
```python
class GetWeatherParams(BaseModel):
    location: Annotated[str, Field(description="The location to get the weather for.")]

def get_weather(params: GetWeatherParams) -> str:
    """Get the weather for a given location."""
    return f"The weather in {params.location} is sunny"

prompt = ChatPrompt(model)
prompt.with_function(
    Function(
        name="get_weather",
        description="Get the weather for a given location.",
        parameter_schema=GetWeatherParams,
        handler=get_weather,
    )
)
```

**Key Differences**:
- Agent framework: functions use type annotations directly; parameters are individual function arguments
- ChatPrompt: requires a Pydantic model for parameters; the function receives a single params object
- Agent framework: pass functions to the `tools` list directly
- ChatPrompt: wrap functions in `Function` objects with explicit configuration via `.with_function()`

---

### 4. Streaming

#### Agent Framework
```python
async for update in agent.run_stream(text):
    ctx.stream.emit(update.text)
```

#### ChatPrompt
```python
chat_result = await prompt.send(
    input=text,
    instructions="...",
    on_chunk=lambda chunk: ctx.stream.emit(chunk),
)

# Must emit final AI marker
if chat_result.response.content:
    ctx.stream.emit(MessageActivityInput().add_ai_generated())
```

**Key Differences**:
- Agent framework: Uses an async iteration pattern with `run_stream()`
- ChatPrompt: Uses a callback pattern with the `on_chunk` parameter
- ChatPrompt requires manual emission of the final AI-generated marker
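The difference between the two streaming styles can be seen in a dependency-free sketch. Everything below is illustrative (a stand-in token stream, not either SDK's API): the first consumer pulls chunks with `async for` as the agent framework does; the second lets the producer push chunks into a callback as ChatPrompt's `on_chunk` does.

```python
import asyncio
from typing import AsyncIterator, Callable

async def fake_stream() -> AsyncIterator[str]:
    """Hypothetical stand-in for a model's token stream."""
    for chunk in ["Ahoy", " there", "!"]:
        yield chunk

async def consume_by_iteration() -> str:
    """Agent-framework style: the caller drives the loop with `async for`."""
    parts: list[str] = []
    async for chunk in fake_stream():
        parts.append(chunk)
    return "".join(parts)

async def consume_by_callback(on_chunk: Callable[[str], None]) -> None:
    """ChatPrompt style: the library drives the loop and invokes a callback per chunk."""
    async for chunk in fake_stream():
        on_chunk(chunk)

if __name__ == "__main__":
    print(asyncio.run(consume_by_iteration()))  # Ahoy there!
    received: list[str] = []
    asyncio.run(consume_by_callback(received.append))
    print("".join(received))  # Ahoy there!
```

The iteration style keeps control flow (and `break`/`return`) in the caller's hands; the callback style moves it into the library, which is why ChatPrompt also leaves the final AI-marker emission to the caller.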
---
### 5. Structured Output

#### Agent Framework
```python
class SentimentResult(BaseModel):
    sentiment: Literal["positive", "negative"]

result = await agent.run(text, response_format=SentimentResult)

if result.value:
    await ctx.reply(str(result.value))
```

#### ChatPrompt
```python
class SentimentResult(BaseModel):
    sentiment: Literal["positive", "negative"]

# NOTE: ChatPrompt does not support structured output natively
chat_result = await prompt.send(
    input=text,
    instructions="""
    Respond with ONLY a JSON object in this format: {"sentiment": "positive"}
    Do not include any other text.
    """,
)

# Manual parsing required
if chat_result.response.content:
    await ctx.reply(chat_result.response.content)
```

**Key Differences**:
- Agent framework: Native support via the `response_format` parameter, returning a typed `.value`
- ChatPrompt: **No native support** - requires a workaround using instructions and manual JSON parsing
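The "manual JSON parsing" the ChatPrompt workaround requires can be sketched with the standard library alone. `parse_sentiment` and its accepted-value set are illustrative helpers, not part of either SDK; instruction-based output can always come back malformed, so the parser must tolerate invalid JSON and unexpected values:

```python
import json
from typing import Optional

VALID_SENTIMENTS = {"positive", "negative"}

def parse_sentiment(raw: str) -> Optional[str]:
    """Validate the model's JSON reply; return the sentiment, or None if malformed."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None  # model ignored the JSON-only instruction
    if not isinstance(data, dict):
        return None
    sentiment = data.get("sentiment")
    return sentiment if sentiment in VALID_SENTIMENTS else None

print(parse_sentiment('{"sentiment": "positive"}'))  # positive
print(parse_sentiment("I feel great!"))  # None
```

This defensive step is exactly what the agent framework's `response_format` parameter makes unnecessary.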
---
### 6. Conversation Memory

#### Agent Framework
```python
memory = ChatMessageStore()

agent = ChatAgent(
    chat_client=AzureOpenAIChatClient(),
    instructions="...",
    chat_message_store_factory=lambda: memory,
)
```

#### ChatPrompt
```python
memory_store: dict[str, ListMemory] = {}

def get_or_create_memory(conversation_id: str) -> ListMemory:
    if conversation_id not in memory_store:
        memory_store[conversation_id] = ListMemory()
    return memory_store[conversation_id]

memory = get_or_create_memory(ctx.activity.conversation.id)
prompt = ChatPrompt(model, memory=memory)
```

**Key Differences**:
- Agent framework: Uses `ChatMessageStore` with a factory pattern
- ChatPrompt: Uses `ListMemory` passed directly to the constructor; requires manual conversation tracking
- ChatPrompt requires the developer to manage conversation-specific memory instances

---

### 7. MCP (Model Context Protocol) Integration

#### Agent Framework
```python
learn_mcp = MCPStreamableHTTPTool("microsoft-learn", "https://learn.microsoft.com/api/mcp")
agent = ChatAgent(
    chat_client=AzureOpenAIChatClient(),
    instructions="...",
    tools=[learn_mcp],  # MCP tools in the same list as regular tools
)
```

#### ChatPrompt
```python
mcp_plugin = McpClientPlugin()
mcp_plugin.use_mcp_server("https://learn.microsoft.com/api/mcp")

prompt = ChatPrompt(model, memory=memory, plugins=[mcp_plugin])
```

**Key Differences**:
- Agent framework: MCP tools are treated as regular tools, added to the `tools` list
- ChatPrompt: MCP requires the plugin system, added to the `plugins` list separately from functions

---

## Summary Table

| Feature | Agent Framework | ChatPrompt |
|---------|----------------|------------|
| **Setup** | Auto-initialized client | Manual model creation |
| **Tool Definition** | Type-annotated functions | Pydantic models + `Function` wrapper |
| **Streaming** | Async iteration | Callback pattern |
| **Structured Output** | Native via `response_format` | ❌ Not supported (workaround needed) |
| **Memory** | `ChatMessageStore` + factory | `ListMemory` + manual tracking |
| **MCP Integration** | Tools list | Plugin system |
| **Response Access** | `result.text` | `chat_result.response.content` |
| **Verbosity** | Less verbose | More explicit configuration |

## Recommendations

**Use Agent Framework when**:
- You want simpler, more intuitive APIs
- You need structured output support
- You prefer async iteration for streaming
- You want unified tool/MCP handling

**Use ChatPrompt when**:
- You need fine-grained control over the AI pipeline
- You're already using the microsoft.teams.ai ecosystem
- You want to use the plugin system
- You prefer explicit configuration over conventions

## Migration Tips

If migrating from ChatPrompt to Agent Framework:

1. **Functions**: Remove Pydantic parameter models; use type annotations directly
2. **Memory**: Replace `ListMemory` with `ChatMessageStore` and use the factory pattern
3. **Streaming**: Replace `on_chunk` callbacks with `async for` iteration
4. **MCP**: Move MCP servers from the plugins list to the tools list
5. **Response handling**: Change `.response.content` to `.text`
6. **Structured output**: Use the `response_format` parameter instead of the instruction-based workaround
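The functions migration (tip 1) is mostly deletion: drop the params model and the wrapper, and annotate the arguments instead. A dependency-free sketch of the post-migration shape, where the plain string metadata stands in for `pydantic.Field(description=...)` (an assumption of this sketch, to keep it runnable without Pydantic):

```python
from typing import Annotated

# Before (ChatPrompt): a `GetWeatherParams` Pydantic model plus a `Function`
# wrapper registered via `.with_function()`, as shown in section 3.
# After (agent framework): the same tool as a plain annotated function that
# can be passed directly in the `tools` list.
def get_weather(
    location: Annotated[str, "The location to get the weather for."],
) -> str:
    """Get the weather for a given location."""
    return f"The weather in {location} is sunny"

print(get_weather("Lisbon"))  # The weather in Lisbon is sunny
```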

examples/agent-framework-integration/pyproject.toml

Lines changed: 5 additions & 1 deletion

```diff
@@ -7,9 +7,13 @@ requires-python = ">=3.12,<3.14"
 dependencies = [
     "agent-framework-core>=1.0.0b251114",
     "dotenv>=0.9.9",
+    "microsoft-teams-ai",
     "microsoft-teams-apps",
-    "microsoft-teams-devtools"
+    "microsoft-teams-devtools",
+    "microsoft-teams-openai",
 ]

 [tool.uv.sources]
 microsoft-teams-apps = { workspace = true }
+microsoft-teams-openai = { workspace = true }
+microsoft-teams-ai = { workspace = true }
```
