Dhaval strands example #4

Open: wants to merge 6 commits into base: main
41 changes: 41 additions & 0 deletions docker/strands-agent/Dockerfile
@@ -0,0 +1,41 @@
# Stage 1: Builder stage with dependencies
# checkov:skip=CKV_DOCKER_2: Kubernetes handles health checks via probes instead of Docker HEALTHCHECK
FROM python:3.12.10-alpine3.21 AS builder

# Create a non-root user and group in the builder stage
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Set working directory
WORKDIR /app

# Copy only the requirements file first so dependency layers cache well
COPY src/agentic_platform/agent/strands_agent/requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt

# Stage 2: Server stage that inherits from builder
# nosemgrep: missing-image-version
FROM builder AS server

# Set working directory
WORKDIR /app

# Copy source now that the dependencies are installed
COPY --chown=appuser:appgroup src/agentic_platform/core/ agentic_platform/core/
COPY --chown=appuser:appgroup src/agentic_platform/tool/ agentic_platform/tool/
COPY --chown=appuser:appgroup src/agentic_platform/agent/strands_agent/ agentic_platform/agent/strands_agent/

# Set the working directory to where the server.py is located
WORKDIR /app/

# Set PYTHONPATH to include the app directory
ENV PYTHONPATH=/app:$PYTHONPATH

# Expose the port your FastAPI app will run on
EXPOSE 8000

# Switch to the non-root user
USER appuser

# Command to run the FastAPI server using uvicorn
CMD ["uvicorn", "agentic_platform.agent.strands_agent.server:app", "--host", "0.0.0.0", "--port", "8000"]
63 changes: 63 additions & 0 deletions k8s/helm/values/applications/strands-agent-values.yaml
@@ -0,0 +1,63 @@
# Default values for strands-agent.
# This is a YAML-formatted file.

# Specify the namespace where this service will be deployed
# Leave empty to use the namespace specified in the helm command
namespace: "default"

# Replica count for scaling
replicaCount: 1

# These values will be pulled from an overlay file.
aws:
  region: ""
  account: ""

image:
  repository: "agentic-platform-strands-agent"
  tag: latest
  pullPolicy: Always

nameOverride: "strands-agent"
fullnameOverride: "strands-agent"

service:
  type: ClusterIP
  port: 80
  targetPort: 8000

env:
  - name: PYTHONPATH
    value: /app

# Resource allocation
resources:
  requests:
    cpu: 100m       # 0.1 CPU core (10% of a core)
    memory: 256Mi   # 256 megabytes
  limits:
    memory: 512Mi   # 512 megabytes

# Ingress configuration
ingress:
  enabled: true
  path: "/strands-agent"

# Service account for permissions
serviceAccount:
  name: "strands-agent-sa"
  create: true
  irsaConfigKey: "AGENT_ROLE_ARN"

# IRSA role configuration
irsaConfigKey: "AGENT_ROLE_ARN"

# Agent secret configuration
agentSecret:
  configKey: "AGENT_SECRET_ARN"

# Default values if keys aren't found in central config
configDefaults:
  LITELLM_API_ENDPOINT: "http://litellm.default.svc.cluster.local:80"
  RETRIEVAL_GATEWAY_ENDPOINT: "http://retrieval-gateway.default.svc.cluster.local:80"
  MEMORY_GATEWAY_ENDPOINT: "http://memory-gateway.default.svc.cluster.local:80"
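The `configDefaults` block gives each gateway a cluster-local fallback when the key is missing from the central config. A minimal sketch of that lookup pattern in Python; the function name and defaults dict are illustrative, not part of the chart:

```python
import os

# Mirrors the configDefaults block above (illustrative copy)
CONFIG_DEFAULTS = {
    "LITELLM_API_ENDPOINT": "http://litellm.default.svc.cluster.local:80",
    "MEMORY_GATEWAY_ENDPOINT": "http://memory-gateway.default.svc.cluster.local:80",
}

def resolve_endpoint(key: str) -> str:
    """Prefer the injected environment variable, else fall back to the chart default."""
    value = os.environ.get(key) or CONFIG_DEFAULTS.get(key)
    if value is None:
        raise KeyError(f"No value or default configured for {key}")
    return value

print(resolve_endpoint("LITELLM_API_ENDPOINT"))
```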
195 changes: 191 additions & 4 deletions labs/module3/notebooks/5_agent_frameworks.ipynb
@@ -6,10 +6,10 @@
"source": [
"# 🤖 Building Autonomous Agents: Exploring Agent Frameworks:\n",
"\n",
"In this module, we'll examine how different agent frameworks implement autonomous agents, focusing specifically on LangChain/LangGraph, PydanticAI, CrewAI, and Strands. We'll explore how these frameworks handle orchestration, tool use, and agent coordination while leveraging our existing abstractions.\n",
"\n",
"Objectives:\n",
"* Get hands on with high-level frameworks like LangChain/LangGraph, PydanticAI, CrewAI, and Strands\n",
"* Learn how to integrate our tool calling, memory, and conversation abstractions with each framework\n",
"* Implement examples showing how to maintain consistent interfaces across frameworks\n",
"* Understand when to use each framework based on their strengths and application needs\n",
@@ -421,14 +421,201 @@
"print(conversation.model_dump_json(indent=2, serialize_as_any=True))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Strands\n",
"Strands is a modern agent framework that provides a clean, simple API for building agents with native LiteLLM integration. It's designed to be lightweight and easy to use while still providing powerful agent capabilities.\n",
"\n",
"Key features of Strands:\n",
"* Native LiteLLM integration for model flexibility\n",
"* Simple, intuitive API\n",
"* Built-in tool ecosystem via strands-tools\n",
"* Lightweight and performant\n",
"\n",
"Let's explore how to use Strands with our existing abstractions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# First, let's create a simple Strands agent\n",
"from strands import Agent as StrandsAgent\n",
"from strands.models.openai import OpenAIModel\n",
"from strands_tools import calculator\n",
"\n",
"# Create an OpenAI model for Strands (avoids LiteLLM proxy conflicts)\n",
"# Note: Using OpenAIModel prevents Bedrock model name conflicts with the proxy\n",
"model = OpenAIModel(\n",
"    model_id=\"us.anthropic.claude-3-sonnet-20240229-v1:0\",\n",
"    params={\n",
"        \"max_tokens\": 1000,\n",
"        \"temperature\": 0.0\n",
"    }\n",
")\n",
"\n",
"# Create a simple agent with built-in calculator tool\n",
"agent = StrandsAgent(model=model, tools=[calculator])\n",
"\n",
"# Test the agent\n",
"response = agent(\"What is 15 * 23?\")\n",
"print(response)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's integrate our custom tools with Strands. Strands can work with regular Python functions, making it easy to integrate our existing tools."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Import our custom tools\n",
"from agentic_platform.core.tool.sample_tools import weather_report, handle_calculation\n",
"\n",
"# Create agent with our custom tools\n",
"strands_agent = StrandsAgent(model=model, tools=[weather_report, handle_calculation])\n",
"\n",
"# Test with weather query\n",
"response = strands_agent(\"What's the weather like in New York?\")\n",
"print(response)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Create our Strands Abstraction Layer\n",
"Like with the other frameworks, we want to wrap Strands in our own abstractions to maintain interoperability."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Simple wrapper for Strands that integrates with our memory system\n",
"from strands import Agent as StrandsAgent\n",
"from strands.models.litellm import LiteLLMModel as StrandsLiteLLMModel\n",
"\n",
"class StrandsAgentWrapper:\n",
"\n",
"    def __init__(self, tools: List[Callable], base_prompt: BasePrompt):\n",
"        # Create LiteLLM model with our prompt configuration\n",
"        temp: float = base_prompt.hyperparams.get(\"temperature\", 0.5)\n",
"        max_tokens: int = base_prompt.hyperparams.get(\"max_tokens\", 1000)\n",
"\n",
"        self.model = StrandsLiteLLMModel(\n",
"            model_id=f\"bedrock/{base_prompt.model_id}\",\n",
"            params={\n",
"                \"max_tokens\": max_tokens,\n",
"                \"temperature\": temp,\n",
"            }\n",
"        )\n",
"\n",
"        # Create the Strands agent\n",
"        self.agent = StrandsAgent(\n",
"            model=self.model,\n",
"            tools=tools,\n",
"            system_prompt=base_prompt.system_prompt\n",
"        )\n",
"\n",
"    def invoke(self, request: AgenticRequest) -> AgenticResponse:\n",
"        # Get or create conversation\n",
"        conversation: SessionContext = memory_client.get_or_create_conversation(request.session_id)\n",
"\n",
"        # Add user message to conversation\n",
"        conversation.add_message(request.message)\n",
"\n",
"        # Extract text from the message for Strands\n",
"        user_text = \"\"\n",
"        if request.message.content:\n",
"            for content in request.message.content:\n",
"                if hasattr(content, 'text') and content.text:\n",
"                    user_text = content.text\n",
"                    break\n",
"\n",
"        # Call the Strands agent; the result object stringifies to the final response text\n",
"        response_text = str(self.agent(user_text))\n",
"\n",
"        # Create response message\n",
"        response_message = Message(\n",
"            role=\"assistant\",\n",
"            content=[TextContent(type=\"text\", text=response_text)]\n",
"        )\n",
"\n",
"        # Add to conversation\n",
"        conversation.add_message(response_message)\n",
"\n",
"        # Save conversation\n",
"        memory_client.upsert_conversation(conversation)\n",
"\n",
"        # Return response\n",
"        return AgenticResponse(\n",
"            session_id=conversation.session_id,\n",
"            message=response_message\n",
"        )"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Test our wrapped Strands agent\n",
"from agentic_platform.core.tool.sample_tools import weather_report, handle_calculation\n",
"\n",
"# Define our agent prompt\n",
"class StrandsAgentPrompt(BasePrompt):\n",
" system_prompt: str = '''You are a helpful assistant.'''\n",
" user_prompt: str = '''{user_message}'''\n",
"\n",
"# Build our prompt\n",
"user_message: str = 'What is the weather in Seattle?'\n",
"prompt: BasePrompt = StrandsAgentPrompt()\n",
"\n",
"# Instantiate the agent\n",
"tools: List[Callable] = [weather_report, handle_calculation]\n",
"my_strands_agent: StrandsAgentWrapper = StrandsAgentWrapper(base_prompt=prompt, tools=tools)\n",
"\n",
"# Create the agent request\n",
"request: AgenticRequest = AgenticRequest.from_text(text=user_message)\n",
"\n",
"# Invoke the agent\n",
"response: AgenticResponse = my_strands_agent.invoke(request)\n",
"\n",
"print(response.message.model_dump_json(indent=2))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Check our conversation\n",
"conversation: SessionContext = memory_client.get_or_create_conversation(response.session_id)\n",
"print(conversation.model_dump_json(indent=2, serialize_as_any=True))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Conclusion\n",
"This concludes module 3 on autonomous agents. In this lab we:\n",
"1. Explored 4 of the many agent frameworks available today: LangChain/LangGraph, PydanticAI, CrewAI, and Strands\n",
"2. Demonstrated how to make agent frameworks interoperable and create two-way door decisions with proper abstraction in code\n",
"3. Showed how different frameworks have different strengths: LangGraph for complex workflows, PydanticAI for type safety, CrewAI for multi-agent collaboration, and Strands for simplicity\n",
"\n",
"In the next module we'll discuss some more advanced agent concepts, specifically multi-agent systems and the Model Context Protocol (MCP)."
]
4 changes: 4 additions & 0 deletions src/agentic_platform/agent/strands_agent/requirements.txt
@@ -0,0 +1,4 @@
strands-agents[litellm]>=0.1.6
strands-agents-tools>=0.1.9
fastapi>=0.115.6
uvicorn>=0.34.0
38 changes: 38 additions & 0 deletions src/agentic_platform/agent/strands_agent/server.py
@@ -0,0 +1,38 @@
from fastapi import FastAPI
import uvicorn

from agentic_platform.core.middleware.configure_middleware import configuration_server_middleware
from agentic_platform.core.models.api_models import AgenticRequest, AgenticResponse
from agentic_platform.core.decorator.api_error_decorator import handle_exceptions
from agentic_platform.agent.strands_agent.strands_agent_controller import StrandsAgentController
import logging

# Get logger for this module
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

app = FastAPI(title="Strands Agent API")

# Essential middleware.
configuration_server_middleware(app, path_prefix="/api/strands-agent")

# Essential endpoints
@app.post("/invoke", response_model=AgenticResponse)
@handle_exceptions(status_code=500, error_prefix="Strands Agent API Error")
async def invoke(request: AgenticRequest) -> AgenticResponse:
"""
Invoke the Strands agent.
Keep this app server very thin and push all logic to the controller.
"""
return StrandsAgentController.invoke(request)

@app.get("/health")
async def health():
"""
Health check endpoint for Kubernetes probes.
"""
return {"status": "healthy"}

# Run the server with uvicorn.
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", port=8000) # nosec B104 - Binding to all interfaces within container is intended
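Clients reach the agent through `POST /invoke`. The authoritative wire format is `AgenticRequest` in `agentic_platform.core.models.api_models`; the sketch below only assumes a minimal shape inferred from the notebook's `AgenticRequest.from_text` usage, so treat the field names as illustrative rather than the actual schema:

```python
import json
import uuid
from typing import Optional

def build_invoke_payload(text: str, session_id: Optional[str] = None) -> dict:
    """Assumed request shape: a session id plus a single user text message."""
    return {
        "session_id": session_id or str(uuid.uuid4()),
        "message": {
            "role": "user",
            "content": [{"type": "text", "text": text}],
        },
    }

payload = build_invoke_payload("What is the weather in Seattle?", session_id="demo")
# POST it with any HTTP client, e.g.:
#   requests.post("http://localhost:8000/invoke", json=payload)
print(json.dumps(payload, indent=2))
```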