The official Python SDK for the Hanzo AI platform, providing unified access to 100+ LLM providers through a single OpenAI-compatible API interface.
- Unified API: Single interface for 100+ LLM providers (OpenAI, Anthropic, Google, Meta, etc.)
- OpenAI Compatible: Drop-in replacement for OpenAI SDK
- Enterprise Features: Cost tracking, rate limiting, observability
- Local AI Support: Run models locally with node infrastructure
- Model Context Protocol (MCP): Advanced tool use and context management
- Agent Framework: Build and orchestrate AI agents
- Memory Management: Persistent memory and RAG capabilities
- Network Orchestration: Distributed AI compute capabilities
```bash
# Core SDK
pip install hanzoai

# With all optional extras
pip install "hanzoai[all]"
```
```bash
# From source
git clone https://github.com/hanzoai/python-sdk.git
cd python-sdk
make setup
```
```python
from hanzoai import Hanzo

# Initialize client
client = Hanzo(api_key="your-api-key")

# Chat completion
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)

# Use Claude
response = client.chat.completions.create(
    model="claude-3-opus-20240229",
    messages=[{"role": "user", "content": "Hello!"}]
)

# Use local models
response = client.chat.completions.create(
    model="llama2:7b",
    messages=[{"role": "user", "content": "Hello!"}]
)
```
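Because the endpoint is OpenAI-compatible, any HTTP client can talk to it directly. A minimal standard-library sketch, assuming the public base URL above and the standard OpenAI `/v1/chat/completions` request shape:

```python
import json
import os
import urllib.request

# Base URL is an assumption; override via HANZO_BASE_URL if needed.
BASE_URL = os.environ.get("HANZO_BASE_URL", "https://api.hanzo.ai")

def build_chat_request(model, messages, api_key):
    """Build a POST request in the standard OpenAI chat-completions shape."""
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_chat_request(
    "gpt-4", [{"role": "user", "content": "Hello!"}], "your-api-key"
)
# Only send once a real key is configured:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same compatibility is what makes the SDK a drop-in replacement: any tool that speaks the OpenAI wire format can be pointed at the Hanzo base URL.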
```
python-sdk/
├── pkg/
│   ├── hanzo/          # CLI and orchestration tools
│   ├── hanzo-mcp/      # Model Context Protocol implementation
│   ├── hanzo-agents/   # Agent framework
│   ├── hanzo-network/  # Distributed network capabilities
│   ├── hanzo-memory/   # Memory and RAG
│   ├── hanzo-aci/      # AI code intelligence
│   ├── hanzo-repl/     # Interactive REPL
│   └── hanzoai/        # Core SDK
```
Command-line interface for AI operations:
```bash
# Chat with AI
hanzo chat

# Start local node
hanzo node start

# Manage router
hanzo router start

# Interactive REPL
hanzo repl
```
Advanced tool use and context management:
```python
from hanzo_mcp import create_mcp_server

server = create_mcp_server()
server.register_tool(my_tool)
server.start()
```
Build and orchestrate AI agents:
```python
import asyncio

from hanzo_agents import Agent, Swarm

agent = Agent(
    name="researcher",
    model="gpt-4",
    instructions="You are a research assistant"
)

swarm = Swarm([agent])

# swarm.run is a coroutine, so drive it with asyncio at top level
result = asyncio.run(swarm.run("Research quantum computing"))
```
Distributed AI compute:
```python
from hanzo_network import LocalComputeNode, DistributedNetwork

node = LocalComputeNode(node_id="node-001")
network = DistributedNetwork()
network.register_node(node)
```
Persistent memory and RAG:
```python
import asyncio
from hanzo_memory import MemoryService

# MemoryService methods are coroutines, so run them in an event loop
async def main():
    memory = MemoryService()
    await memory.store("key", "value")
    return await memory.retrieve("key")

result = asyncio.run(main())
```
```bash
# Install Python 3.10+
make install-python

# Set up virtual environment
make setup

# Install development dependencies
make dev
```
```bash
# Run all tests
make test

# Run specific package tests
make test-hanzo
make test-mcp
make test-agents

# Run with coverage
make test-coverage
```
```bash
# Format code
make format

# Run linting
make lint

# Type checking
make type-check
```
```bash
# Build all packages
make build

# Build a specific package
cd pkg/hanzo && uv build
```
- Hanzo CLI Documentation
- MCP Documentation
- Agents Documentation
- Network Documentation
- Memory Documentation
See the API documentation for a detailed reference.
```bash
# API Configuration
HANZO_API_KEY=your-api-key
HANZO_BASE_URL=https://api.hanzo.ai

# Router Configuration
HANZO_ROUTER_URL=http://localhost:4000/v1

# Node Configuration
HANZO_NODE_URL=http://localhost:8000/v1

# Logging
HANZO_LOG_LEVEL=INFO
```
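These variables can be read with the standard library before constructing a client. A minimal sketch in which the defaults mirror the values above; which variables the SDK picks up automatically is an assumption, so passing values explicitly is the safe route:

```python
import os

# Read Hanzo-related settings from the environment, falling back to the
# documented defaults. Illustrative helper, not part of the SDK itself.
def load_hanzo_config(env=os.environ):
    return {
        "api_key": env.get("HANZO_API_KEY"),
        "base_url": env.get("HANZO_BASE_URL", "https://api.hanzo.ai"),
        "router_url": env.get("HANZO_ROUTER_URL", "http://localhost:4000/v1"),
        "node_url": env.get("HANZO_NODE_URL", "http://localhost:8000/v1"),
        "log_level": env.get("HANZO_LOG_LEVEL", "INFO"),
    }

cfg = load_hanzo_config({"HANZO_API_KEY": "sk-test"})
# cfg["base_url"] falls back to "https://api.hanzo.ai"
```

The resulting values can then be passed explicitly, e.g. `Hanzo(api_key=cfg["api_key"])`.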
Create `~/.hanzo/config.yaml`:

```yaml
api:
  key: your-api-key
  base_url: https://api.hanzo.ai

router:
  url: http://localhost:4000/v1

node:
  url: http://localhost:8000/v1
  workers: 4

logging:
  level: INFO
```
```bash
# Build image
docker build -t hanzo-sdk .

# Run container
docker run -p 8000:8000 hanzo-sdk

# Start all services
docker-compose up

# Start specific service
docker-compose up router
```
We welcome contributions! Please see our Contributing Guide for details.
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make changes and test
- Commit changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Follow PEP 8
- Use type hints
- Write tests for new features
- Update documentation
- Run `make lint` before committing
| Operation | Latency | Throughput |
|---|---|---|
| Chat Completion | 50 ms | 20 req/s |
| Embedding | 10 ms | 100 req/s |
| Local Inference | 200 ms | 5 req/s |
- Use streaming for long responses
- Enable caching for repeated queries
- Use batch operations when possible
- Configure appropriate timeouts
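The caching tip above can be sketched as a small in-memory memoization layer keyed on the request contents. This is an illustrative helper, not a built-in SDK feature:

```python
import hashlib
import json

# Illustrative in-memory cache for repeated chat queries, keyed on a
# hash of (model, messages). Not part of the SDK itself.
_cache = {}

def cached_completion(create_fn, model, messages):
    key = hashlib.sha256(
        json.dumps({"model": model, "messages": messages},
                   sort_keys=True).encode()
    ).hexdigest()
    if key not in _cache:
        _cache[key] = create_fn(model=model, messages=messages)
    return _cache[key]

# Demo with a stub standing in for client.chat.completions.create:
calls = []
def fake_create(model, messages):
    calls.append(model)
    return f"reply-from-{model}"

msg = [{"role": "user", "content": "Hello!"}]
first = cached_completion(fake_create, "gpt-4", msg)
second = cached_completion(fake_create, "gpt-4", msg)  # served from cache
```

In practice, pass `client.chat.completions.create` as `create_fn`; repeated identical queries then hit the cache instead of the provider.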
- API keys are encrypted at rest
- All communications use TLS 1.3+
- Regular security audits
- SOC 2 Type II certified
Report security issues to [email protected].
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
- OpenAI for the API specification
- Anthropic for Claude integration
- The open-source community
- Documentation: https://docs.hanzo.ai
- Discord: https://discord.gg/hanzo
- Email: [email protected]
- GitHub Issues: https://github.com/hanzoai/python-sdk/issues
- Multi-modal support (images, audio, video)
- Enhanced caching strategies
- WebSocket streaming
- Browser SDK
- Mobile SDKs (iOS, Android)
Built with ❤️ by the Hanzo team