A simple FastAPI server with endpoints for ingesting and fetching runs.
- `POST /runs`: create new runs
- `GET /runs/{id}`: retrieve run information by UUID
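For orientation, here is a minimal sketch of what these two endpoints could look like in FastAPI. It is illustrative only: the real app lives in `ls_py_handler.main`, persists runs to PostgreSQL and MinIO rather than an in-memory dict, and the exact schema is an assumption based on the request and response examples below.

```python
# Illustrative sketch only; not the project's actual implementation.
from typing import Any
from uuid import UUID, uuid4

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


class Run(BaseModel):
    trace_id: UUID
    name: str
    inputs: dict[str, Any] = {}
    outputs: dict[str, Any] = {}
    metadata: dict[str, Any] = {}


# In-memory stand-in for the real PostgreSQL/MinIO persistence.
_runs: dict[UUID, Run] = {}


@app.post("/runs")
async def create_runs(runs: list[Run]) -> dict[str, list[UUID]]:
    # Accepts a batch of runs and returns the generated UUIDs.
    ids: list[UUID] = []
    for run in runs:
        run_id = uuid4()
        _runs[run_id] = run
        ids.append(run_id)
    return {"ids": ids}


@app.get("/runs/{run_id}")
async def get_run(run_id: UUID) -> dict[str, Any]:
    if run_id not in _runs:
        raise HTTPException(status_code=404, detail="Run not found")
    return {"id": run_id, **_runs[run_id].model_dump()}
```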
To get the server running:

```bash
# 1. Install dependencies
poetry install

# 2. Start database services (required before running the server)
make db-up

# 3. Run migrations and start the server
make server
```

The API will be available at http://localhost:8000.
```bash
# Create a new run
curl -X POST http://localhost:8000/runs \
  -H "Content-Type: application/json" \
  -d '[
    {
      "trace_id": "944ce838-b5c5-4628-8f23-089fbda8b9e3",
      "name": "Weather Query",
      "inputs": {"query": "What is the weather in San Francisco?"},
      "outputs": {"response": "It is currently 65°F and sunny in San Francisco."},
      "metadata": {"model": "gpt-4", "temperature": 0.7, "tokens": 42}
    }
  ]'
```

Response:

```json
{
  "ids": ["<generated-uuid>"]
}
```
```bash
# Get a run by ID (replace <run-id> with an actual UUID)
curl -X GET http://localhost:8000/runs/<run-id>
```

Response:

```json
{
  "id": "<run-id>",
  "trace_id": "944ce838-b5c5-4628-8f23-089fbda8b9e3",
  "name": "Weather Query",
  "inputs": {"query": "What is the weather in San Francisco?"},
  "outputs": {"response": "It is currently 65°F and sunny in San Francisco."},
  "metadata": {"model": "gpt-4", "temperature": 0.7, "tokens": 42}
}
```
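The same flow from Python, as a short sketch (the `httpx` dependency here is an assumption; any HTTP client works):

```python
# Hypothetical client-side example; not part of this project.
import httpx

run = {
    "trace_id": "944ce838-b5c5-4628-8f23-089fbda8b9e3",
    "name": "Weather Query",
    "inputs": {"query": "What is the weather in San Francisco?"},
    "outputs": {"response": "It is currently 65°F and sunny in San Francisco."},
    "metadata": {"model": "gpt-4", "temperature": 0.7, "tokens": 42},
}

with httpx.Client(base_url="http://localhost:8000") as client:
    # POST takes a JSON array, so wrap the single run in a list.
    created = client.post("/runs", json=[run])
    created.raise_for_status()
    run_id = created.json()["ids"][0]

    # Fetch the run back by its server-generated UUID.
    fetched = client.get(f"/runs/{run_id}")
    fetched.raise_for_status()
    print(fetched.json()["name"])  # "Weather Query"
```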
This project uses Poetry for dependency management.

```bash
# Install dependencies
poetry install

# Activate the virtual environment
poetry shell
```

If `poetry install` fails while building PyO3-based native dependencies on a newer Python version, setting `export PYO3_USE_ABI3_FORWARD_COMPATIBILITY=1` before installing may help.

This project uses PostgreSQL for data storage and MinIO for object storage. Docker Compose is used to manage these services.
```bash
# Start database services (PostgreSQL and MinIO)
make db-up

# Stop database services
make db-down

# Run database migrations
make db-migrate

# Revert the most recent migration
make db-downgrade
```
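Because MinIO is S3-compatible, the server can talk to it with a standard S3 client. As a rough illustration (the library choice, bucket name, endpoint, and credentials below are assumptions, using MinIO's common local defaults):

```python
# Hypothetical sketch of writing a run payload to MinIO via boto3.
import json

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",  # assumed local MinIO endpoint
    aws_access_key_id="minioadmin",        # MinIO's default local credentials
    aws_secret_access_key="minioadmin",
)

s3.put_object(
    Bucket="runs",                         # assumed bucket name
    Key="944ce838-b5c5-4628-8f23-089fbda8b9e3.json",
    Body=json.dumps({"name": "Weather Query"}).encode(),
)
```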
Once the database services are running, start the server:

```bash
# Start the server with migrations applied
make server

# Or manually start the server
poetry run uvicorn ls_py_handler.main:app --reload
```

This project uses Ruff for linting and formatting Python code.
```bash
# Format code
make format

# Check code for linting issues
make lint

# Automatically fix linting issues when possible
make lint-fix
```

Once the server is running, you can access the auto-generated API documentation at:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
The project uses pytest for testing and includes a dedicated test environment configuration.
```bash
# Run tests (this automatically sets up the test environment)
make test
```

The test command will:
- Set up a clean test environment (drop and recreate the test database and S3 bucket)
- Run migrations on the test database
- Execute all tests with the test environment settings
The test environment uses:
- A separate database (`postgres_test`)
- A separate S3 bucket (`runs-test`)
- Environment variables from `.env.test`
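As an illustration, a test in this setup might look like the sketch below (the test name and assertions are hypothetical; the exact status codes depend on the handlers):

```python
# Hypothetical test sketch; assumes RUN_HANDLER_ENV=test is set so the
# app picks up .env.test, and that the app object lives in ls_py_handler.main.
from fastapi.testclient import TestClient

from ls_py_handler.main import app


def test_create_and_fetch_run():
    with TestClient(app) as client:
        payload = [{
            "trace_id": "944ce838-b5c5-4628-8f23-089fbda8b9e3",
            "name": "Weather Query",
            "inputs": {}, "outputs": {}, "metadata": {},
        }]
        created = client.post("/runs", json=payload)
        assert created.status_code in (200, 201)  # exact code depends on the handler

        run_id = created.json()["ids"][0]
        fetched = client.get(f"/runs/{run_id}")
        assert fetched.status_code == 200
        assert fetched.json()["name"] == "Weather Query"
```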
You can manually set up the test environment without running tests:
```bash
# Just set up the test environment
make test-setup
```

The application uses environment-specific configuration:
- Development: uses the default `.env` file
- Testing: uses the `.env.test` file when `RUN_HANDLER_ENV=test` is set
This allows tests to run with isolated resources without affecting your development environment.
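One common way to implement this kind of switch is sketched below with `pydantic-settings`; the project's actual settings module, field names, and defaults may differ:

```python
# Hypothetical settings sketch; field names and defaults are illustrative.
import os

from pydantic_settings import BaseSettings, SettingsConfigDict

# Pick the env file based on RUN_HANDLER_ENV, as described above.
ENV_FILE = ".env.test" if os.getenv("RUN_HANDLER_ENV") == "test" else ".env"


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=ENV_FILE)

    database_url: str = "postgresql://localhost:5432/postgres"
    s3_bucket: str = "runs"


settings = Settings()
```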
The project includes tools for performance benchmarking and memory profiling to help identify bottlenecks and optimize resource usage.
Performance benchmarks measure execution time of key operations using pytest-benchmark. The benchmarks are designed to isolate the API request handling time from data preparation and JSON serialization.
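In that spirit, a benchmark might look roughly like the sketch below, where the payload is built and serialized before the timed call so only the request itself is measured (the test name and payload shape are assumptions, and a recent httpx-based Starlette `TestClient` is assumed for the `content=` argument):

```python
# Hypothetical pytest-benchmark sketch mirroring the "500 runs x 10KB" scenario.
import json

from fastapi.testclient import TestClient

from ls_py_handler.main import app


def test_ingest_500_runs_10kb(benchmark):
    client = TestClient(app)

    # Data preparation and JSON serialization happen here, outside the
    # benchmarked function, so the timing isolates API request handling.
    payload = [
        {
            "trace_id": "944ce838-b5c5-4628-8f23-089fbda8b9e3",
            "name": f"run-{i}",
            "inputs": {"data": "x" * 10_240},   # ~10KB per field
            "outputs": {"data": "x" * 10_240},
            "metadata": {},
        }
        for i in range(500)
    ]
    body = json.dumps(payload).encode()

    response = benchmark(
        lambda: client.post(
            "/runs", content=body, headers={"Content-Type": "application/json"}
        )
    )
    assert response.status_code in (200, 201)
```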
```bash
# Run performance benchmarks
make benchmark
```

This will:
- Set up a clean test environment
- Run the benchmark tests
- Save the results to the `.benchmarks` directory for comparison with future runs
Example benchmark scenarios:
- Processing 500 runs with 10KB of data per field
- Processing 50 runs with 100KB of data per field
Memory profiling with pytest-memray helps identify where memory is allocated and retained.
```bash
# Run memory profiling
make memprofile
```

Memory profiling results will show:
- Peak memory usage for different operations
- Memory allocation patterns
- Memory-intensive functions and call paths
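pytest-memray can also enforce a per-test memory budget via its `limit_memory` marker. A sketch (the threshold, test name, and payload are arbitrary, and running it requires pytest's `--memray` flag):

```python
# Hypothetical sketch using pytest-memray's limit_memory marker.
import pytest
from fastapi.testclient import TestClient

from ls_py_handler.main import app


@pytest.mark.limit_memory("100 MB")  # fail if peak allocations exceed this
def test_ingest_memory_budget():
    client = TestClient(app)
    payload = [{
        "trace_id": "944ce838-b5c5-4628-8f23-089fbda8b9e3",
        "name": f"run-{i}",
        "inputs": {"data": "x" * 102_400},  # ~100KB per field
        "outputs": {}, "metadata": {},
    } for i in range(50)]  # 50 runs, as in the benchmark scenarios
    assert client.post("/runs", json=payload).status_code in (200, 201)
```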