A web application that provides a Claude Code agent interface with integrated Databricks tools. Users interact with Claude through a chat interface, and the agent can execute SQL queries, manage pipelines, upload files, and more on their Databricks workspace.
✅ Event Loop Fix Implemented

We've implemented a workaround for `claude-agent-sdk` issue #462, which was preventing the agent from executing Databricks tools in FastAPI contexts.

Solution: The agent now runs in a fresh event loop in a separate thread, with `contextvars` properly copied to preserve Databricks authentication. See EVENT_LOOP_FIX.md for details.

Status: ✅ Fully functional - the agent can execute all Databricks tools successfully
┌─────────────────────────────────────────────────────────────────────────────┐
│ Web Application │
├─────────────────────────────────────────────────────────────────────────────┤
│ React Frontend (client/) FastAPI Backend (server/) │
│ ┌─────────────────────┐ ┌─────────────────────────────────┐ │
│ │ Chat UI │◄──────────►│ /api/invoke_agent │ │
│ │ Project Selector │ SSE │ /api/projects │ │
│ │ Conversation List │ │ /api/conversations │ │
│ └─────────────────────┘ └─────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ Claude Code Session │
├─────────────────────────────────────────────────────────────────────────────┤
│ Each user message spawns a Claude Code agent session via claude-agent-sdk │
│ │
│ Built-in Tools: MCP Tools (Databricks): Skills: │
│ ┌──────────────────┐ ┌─────────────────────────┐ ┌───────────┐ │
│ │ Read, Write, Edit│ │ execute_sql │ │ sdp │ │
│ │ Glob, Grep, Skill│ │ create_or_update_pipeline │ dabs │ │
│ └──────────────────┘ │ upload_folder │ │ sdk │ │
│ │ execute_code │ │ ... │ │
│ │ ... │ └───────────┘ │
│ └─────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────┐ │
│ │ databricks-mcp-server │ │
│ │ (in-process SDK tools) │ │
│ └─────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ Databricks Workspace │
├─────────────────────────────────────────────────────────────────────────────┤
│ SQL Warehouses │ Clusters │ Unity Catalog │ Workspace │
└─────────────────────────────────────────────────────────────────────────────┘
When a user sends a message, the backend creates a Claude Code session using the claude-agent-sdk:
```python
from claude_agent_sdk import ClaudeAgentOptions, query

options = ClaudeAgentOptions(
    cwd=str(project_dir),                 # Project working directory
    allowed_tools=allowed_tools,          # Built-in + MCP tools
    permission_mode='bypassPermissions',  # Auto-accept all tools including MCP
    resume=session_id,                    # Resume previous conversation
    mcp_servers=mcp_servers,              # Databricks MCP server config
    system_prompt=system_prompt,          # Databricks-focused prompt
    setting_sources=['user', 'project'],  # Load skills from .claude/skills
)

async for msg in query(prompt=message, options=options):
    yield msg  # Stream to frontend
```

Key features:
- Session Resumption: Each conversation stores a `claude_session_id` for context continuity
- Streaming: All events (text, thinking, tool_use, tool_result) stream to the frontend in real-time
- Project Isolation: Each project has its own working directory with sandboxed file access
The app supports multi-user authentication using per-request credentials:
┌─────────────────────────────────────────────────────────────────────────────┐
│ Authentication Flow │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Production (Databricks Apps) Development (Local) │
│ ┌──────────────────────────┐ ┌──────────────────────────┐ │
│ │ Request Headers: │ │ Environment Variables: │ │
│ │ X-Forwarded-User │ │ DATABRICKS_HOST │ │
│ │ X-Forwarded-Access-Token │ │ DATABRICKS_TOKEN │ │
│ └────────────┬─────────────┘ └────────────┬─────────────┘ │
│ │ │ │
│ └──────────────┬─────────────────────┘ │
│ ▼ │
│ ┌──────────────────────────┐ │
│ │ set_databricks_auth() │ (contextvars) │
│ │ - host │ │
│ │ - token │ │
│ └────────────┬─────────────┘ │
│ ▼ │
│ ┌──────────────────────────┐ │
│ │ get_workspace_client() │ (used by all tools) │
│ │ - Returns client with │ │
│ │ context credentials │ │
│ └──────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
How it works:

1. Request arrives - The FastAPI backend extracts credentials:
   - Production: `X-Forwarded-User` and `X-Forwarded-Access-Token` headers (set by the Databricks Apps proxy)
   - Development: Falls back to the `DATABRICKS_HOST` and `DATABRICKS_TOKEN` env vars

2. Auth context set - Before invoking the agent:

   ```python
   from databricks_tools_core.auth import set_databricks_auth, clear_databricks_auth

   set_databricks_auth(workspace_url, user_token)
   try:
       # All tool calls use this user's credentials
       async for event in stream_agent_response(...):
           yield event
   finally:
       clear_databricks_auth()
   ```

3. Tools use context - All Databricks tools call `get_workspace_client()`, which:
   - First checks contextvars for per-request credentials
   - Falls back to environment variables if no context is set
This ensures each user's requests use their own Databricks credentials, enabling proper access control and audit logging.
Databricks tools are loaded in-process using the Claude Agent SDK's MCP server feature:
```python
from claude_agent_sdk import tool, create_sdk_mcp_server

# Tools are dynamically loaded from databricks-mcp-server
server = create_sdk_mcp_server(name='databricks', tools=sdk_tools)

options = ClaudeAgentOptions(
    mcp_servers={'databricks': server},
    allowed_tools=['mcp__databricks__execute_sql', ...],
)
```

Tools are exposed as `mcp__databricks__<tool_name>` and include:

- SQL execution (`execute_sql`, `execute_sql_multi`)
- Warehouse management (`list_warehouses`, `get_best_warehouse`)
- Cluster execution (`execute_code`)
- Pipeline management (`create_or_update_pipeline`, `start_update`, etc.)
- File operations (`upload_to_workspace`)
Skills provide specialized guidance for Databricks development tasks. They are markdown files with instructions and examples that Claude can load on demand.
Skill loading flow:

1. On startup, skills are copied from `../databricks-skills/` to `./skills/`
2. When a project is created, skills are copied to `project/.claude/skills/`
3. The agent can invoke skills using the `Skill` tool: `skill: "sdp"`
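The copy step in that flow can be sketched with the standard library. This is an illustrative sketch, not the app's `skills_manager` source; directory names follow the layout described above:

```python
import shutil
from pathlib import Path

def copy_skills(source_dir: Path, dest_dir: Path, enabled: set[str]) -> list[str]:
    """Copy each enabled skill directory (one containing SKILL.md) into dest_dir."""
    copied = []
    dest_dir.mkdir(parents=True, exist_ok=True)
    for skill in sorted(source_dir.iterdir()):
        if skill.is_dir() and skill.name in enabled and (skill / "SKILL.md").exists():
            # dirs_exist_ok lets a reload overwrite a previously copied skill
            shutil.copytree(skill, dest_dir / skill.name, dirs_exist_ok=True)
            copied.append(skill.name)
    return copied
```

Skills without a `SKILL.md`, or not listed as enabled, are simply skipped.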
Skills include:
- databricks-bundles: DABs configuration
- databricks-app-apx: Full-stack apps with APX framework (FastAPI + React)
- databricks-app-python: Python apps with Dash, Streamlit, Flask
- databricks-python-sdk: Python SDK patterns
- databricks-mlflow-evaluation: MLflow evaluation and trace analysis
- databricks-spark-declarative-pipelines: Spark Declarative Pipelines (SDP) development
- databricks-synthetic-data-gen: Creating test datasets
Projects are stored in the local filesystem with automatic backup to PostgreSQL:
projects/
<project-uuid>/
.claude/
skills/ # Copied skills for this project
src/ # User's code files
...
Backup system:
- After each agent interaction, the project is marked for backup
- A background worker runs every 10 minutes
- Projects are zipped and stored in PostgreSQL (Lakebase)
- On access, missing projects are restored from backup
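The zip-and-store step of the backup worker might look like this. A sketch only; the real `backup_manager` and its PostgreSQL table schema are not shown here:

```python
import io
import zipfile
from pathlib import Path

def zip_project(project_dir: Path) -> bytes:
    """Zip a project directory into an in-memory archive suitable for DB storage."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(project_dir.rglob("*")):
            if path.is_file():
                # Store paths relative to the project root so restores are portable
                zf.write(path, path.relative_to(project_dir))
    return buf.getvalue()

def restore_project(blob: bytes, project_dir: Path) -> None:
    """Unzip a stored backup back into the project directory."""
    with zipfile.ZipFile(io.BytesIO(blob)) as zf:
        zf.extractall(project_dir)
```

Keeping the archive in memory avoids temp-file cleanup in the background worker; the resulting `bytes` can go straight into a `BYTEA` column.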
- Python 3.11+
- Node.js 18+
- uv package manager
- Databricks workspace with:
- SQL warehouse (for SQL queries)
- Cluster (for Python/PySpark execution)
- Unity Catalog enabled (recommended)
- PostgreSQL database (Lakebase) for project persistence — autoscale or provisioned
From the repository root:
```bash
cd databricks-builder-app
./scripts/setup.sh
```

This will:

- Verify prerequisites (uv, Node.js, npm)
- Create a `.env.local` file from `.env.example` (if one doesn't already exist)
- Install backend Python dependencies via `uv sync`
- Install sibling packages (`databricks-tools-core`, `databricks-mcp-server`)
- Install frontend Node.js dependencies
You must do this before running the app. The setup script creates a `.env.local` file from `.env.example`, but all values are placeholders. Open `.env.local` and fill in your actual values.

The `.env.local` file is gitignored and will never be committed. At a minimum, you need to set these:
```bash
# Required: Your Databricks workspace
DATABRICKS_HOST=https://your-workspace.cloud.databricks.com
DATABRICKS_TOKEN=dapi...

# Required: Database for project persistence (pick ONE option)
# Option A — Autoscale Lakebase (recommended, scales to zero):
LAKEBASE_ENDPOINT=projects/<project-name>/branches/production/endpoints/primary
LAKEBASE_DATABASE_NAME=databricks_postgres

# Option B — Provisioned Lakebase (fixed capacity):
# LAKEBASE_INSTANCE_NAME=your-lakebase-instance
# LAKEBASE_DATABASE_NAME=databricks_postgres

# Option C — Static connection URL (any type, simplest for local dev):
# LAKEBASE_PG_URL=postgresql://user:password@host:5432/database?sslmode=require
```

The app auto-detects the mode based on which variable is set:

- `LAKEBASE_ENDPOINT` → autoscale mode (`client.postgres` API, host looked up automatically)
- `LAKEBASE_INSTANCE_NAME` → provisioned mode (`client.database` API)
- `LAKEBASE_PG_URL` → static URL mode (no OAuth token refresh)
See .env.example for the full list of available settings including LLM provider, skills configuration, and MLflow tracing. The app loads .env.local (not .env) at startup.
Getting your Databricks token:
- Go to your Databricks workspace
- Click your username → User Settings
- Go to Developer → Access Tokens → Generate New Token
- Copy the token value
```bash
./scripts/start_dev.sh
```

This starts both the backend and frontend in one terminal.

You can also start them separately if you prefer:

```bash
# Terminal 1 — Backend
uvicorn server.app:app --reload --port 8000 --reload-dir server

# Terminal 2 — Frontend
cd client && npm run dev
```

- Frontend: http://localhost:3000
- Backend API: http://localhost:8000
- API Docs: http://localhost:8000/docs
If you're routing Claude API calls through Databricks Model Serving instead of directly to Anthropic, create .claude/settings.json in the repository root (not in the app directory):
```json
{
  "env": {
    "ANTHROPIC_MODEL": "databricks-claude-sonnet-4-5",
    "ANTHROPIC_BASE_URL": "https://your-workspace.cloud.databricks.com/serving-endpoints/anthropic",
    "ANTHROPIC_AUTH_TOKEN": "dapi...",
    "ANTHROPIC_DEFAULT_OPUS_MODEL": "databricks-claude-opus-4-5",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "databricks-claude-sonnet-4-5"
  }
}
```

Notes:

- `ANTHROPIC_AUTH_TOKEN` should be a Databricks PAT, not an Anthropic API key
- `ANTHROPIC_BASE_URL` should point to your Databricks Model Serving endpoint
- If this file doesn't exist, the app uses your `ANTHROPIC_API_KEY` from `.env.local`
The app supports two authentication modes:

1. Local Development (Environment Variables)
   - Uses `DATABRICKS_HOST` and `DATABRICKS_TOKEN` from `.env.local`
   - All users share the same credentials
   - Good for local development and testing

2. Production (Request Headers)
   - Uses `X-Forwarded-User` and `X-Forwarded-Access-Token` headers
   - Set automatically by the Databricks Apps proxy
   - Each user has their own credentials
   - Proper multi-user isolation
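The choice between the two modes can be sketched as a pure function over the request headers. This is illustrative only (the real logic lives in `server/services/user.py`); the `"local-dev"` placeholder user and the assumption that the workspace host always comes from `DATABRICKS_HOST` are mine:

```python
import os

def resolve_user_auth(headers: dict[str, str]) -> tuple[str, str, str]:
    """Return (user, host, token) from proxy headers, else env-var fallback."""
    user = headers.get("X-Forwarded-User")
    token = headers.get("X-Forwarded-Access-Token")
    if user and token:
        # Production: Databricks Apps proxy injects per-user credentials
        return user, os.environ["DATABRICKS_HOST"], token
    # Development: shared credentials from .env.local
    return "local-dev", os.environ["DATABRICKS_HOST"], os.environ["DATABRICKS_TOKEN"]
```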
Skills are loaded from `../databricks-skills/` and filtered by the `ENABLED_SKILLS` environment variable:

- `databricks-python-sdk`: Patterns for using the Databricks Python SDK
- `databricks-spark-declarative-pipelines`: SDP/DLT pipeline development
- `databricks-synthetic-data-gen`: Creating test datasets
- `databricks-app-apx`: Full-stack apps with React (APX framework)
- `databricks-app-python`: Python apps with Dash, Streamlit, Flask
Adding custom skills:

1. Create a new directory in `../databricks-skills/`
2. Add a `SKILL.md` file with frontmatter:

   ```markdown
   ---
   name: my-skill
   description: "Description of the skill"
   ---

   # Skill content here
   ```

3. Add the skill name to `ENABLED_SKILLS` in `.env.local`
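For illustration, the frontmatter fields can be read with a few lines of stdlib Python. A sketch only; the app's actual loader may use a proper YAML parser:

```python
def parse_skill_frontmatter(text: str) -> dict[str, str]:
    """Parse simple `key: value` pairs from a SKILL.md frontmatter block."""
    meta: dict[str, str] = {}
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return meta  # no frontmatter block at the top
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip().strip('"')
    return meta
```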
The app uses PostgreSQL (Lakebase) for:
- Project metadata
- Conversation history
- Message storage
- Project backups (zipped project files)
Migrations:
```bash
# Run migrations (done automatically on startup)
alembic upgrade head

# Create a new migration
alembic revision --autogenerate -m "description"
```

This was a known issue with `claude-agent-sdk` in FastAPI contexts. We've implemented a fix:
- ✅ Agent runs in a fresh event loop in a separate thread
- ✅ Context variables (Databricks auth) are properly propagated
- ✅ All MCP tools work correctly
See EVENT_LOOP_FIX.md for technical details.
Check:

- The `ENABLED_SKILLS` environment variable in `.env.local`
- Skill names match directory names in `../databricks-skills/`
- Each skill has a `SKILL.md` file with proper frontmatter
- The logs for `Copied X skills to ./skills`
Check:

- `DATABRICKS_HOST` is correct (no trailing slash)
- `DATABRICKS_TOKEN` is valid and not expired
- The token has proper permissions (cluster access, SQL warehouse access, etc.)
- If using Databricks Model Serving, check the `.claude/settings.json` configuration
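A quick sanity check for the first two items (an illustrative helper, not part of the app; the `dapi` prefix check assumes a classic PAT as shown in `.env.example`):

```python
def check_databricks_env(host: str, token: str) -> list[str]:
    """Return a list of likely configuration problems (empty list means OK)."""
    problems = []
    if not host.startswith("https://"):
        problems.append("DATABRICKS_HOST should start with https://")
    if host.endswith("/"):
        problems.append("DATABRICKS_HOST should not have a trailing slash")
    if not token.startswith("dapi"):
        problems.append("DATABRICKS_TOKEN does not look like a PAT (dapi...)")
    return problems
```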
```bash
# Kill processes on ports 8000 and 3000
lsof -ti:8000 | xargs kill -9
lsof -ti:3000 | xargs kill -9
```

```bash
# Build frontend
cd client && npm run build && cd ..

# Run with uvicorn
uvicorn server.app:app --host 0.0.0.0 --port 8000
```

databricks-builder-app/
├── server/ # FastAPI backend
│ ├── app.py # Main FastAPI app
│ ├── db/ # Database models and migrations
│ │ ├── models.py # SQLAlchemy models
│ │ └── database.py # Session management
│ ├── routers/ # API endpoints
│ │ ├── agent.py # /api/agent/* (invoke, etc.)
│ │ ├── projects.py # /api/projects/*
│ │ └── conversations.py
│ └── services/ # Business logic
│ ├── agent.py # Claude Code session management
│ ├── databricks_tools.py # MCP tool loading from SDK
│ ├── user.py # User auth (headers/env vars)
│ ├── skills_manager.py
│ ├── backup_manager.py
│ └── system_prompt.py
├── client/ # React frontend
│ ├── src/
│ │ ├── pages/ # Main pages (ProjectPage, etc.)
│ │ └── components/ # UI components
│ └── package.json
├── alembic/ # Database migrations
├── scripts/ # Utility scripts
│ └── start_dev.sh # Development startup
├── skills/ # Cached skills (gitignored)
├── projects/ # Project working directories (gitignored)
├── pyproject.toml # Python dependencies
└── .env.example # Environment template
| Endpoint | Method | Description |
|---|---|---|
| `/api/me` | GET | Get current user info |
| `/api/health` | GET | Health check |
| `/api/system_prompt` | GET | Preview the system prompt |
| `/api/projects` | GET | List all projects |
| `/api/projects` | POST | Create new project |
| `/api/projects/{id}` | GET | Get project details |
| `/api/projects/{id}` | PATCH | Update project name |
| `/api/projects/{id}` | DELETE | Delete project |
| `/api/projects/{id}/conversations` | GET | List project conversations |
| `/api/projects/{id}/conversations` | POST | Create new conversation |
| `/api/projects/{id}/conversations/{cid}` | GET | Get conversation with messages |
| `/api/projects/{id}/files` | GET | List files in project directory |
| `/api/invoke_agent` | POST | Start agent execution (returns execution_id) |
| `/api/stream_progress/{execution_id}` | POST | SSE stream of agent events |
| `/api/stop_stream/{execution_id}` | POST | Cancel an active execution |
| `/api/projects/{id}/skills/available` | GET | List skills with enabled status |
| `/api/projects/{id}/skills/enabled` | PUT | Update enabled skills for project |
| `/api/projects/{id}/skills/reload` | POST | Reload skills from source |
| `/api/projects/{id}/skills/tree` | GET | Get skills file tree |
| `/api/projects/{id}/skills/file` | GET | Get skill file content |
| `/api/clusters` | GET | List available Databricks clusters |
| `/api/warehouses` | GET | List available SQL warehouses |
| `/api/mlflow/status` | GET | Get MLflow tracing status |
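A client consuming `/api/stream_progress/{execution_id}` needs to split the SSE body into individual events. A minimal parser for the `data:` lines might look like this (a sketch; the exact event payload shape is defined by the backend, and the fields below are illustrative):

```python
import json

def parse_sse_events(stream_text: str) -> list[dict]:
    """Split an SSE response body into JSON payloads, one per `data:` event."""
    events = []
    # SSE events are separated by a blank line
    for block in stream_text.split("\n\n"):
        for line in block.splitlines():
            if line.startswith("data:"):
                events.append(json.loads(line[len("data:"):].strip()))
    return events
```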
This section covers deploying the Builder App to Databricks Apps platform for production use.
Before deploying, ensure you have:
- Databricks CLI installed and authenticated
- Node.js 18+ for building the frontend
- A Lakebase instance in your Databricks workspace (for database persistence)
- Access to the full repository (not just this directory) since the app depends on sibling packages
```bash
# 1. Authenticate with Databricks CLI
databricks auth login --host https://your-workspace.cloud.databricks.com

# 2. Create the app (first time only)
databricks apps create my-builder-app

# 3. Configure app.yaml (copy and edit the example)
cp app.yaml.example app.yaml
# Edit app.yaml — set LAKEBASE_ENDPOINT (autoscale) or LAKEBASE_INSTANCE_NAME (provisioned)

# 4. (Provisioned Lakebase only) Add Lakebase as an app resource
# Skip this step if using autoscale — it connects via OAuth directly.
databricks apps add-resource my-builder-app \
  --resource-type database \
  --resource-name lakebase \
  --database-instance <your-lakebase-instance-name>

# 5. Deploy
./scripts/deploy.sh my-builder-app

# 6. Grant database permissions to the app's service principal (see Section 7)
```

```bash
# Install the Databricks CLI (v0.205+ is required for `databricks auth login`)
curl -fsSL https://raw.githubusercontent.com/databricks/setup-cli/main/install.sh | sh

# Authenticate (interactive browser login)
databricks auth login --host https://your-workspace.cloud.databricks.com

# Verify authentication
databricks auth describe
```

If you have multiple profiles, set the profile before deploying:

```bash
export DATABRICKS_CONFIG_PROFILE=your-profile-name
```

```bash
# Create a new app
databricks apps create my-builder-app

# Verify it was created
databricks apps get my-builder-app
```

The app requires a PostgreSQL database (Lakebase) for storing projects, conversations, and messages.
Autoscale Lakebase (recommended — scales to zero when idle):

1. Go to your Databricks workspace → Catalog → Lakebase
2. Click Create → select Autoscale
3. Note the endpoint resource name (e.g., `projects/my-app/branches/production/endpoints/primary`)
4. Set in `app.yaml`: `LAKEBASE_ENDPOINT=projects/my-app/branches/production/endpoints/primary`

Provisioned Lakebase (fixed capacity):

1. Go to Catalog → Lakebase → Create → select Provisioned
2. Note the instance name (e.g., `my-lakebase-instance`)
3. Set in `app.yaml`: `LAKEBASE_INSTANCE_NAME=my-lakebase-instance`
Autoscale Lakebase: Skip this step. Autoscale connects via OAuth using LAKEBASE_ENDPOINT — no app resource needed.
Provisioned Lakebase: Add the instance as an app resource:

```bash
databricks apps add-resource my-builder-app \
  --resource-type database \
  --resource-name lakebase \
  --database-instance <your-lakebase-instance-name>
```

This automatically configures the database connection environment variables (`PGHOST`, `PGPORT`, `PGUSER`, `PGPASSWORD`, `PGDATABASE`).
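For reference, those `PG*` variables assemble into a standard connection URL like so (a sketch; the app may build its connection differently, and the URL-quoting of the password is my addition):

```python
from urllib.parse import quote

def pg_url_from_env(env: dict[str, str]) -> str:
    """Build a postgresql:// URL from the PG* variables set by the app resource."""
    user = quote(env["PGUSER"], safe="")
    password = quote(env["PGPASSWORD"], safe="")  # passwords may contain @ : /
    return (
        f"postgresql://{user}:{password}"
        f"@{env['PGHOST']}:{env['PGPORT']}/{env['PGDATABASE']}?sslmode=require"
    )
```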
Copy the example configuration and customize it:

```bash
cp app.yaml.example app.yaml
```

Edit `app.yaml` with your settings:
```yaml
command:
  - "uvicorn"
  - "server.app:app"
  - "--host"
  - "0.0.0.0"
  - "--port"
  - "$DATABRICKS_APP_PORT"

env:
  # Required: Lakebase database (pick ONE option)
  # Option A — Autoscale Lakebase (recommended):
  - name: LAKEBASE_ENDPOINT
    value: "projects/<project-name>/branches/production/endpoints/primary"
  - name: LAKEBASE_DATABASE_NAME
    value: "databricks_postgres"

  # Option B — Provisioned Lakebase:
  # - name: LAKEBASE_INSTANCE_NAME
  #   value: "<your-lakebase-instance-name>"
  # - name: LAKEBASE_DATABASE_NAME
  #   value: "databricks_postgres"

  # Skills to enable (comma-separated)
  - name: ENABLED_SKILLS
    value: "databricks-agent-bricks,databricks-python-sdk,databricks-spark-declarative-pipelines"

  # MLflow tracing (optional)
  - name: MLFLOW_TRACKING_URI
    value: "databricks"
  # - name: MLFLOW_EXPERIMENT_NAME
  #   value: "/Users/your-email@company.com/claude-code-traces"

  # Other settings
  - name: ENV
    value: "production"
  - name: PROJECTS_BASE_DIR
    value: "./projects"
```

Run the deploy script from the databricks-builder-app directory:

```bash
./scripts/deploy.sh my-builder-app
```

The deploy script will:

- Build the React frontend
- Package the server code
- Bundle sibling packages (`databricks-tools-core`, `databricks-mcp-server`)
- Copy skills from `databricks-skills/`
- Upload everything to your Databricks workspace
- Deploy the app
Skip frontend build (if already built):

```bash
./scripts/deploy.sh my-builder-app --skip-build
```

After the first deployment, the app's service principal needs two things:

1. A Lakebase OAuth role (so it can authenticate via OAuth tokens)
2. PostgreSQL grants on the `builder_app` schema (so it can create/read/write tables)

```bash
SP_CLIENT_ID=$(databricks apps get my-builder-app --output json | jq -r '.service_principal_client_id')
echo $SP_CLIENT_ID
```

Important: Do NOT use PostgreSQL `CREATE ROLE` directly. Lakebase Autoscaling requires roles to be created through the Databricks API so the OAuth authentication layer recognizes them.
```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service.postgres import Role, RoleRoleSpec, RoleAuthMethod, RoleIdentityType

w = WorkspaceClient()

# Replace with your branch path and SP client ID
branch = "projects/<project-id>/branches/<branch-id>"
sp_client_id = "<sp-client-id>"

w.postgres.create_role(
    parent=branch,
    role=Role(
        spec=RoleRoleSpec(
            postgres_role=sp_client_id,
            auth_method=RoleAuthMethod.LAKEBASE_OAUTH_V1,
            identity_type=RoleIdentityType.SERVICE_PRINCIPAL,
        )
    ),
).wait()
```

Or via CLI:

```bash
databricks postgres create-role \
  "projects/<project-id>/branches/<branch-id>" \
  --json '{
    "spec": {
      "postgres_role": "<sp-client-id>",
      "auth_method": "LAKEBASE_OAUTH_V1",
      "identity_type": "SERVICE_PRINCIPAL"
    }
  }'
```

Provisioned Lakebase: This step is not needed — adding the instance as an app resource (Step 4) automatically configures authentication.
Connect to your Lakebase database as your own user (via psql or a notebook) and run:

```sql
-- Replace <sp-client-id> with the service_principal_client_id

-- 1. Allow the SP to create the builder_app schema
GRANT CREATE ON DATABASE databricks_postgres TO "<sp-client-id>";

-- 2. Create the schema and grant full access
CREATE SCHEMA IF NOT EXISTS builder_app;
GRANT USAGE ON SCHEMA builder_app TO "<sp-client-id>";
GRANT ALL PRIVILEGES ON SCHEMA builder_app TO "<sp-client-id>";

-- 3. Grant access to any existing tables/sequences (needed if you ran migrations locally)
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA builder_app TO "<sp-client-id>";
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA builder_app TO "<sp-client-id>";

-- 4. Ensure the SP has access to future tables/sequences created by other users
ALTER DEFAULT PRIVILEGES IN SCHEMA builder_app
  GRANT ALL ON TABLES TO "<sp-client-id>";
ALTER DEFAULT PRIVILEGES IN SCHEMA builder_app
  GRANT ALL ON SEQUENCES TO "<sp-client-id>";
```

After granting permissions, redeploy the app so it can run migrations with the new role.
After successful deployment, the script will display your app URL:
App URL: https://my-builder-app-1234567890.aws.databricksapps.com
Your Databricks CLI authentication may be invalid or using the wrong profile:

```bash
# Check available profiles
databricks auth profiles

# Use a specific profile
export DATABRICKS_CONFIG_PROFILE=your-valid-profile

# Re-authenticate if needed
databricks auth login --host https://your-workspace.cloud.databricks.com
```

The frontend build is missing. The deploy script should build it automatically, but you can build it manually:
```bash
cd client
npm install
npm run build
cd ..
```

Skills are copied from the sibling `databricks-skills/` directory. Ensure:

- You're running the deploy script from the full repository (not just this directory)
- The skill name in `ENABLED_SKILLS` matches a directory in `databricks-skills/`
- The skill directory contains a `SKILL.md` file
See Section 7: Grant Database Permissions for the complete setup.
Common causes:
| Error | Cause | Fix |
|---|---|---|
| `password authentication failed` | Lakebase OAuth role missing or created via SQL instead of the API | Create the role via `w.postgres.create_role()` with `LAKEBASE_OAUTH_V1` auth (Step 7b) |
| `permission denied for table` | SP lacks PostgreSQL grants on schema/tables | Run the GRANT statements (Step 7c) |
| `schema "builder_app" does not exist` | SP lacks `CREATE` on the database | `GRANT CREATE ON DATABASE databricks_postgres TO "<sp-client-id>"` |
| `relation does not exist` | Migrations haven't run | Redeploy the app, or run `alembic upgrade head` locally |

Autoscale Lakebase pitfall: Do NOT use `CREATE ROLE ... LOGIN` in PostgreSQL directly. Lakebase Autoscaling requires roles to be created through the Databricks API so that OAuth token authentication works. Manually created roles get `NO_LOGIN` auth and will fail with "password authentication failed".
Check the app logs in Databricks:

```bash
databricks apps logs my-builder-app
```

Common causes:

- Frontend files not properly deployed (check that `client/out` exists in staging)
- Database connection issues (verify the Lakebase resource is added)
- Python import errors (check logs for traceback)
```bash
# Full redeploy (rebuilds frontend)
./scripts/deploy.sh my-builder-app

# Quick redeploy (skip frontend build)
./scripts/deploy.sh my-builder-app --skip-build
```

The app supports MLflow tracing for Claude Code conversations. To enable it:

1. Set `MLFLOW_TRACKING_URI=databricks` in `app.yaml`
2. Optionally set `MLFLOW_EXPERIMENT_NAME` to a specific experiment path
Traces will appear in your Databricks MLflow UI and include:
- User prompts and Claude responses
- Tool usage and results
- Session metadata
See the Databricks MLflow Tracing documentation for more details.
If you want to embed the Databricks agent into your own application, see the integration example at:
scripts/_integration-example/
This provides a minimal working example with setup instructions for integrating the agent services into external frameworks.
- databricks-tools-core: Core MCP functionality and SQL operations
- databricks-mcp-server: MCP server exposing Databricks tools
- databricks-skills: Skill definitions for Databricks development