10 changes: 10 additions & 0 deletions .claude/skills/databricks-python-sdk/SKILL.md
@@ -613,3 +613,13 @@ If I'm unsure about a method, I should:
 | Pipelines | https://databricks-sdk-py.readthedocs.io/en/latest/workspace/pipelines/pipelines.html |
 | Secrets | https://databricks-sdk-py.readthedocs.io/en/latest/workspace/workspace/secrets.html |
 | DBUtils | https://databricks-sdk-py.readthedocs.io/en/latest/dbutils.html |
+
+## Related Skills
+
+- **[databricks-config](../databricks-config/SKILL.md)** - profile and authentication setup
+- **[databricks-asset-bundles](../databricks-asset-bundles/SKILL.md)** - deploying resources via DABs
+- **[databricks-jobs](../databricks-jobs/SKILL.md)** - job orchestration patterns
+- **[databricks-unity-catalog](../databricks-unity-catalog/SKILL.md)** - catalog governance
+- **[databricks-model-serving](../databricks-model-serving/SKILL.md)** - serving endpoint management
+- **[databricks-vector-search](../databricks-vector-search/SKILL.md)** - vector index operations
+- **[databricks-lakebase-provisioned](../databricks-lakebase-provisioned/SKILL.md)** - managed PostgreSQL via SDK
2 changes: 2 additions & 0 deletions .gitignore
@@ -73,3 +73,5 @@ databricks-tools-core/tests/integration/pdf/generated_pdf/
 # Python cache
 __pycache__/
 windows_info.txt
+databricks-skills/setup_uc_tools.py
+databricks-skills/setup_all_uc_tools.py
11 changes: 8 additions & 3 deletions .mcp.json
@@ -1,9 +1,14 @@
 {
   "mcpServers": {
     "databricks": {
-      "command": "${CLAUDE_PLUGIN_ROOT}/.venv/bin/python",
-      "args": ["${CLAUDE_PLUGIN_ROOT}/databricks-mcp-server/run_server.py"],
-      "defer_loading": true
+      "command": "/Users/scott.johnson/.ai-dev-kit/.venv/bin/python",
+      "args": [
+        "/Users/scott.johnson/.ai-dev-kit/repo/databricks-mcp-server/run_server.py"
+      ],
+      "defer_loading": true,
+      "env": {
+        "DATABRICKS_CONFIG_PROFILE": "dbx_shared_demo"
+      }
     }
   }
 }
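Editor's note: this hunk replaces the portable `${CLAUDE_PLUGIN_ROOT}` paths with absolute paths under one user's home directory, which will break the config for other checkouts. A sketch of an alternative (an assumption, not part of this PR) keeps the variable-based paths while still adding the new `env` block:

```json
{
  "mcpServers": {
    "databricks": {
      "command": "${CLAUDE_PLUGIN_ROOT}/.venv/bin/python",
      "args": ["${CLAUDE_PLUGIN_ROOT}/databricks-mcp-server/run_server.py"],
      "defer_loading": true,
      "env": {
        "DATABRICKS_CONFIG_PROFILE": "dbx_shared_demo"
      }
    }
  }
}
```

This assumes `${CLAUDE_PLUGIN_ROOT}` still resolves in the author's setup; if it does not, the absolute paths in the diff are the working fallback.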
4 changes: 4 additions & 0 deletions README.md
@@ -40,6 +40,10 @@ AI-Driven Development (vibe coding) on Databricks just got a whole lot better. T
 | [**Core Library**](#core-library) | Building custom integrations (LangChain, OpenAI, etc.) | `pip install` |
 | [**Skills Only**](databricks-skills/) | Provide Databricks patterns and best practices (without MCP functions) | Install skills |
 | [**MCP Tools Only**](databricks-mcp-server/) | Just executable actions (no guidance) | Register MCP server |
+| [**UI Assistant MCP App**](databricks-mcp-app/) | Databricks App bundling the MCP server to expose tools to the UI assistant | `databricks-mcp-app/` |
+
+**For AI assistants:** When working in this repo, use the **Databricks MCP** tools (e.g. `execute_sql`, `create_or_update_pipeline`, `manage_jobs`, `query_lakebase`, and the Lakebase/UC/Genie/dashboard tools) for Databricks operations instead of suggesting only CLI or SDK snippets. Skills provide patterns; MCP performs the actions.
+
 ---
 
 ## Quick Start
6 changes: 3 additions & 3 deletions databricks-builder-app/app.yaml
@@ -30,7 +30,7 @@ env:
   # =============================================================================
   # Comma-separated list of skills to enable
   - name: ENABLED_SKILLS
-    value: "databricks-asset-bundles,databricks-agent-bricks,databricks-aibi-dashboards,databricks-app-apx,databricks-app-python,databricks-config,databricks-docs,databricks-jobs,databricks-python-sdk,databricks-unity-catalog,mlflow-evaluation,spark-declarative-pipelines,synthetic-data-generation,unstructured-pdf-generation"
+    value: "databricks-agent-bricks,databricks-aibi-dashboards,databricks-app-apx,databricks-app-python,databricks-asset-bundles,databricks-config,databricks-dbsql,databricks-docs,databricks-genie,databricks-jobs,databricks-lakebase-autoscale,databricks-lakebase-provisioned,databricks-metric-views,databricks-mlflow-evaluation,databricks-model-serving,databricks-python-sdk,databricks-spark-declarative-pipelines,databricks-spark-structured-streaming,databricks-synthetic-data-generation,databricks-unity-catalog,databricks-unstructured-pdf-generation,databricks-vector-search,databricks-zerobus-ingest,spark-python-data-source"
   - name: SKILLS_ONLY_MODE
     value: "false"
 
@@ -52,7 +52,7 @@ env:
   #
   # You only need to specify the instance name for OAuth token generation:
   - name: LAKEBASE_INSTANCE_NAME
-    value: "fe-shared-demo"
+    value: "wanderbricks-lakebase"
   - name: LAKEBASE_DATABASE_NAME
     value: "databricks_postgres"
   - name: LAKEBASE_SCHEMA_NAME
@@ -114,7 +114,7 @@ env:
     value: "databricks-uc"
   # Optional: Default MLflow experiment for traces (can be overridden per-session in the UI)
   - name: MLFLOW_EXPERIMENT_NAME
-    value: "" # Set to your MLflow experiment path, e.g. "/Users/your.email@databricks.com/claude-code-traces"
+    value: "/Users/scott.johnson@databricks.com/claude-code-traces"
 
   # =============================================================================
   # Permission Configuration
14 changes: 0 additions & 14 deletions databricks-builder-app/client/package-lock.json

Some generated files are not rendered by default.

10 changes: 5 additions & 5 deletions databricks-builder-app/scripts/deploy.sh
@@ -149,7 +149,7 @@ if [ -z "$WORKSPACE_HOST" ]; then
 fi
 
 # Get current user for workspace path
-CURRENT_USER=$(databricks current-user me --output json 2>/dev/null | python3 -c "
+CURRENT_USER=$(databricks current-user me --output json --profile dbx_shared_demo 2>/dev/null | python3 -c "
 import sys, json
 data = json.load(sys.stdin)
 # Handle both formats
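The inline `python3 -c` parser is truncated by the diff view right after the `# Handle both formats` comment. Its intent can be sketched as follows; `extract_user_name` is a hypothetical helper (not in the script), and the wrapped `{"user": {...}}` shape is an assumption about the second format:

```shell
# Hypothetical sketch of the elided "handle both formats" parsing:
# the CLI may emit {"userName": ...} directly, or (assumed) wrap it
# as {"user": {"userName": ...}}. Try the bare key first, then the wrapper.
extract_user_name() {
  python3 -c '
import sys, json
data = json.load(sys.stdin)
print(data.get("userName") or data.get("user", {}).get("userName", ""))
'
}

echo '{"userName": "someone@example.com"}' | extract_user_name
```

Either way, the hardcoded `--profile dbx_shared_demo` feeding this pipeline ties the script to one named profile.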
@@ -168,7 +168,7 @@ echo ""

 # Check if app exists
 echo -e "${YELLOW}[2/6] Verifying app exists...${NC}"
-if ! databricks apps get "$APP_NAME" &> /dev/null; then
+if ! databricks apps get "$APP_NAME" --profile dbx_shared_demo &> /dev/null; then
     echo -e "${RED}Error: App '${APP_NAME}' does not exist.${NC}"
     echo -e "Create it first with: ${GREEN}databricks apps create ${APP_NAME}${NC}"
     exit 1
@@ -261,13 +261,13 @@ echo ""
 # Upload to workspace
 echo -e "${YELLOW}[5/6] Uploading to Databricks workspace...${NC}"
 echo " Target: ${WORKSPACE_PATH}"
-databricks workspace import-dir "$STAGING_DIR" "$WORKSPACE_PATH" --overwrite 2>&1 | tail -5
+databricks workspace import-dir "$STAGING_DIR" "$WORKSPACE_PATH" --overwrite --profile dbx_shared_demo 2>&1 | tail -5
 echo -e " ${GREEN}✓${NC} Upload complete"
 echo ""
 
 # Deploy the app
 echo -e "${YELLOW}[6/6] Deploying app...${NC}"
-DEPLOY_OUTPUT=$(databricks apps deploy "$APP_NAME" --source-code-path "$WORKSPACE_PATH" 2>&1)
+DEPLOY_OUTPUT=$(databricks apps deploy "$APP_NAME" --source-code-path "$WORKSPACE_PATH" --profile dbx_shared_demo 2>&1)
 echo "$DEPLOY_OUTPUT"
 
 # Check deployment status
@@ -279,7 +279,7 @@ if echo "$DEPLOY_OUTPUT" | grep -q '"state":"SUCCEEDED"'; then
     echo ""
 
     # Get app URL
-    APP_INFO=$(databricks apps get "$APP_NAME" --output json 2>/dev/null)
+    APP_INFO=$(databricks apps get "$APP_NAME" --output json --profile dbx_shared_demo 2>/dev/null)
     APP_URL=$(echo "$APP_INFO" | python3 -c "import sys, json; print(json.load(sys.stdin).get('url', 'N/A'))" 2>/dev/null || echo "N/A")
 
     echo -e " App URL: ${GREEN}${APP_URL}${NC}"
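This PR repeats `--profile dbx_shared_demo` at every `databricks` call site in deploy.sh. A sketch of a less repetitive variant (an assumption, not part of the PR) resolves the profile once from `DATABRICKS_CONFIG_PROFILE` with the same value as fallback:

```shell
# Hypothetical alternative to hardcoding the profile at each call site:
# resolve it once, honoring an already-exported DATABRICKS_CONFIG_PROFILE
# and falling back to the PR's dbx_shared_demo default.
PROFILE="${DATABRICKS_CONFIG_PROFILE:-dbx_shared_demo}"

# Each CLI call would then read, e.g.:
#   databricks apps get "$APP_NAME" --output json --profile "$PROFILE"
echo "Using profile: $PROFILE"
```

This keeps the script usable against other workspaces without editing five separate lines.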
49 changes: 49 additions & 0 deletions databricks-mcp-app/.gitignore
@@ -0,0 +1,49 @@
+# Python
+__pycache__/
+*.py[cod]
+*$py.class
+*.so
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+
+# Virtual environments
+.venv/
+venv/
+ENV/
+
+# uv
+uv.lock
+
+# Generated files
+requirements_deploy.txt
+
+# IDE
+.idea/
+.vscode/
+*.swp
+*.swo
+*~
+
+# OS
+.DS_Store
+Thumbs.db
+
+# Databricks
+.databricks/
+
+# Local testing
+*.log