38 changes: 38 additions & 0 deletions databricks-skills/databricks-reusable-ip/SKILL.md
@@ -0,0 +1,38 @@
---
name: databricks-reusable-ip
description: "Production-ready Databricks reference implementations (agent deployment, model serving, CI/CD, Genie Spaces, Lakebase, DABs, A/B testing, Claude Code). TRIGGER: when implementing a new Databricks pattern or porting a reference implementation. ACTION: fetch llms.txt index first, then fetch only relevant files."
---

# Reusable IP — Databricks Reference Implementations

## When to Use

When implementing a new Databricks pattern or porting a reference implementation, check this repo
for existing solutions. Covers: agent deployment, model serving (concurrent PyFunc), CI/CD,
Genie Spaces, Lakebase, Databricks Asset Bundles (DABs), A/B testing, and Claude Code
integration.

## JIT Fetch Protocol

**Step 1: Always fetch the index first**
```bash
gh api repos/databricks-field-eng/reusable-ip-ai/contents/llms.txt \
--jq '.content' | base64 -d
```

If this command fails (authentication error, access denied, or file not found), stop and inform
the user that the reusable-ip-ai repo is inaccessible. Do not proceed with this skill.
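The fail-fast behavior can be sketched as a small wrapper, assuming the `gh` CLI is installed and authenticated; the helper name `fetch_index` is illustrative, not part of the skill:

```shell
# Illustrative helper: fetch and decode llms.txt, failing loudly if the
# repo is inaccessible rather than letting an empty decode slip through.
fetch_index() {
  local b64
  # Capture gh's output separately so a gh failure is not masked by base64.
  b64=$(gh api repos/databricks-field-eng/reusable-ip-ai/contents/llms.txt \
    --jq '.content') || return 1
  printf '%s' "$b64" | base64 -d
}

if ! fetch_index; then
  echo "reusable-ip-ai repo is inaccessible; stopping this skill." >&2
fi
```

Checking `gh`'s exit status before piping into `base64` matters: in a plain pipeline without `pipefail`, a failed `gh` call would still decode to empty output and exit 0.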

**Step 2: Identify relevant files** from the descriptions (not filenames alone).
If nothing is relevant, proceed without fetching further.

**Step 3: Fetch only the files you need**
```bash
gh api repos/databricks-field-eng/reusable-ip-ai/contents/PATH/TO/FILE \
--jq '.content' | base64 -d
```

**Rules:**
- Always fetch `llms.txt` first — do not guess file paths
- Fetch the minimum number of files (1–3); fetch additional files incrementally only if needed
- Do not dump the full directory tree
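The rules above can be sketched as a guard around Step 3; the `fetch_files` helper and the hard cap of 3 are illustrative assumptions, not requirements from the skill itself:

```shell
# Illustrative guard: refuse to fetch more than 3 files in one pass,
# mirroring the rule to fetch only 1-3 files at a time and go
# incremental from there.
fetch_files() {
  if [ "$#" -gt 3 ]; then
    echo "fetch at most 3 files per pass; fetch more incrementally" >&2
    return 1
  fi
  local path
  for path in "$@"; do
    gh api "repos/databricks-field-eng/reusable-ip-ai/contents/${path}" \
      --jq '.content' | base64 -d
  done
}
```

A usage note: paths passed to `fetch_files` should come from the `llms.txt` index fetched in Step 1, never guessed.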
3 changes: 2 additions & 1 deletion databricks-skills/install_skills.sh
@@ -42,7 +42,7 @@ MLFLOW_REPO_RAW_URL="https://raw.githubusercontent.com/mlflow/skills"
MLFLOW_REPO_REF="main"

# Databricks skills (hosted in this repo)
DATABRICKS_SKILLS="databricks-agent-bricks databricks-aibi-dashboards databricks-asset-bundles databricks-app-python databricks-config databricks-dbsql databricks-docs databricks-genie databricks-iceberg databricks-jobs databricks-lakebase-autoscale databricks-lakebase-provisioned databricks-metric-views databricks-mlflow-evaluation databricks-model-serving databricks-parsing databricks-python-sdk databricks-spark-declarative-pipelines databricks-spark-structured-streaming databricks-synthetic-data-gen databricks-unity-catalog databricks-unstructured-pdf-generation databricks-vector-search databricks-zerobus-ingest spark-python-data-source"
DATABRICKS_SKILLS="databricks-agent-bricks databricks-aibi-dashboards databricks-asset-bundles databricks-app-python databricks-config databricks-dbsql databricks-docs databricks-genie databricks-iceberg databricks-jobs databricks-lakebase-autoscale databricks-lakebase-provisioned databricks-metric-views databricks-mlflow-evaluation databricks-model-serving databricks-parsing databricks-python-sdk databricks-reusable-ip databricks-spark-declarative-pipelines databricks-spark-structured-streaming databricks-synthetic-data-gen databricks-unity-catalog databricks-unstructured-pdf-generation databricks-vector-search databricks-zerobus-ingest spark-python-data-source"

# MLflow skills (fetched from mlflow/skills repo)
MLFLOW_SKILLS="agent-evaluation analyze-mlflow-chat-session analyze-mlflow-trace instrumenting-with-mlflow-tracing mlflow-onboarding querying-mlflow-metrics retrieving-mlflow-traces searching-mlflow-docs"
@@ -73,6 +73,7 @@ get_skill_description() {
"databricks-iceberg") echo "Apache Iceberg - managed tables, UniForm, IRC, Snowflake interop, migration" ;;
"databricks-jobs") echo "Databricks Lakeflow Jobs - workflow orchestration" ;;
"databricks-python-sdk") echo "Databricks Python SDK, Connect, and REST API" ;;
"databricks-reusable-ip") echo "JIT access to reference implementations: agents, model serving, CI/CD, DABs, A/B testing" ;;
"databricks-unity-catalog") echo "System tables for lineage, audit, billing" ;;
"databricks-lakebase-autoscale") echo "Lakebase Autoscale - managed PostgreSQL with autoscaling" ;;
"databricks-lakebase-provisioned") echo "Lakebase Provisioned - data connections and reverse ETL" ;;