diff --git a/assets/images/custom_dc/customdc_setup2.png b/assets/images/custom_dc/customdc_setup2.png index 134f7b1b2..59d714d58 100644 Binary files a/assets/images/custom_dc/customdc_setup2.png and b/assets/images/custom_dc/customdc_setup2.png differ diff --git a/assets/images/custom_dc/customdc_setup3.png b/assets/images/custom_dc/customdc_setup3.png index 71f788e3f..72ac00d4a 100644 Binary files a/assets/images/custom_dc/customdc_setup3.png and b/assets/images/custom_dc/customdc_setup3.png differ diff --git a/custom_dc/advanced.md b/custom_dc/advanced.md index 037d224ae..2026df8a0 100644 --- a/custom_dc/advanced.md +++ b/custom_dc/advanced.md @@ -1,7 +1,7 @@ --- layout: default title: Advanced (hybrid) setups -nav_order: 10 +nav_order: 11 parent: Build your own Data Commons --- diff --git a/custom_dc/custom_data.md b/custom_dc/custom_data.md index 2fc1861e1..70e2e99f4 100644 --- a/custom_dc/custom_data.md +++ b/custom_dc/custom_data.md @@ -466,7 +466,7 @@ If the servers have started up without errors, check to ensure that your data is 1. Verify statistical variables: go to the [Statistical Variable Explorer](https://localhost:8080/tools/statvar){: target="_blank"} to verify that your statistical variables are showing up correctly. You should see something like this: - ![](/assets/images/custom_dc/customdc_screenshot11.png){: width="400"} + ![](/assets/images/custom_dc/customdc_screenshot11.png){: width="400"} 1. Click on a variable name to get more information on the right panel. 1. Verify that your observations are loaded: Click on an **Example Place** link to open the detailed page for that place. Scroll to the bottom, where you should see a timeline graph of observations for the selected place. 1. Verify natural-language querying: go to the [Search page](https://localhost:8080/tools/explore){: target="_blank"} and enter a query related to your data. You should get relevant graphs using your data. 
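The verification pages listed above can also be spot-checked from a script. A minimal sketch (the helper name is ours, not part of Data Commons; it assumes the local instance serves at `localhost:8080`, over plain HTTP if you haven't configured TLS):

```python
import urllib.request

# Pages used in the verification steps above.
VERIFY_PATHS = ["/tools/statvar", "/tools/explore"]

def verification_urls(base="http://localhost:8080"):
    """Build the full URL for each verification page (hypothetical helper)."""
    return [base.rstrip("/") + path for path in VERIFY_PATHS]

# To run the check while the services container is up:
#   for url in verification_urls():
#       print(url, urllib.request.urlopen(url).status)
```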
diff --git a/custom_dc/deploy_cloud.md b/custom_dc/deploy_cloud.md index 9bdff2a13..a6528a427 100644 --- a/custom_dc/deploy_cloud.md +++ b/custom_dc/deploy_cloud.md @@ -1,14 +1,14 @@ --- layout: default title: Deploy to Google Cloud -nav_order: 8 +nav_order: 9 parent: Build your own Data Commons --- {: .no_toc} # Deploy your custom instance to Google Cloud -This page shows you how to create a development environment in Google Cloud Platform, using [Terraform](https://cloud.google.com/docs/terraform){: target="_blank"}. This is step 4 of the [recommended workflow](/custom_dc/index.html#workflow). +This page shows you how to create a development environment in Google Cloud Platform, using [Terraform](https://cloud.google.com/docs/terraform){: target="_blank"}. This is step 5 of the [recommended workflow](/custom_dc/index.html#workflow). > **Note**: It's recommended that you go through the [Quickstart](quickstart.md) to start up a local instance before attempting to set up a Google Cloud instance. This will ensure you have all the necessary prerequisites, and give you a chance to test out your own data to make sure everything is working. @@ -114,6 +114,7 @@ All of the deployment options you can configure are listed in [deploy/terraform- | `dc_web_service_image` | `gcr.io/datcom-ci/datacommons-services:stable` | Specifies the image for the Docker services container. You will want to change this to a custom image once you have created it in [Upload a custom Docker image](#upload). | | `make_dc_web_service_public` | `true` | If you intend to restrict access to your instance, set this to `false`. | | `disable_google_maps` | `false` | If you want to disable showing Google Maps in the website, set this to `true`. | +| `dc_search_scope` | `base_and_custom` | If you want to limit AI agent queries to only searching your custom data, set this to `custom_only`. 
| Other recommended settings for a production environment are provided in [Launch your Data Commons](launch_cloud.md#create-env). @@ -372,6 +373,14 @@ The URL for your service is in the form https://NAMESPACE-datac If the link is not clickable and the service is not running, go back to the Console Cloud Run page, click the **Logs** tab and look for errors. Also check the output of your `terraform apply` run. +### Connect an AI agent to the MCP server + +To connect an AI agent to the cloud service: + +1. Obtain the app URL from the previous step. +1. In the configuration for the agent/client, specify the HTTP URL as https://APP_URL/mcp. +1. Run the agent as usual. + ## Update your Terraform deployment {#update-terraform} diff --git a/custom_dc/faq.md b/custom_dc/faq.md index c44c5de33..5bbf5190e 100644 --- a/custom_dc/faq.md +++ b/custom_dc/faq.md @@ -1,7 +1,7 @@ --- layout: default title: Frequently asked questions -nav_order: 12 +nav_order: 13 parent: Build your own Data Commons --- diff --git a/custom_dc/index.md b/custom_dc/index.md index 2311face4..7e7ea7072 100644 --- a/custom_dc/index.md +++ b/custom_dc/index.md @@ -40,6 +40,7 @@ For the following use cases, a custom Data Commons instance is not necessary: |--------------------------------------------------------------|--------------------|---------------------| | Interactive tools (Exploration tools, Statistical Variable Explorer, etc.) | yes | yes | | Natural language query interface | yes, using Google AI technologies and models | yes, using open-source models only1 | +| Model Context Protocol (MCP) server | yes | yes | | REST APIs | yes | yes | | Python and Pandas API wrappers | yes | yes | | Google Spreadsheets | yes | no2 | @@ -60,7 +61,6 @@ Essentially, a custom Data Commons instance is a mirror of the public Data Commo A custom Data Commons instance uses custom data that you provide as raw CSV files. 
An importer script converts the CSV data into the Data Commons format and stores this in a SQL database. For local development, we provide a lightweight, open-source [SQLite](http://sqlite.org) database; for production, we recommend that you use [Google Cloud SQL](https://cloud.google.com/sql/){: target="_blank"}. - > **Note**: You have full control and ownership of your data, which will live in SQL data stores that you own and manage. Your data is never transferred to the base Data Commons data stores managed by Google; see full details in this [FAQ](/custom_dc/faq.html#data-security). In addition to the data, a custom Data Commons instance consists of two Docker containers: @@ -71,7 +71,7 @@ Details about the components that make up the containers are provided in the [Qu ## Requirements and cost -A custom Data Commons site runs in a Docker container on Google Cloud Platform (GCP), using Google Cloud Run, a serverless solution that provides auto-scaling and other benefits. You will need the following: +A custom Data Commons site runs in Docker containers on Google Cloud Platform (GCP), using Google Cloud Run, a serverless solution that provides auto-scaling and other benefits. You will need the following: - A [GCP](http://console.cloud.google.com) billing account and project - A [Docker](http://docker.com) account @@ -98,6 +98,7 @@ You may also need Cloud DNS, Networking - Cloud Loadbalancing, and Redis Memorys 1. Prepare your real-world data and load it in the local custom instance. Data Commons requires your data to be in a specific format. See [Prepare and load your own data](/custom_dc/custom_data.html) for details. > Note: This section is very important! If your data is not in the schema Data Commons expects, it won't load. 1. If you want to customize the look and feel of the site, see [Customize the site](/custom_dc/custom_ui.html) and [Build a custom image](build_images.md). +1. 
Optionally, configure an AI agent to send NL queries to the MCP server (via an LLM). See [Run MCP tools](run_mcp_tools.md). 1. When you have finished testing locally, set up a development environment in Google Cloud Platform. See [Deploy to Google Cloud](/custom_dc/deploy_cloud.html). 1. Productionize and launch your site for external traffic. See [Launch your Data Commons](/custom_dc/launch_cloud.html). 1. For future updates and launches, continue to make UI and data changes locally, before deploying the changes to GCP. diff --git a/custom_dc/launch_cloud.md b/custom_dc/launch_cloud.md index 31ac93368..73760e5d0 100644 --- a/custom_dc/launch_cloud.md +++ b/custom_dc/launch_cloud.md @@ -1,7 +1,7 @@ --- layout: default title: Launch your Data Commons -nav_order: 9 +nav_order: 10 parent: Build your own Data Commons --- diff --git a/custom_dc/quickstart.md b/custom_dc/quickstart.md index 0c02604f3..9877698b8 100644 --- a/custom_dc/quickstart.md +++ b/custom_dc/quickstart.md @@ -29,7 +29,8 @@ The "services" Docker container consists of the following Data Commons component - A [Nginx reverse proxy server](https://www.nginx.com/resources/glossary/reverse-proxy-server/){: target="_blank"}, which routes incoming requests to the web or API server - A Python-Flask web server, which handles interactive requests from users - A Python-Flask NL server, for serving natural language queries -A Go Mixer, also known as the API server, which serves programmatic requests using Data Commons APIs. The SQL query engine is built into the Mixer, which sends queries to both the local and remote data stores to find the right data. If the Mixer determines that it cannot fully resolve a user query from the custom data, it will make an REST API call, as an anonymous "user" to the base Data Commons Mixer and data. +- An [MCP server](https://modelcontextprotocol.io/){: target="_blank"}, for serving tool responses to an MCP-compliant AI agent (e.g. 
Google ADK apps, Gemini CLI, Google Antigravity) +- A Go Mixer, also known as the API server, which serves programmatic requests using Data Commons APIs. The SQL query engine is built into the Mixer, which sends queries to both the local and remote data stores to find the right data. If the Mixer determines that it cannot fully resolve a user query from the custom data, it will make a REST API call, as an anonymous "user" to the base Data Commons Mixer and data. ## Prerequisites @@ -167,7 +168,7 @@ This does the following: - Imports the data from the CSV files, resolves entities, and writes the data to a SQLite database file, `custom_dc/sample/datacommons/datacommons.db`. - Generates embeddings in `custom_dc/sample/datacommons/nl`. (To learn more about embeddings generation, see the [FAQ](/custom_dc/faq.html#natural-language-processing)). - Starts the services Docker container. -- Starts development/debug versions of the Web Server, NL Server, and Mixer, as well as the Nginx proxy, inside the container. +- Starts development/debug versions of the Web server, MCP server, NL server, and Mixer, as well as the Nginx proxy, inside the container. - Maps the output sample data to a Docker path. You can see the actual Docker commands that the script runs at the [end of this page](#docker). diff --git a/custom_dc/run_mcp_tools.md b/custom_dc/run_mcp_tools.md index b5b31d52f..c4c9233d0 100644 --- a/custom_dc/run_mcp_tools.md +++ b/custom_dc/run_mcp_tools.md @@ -1,145 +1,76 @@ --- layout: default title: Run MCP tools -nav_order: 9 +nav_order: 8 parent: Build your own Data Commons --- {:.no_toc} # Run MCP tools -To use Data Commons MCP tools with a Custom Data Commons, you must run your own instance of the [Data Commons MCP server](https://pypi.org/project/datacommons-mcp/). This page describes how to run a server locally and in Google Cloud. +The Custom Data Commons services container includes the [Data Commons MCP server](/mcp/index.html) as a component. 
This page describes how to connect from an AI agent to a local MCP server. This is step 4 of the [recommended workflow](/custom_dc/index.html#workflow). > **Important**: -> If you have not rebuilt your Data Commons image since the stable release of 2026-01-29, you must [sync to the latest stable release](/custom_dc/build_image.html#sync-code-to-the-stable-branch), [rebuild your image](/custom_dc/build_image.html#build-package) and [redeploy](/custom_dc/deploy_cloud.html#manage-your-service). +> This feature is available starting from the stable release of 2026-02-10. To use it, you must [sync your code](/custom_dc/build_image.html#sync-code-to-the-stable-branch) to a stable release from that date or later, [rebuild your image](/custom_dc/build_image.html#build-package), and [redeploy](/custom_dc/deploy_cloud.html#manage-your-service). * TOC {:toc} -## Run a local MCP server +## Configure the MCP server -You can use any AI agent to spawn a local MCP server as a subprocess, or start a standalone server and connect to it from any client. For the most part, the procedures to do so are the same as those provided in [Run your own MCP server](/mcp/host_server.html). The main difference is that you must set additional environment variables, as described below. +The MCP server runs by default, in HTTP streaming mode, when you start up the services. You don't need an API key for the server or for any agent connecting to it. -### Prerequisites - -- Install `uv` for managing and installing Python packages; see the instructions at {: target="_blank"}. 
- -### Configure environment variables - -To run against a Custom Data Commons instance, you must set the following required variables: -- DC_API_KEY="YOUR API KEY" -- `DC_TYPE="custom"` -- CUSTOM_DC_URL="YOUR_INSTANCE_URL" - -Various other optional variables are also available; all are documented in [packages/datacommons-mcp/.env.sample](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/.env.sample){: target="_blank"}. - -You can set variables in the following ways: -1. In a shell/startup script (e.g. `.bashrc`). -1. Use an `.env` file. This is useful if you're setting multiple variables, to keep all settings in one place. Copy the contents of [`.env.sample`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/.env.sample){: target="_blank"} into a file called `.env` in the directory where you plan to run the server and/or agent. -1. If you are using Gemini CLI (not the extension), you can use the `env` option in the `settings.json` file. +There are a few additional environment variables you can configure in your `env.list` file: +- `ENABLE_MCP`: By default this is set to true. If you want to disable the MCP server from running, set it to false. +- `DC_SEARCH_SCOPE`: This controls the datasets (base and/or custom) that are searched in response to AI queries. By default it is set to search both base and custom data (`base_and_custom`). If you would like to search only your custom data, set it to `custom_only`. -## Run the MCP Server in Google Cloud Platform - -If you have built a custom agent or Gemini CLI extension which you want to make publicly available, the following sections describe how to run the MCP server in the cloud, using Google Cloud Run. - -Since setting up an MCP server is a simple, one-time setup, there's no need to use Terraform to manage it. Data Commons provides a prebuilt Docker image in the Artifact Registry, so you only need to set up a new Cloud Run service to point to it. 
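For reference, the two optional MCP settings described above might appear in your `env.list` file as follows (a sketch showing the non-default values; see the descriptions above for the defaults):

```
# Optional MCP settings (non-default values shown)
ENABLE_MCP=false
DC_SEARCH_SCOPE=custom_only
```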
- -### Prebuilt images - -There are several versions of the image available, viewable at . We recommend that you choose a production version with a specific version number, to ensure that changes introduced by the Data Commons team don't break your application. - -### Before you start: decide on a hosting model - -There are several ways you can host the MCP server in Cloud Run, namely: - -- As a standalone service. In this case, any client simply connects to it over HTTP, including your own MCP agent running as a separate Cloud Run service or locally. You can choose whether to make the internal Cloud Run app URL publicly available, or whether to put a load balancer in front of the service and map a domain name. -- As a ["sidecar"](https://docs.cloud.google.com/run/docs/deploying#sidecars){: target="_blank"} to an MCP client. If you are hosting your own MCP client in Cloud Run as well, this may be a useful option. In this case, the MCP server is not directly addressable; all external connections are managed by the client. - -In this page, we provide steps for running the Data Commons MCP server as a standalone container. If you want to go with the sidecar option, please see [Deploying multiple containers to a service (sidecars)](https://docs.cloud.google.com/run/docs/deploying#sidecars){: target="_blank"} for additional requirements and setup procedures. - -### Prerequisites - -The following procedures assume that you have set up the following Google Cloud Platform services, using the [Terraform scripts](deploy_cloud.md#terraform): -- A service account and roles. -- A Google Cloud Secret Manager secret for storing your Data Commons API key. - -### Create a Cloud Run Service for the MCP server - -The following procedure sets up a bare-bones container service. To set additional options, such as request timeouts, instance replication, etc., please see [Configure Cloud Run services](https://docs.cloud.google.com/run/docs/configuring){: target="_blank"} for details. - -
-**Cloud Console**
-
-1. Go to the https://console.cloud.google.com/run/services page for your project.
-1. Click **Deploy container**.
-1. In the **Container image URL** field, click **Select**.
-1. In the Artifact Registry panel that appears on the right side of the window, click **Change**.
-1. In the project search bar, enter `datcom-ci` and click on the link that appears.
-1. Expand **gcr.io/datcom-ci** and expand **datacommons-mcp-server**.
-1. From the list of images, select a production image, e.g. `production-v1.1.4`.
-1. Under **Configure**, select the desired region for the service, e.g. `us-central1`.
-1. Expand **Containers, Networking, Security**.
-1. Click the **Variables & secrets** tab.
-1. Under **Environment variables**, click **Add variable** and set the following variables:
-   - name: `DC_TYPE`, value: `custom`
-   - name: `CUSTOM_DC_URL`, value: YOUR_INSTANCE_URL
-1. Under **Secrets exposed as environment variables**, click **Reference a secret**.
-1. In the **Name** field, enter `DC_API_KEY`, and from the **Secret** field, select the secret previously created by the Terraform scripts. It is in the form NAMESPACE-datacommons-dc-api-key-FINGERPRINT.
-1. In the **Version** field, select the desired version, e.g. **latest**.
-1. Click **Done**.
-1. Click the **Security** tab. From the **Service account** field, select the service account for your namespace and project, previously created by the Terraform scripts.
-1. Click **Create**. If correctly configured, the service will deploy automatically. It may take several minutes to start up.
-
-**gcloud CLI**
-
-1. If you haven't recently refreshed your Google Cloud credentials, run `gcloud auth application-default login` and authenticate.
-1. From any local directory, run the following command:
-
-   ```
-   gcloud run deploy datacommons-mcp-server --image CONTAINER_IMAGE_URL \
-       --service-account SERVICE_ACCOUNT --region REGION \
-       --allow-unauthenticated \
-       --set-secrets="DC_API_KEY=SECRET_NAME:latest" \
-       --set-env-vars="DC_TYPE=custom" --set-env-vars="CUSTOM_DC_URL=INSTANCE_URL"
-   ```
-
-   - The container image URL is `gcr.io/datcom-ci/datacommons-mcp-server:TAG`. The tag should be a production image with a version number, e.g. `production-v1.1.4`.
-   - The service account was created when you ran Terraform. It is in the form NAMESPACE-datacommons-sa@PROJECT_ID.iam.gserviceaccount.com.
-   - The region is the Cloud region where you want to run the service, e.g. `us-central1`.
-   - The secret name is the one created when you ran the Terraform scripts, in the form NAMESPACE-datacommons-dc-api-key-FINGERPRINT. If you're not sure about the name or fingerprint, go to https://console.cloud.google.com/security/secret-manager for your project and look it up.
-
-   To view the startup status, run the following command:
-
-   ```
-   gcloud run services logs tail datacommons-mcp-server --region REGION
-   ```
- - - -### Connect to the server from a remote client - -For details, see [Configure an agent to connect to the running server](/mcp/host_server.html#standalone-client). - -The HTTP URL parameter is the Cloud Run App URL, if you are exposing the service directly, or a custom domain URL if you are using a load balancer and domain mapping. - -### Troubleshoot deployment issues - -{:.no_toc} -#### Container fails to start - -If you see this error message: - -``` -The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable within the allocated timeout... +## Connect an AI agent to a locally running server + +You can use any AI agent to connect to the MCP server. The server is accessible at the `/mcp` endpoint. + +Below we provide procedures for Gemini CLI and for a sample Google ADK agent provided in the Data Commons [`agent-toolkit` repo](https://github.com/datacommonsorg/agent-toolkit/tree/main/packages/datacommons-mcp/examples/sample_agents/basic_agent){: target="_blank"} on GitHub. You should be able to adapt the configuration to any other MCP-compliant agent, including your own custom-built agent. + +### Use Gemini CLI + +1. If you don't have it on your system, install [Node.js](https://nodejs.org/en/download){: target="_blank"}. +1. Install [Google Gemini CLI](https://geminicli.com/docs/get-started/installation/){: target="_blank"}. +1. Start the services container if it's not already running. +1. Configure Gemini CLI to connect to the Data Commons MCP server: edit the relevant `settings.json` file (e.g. `~/.gemini/settings.json`) to add the following: +
+    {
+      ...
+      "mcpServers": {
+        "datacommons-mcp": {
+          "httpUrl": "http://localhost:8080/mcp"
+        }
+      }
+      ...
+    }
+    
+1. From any directory, start Gemini as described in [Run Gemini CLI](/mcp/run_tools.html#run-gemini). + +### Use the sample agent + +1. Install [`uv`](https://docs.astral.sh/uv/getting-started/installation/), a Python package manager. +1. Start the services container if it's not already running. +1. From the desired directory, clone the `agent-toolkit` repo: +```bash +git clone https://github.com/datacommonsorg/agent-toolkit.git ``` -This is a generic message that could indicate a number of configuration problems. Check all of these: -- Be sure you have specified the `DC_API_KEY` environment variable. -- Be sure you have specified the correct service account. -- Try increasing the health check timeout. + > Tip: You do not need to install the Google ADK; when you use the [command we provide](/mcp/run_tools.html#run-sample) to start the agent, it downloads the ADK dependencies at run time. +1. Modify [`packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py){: target="_blank"} to set the `url` parameter of the `StreamableHTTPConnectionParams` object. +
+   ...
+   tools=[McpToolset(
+      connection_params=StreamableHTTPConnectionParams(
+         url="http://localhost:8080/mcp",
+         ...
+      )
+   )]
+   ...
+   
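Whether you use Gemini CLI or the sample agent, both configurations point at the same `/mcp` endpoint. If you wire up your own agent, deriving the URL in one place avoids drift between configs. A sketch (the helper name is ours):

```python
def mcp_endpoint(host="localhost", port=8080, scheme="http"):
    """Build the MCP endpoint URL for a Data Commons services container."""
    return f"{scheme}://{host}:{port}/mcp"

# Usage: pass mcp_endpoint() wherever the configurations above expect the URL.
```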
+1. Customize the agent as desired, as described in [Customize the agent](/mcp/run_tools.html#customize-agent). +1. Start the agent as described in [Run the startup commands](/mcp/run_tools.html#run-sample). diff --git a/custom_dc/troubleshooting.md b/custom_dc/troubleshooting.md index e0c592c3e..98e4ae8be 100644 --- a/custom_dc/troubleshooting.md +++ b/custom_dc/troubleshooting.md @@ -1,7 +1,7 @@ --- layout: default title: Troubleshooting -nav_order: 11 +nav_order: 12 parent: Build your own Data Commons --- diff --git a/mcp/host_server.md b/mcp/host_server.md index 731b1d827..13d888f5d 100644 --- a/mcp/host_server.md +++ b/mcp/host_server.md @@ -108,52 +108,46 @@ The server is addressable with the endpoint `mcp`. For example, `http://my-mcp-s #### Gemini CLI -Replace the `datacommons-mcp` section in your `settings.json` file as follows: -
-{
-   "mcpServers": {
-      "datacommons-mcp": {
-         "httpUrl": "http://HOST:PORT/mcp",
-         "headers": {
-            "Content-Type": "application/json",
-            "Accept": "application/json, text/event-stream"
-            // If you have set the key in your environment
-           , "X-API-Key": "$DC_API_KEY"
-            // If you have not set the key in your environment
-           , "X-API-Key": "YOUR DC API KEY"
+1. Replace the `datacommons-mcp` section in your `settings.json` file as follows:
+   
+   {
+      "mcpServers": {
+         "datacommons-mcp": {
+           "httpUrl": "http://HOST:PORT/mcp",
+           "headers": {
+             "Accept": "application/json, text/event-stream"
+            }
          }
       }
    }
-}
-
+
-[Run Gemini CLI](run_tools.md#run-gemini) as usual. +1. [Run Gemini CLI](run_tools.md#run-gemini) as usual. #### Sample agent -Modify [`basic_agent/agent.py`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py){: target="_blank"} as follows: +1. Modify [`basic_agent/agent.py`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py){: target="_blank"} as follows: -
-from google.adk.tools.mcp_tool.mcp_toolset import (
+   
+   from google.adk.tools.mcp_tool.mcp_toolset import (
    MCPToolset,
    StreamableHTTPConnectionParams
-)
-
-root_agent = LlmAgent(
+   )
+   ...
+   root_agent = LlmAgent(
       # ...
       tools=[McpToolset(
          connection_params=StreamableHTTPConnectionParams(
             url="http://HOST:PORT/mcp",
             headers={
-               "Content-Type": "application/json",
                "Accept": "application/json, text/event-stream"
             }
          )
       )
-   ],
-)
+   ],
+   )
 
- -[Run the startup commands](run_tools.md#run-sample) as usual. +1. Customize the agent as desired, as described in [Customize the agent](run_tools.md#customize-agent). +1. [Run the startup commands](run_tools.md#run-sample) as usual. diff --git a/mcp/run_tools.md b/mcp/run_tools.md index b5f447113..305995416 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -143,7 +143,6 @@ To uninstall the extension, run: ``` gemini extensions uninstall datacommons ``` - ## Use Gemini CLI In addition to the Data Commons API key, you must install the following: @@ -187,6 +186,7 @@ To configure Gemini CLI to connect to the Data Commons server, edit the relevant In addition to the Data Commons API key, you will need: - [Git](https://git-scm.com/){: target="_blank"} installed. +- [`uv`](https://docs.astral.sh/uv/getting-started/installation/), a Python package manager, installed. > Tip: You do not need to install the Google ADK; when you use the [command we provide](#run-sample) to start the agent, it downloads the ADK dependencies at run time. @@ -228,6 +228,7 @@ git clone https://github.com/datacommonsorg/agent-toolkit.git 1. Enter your [queries](#sample-queries) at the `User` prompt in the terminal. {:.no_toc} +{: #customize-agent} ### Customize the agent To customize the sample agent, you can make changes directly to the Python files. You'll need to [restart the agent](#run-sample) any time you make changes.