From d6241c9e535dc45878f69cf4b1b914b45fb94f43 Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Wed, 11 Jun 2025 19:07:09 -0700
Subject: [PATCH 001/121] remove references to dataclass objects

---
 api/python/v2/index.md       | 8 ++++----
 api/python/v2/node.md        | 2 +-
 api/python/v2/observation.md | 6 ++++--
 3 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/api/python/v2/index.md b/api/python/v2/index.md
index 63c82fd80..d6db2ddb4 100644
--- a/api/python/v2/index.md
+++ b/api/python/v2/index.md
@@ -123,7 +123,7 @@ The Python client library sends HTTP POST requests to the Data Commons [REST API

| API | Endpoint | Description | Response type |
| --- | -------- | ----------- | ------------- |
-| Observation | [`observation`](observation.md) | Fetches statistical observations (time series) | `ObservationResponse` |
+| Observation | [`observation`](observation.md) | Fetches statistical observations (time series) | `ObservationResponse` and Python dictionary |
| Observations Pandas DataFrame | [`pandas`](pandas.md) | Similar to the `fetch_observations_by_entity_dcids` and `fetch_observations_by_entity_type` methods of the Observation endpoint, except that the functionality is provided by a single method of the `DataCommonsClient` class directly, instead of an intermediate endpoint. Requires the optional `Pandas` module. | `pd.DataFrame` |
| Node | [`node`](node.md) | Fetches information about edges and neighboring nodes | `NodeResponse` and Python dictionary |
| Resolve entities | [`resolve`](resolve.md) | Returns Data Commons IDs ([`DCID`](/glossary.html#dcid)) for entities in the knowledge graph | `ResolveResponse` |
@@ -162,7 +162,7 @@ For common requests, each endpoint also provides convenience methods that build

## Response formatting

-By default, responses are returned as Python `dataclass` objects with the full structure.
+By default, most methods return responses as Python objects with the full structure.
For example:

```python
response = client.resolve.fetch_dcids_by_name(names="Georgia")
```

Each response class provides some property methods that are useful for formatting:

| Method | Description |
|--------|-------------|
-| to_dict | Converts the dataclass to a Python dictionary. |
-| to_json | Serializes the dataclass to a JSON string (using `json.dumps()`). |
+| to_dict | Converts the object to a Python dictionary. |
+| to_json | Serializes the object to a JSON string (using `json.dumps()`). |
{: .doc-table }

Both methods take the following input parameter:

diff --git a/api/python/v2/node.md b/api/python/v2/node.md
index 296351d46..1c76b1a50 100644
--- a/api/python/v2/node.md
+++ b/api/python/v2/node.md
@@ -43,7 +43,7 @@ The following are the methods available for this endpoint.

 ## Response

-The `fetch_entity_names` and `fetch_place_*` methods return a Python dictionary. All other request methods return a `NodeResponse` dataclass object. It looks like this:
+The `fetch_entity_names` and `fetch_place_*` methods return a Python dictionary. All other request methods return a `NodeResponse` object. It looks like this:
 {
diff --git a/api/python/v2/observation.md b/api/python/v2/observation.md
index df6e3f3ed..ec1541d64 100644
--- a/api/python/v2/observation.md
+++ b/api/python/v2/observation.md
@@ -13,7 +13,7 @@ published: true
 The Observation API fetches statistical observations. An observation is associated with an
 entity and variable at a particular date: for example, "population of USA in 2020", "GDP of California in 2010", and so on. 
 
-> Note: This endpoint returns Python dataclass objects, like other endpoints. To get Pandas DataFrames results, see [Observation pandas](pandas.md) which is a direct property method of the `Client` object.
+> Note: This endpoint returns Python objects, like other endpoints. To get results as Pandas DataFrames instead, see [Observation pandas](pandas.md), which is provided as a direct method of the `DataCommonsClient` object.
 
 [Source code](https://github.com/datacommonsorg/api-python/blob/master/datacommons_client/endpoints/observation.py){: target="_blank"}
 
@@ -33,7 +33,9 @@ The following are the methods available for this endpoint.
 
 ## Response {#response}
 
-With `select=["date", "entity", "variable", "value"]` in effect (the default), the response looks like this:
+The `fetch_available_statistical_variables` method returns a Python dictionary. All other methods return an `ObservationResponse` object.
+
+With `select=["date", "entity", "variable", "value"]` in effect (the default), the `ObservationResponse` looks like this:
 
 
 {

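The `to_dict`/`to_json` response helpers described in this patch can be sketched as follows. This is an illustrative stand-in (a minimal `MiniResponse` dataclass invented here), not the library's actual response classes:

```python
import json
from dataclasses import asdict, dataclass

# Hypothetical stand-in for a response class; the real classes carry
# the full response structure returned by the API.
@dataclass
class MiniResponse:
    dcid: str
    value: float

    def to_dict(self) -> dict:
        # Convert all dataclass fields to a plain dictionary.
        return asdict(self)

    def to_json(self) -> str:
        # Serialize the dictionary via json.dumps(), as the docs describe.
        return json.dumps(self.to_dict())

resp = MiniResponse(dcid="geoId/06", value=39.5)
print(resp.to_dict())  # {'dcid': 'geoId/06', 'value': 39.5}
print(resp.to_json())  # {"dcid": "geoId/06", "value": 39.5}
```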
From 101c5fd34e769200cc0994eb44d37bf935c284ad Mon Sep 17 00:00:00 2001
From: Kara Moscoe 
Date: Wed, 11 Jun 2025 20:37:01 -0700
Subject: [PATCH 002/121] Fix a copy-paste error.

---
 api/python/v2/datacommons_client.html | 644 ++++++++++++++++++++++++++
 api/python/v2/node.md                 |   2 +-
 2 files changed, 645 insertions(+), 1 deletion(-)
 create mode 100644 api/python/v2/datacommons_client.html

diff --git a/api/python/v2/datacommons_client.html b/api/python/v2/datacommons_client.html
new file mode 100644
index 000000000..c76f71b31
--- /dev/null
+++ b/api/python/v2/datacommons_client.html
@@ -0,0 +1,644 @@
+Python: package datacommons_client
 
+datacommons_client (version 2.1.0)
+/usr/local/google/home/kmoscoe/api-python/datacommons_client/__init__.py
+Package Contents
+    client
+    endpoints (package)
+    models (package)
+    utils (package)
+Classes
+    builtins.object
+        datacommons_client.client.DataCommonsClient
+        datacommons_client.endpoints.base.API
+    datacommons_client.endpoints.base.Endpoint(builtins.object)
+        datacommons_client.endpoints.node.NodeEndpoint
+        datacommons_client.endpoints.observation.ObservationEndpoint
+        datacommons_client.endpoints.resolve.ResolveEndpoint
 
class API(builtins.object)
   API(api_key: Optional[str] = None, dc_instance: Optional[str] = None, url: Optional[str] = None)

+Represents a configured API interface to the Data Commons API.

+This class handles environment setup: it resolves the base URL and builds headers,
+or optionally uses a fully qualified URL directly. It can be used standalone
+to interact with the API or in combination with Endpoint classes.
 
 Methods defined here:
+
__init__(self, api_key: Optional[str] = None, dc_instance: Optional[str] = None, url: Optional[str] = None)
Initializes the API instance.

+Args:
+    api_key: The API key for authentication. Defaults to None.
+    dc_instance: The Data Commons instance domain. Ignored if `url` is provided.
+                 Defaults to 'datacommons.org' if both `url` and `dc_instance` are None.
+    url: A fully qualified URL for the base API. This may be useful if more granular control
+        of the API is required (for local development, for example). If provided, `dc_instance`
+        should not be provided.

+Raises:
+    ValueError: If both `dc_instance` and `url` are provided.
+ +
__repr__(self) -> str
Returns a readable representation of the API object.

+Indicates the base URL and if it's authenticated.

+Returns:
+    str: A string representation of the API object.
+ +
post(self, payload: dict[str, typing.Any], endpoint: Optional[str] = None, *, all_pages: bool = True, next_token: Optional[str] = None) -> Dict[str, Any]
Makes a POST request using the configured API environment.

+If `endpoint` is provided, it will be appended to the base_url. Otherwise,
+it will just POST to the base URL.

+Args:
+    payload: The JSON payload for the POST request.
+    endpoint: An optional endpoint path to append to the base URL.
+    all_pages: If True, fetch all pages of the response. If False, fetch only the first page;
+        in that case, a `next_token` key in the response will indicate if more pages are
+        available, and that token can be used to fetch the next page. Defaults to True.

+Returns:
+    A dictionary containing the merged response data.

+Raises:
+    ValueError: If the payload is not a valid dictionary.
+ +
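The `all_pages`/`next_token` behavior described above can be sketched as a simple merge loop. This is an illustrative sketch with a stubbed page fetcher, not the library's actual `post` implementation; the `data` and `next_token` keys and the page contents are assumptions:

```python
from typing import Any, Callable, Optional

def fetch_all_pages(fetch_page: Callable[[Optional[str]], dict],
                    merge_key: str = "data") -> dict[str, Any]:
    # Start with an empty merged payload and no page token.
    merged: dict[str, Any] = {merge_key: []}
    token: Optional[str] = None
    while True:
        page = fetch_page(token)           # one POST request per page
        merged[merge_key].extend(page.get(merge_key, []))
        token = page.get("next_token")     # a token signals more pages
        if not token:
            break
    return merged

# Stub standing in for the API: two pages of hypothetical data.
pages = {
    None: {"data": [1, 2], "next_token": "page2"},
    "page2": {"data": [3]},
}
print(fetch_all_pages(lambda tok: pages[tok]))  # {'data': [1, 2, 3]}
```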
+Data descriptors defined here:
+
__dict__
+
dictionary for instance variables
+
+
__weakref__
+
list of weak references to the object
+
+

 
class DataCommonsClient(builtins.object)
   DataCommonsClient(api_key: Optional[str] = None, *, dc_instance: Optional[str] = 'datacommons.org', url: Optional[str] = None)

+A client for interacting with the Data Commons API.

+This class provides convenient access to the V2 Data Commons API endpoints.

+Attributes:
+    api (API): An instance of the API class that handles requests.
+    node (NodeEndpoint): Provides access to node-related queries, such as fetching property labels
+        and values for individual or multiple nodes in the Data Commons knowledge graph.
+    observation (ObservationEndpoint): Handles observation-related queries, allowing retrieval of
+        statistical observations associated with entities, variables, and dates (e.g., GDP of California in 2010).
+    resolve (ResolveEndpoint): Manages resolution queries to find DCIDs for entities.
 
 Methods defined here:
+
__init__(self, api_key: Optional[str] = None, *, dc_instance: Optional[str] = 'datacommons.org', url: Optional[str] = None)
Initializes the DataCommonsClient.

+Args:
+    api_key (Optional[str]): The API key for authentication. Defaults to None. Note that
+        custom DC instances do not currently require an API key.
+    dc_instance (Optional[str]): The Data Commons instance to use. Defaults to "datacommons.org".
+    url (Optional[str]): A custom, fully resolved URL for the Data Commons API. Defaults to None.
+ +
observations_dataframe(self, variable_dcids: str | list[str], date: datacommons_client.endpoints.payloads.ObservationDate | str, entity_dcids: Union[Literal['all'], list[str]] = 'all', entity_type: Optional[str] = None, parent_entity: Optional[str] = None, property_filters: Optional[dict[str, str | list[str]]] = None)
Fetches statistical observations and returns them as a Pandas DataFrame.

+The Observation API fetches statistical observations linked to entities and variables
+at a particular date (e.g., "population of USA in 2020", "GDP of California in 2010").

+Args:
+variable_dcids (str | list[str]): One or more variable DCIDs for the observation.
+date (ObservationDate | str): The date for which observations are requested. It can be
+    a specific date, "all" to retrieve all observations, or "latest" to get the most recent observations.
+entity_dcids (Literal["all"] | list[str], optional): The entity DCIDs for which to retrieve data.
+    Defaults to "all".
+entity_type (Optional[str]): The type of entities to filter by when `entity_dcids="all"`.
+    Required if `entity_dcids="all"`. Defaults to None.
+parent_entity (Optional[str]): The parent entity under which the target entities fall.
+    Required if `entity_dcids="all"`. Defaults to None.
+property_filters (Optional[dict[str, str | list[str]]]): An optional dictionary used to filter
+    the data by using observation properties like `measurementMethod`, `unit`, or `observationPeriod`.

+Returns:
+    pd.DataFrame: A DataFrame containing the requested observations.
+ +
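The tabular shape produced by `observations_dataframe` can be illustrated without Pandas by flattening nested observation records into rows. The nesting and field names below are hypothetical, not the API's actual response schema:

```python
# Hypothetical nested records: variable -> entity -> (date, value) series.
observations = {
    "sdg/SI_POV_DAY1": {
        "country/NGA": [("2018", 39.1), ("2019", 39.1)],
    },
}

# Flatten into one row per observation, the shape a DataFrame would hold.
rows = [
    {"variable": var, "entity": ent, "date": date, "value": val}
    for var, entities in observations.items()
    for ent, series in entities.items()
    for date, val in series
]
print(len(rows))  # 2
```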
+Data descriptors defined here:
+
__dict__
+
dictionary for instance variables
+
+
__weakref__
+
list of weak references to the object
+
+

 
class NodeEndpoint(datacommons_client.endpoints.base.Endpoint)
   NodeEndpoint(api: datacommons_client.endpoints.base.API)

+Initializes the NodeEndpoint with a given API configuration.

+Args:
+    api (API): The API instance providing the environment configuration
+        (base URL, headers, authentication) to be used for requests.
 
 
Method resolution order:
+
NodeEndpoint
+
datacommons_client.endpoints.base.Endpoint
+
builtins.object
+
+
+Methods defined here:
+
__getattr__(self, name)
+ +
__init__(self, api: datacommons_client.endpoints.base.API)
Initializes the NodeEndpoint with a given API configuration.
+ +
fetch(self, node_dcids: str | list[str], expression: str, *, all_pages: bool = True, next_token: Optional[str] = None) -> datacommons_client.endpoints.response.NodeResponse
Fetches properties or arcs for given nodes and properties.

+Args:
+    node_dcids (str | List[str]): The DCID(s) of the nodes to query.
+    expression (str): The property or relation expression(s) to query.
+    all_pages: If True, fetch all pages of the response. If False, fetch only the first page;
+      in that case, a `next_token` key in the response will indicate if more pages are
+      available, and that token can be used to fetch the next page. Defaults to True.
+    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

+Returns:
+    NodeResponse: The response object containing the queried data.

+Example:
+    ```python
+    response = node.fetch(
+        node_dcids=["geoId/06"],
+        expression="<-"
+    )
+    print(response)
+    ```
+ +
fetch_all_classes(self, *, all_pages: bool = True, next_token: Optional[str] = None) -> datacommons_client.endpoints.response.NodeResponse
Fetches all Classes available in the Data Commons knowledge graph.

+Args:
+  all_pages: If True, fetch all pages of the response. If False, fetch only the first page;
+      in that case, a `next_token` key in the response will indicate if more pages are
+      available, and that token can be used to fetch the next page. Defaults to True.
+  next_token: Optionally, the token to fetch the next page of results. Defaults to None.


+Returns:
+    NodeResponse: The response object containing all statistical variables.

+Example:
+    ```python
+    response = node.fetch_all_classes()
+    print(response)
+    ```
+ +
fetch_entity_names(self, entity_dcids: str | list[str], language: Optional[str] = 'en', fallback_language: Optional[str] = None) -> dict[str, datacommons_client.models.node.Name]
Fetches entity names in the specified language, with optional fallback to English.
+Args:
+  entity_dcids: A single DCID or a list of DCIDs to fetch names for.
+  language: Language code (e.g., "en", "es"). Defaults to "en" (DEFAULT_NAME_LANGUAGE).
+  fallback_language: If provided, this language will be used as a fallback if the requested
+    language is not available. If not provided, no fallback will be used.
+Returns:
+  A dictionary mapping each DCID to a dictionary with the mapped name, language, and
+    the property used.
+ +
fetch_place_ancestors(self, place_dcids: str | list[str], as_tree: bool = False, *, max_concurrent_requests: Optional[int] = 10) -> dict[str, list[dict[str, str]] | dict]
Fetches the full ancestry (flat or nested) for one or more entities.
+For each input DCID, this method builds the complete ancestry graph using a
+breadth-first traversal and parallel fetching.
+It returns either a flat list of unique parents or a nested tree structure for
+each entity, depending on the `as_tree` flag. The flat list matches the structure
+of the `/api/place/parent` endpoint of the DC website.
+Args:
+    place_dcids (str | list[str]): One or more DCIDs of the entities whose ancestry
+       will be fetched.
+    as_tree (bool): If True, returns a nested tree structure; otherwise, returns a flat list.
+        Defaults to False.
+    max_concurrent_requests (Optional[int]): The maximum number of concurrent requests to make.
+        Defaults to PLACES_MAX_WORKERS.
+Returns:
+    dict[str, list[dict[str, str]] | dict]: A dictionary mapping each input DCID to either:
+        - A flat list of parent dictionaries (if `as_tree` is False), or
+        - A nested ancestry tree (if `as_tree` is True). Each parent is represented by
+          a dict with 'dcid', 'name', and 'type'.
+ +
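The breadth-first ancestry traversal described above can be sketched as follows. The `parents_of` callable and the containment graph are hypothetical stand-ins for the parallel API calls the real method makes:

```python
from collections import deque
from typing import Callable

def build_flat_ancestry(dcid: str,
                        parents_of: Callable[[str], list[str]]) -> list[str]:
    seen: set[str] = set()
    order: list[str] = []
    queue = deque([dcid])
    while queue:
        node = queue.popleft()
        for parent in parents_of(node):
            if parent not in seen:  # keep each unique ancestor once
                seen.add(parent)
                order.append(parent)
                queue.append(parent)
    return order

# Hypothetical containment graph: city -> county -> state -> country.
graph = {
    "geoId/0649670": ["geoId/06085"],
    "geoId/06085": ["geoId/06"],
    "geoId/06": ["country/USA"],
    "country/USA": [],
}
print(build_flat_ancestry("geoId/0649670", lambda d: graph[d]))
# ['geoId/06085', 'geoId/06', 'country/USA']
```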
fetch_place_children(self, place_dcids: str | list[str], *, children_type: Optional[str] = None, as_dict: bool = True) -> dict[str, list[datacommons_client.models.node.Node | dict]]
Fetches the direct children of one or more entities using the 'containedInPlace' property.

+Args:
+    place_dcids (str | list[str]): A single place DCID or a list of DCIDs to query.
+    children_type (str, optional): The type of the child entities to
+        fetch (e.g., 'Country', 'State', 'IPCCPlace_50'). If None, fetches all child types.
+    as_dict (bool): If True, returns a dictionary mapping each input DCID to its
+        immediate children entities. If False, returns a dictionary of Node objects.

+Returns:
+    dict[str, list[Node | dict]]: A dictionary mapping each input DCID to a list of its
+    immediate children. Each child is represented as a Node object or as a dictionary with
+    the same data.
+ +
fetch_place_descendants(self, place_dcids: str | list[str], descendants_type: Optional[str] = None, as_tree: bool = False, *, max_concurrent_requests: Optional[int] = 10) -> dict[str, list[dict[str, str]] | dict]
Fetches the full descendants (flat or nested) for one or more entities.
+For each input DCID, this method builds the complete descendants graph using a
+breadth-first traversal and parallel fetching.

+It returns either a flat list of unique children or a nested tree structure for
+each entity, depending on the `as_tree` flag.

+Args:
+    place_dcids (str | list[str]): One or more DCIDs of the entities whose descendants
+       will be fetched.
+    descendants_type (Optional[str]): The type of the descendants to fetch (e.g., 'Country', 'State').
+        If None, fetches all descendant types.
+    as_tree (bool): If True, returns a nested tree structure; otherwise, returns a flat list.
+        Defaults to False.
+    max_concurrent_requests (Optional[int]): The maximum number of concurrent requests to make.
+        Defaults to PLACES_MAX_WORKERS.
+Returns:
+    dict[str, list[dict[str, str]] | dict]: A dictionary mapping each input DCID to either:
+        - A flat list of Node dictionaries (if `as_tree` is False), or
+        - A nested ancestry tree (if `as_tree` is True). Each child is represented by
+          a dict.
+ +
fetch_place_parents(self, place_dcids: str | list[str], *, as_dict: bool = True) -> dict[str, list[datacommons_client.models.node.Node | dict]]
Fetches the direct parents of one or more entities using the 'containedInPlace' property.

+Args:
+    place_dcids (str | list[str]): A single place DCID or a list of DCIDs to query.
+    as_dict (bool): If True, returns a dictionary mapping each input DCID to its
+        immediate parent entities. If False, returns a dictionary of Node objects.

+Returns:
+    dict[str, list[Node | dict]]: A dictionary mapping each input DCID to a list of its
+    immediate parent entities. Each parent is represented as a Node object or
+    as a dictionary with the same data.
+ +
fetch_property_labels(self, node_dcids: str | list[str], out: bool = True, *, all_pages: bool = True, next_token: Optional[str] = None) -> datacommons_client.endpoints.response.NodeResponse
Fetches all property labels for the given nodes.

+Args:
+    node_dcids (str | list[str]): The DCID(s) of the nodes to query.
+    out (bool): Whether to fetch outgoing properties (`->`). Defaults to True.
+    all_pages: If True, fetch all pages of the response. If False, fetch only the first page;
+      in that case, a `next_token` key in the response will indicate if more pages are
+      available, and that token can be used to fetch the next page. Defaults to True.
+    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

+Returns:
+    NodeResponse: The response object containing the property labels.

+Example:
+    ```python
+    response = node.fetch_property_labels(node_dcids="geoId/06")
+    print(response)
+    ```
+ +
fetch_property_values(self, node_dcids: str | list[str], properties: str | list[str], constraints: Optional[str] = None, out: bool = True, *, all_pages: bool = True, next_token: Optional[str] = None) -> datacommons_client.endpoints.response.NodeResponse
Fetches the values of specific properties for given nodes.

+Args:
+    node_dcids (str | List[str]): The DCID(s) of the nodes to query.
+    properties (str | List[str]): The property or relation expression(s) to query.
+    constraints (Optional[str]): Additional constraints for the query. Defaults to None.
+    out (bool): Whether to fetch outgoing properties. Defaults to True.
+    all_pages: If True, fetch all pages of the response. If False, fetch only the first page;
+      in that case, a `next_token` key in the response will indicate if more pages are
+      available, and that token can be used to fetch the next page. Defaults to True.
+    next_token: Optionally, the token to fetch the next page of results. Defaults to None.


+Returns:
+    NodeResponse: The response object containing the property values.

+Example:
+    ```python
+    response = node.fetch_property_values(
+        node_dcids=["geoId/06"],
+        properties="name",
+        out=True
+    )
+    print(response)
+    ```
+ +
+Methods inherited from datacommons_client.endpoints.base.Endpoint:
+
__repr__(self) -> str
Returns a readable representation of the Endpoint object.

+Shows the endpoint and underlying API configuration.

+Returns:
+    str: A string representation of the Endpoint object.
+ +
post(self, payload: dict[str, typing.Any], all_pages: bool = True, next_token: Optional[str] = None) -> Dict[str, Any]
Makes a POST request to the specified endpoint using the API instance.

+Args:
+    payload: The JSON payload for the POST request.
+    all_pages: If True, fetch all pages of the response. If False, fetch only the first page;
+        in that case, a `next_token` key in the response will indicate if more pages are
+        available, and that token can be used to fetch the next page. Defaults to True.
+    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

+Returns:
+    A dictionary with the merged API response data.

+Raises:
+    ValueError: If the payload is not a valid dictionary.
+ +
+Data descriptors inherited from datacommons_client.endpoints.base.Endpoint:
+
__dict__
+
dictionary for instance variables
+
+
__weakref__
+
list of weak references to the object
+
+

 
class ObservationEndpoint(datacommons_client.endpoints.base.Endpoint)
   ObservationEndpoint(api: datacommons_client.endpoints.base.API)

+A class to interact with the observation API endpoint.

+Args:
+    api (API): The API instance providing the environment configuration
+        (base URL, headers, authentication) to be used for requests.
 
 
Method resolution order:
+
ObservationEndpoint
+
datacommons_client.endpoints.base.Endpoint
+
builtins.object
+
+
+Methods defined here:
+
__init__(self, api: datacommons_client.endpoints.base.API)
Initializes the ObservationEndpoint instance.
+ +
fetch(self, variable_dcids: str | list[str], date: datacommons_client.endpoints.payloads.ObservationDate | str = <ObservationDate.LATEST: 'LATEST'>, select: Optional[list[datacommons_client.endpoints.payloads.ObservationSelect | str]] = None, entity_dcids: Union[str, list[str], NoneType] = None, entity_expression: Optional[str] = None, filter_facet_domains: Union[str, list[str], NoneType] = None, filter_facet_ids: Union[str, list[str], NoneType] = None) -> datacommons_client.endpoints.response.ObservationResponse
Fetches data from the observation endpoint.

+Args:
+    variable_dcids (str | list[str]): One or more variable IDs for the data.
+    date (str | ObservationDate): The date for which data is being requested.
+        Defaults to the latest observation.
+    select (list[ObservationSelect]): Fields to include in the response.
+        Defaults to ["date", "variable", "entity", "value"].
+    entity_dcids (Optional[str | list[str]]): One or more entity IDs to filter the data.
+    entity_expression (Optional[str]): A string expression to filter entities.
+    filter_facet_domains (Optional[str | list[str]]): One or more domain names to filter the data.
+    filter_facet_ids (Optional[str | list[str]]): One or more facet IDs to filter the data.

+Returns:
+    ObservationResponse: The response object containing observations for the specified query.
+ +
fetch_available_statistical_variables(self, entity_dcids: str | list[str]) -> dict[str, list[str]]
Fetches available statistical variables (which have observations) for given entities.
+Args:
+    entity_dcids (str | list[str]): One or more entity DCIDs to fetch variables for.
+Returns:
+    dict[str, list[str]]: A dictionary mapping entity DCIDs to their available statistical variables.
+ +
fetch_observations_by_entity_dcid(self, date: datacommons_client.endpoints.payloads.ObservationDate | str, entity_dcids: str | list[str], variable_dcids: str | list[str], *, select: Optional[list[datacommons_client.endpoints.payloads.ObservationSelect | str]] = None, filter_facet_domains: Union[str, list[str], NoneType] = None, filter_facet_ids: Union[str, list[str], NoneType] = None) -> datacommons_client.endpoints.response.ObservationResponse
Fetches all observations for the given entities.

+Args:
+    date (ObservationDate | str): The date option for the observations.
+        Use 'all' for all dates, 'latest' for the most recent data,
+        or provide a date as a string (e.g., "2024").
+    entity_dcids (str | list[str]): One or more entity IDs to filter the data.
+    variable_dcids (str | list[str]): The variable(s) to fetch observations for.
+        This can be a single variable ID or a list of IDs.
+    select (Optional[list[ObservationSelect | str]]): Fields to include in the response.
+        If not provided, defaults to ["date", "variable", "entity", "value"].
+    filter_facet_domains (Optional[str | list[str]]): One or more domain names to filter the data.
+    filter_facet_ids (Optional[str | list[str]]): One or more facet IDs to filter the data.

+Returns:
+    ObservationResponse: The response object containing observations for the specified entities.

+Example:
+    To fetch all observations for Nigeria for a specific variable:

+    ```python
+    api = API()
+    ObservationEndpoint(api).fetch_observations_by_entity_dcid(
+        date="all",
+        entity_dcids="country/NGA",
+        variable_dcids="sdg/SI_POV_DAY1"
+    )
+    ```
+ +
fetch_observations_by_entity_type(self, date: datacommons_client.endpoints.payloads.ObservationDate | str, parent_entity: str, entity_type: str, variable_dcids: str | list[str], *, select: Optional[list[datacommons_client.endpoints.payloads.ObservationSelect | str]] = None, filter_facet_domains: Union[str, list[str], NoneType] = None, filter_facet_ids: Union[str, list[str], NoneType] = None) -> datacommons_client.endpoints.response.ObservationResponse
Fetches all observations for a given entity type.

+Args:
+    date (ObservationDate | str): The date option for the observations.
+        Use 'all' for all dates, 'latest' for the most recent data,
+        or provide a date as a string (e.g., "2024").
+    parent_entity (str): The parent entity under which the target entities fall.
+        For example, "africa" for African countries, or "Earth" for all countries.
+    entity_type (str): The type of entities for which to fetch observations.
+        For example, "Country" or "Region".
+    variable_dcids (str | list[str]): The variable(s) to fetch observations for.
+        This can be a single variable ID or a list of IDs.
+    select (Optional[list[ObservationSelect | str]]): Fields to include in the response.
+        If not provided, defaults to ["date", "variable", "entity", "value"].
+    filter_facet_domains (Optional[str | list[str]]): One or more domain names to filter the data.
+    filter_facet_ids (Optional[str | list[str]]): One or more facet IDs to filter the data.

+Returns:
+    ObservationResponse: The response object containing observations for the specified entity type.

+Example:
+    To fetch all observations for African countries for a specific variable:

+    ```python
+    api = API()
+    ObservationEndpoint(api).fetch_observations_by_entity_type(
+        date="all",
+        parent_entity="africa",
+        entity_type="Country",
+        variable_dcids="sdg/SI_POV_DAY1"
+    )
+    ```
+ +
+Methods inherited from datacommons_client.endpoints.base.Endpoint:
+
__repr__(self) -> str
Returns a readable representation of the Endpoint object.

+Shows the endpoint and underlying API configuration.

+Returns:
+    str: A string representation of the Endpoint object.
+ +
post(self, payload: dict[str, typing.Any], all_pages: bool = True, next_token: Optional[str] = None) -> Dict[str, Any]
Makes a POST request to the specified endpoint using the API instance.

+Args:
+    payload: The JSON payload for the POST request.
+    all_pages: If True, fetch all pages of the response. If False, fetch only the first page;
+        in that case, a `next_token` key in the response will indicate if more pages are
+        available, and that token can be used to fetch the next page. Defaults to True.
+    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

+Returns:
+    A dictionary with the merged API response data.

+Raises:
+    ValueError: If the payload is not a valid dictionary.
+ +
+Data descriptors inherited from datacommons_client.endpoints.base.Endpoint:
+
__dict__
+
dictionary for instance variables
+
+
__weakref__
+
list of weak references to the object
+
+

 
class ResolveEndpoint(datacommons_client.endpoints.base.Endpoint)
   ResolveEndpoint(api: datacommons_client.endpoints.base.API)

+A class to interact with the resolve API endpoint.

+Args:
+    api (API): The API instance providing the environment configuration
+        (base URL, headers, authentication) to be used for requests.
 
 
Method resolution order:
+
ResolveEndpoint
+
datacommons_client.endpoints.base.Endpoint
+
builtins.object
+
+
+Methods defined here:
+
__init__(self, api: datacommons_client.endpoints.base.API)
Initializes the ResolveEndpoint instance.
+ +
fetch(self, node_ids: str | list[str], expression: str | list[str]) -> datacommons_client.endpoints.response.ResolveResponse
Fetches resolved data for the given nodes and expressions, identified by name,
+ coordinates, or wiki ID.

+Args:
+    node_ids (str | list[str]): One or more node IDs to resolve.
+    expression (str | list[str]): The relation expression(s) to query.

+Returns:
+    ResolveResponse: The response object containing the resolved data.
+ +
fetch_dcid_by_coordinates(self, latitude: str, longitude: str, entity_type: Optional[str] = None) -> datacommons_client.endpoints.response.ResolveResponse
Fetches DCIDs for entities by their geographic coordinates.

+Args:
+    latitude (str): Latitude of the entity.
+    longitude (str): Longitude of the entity.
+    entity_type (Optional[str]): Optional type of the entities to refine results
+    (e.g., "City", "State", "Country").

+Returns:
+    ResolveResponse: The response object containing the resolved DCIDs.

+Example:
+    To find the DCID for "Mountain View" using its latitude and longitude:
+    ```python
+    latitude = "37.42"
+    longitude = "-122.08"
+    response = client.fetch_dcid_by_coordinates(latitude=latitude, longitude=longitude)
+    print(response.entities)
+    ```
+    Note:
+     - For ambiguous results, providing an entity type (e.g., "City") can help disambiguate.
+     - The coordinates should be passed as strings in decimal format (e.g., "37.42", "-122.08").
+ +
fetch_dcids_by_name(self, names: str | list[str], entity_type: Optional[str] = None) -> datacommons_client.endpoints.response.ResolveResponse
Fetches DCIDs for entities by their names.

+Args:
+    names (str | list[str]): One or more entity names to resolve.
+    entity_type (Optional[str]): Optional type of the entities.

+Returns:
+    ResolveResponse: The response object containing the resolved DCIDs.
+ +
fetch_dcids_by_wikidata_id(self, wikidata_ids: str | list[str], entity_type: Optional[str] = None) -> datacommons_client.endpoints.response.ResolveResponse
Fetches DCIDs for entities by their Wikidata IDs.

+Args:
+    wikidata_ids (str | list[str]): One or more Wikidata IDs to resolve.
+    entity_type (Optional[str]): Optional type of the entities.

+Returns:
+    ResolveResponse: The response object containing the resolved DCIDs.
+ +
+Methods inherited from datacommons_client.endpoints.base.Endpoint:
+
__repr__(self) -> str
Returns a readable representation of the Endpoint object.

+Shows the endpoint and underlying API configuration.

+Returns:
+    str: A string representation of the Endpoint object.
+ +
post(self, payload: dict[str, typing.Any], all_pages: bool = True, next_token: Optional[str] = None) -> Dict[str, Any]
Makes a POST request to the specified endpoint using the API instance.

+Args:
+    payload: The JSON payload for the POST request.
+    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
+        Defaults to True. Set to False to only fetch the first page. In that case, a
+        `next_token` key in the response will indicate if more pages are available.
+        That token can be used to fetch the next page.
+    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

+Returns:
+    A dictionary with the merged API response data.

+Raises:
+    ValueError: If the payload is not a valid dictionary.
+ +
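The `all_pages`/`next_token` contract described above amounts to a simple follow-the-token loop. A minimal sketch, with `fetch_page`/`fake_fetch` as stand-ins for the real endpoint (the response keys here are illustrative, not the library's exact schema):

```python
from typing import Any, Callable, Dict, Optional

def fetch_all_pages(fetch_page: Callable[..., Dict[str, Any]],
                    payload: Dict[str, Any]) -> list:
    """Collect results by following next_token until no more pages remain."""
    results = []
    next_token: Optional[str] = None
    while True:
        page = fetch_page(payload, next_token=next_token)
        results.extend(page["data"])
        next_token = page.get("nextToken")
        if not next_token:
            break
    return results

# A stubbed two-page endpoint, for illustration only.
def fake_fetch(payload, next_token=None):
    if next_token is None:
        return {"data": [1, 2], "nextToken": "abc"}
    return {"data": [3]}

print(fetch_all_pages(fake_fetch, {}))  # [1, 2, 3]
```

Setting `all_pages=False` corresponds to stopping after the first iteration and surfacing `next_token` to the caller.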
+Data descriptors inherited from datacommons_client.endpoints.base.Endpoint:
+
__dict__
+
dictionary for instance variables
+
+
__weakref__
+
list of weak references to the object
+
+

+ + + + + +
 
Data
       __all__ = ['DataCommonsClient', 'API', 'NodeEndpoint', 'ObservationEndpoint', 'ResolveEndpoint']
\ No newline at end of file

diff --git a/api/python/v2/node.md b/api/python/v2/node.md
index 1c76b1a50..92bfd13bc 100644
--- a/api/python/v2/node.md
+++ b/api/python/v2/node.md
@@ -736,7 +736,7 @@ fetch_place_children(place_dcids, children_type, as_dict)
 
 | Name | Type | Description |
 |---------------|-------|----------------|
-| place_dcids
Required | string or list of strings | One or more place entities whose direct parents you want to look up. | +| place_dcids
Required | string or list of strings | One or more place entities whose direct children you want to look up. | | children_type
Optional | string | The type of the child entities to fetch, for example, `Country`, `State`, `IPCCPlace_50`. If not specified, fetches all child types. This option is useful for cases where the input place may have direct links from various entities, and you only want a specific entity type. For example, in the case of the United States, states, counties, and some cities are directly linked to the `country/USA` entity, while others are not; if you only want states, set this option to `State`. | | as_dict
Optional | bool | Whether to return the response as a dictionary mapping each input DCID to a dict of child entities (when set to `True`), or a dictionary mapping each input DCID to a list of child `NodeResponse` objects (when set to `False`). Defaults to `True`. |
{: .doc-table }

From 504d852398e98312806ace14245378ff2eacfa36 Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Wed, 11 Jun 2025 20:39:31 -0700
Subject: [PATCH 003/121] remove extra file

---
 api/python/v2/datacommons_client.html | 644 --------------------------
 1 file changed, 644 deletions(-)
 delete mode 100644 api/python/v2/datacommons_client.html

diff --git a/api/python/v2/datacommons_client.html b/api/python/v2/datacommons_client.html
deleted file mode 100644
index c76f71b31..000000000
--- a/api/python/v2/datacommons_client.html
+++ /dev/null
@@ -1,644 +0,0 @@
-Python: package datacommons_client
 
datacommons_client (version 2.1.0)
index
/usr/local/google/home/kmoscoe/api-python/datacommons_client/__init__.py
-

-

- - - - - -
 
Package Contents
       
client
-
endpoints (package)
-
models (package)
-
utils (package)
-

- - - - - -
 
Classes
       
-
builtins.object -
-
-
datacommons_client.client.DataCommonsClient -
datacommons_client.endpoints.base.API -
-
-
datacommons_client.endpoints.base.Endpoint(builtins.object) -
-
-
datacommons_client.endpoints.node.NodeEndpoint -
datacommons_client.endpoints.observation.ObservationEndpoint -
datacommons_client.endpoints.resolve.ResolveEndpoint -
-
-
-

- - - - - - - -
 
class API(builtins.object)
   API(api_key: Optional[str] = None, dc_instance: Optional[str] = None, url: Optional[str] = None)

-Represents a configured API interface to the Data Commons API.

-This class handles environment setup, resolving the base URL, building headers,
-or optionally using a fully qualified URL directly. It can be used standalone
-to interact with the API or in combination with Endpoint classes.
 
 Methods defined here:
-
__init__(self, api_key: Optional[str] = None, dc_instance: Optional[str] = None, url: Optional[str] = None)
Initializes the API instance.

-Args:
-    api_key: The API key for authentication. Defaults to None.
-    dc_instance: The Data Commons instance domain. Ignored if `url` is provided.
-                 Defaults to 'datacommons.org' if both `url` and `dc_instance` are None.
-    url: A fully qualified URL for the base API. This may be useful if more granular control
-        of the API is required (for local development, for example). If provided, `dc_instance`
-        should not be provided.

-Raises:
-    ValueError: If both `dc_instance` and `url` are provided.
- -
__repr__(self) -> str
Returns a readable representation of the API object.

-Indicates the base URL and if it's authenticated.

-Returns:
-    str: A string representation of the API object.
- -
post(self, payload: dict[str, typing.Any], endpoint: Optional[str] = None, *, all_pages: bool = True, next_token: Optional[str] = None) -> Dict[str, Any]
Makes a POST request using the configured API environment.

-If `endpoint` is provided, it will be appended to the base_url. Otherwise,
-it will just POST to the base URL.

-Args:
-    payload: The JSON payload for the POST request.
-    endpoint: An optional endpoint path to append to the base URL.
-    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
-        Defaults to True. Set to False to only fetch the first page. In that case, a
-        `next_token` key in the response will indicate if more pages are available.
-        That token can be used to fetch the next page.

-Returns:
-    A dictionary containing the merged response data.

-Raises:
-    ValueError: If the payload is not a valid dictionary.
- -
-Data descriptors defined here:
-
__dict__
-
dictionary for instance variables
-
-
__weakref__
-
list of weak references to the object
-
-

- - - - - - - -
 
class DataCommonsClient(builtins.object)
   DataCommonsClient(api_key: Optional[str] = None, *, dc_instance: Optional[str] = 'datacommons.org', url: Optional[str] = None)

-A client for interacting with the Data Commons API.

-This class provides convenient access to the V2 Data Commons API endpoints.

-Attributes:
-    api (API): An instance of the API class that handles requests.
-    node (NodeEndpoint): Provides access to node-related queries, such as fetching property labels
-        and values for individual or multiple nodes in the Data Commons knowledge graph.
-    observation (ObservationEndpoint): Handles observation-related queries, allowing retrieval of
-        statistical observations associated with entities, variables, and dates (e.g., GDP of California in 2010).
-    resolve (ResolveEndpoint): Manages resolution queries to find different DCIDs for entities.
 
 Methods defined here:
-
__init__(self, api_key: Optional[str] = None, *, dc_instance: Optional[str] = 'datacommons.org', url: Optional[str] = None)
Initializes the DataCommonsClient.

-Args:
-    api_key (Optional[str]): The API key for authentication. Defaults to None. Note that
-        custom DC instances do not currently require an API key.
-    dc_instance (Optional[str]): The Data Commons instance to use. Defaults to "datacommons.org".
-    url (Optional[str]): A custom, fully resolved URL for the Data Commons API. Defaults to None.
- -
observations_dataframe(self, variable_dcids: str | list[str], date: datacommons_client.endpoints.payloads.ObservationDate | str, entity_dcids: Union[Literal['all'], list[str]] = 'all', entity_type: Optional[str] = None, parent_entity: Optional[str] = None, property_filters: Optional[dict[str, str | list[str]]] = None)
Fetches statistical observations and returns them as a Pandas DataFrame.

-The Observation API fetches statistical observations linked to entities and variables
-at a particular date (e.g., "population of USA in 2020", "GDP of California in 2010").

-Args:
-    variable_dcids (str | list[str]): One or more variable DCIDs for the observation.
-    date (ObservationDate | str): The date for which observations are requested. It can be
-        a specific date, "all" to retrieve all observations, or "latest" to get the most recent observations.
-    entity_dcids (Literal["all"] | list[str], optional): The entity DCIDs for which to retrieve data.
-        Defaults to "all".
-    entity_type (Optional[str]): The type of entities to filter by when `entity_dcids="all"`.
-        Required if `entity_dcids="all"`. Defaults to None.
-    parent_entity (Optional[str]): The parent entity under which the target entities fall.
-        Required if `entity_dcids="all"`. Defaults to None.
-    property_filters (Optional[dict[str, str | list[str]]]): An optional dictionary used to filter
-        the data using observation properties like `measurementMethod`, `unit`, or `observationPeriod`.

-Returns:
-    pd.DataFrame: A DataFrame containing the requested observations.
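Conceptually, the returned DataFrame is a flat table of (variable, entity, date, value) rows. A minimal sketch of that flattening in plain Python, assuming a hypothetical nested response shape (the real `ObservationResponse` structure differs):

```python
# Hypothetical nested shape: variable -> entity -> list of (date, value) pairs.
observations = {
    "sdg/SI_POV_DAY1": {
        "country/NGA": [("2018", 39.1), ("2019", 39.1)],
    },
}

# Flatten the nesting into one row per observation.
rows = [
    {"variable": var, "entity": ent, "date": date, "value": value}
    for var, by_entity in observations.items()
    for ent, series in by_entity.items()
    for date, value in series
]
print(len(rows))  # 2
```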
- -
-Data descriptors defined here:
-
__dict__
-
dictionary for instance variables
-
-
__weakref__
-
list of weak references to the object
-
-

- - - - - - - -
 
class NodeEndpoint(datacommons_client.endpoints.base.Endpoint)
   NodeEndpoint(api: datacommons_client.endpoints.base.API)

-Initializes the NodeEndpoint with a given API configuration.

-Args:
-    api (API): The API instance providing the environment configuration
-        (base URL, headers, authentication) to be used for requests.
 
 
Method resolution order:
-
NodeEndpoint
-
datacommons_client.endpoints.base.Endpoint
-
builtins.object
-
-
-Methods defined here:
-
__getattr__(self, name)
- -
__init__(self, api: datacommons_client.endpoints.base.API)
Initializes the NodeEndpoint with a given API configuration.
- -
fetch(self, node_dcids: str | list[str], expression: str, *, all_pages: bool = True, next_token: Optional[str] = None) -> datacommons_client.endpoints.response.NodeResponse
Fetches properties or arcs for given nodes and properties.

-Args:
-    node_dcids (str | List[str]): The DCID(s) of the nodes to query.
-    expression (str): The property or relation expression(s) to query.
-    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
-      Defaults to True. Set to False to only fetch the first page. In that case, a
-      `next_token` key in the response will indicate if more pages are available.
-      That token can be used to fetch the next page.
-    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

-Returns:
-    NodeResponse: The response object containing the queried data.

-Example:
-    ```python
-    response = node.fetch(
-        node_dcids=["geoId/06"],
-        expression="<-"
-    )
-    print(response)
-    ```
- -
fetch_all_classes(self, *, all_pages: bool = True, next_token: Optional[str] = None) -> datacommons_client.endpoints.response.NodeResponse
Fetches all Classes available in the Data Commons knowledge graph.

-Args:
-  all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
-      Defaults to True. Set to False to only fetch the first page. In that case, a
-      `next_token` key in the response will indicate if more pages are available.
-      That token can be used to fetch the next page.
-  next_token: Optionally, the token to fetch the next page of results. Defaults to None.


-Returns:
-    NodeResponse: The response object containing all statistical variables.

-Example:
-    ```python
-    response = node.fetch_all_classes()
-    print(response)
-    ```
- -
fetch_entity_names(self, entity_dcids: str | list[str], language: Optional[str] = 'en', fallback_language: Optional[str] = None) -> dict[str, datacommons_client.models.node.Name]
Fetches entity names in the specified language, with an optional fallback language.
-Args:
-  entity_dcids: A single DCID or a list of DCIDs to fetch names for.
-  language: Language code (e.g., "en", "es"). Defaults to "en" (DEFAULT_NAME_LANGUAGE).
-  fallback_language: If provided, this language will be used as a fallback if the requested
-    language is not available. If not provided, no fallback will be used.
-Returns:
-  A dictionary mapping each DCID to a dictionary with the mapped name, language, and
-    the property used.
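The fallback behavior described above is a simple preference between languages. A hedged sketch of that selection logic (the dict shapes below are illustrative, not the library's actual name models):

```python
def pick_name(candidates, language="en", fallback_language=None):
    """Return the name entry in the requested language, else the fallback, else None."""
    by_lang = {c["language"]: c for c in candidates}
    if language in by_lang:
        return by_lang[language]
    if fallback_language is not None:
        return by_lang.get(fallback_language)
    return None

# Illustrative candidate names for one entity.
names = [
    {"value": "Californie", "language": "fr", "property": "nameWithLanguage"},
    {"value": "California", "language": "en", "property": "name"},
]
print(pick_name(names, language="es", fallback_language="en")["value"])  # California
```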
- -
fetch_place_ancestors(self, place_dcids: str | list[str], as_tree: bool = False, *, max_concurrent_requests: Optional[int] = 10) -> dict[str, list[dict[str, str]] | dict]
Fetches the full ancestry (flat or nested) for one or more entities.
-For each input DCID, this method builds the complete ancestry graph using a
-breadth-first traversal and parallel fetching.
-It returns either a flat list of unique parents or a nested tree structure for
-each entity, depending on the `as_tree` flag. The flat list matches the structure
-of the `/api/place/parent` endpoint of the DC website.
-Args:
-    place_dcids (str | list[str]): One or more DCIDs of the entities whose ancestry
-       will be fetched.
-    as_tree (bool): If True, returns a nested tree structure; otherwise, returns a flat list.
-        Defaults to False.
-    max_concurrent_requests (Optional[int]): The maximum number of concurrent requests to make.
-        Defaults to PLACES_MAX_WORKERS.
-Returns:
-    dict[str, list[dict[str, str]] | dict]: A dictionary mapping each input DCID to either:
-        - A flat list of parent dictionaries (if `as_tree` is False), or
-        - A nested ancestry tree (if `as_tree` is True). Each parent is represented by
-          a dict with 'dcid', 'name', and 'type'.
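The breadth-first ancestry construction described here can be approximated in plain Python. In this sketch, `get_parents` is a stand-in for the API lookup (the real method fetches parents in parallel), and the toy graph is illustrative only:

```python
from collections import deque

def build_flat_ancestry(dcid, get_parents):
    """Breadth-first walk up the graph, collecting each unique parent once."""
    seen = set()
    flat = []
    queue = deque([dcid])
    while queue:
        current = queue.popleft()
        for parent in get_parents(current):
            if parent["dcid"] not in seen:
                seen.add(parent["dcid"])
                flat.append(parent)
                queue.append(parent["dcid"])
    return flat

# Toy parent graph, for illustration only.
parents = {
    "geoId/0649670": [{"dcid": "geoId/06085", "name": "Santa Clara County", "type": "County"}],
    "geoId/06085": [{"dcid": "geoId/06", "name": "California", "type": "State"}],
    "geoId/06": [{"dcid": "country/USA", "name": "United States", "type": "Country"}],
}
ancestors = build_flat_ancestry("geoId/0649670", lambda d: parents.get(d, []))
print([a["dcid"] for a in ancestors])  # ['geoId/06085', 'geoId/06', 'country/USA']
```

The `as_tree=True` variant nests each parent's own ancestry instead of flattening it.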
- -
fetch_place_children(self, place_dcids: str | list[str], *, children_type: Optional[str] = None, as_dict: bool = True) -> dict[str, list[datacommons_client.models.node.Node | dict]]
Fetches the direct children of one or more entities using the 'containedInPlace' property.

-Args:
-    place_dcids (str | list[str]): A single place DCID or a list of DCIDs to query.
-    children_type (str, optional): The type of the child entities to
-        fetch (e.g., 'Country', 'State', 'IPCCPlace_50'). If None, fetches all child types.
-    as_dict (bool): If True, returns a dictionary mapping each input DCID to its
-        immediate children entities. If False, returns a dictionary of Node objects.

-Returns:
-    dict[str, list[Node | dict]]: A dictionary mapping each input DCID to a list of its
-    immediate children. Each child is represented as a Node object or as a dictionary with
-    the same data.
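The `as_dict` switch described above is essentially a serialization choice. A minimal sketch with a stand-in `Node` dataclass (the real model lives in `datacommons_client.models.node` and carries more fields):

```python
from dataclasses import dataclass, asdict

@dataclass
class Node:
    dcid: str
    name: str

# Illustrative children of one input DCID.
children = {"country/USA": [Node("geoId/06", "California"), Node("geoId/48", "Texas")]}

def serialize(result, as_dict=True):
    """Return Node objects as-is, or convert each to a plain dict."""
    if not as_dict:
        return result
    return {k: [asdict(n) for n in v] for k, v in result.items()}

print(serialize(children)["country/USA"][0])  # {'dcid': 'geoId/06', 'name': 'California'}
```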
- -
fetch_place_descendants(self, place_dcids: str | list[str], descendants_type: Optional[str] = None, as_tree: bool = False, *, max_concurrent_requests: Optional[int] = 10) -> dict[str, list[dict[str, str]] | dict]
Fetches the full descendants (flat or nested) for one or more entities.
-For each input DCID, this method builds the complete descendants graph using a
-breadth-first traversal and parallel fetching.

-It returns either a flat list of unique children or a nested tree structure for
-each entity, depending on the `as_tree` flag.

-Args:
-    place_dcids (str | list[str]): One or more DCIDs of the entities whose descendants
-       will be fetched.
-    descendants_type (Optional[str]): The type of the descendants to fetch (e.g., 'Country', 'State').
-        If None, fetches all descendant types.
-    as_tree (bool): If True, returns a nested tree structure; otherwise, returns a flat list.
-        Defaults to False.
-    max_concurrent_requests (Optional[int]): The maximum number of concurrent requests to make.
-        Defaults to PLACES_MAX_WORKERS.
-Returns:
-    dict[str, list[dict[str, str]] | dict]: A dictionary mapping each input DCID to either:
-        - A flat list of Node dictionaries (if `as_tree` is False), or
-        - A nested ancestry tree (if `as_tree` is True). Each child is represented by
-          a dict.
- -
fetch_place_parents(self, place_dcids: str | list[str], *, as_dict: bool = True) -> dict[str, list[datacommons_client.models.node.Node | dict]]
Fetches the direct parents of one or more entities using the 'containedInPlace' property.

-Args:
-    place_dcids (str | list[str]): A single place DCID or a list of DCIDs to query.
-    as_dict (bool): If True, returns a dictionary mapping each input DCID to its
-        immediate parent entities. If False, returns a dictionary of Node objects.

-Returns:
-    dict[str, list[Node | dict]]: A dictionary mapping each input DCID to a list of its
-    immediate parent entities. Each parent is represented as a Node object or
-    as a dictionary with the same data.
- -
fetch_property_labels(self, node_dcids: str | list[str], out: bool = True, *, all_pages: bool = True, next_token: Optional[str] = None) -> datacommons_client.endpoints.response.NodeResponse
Fetches all property labels for the given nodes.

-Args:
-    node_dcids (str | list[str]): The DCID(s) of the nodes to query.
-    out (bool): Whether to fetch outgoing properties (`->`). Defaults to True.
-    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
-      Defaults to True. Set to False to only fetch the first page. In that case, a
-      `next_token` key in the response will indicate if more pages are available.
-      That token can be used to fetch the next page.
-    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

-Returns:
-    NodeResponse: The response object containing the property labels.

-Example:
-    ```python
-    response = node.fetch_property_labels(node_dcids="geoId/06")
-    print(response)
-    ```
- -
fetch_property_values(self, node_dcids: str | list[str], properties: str | list[str], constraints: Optional[str] = None, out: bool = True, *, all_pages: bool = True, next_token: Optional[str] = None) -> datacommons_client.endpoints.response.NodeResponse
Fetches the values of specific properties for given nodes.

-Args:
-    node_dcids (str | List[str]): The DCID(s) of the nodes to query.
-    properties (str | List[str]): The property or relation expression(s) to query.
-    constraints (Optional[str]): Additional constraints for the query. Defaults to None.
-    out (bool): Whether to fetch outgoing properties. Defaults to True.
-    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
-      Defaults to True. Set to False to only fetch the first page. In that case, a
-      `next_token` key in the response will indicate if more pages are available.
-      That token can be used to fetch the next page.
-    next_token: Optionally, the token to fetch the next page of results. Defaults to None.


-Returns:
-    NodeResponse: The response object containing the property values.

-Example:
-    ```python
-    response = node.fetch_property_values(
-        node_dcids=["geoId/06"],
-        properties="name",
-        out=True
-    )
-    print(response)
-    ```
- -
-Methods inherited from datacommons_client.endpoints.base.Endpoint:
-
__repr__(self) -> str
Returns a readable representation of the Endpoint object.

-Shows the endpoint and underlying API configuration.

-Returns:
-    str: A string representation of the Endpoint object.
- -
post(self, payload: dict[str, typing.Any], all_pages: bool = True, next_token: Optional[str] = None) -> Dict[str, Any]
Makes a POST request to the specified endpoint using the API instance.

-Args:
-    payload: The JSON payload for the POST request.
-    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
-        Defaults to True. Set to False to only fetch the first page. In that case, a
-        `next_token` key in the response will indicate if more pages are available.
-        That token can be used to fetch the next page.
-    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

-Returns:
-    A dictionary with the merged API response data.

-Raises:
-    ValueError: If the payload is not a valid dictionary.
- -
-Data descriptors inherited from datacommons_client.endpoints.base.Endpoint:
-
__dict__
-
dictionary for instance variables
-
-
__weakref__
-
list of weak references to the object
-
-

- - - - - - - -
 
class ObservationEndpoint(datacommons_client.endpoints.base.Endpoint)
   ObservationEndpoint(api: datacommons_client.endpoints.base.API)

-A class to interact with the observation API endpoint.

-Args:
-    api (API): The API instance providing the environment configuration
-        (base URL, headers, authentication) to be used for requests.
 
 
Method resolution order:
-
ObservationEndpoint
-
datacommons_client.endpoints.base.Endpoint
-
builtins.object
-
-
-Methods defined here:
-
__init__(self, api: datacommons_client.endpoints.base.API)
Initializes the ObservationEndpoint instance.
- -
fetch(self, variable_dcids: str | list[str], date: datacommons_client.endpoints.payloads.ObservationDate | str = <ObservationDate.LATEST: 'LATEST'>, select: Optional[list[datacommons_client.endpoints.payloads.ObservationSelect | str]] = None, entity_dcids: Union[str, list[str], NoneType] = None, entity_expression: Optional[str] = None, filter_facet_domains: Union[str, list[str], NoneType] = None, filter_facet_ids: Union[str, list[str], NoneType] = None) -> datacommons_client.endpoints.response.ObservationResponse
Fetches data from the observation endpoint.

-Args:
-    variable_dcids (str | list[str]): One or more variable IDs for the data.
-    date (str | ObservationDate): The date for which data is being requested.
-        Defaults to the latest observation.
-    select (list[ObservationSelect]): Fields to include in the response.
-        Defaults to ["date", "variable", "entity", "value"].
-    entity_dcids (Optional[str | list[str]]): One or more entity IDs to filter the data.
-    entity_expression (Optional[str]): A string expression to filter entities.
-    filter_facet_domains (Optional[str | list[str]]): One or more domain names to filter the data.
-    filter_facet_ids (Optional[str | list[str]]): One or more facet IDs to filter the data.

-Returns:
-    ObservationResponse: The response object containing observations for the specified query.
- -
fetch_available_statistical_variables(self, entity_dcids: str | list[str]) -> dict[str, list[str]]
Fetches available statistical variables (which have observations) for given entities.
-Args:
-    entity_dcids (str | list[str]): One or more entity DCID(s) to fetch variables for.
-Returns:
-    dict[str, list[str]]: A dictionary mapping entity DCIDs to their available statistical variables.
- -
fetch_observations_by_entity_dcid(self, date: datacommons_client.endpoints.payloads.ObservationDate | str, entity_dcids: str | list[str], variable_dcids: str | list[str], *, select: Optional[list[datacommons_client.endpoints.payloads.ObservationSelect | str]] = None, filter_facet_domains: Union[str, list[str], NoneType] = None, filter_facet_ids: Union[str, list[str], NoneType] = None) -> datacommons_client.endpoints.response.ObservationResponse
Fetches all observations for the given entities.

-Args:
-    date (ObservationDate | str): The date option for the observations.
-        Use 'all' for all dates, 'latest' for the most recent data,
-        or provide a date as a string (e.g., "2024").
-    entity_dcids (str | list[str]): One or more entity IDs to filter the data.
-    variable_dcids (str | list[str]): The variable(s) to fetch observations for.
-        This can be a single variable ID or a list of IDs.
-    select (Optional[list[ObservationSelect | str]]): Fields to include in the response.
-        If not provided, defaults to ["date", "variable", "entity", "value"].
-    filter_facet_domains (Optional[str | list[str]]): One or more domain names to filter the data.
-    filter_facet_ids (Optional[str | list[str]]): One or more facet IDs to filter the data.

-Returns:
-    ObservationResponse: The response object containing observations for the specified entities.

-Example:
-    To fetch all observations for Nigeria for a specific variable:

-    ```python
-    api = API()
-    ObservationEndpoint(api).fetch_observations_by_entity_dcid(
-        date="all",
-        entity_dcids="country/NGA",
-        variable_dcids="sdg/SI_POV_DAY1"
-    )
-    ```
- -
fetch_observations_by_entity_type(self, date: datacommons_client.endpoints.payloads.ObservationDate | str, parent_entity: str, entity_type: str, variable_dcids: str | list[str], *, select: Optional[list[datacommons_client.endpoints.payloads.ObservationSelect | str]] = None, filter_facet_domains: Union[str, list[str], NoneType] = None, filter_facet_ids: Union[str, list[str], NoneType] = None) -> datacommons_client.endpoints.response.ObservationResponse
Fetches all observations for a given entity type.

-Args:
-    date (ObservationDate | str): The date option for the observations.
-        Use 'all' for all dates, 'latest' for the most recent data,
-        or provide a date as a string (e.g., "2024").
-    parent_entity (str): The parent entity under which the target entities fall.
-        For example, "africa" for African countries, or "Earth" for all countries.
-    entity_type (str): The type of entities for which to fetch observations.
-        For example, "Country" or "Region".
-    variable_dcids (str | list[str]): The variable(s) to fetch observations for.
-        This can be a single variable ID or a list of IDs.
-    select (Optional[list[ObservationSelect | str]]): Fields to include in the response.
-        If not provided, defaults to ["date", "variable", "entity", "value"].
-    filter_facet_domains (Optional[str | list[str]]): One or more domain names to filter the data.
-    filter_facet_ids (Optional[str | list[str]]): One or more facet IDs to filter the data.

-Returns:
-    ObservationResponse: The response object containing observations for the specified entity type.

-Example:
-    To fetch all observations for African countries for a specific variable:

-    ```python
-    api = API()
-    ObservationEndpoint(api).fetch_observations_by_entity_type(
-        date="all",
-        parent_entity="africa",
-        entity_type="Country",
-        variable_dcids="sdg/SI_POV_DAY1"
-    )
-    ```
- -
-Methods inherited from datacommons_client.endpoints.base.Endpoint:
-
__repr__(self) -> str
Returns a readable representation of the Endpoint object.

-Shows the endpoint and underlying API configuration.

-Returns:
-    str: A string representation of the Endpoint object.
- -
post(self, payload: dict[str, typing.Any], all_pages: bool = True, next_token: Optional[str] = None) -> Dict[str, Any]
Makes a POST request to the specified endpoint using the API instance.

-Args:
-    payload: The JSON payload for the POST request.
-    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
-        Defaults to True. Set to False to only fetch the first page. In that case, a
-        `next_token` key in the response will indicate if more pages are available.
-        That token can be used to fetch the next page.
-    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

-Returns:
-    A dictionary with the merged API response data.

-Raises:
-    ValueError: If the payload is not a valid dictionary.
- -
-Data descriptors inherited from datacommons_client.endpoints.base.Endpoint:
-
__dict__
-
dictionary for instance variables
-
-
__weakref__
-
list of weak references to the object
-
-

- - - - - - - -
 
class ResolveEndpoint(datacommons_client.endpoints.base.Endpoint)
   ResolveEndpoint(api: datacommons_client.endpoints.base.API)

-A class to interact with the resolve API endpoint.

-Args:
-    api (API): The API instance providing the environment configuration
-        (base URL, headers, authentication) to be used for requests.
 
 
Method resolution order:
-
ResolveEndpoint
-
datacommons_client.endpoints.base.Endpoint
-
builtins.object
-
-
-Methods defined here:
-
__init__(self, api: datacommons_client.endpoints.base.API)
Initializes the ResolveEndpoint instance.
- -
fetch(self, node_ids: str | list[str], expression: str | list[str]) -> datacommons_client.endpoints.response.ResolveResponse
Fetches resolved data for the given nodes and expressions, identified by name,
- coordinates, or wiki ID.

-Args:
-    node_ids (str | list[str]): One or more node IDs to resolve.
-    expression (str | list[str]): The relation expression(s) to query.

-Returns:
-    ResolveResponse: The response object containing the resolved data.
- -
fetch_dcid_by_coordinates(self, latitude: str, longitude: str, entity_type: Optional[str] = None) -> datacommons_client.endpoints.response.ResolveResponse
Fetches DCIDs for entities by their geographic coordinates.

-Args:
-    latitude (str): Latitude of the entity.
-    longitude (str): Longitude of the entity.
-    entity_type (Optional[str]): Optional type of the entities to refine results
-    (e.g., "City", "State", "Country").

-Returns:
-    ResolveResponse: The response object containing the resolved DCIDs.

-Example:
-    To find the DCID for "Mountain View" using its latitude and longitude:
-    ```python
-    latitude = "37.42"
-    longitude = "-122.08"
-    response = client.fetch_dcid_by_coordinates(latitude=latitude, longitude=longitude)
-    print(response.entities)
-    ```
-    Note:
-     - For ambiguous results, providing an entity type (e.g., "City") can help disambiguate.
-     - The coordinates should be passed as strings in decimal format (e.g., "37.42", "-122.08").
- -
fetch_dcids_by_name(self, names: str | list[str], entity_type: Optional[str] = None) -> datacommons_client.endpoints.response.ResolveResponse
Fetches DCIDs for entities by their names.

-Args:
-    names (str | list[str]): One or more entity names to resolve.
-    entity_type (Optional[str]): Optional type of the entities.

-Returns:
-    ResolveResponse: The response object containing the resolved DCIDs.
- -
fetch_dcids_by_wikidata_id(self, wikidata_ids: str | list[str], entity_type: Optional[str] = None) -> datacommons_client.endpoints.response.ResolveResponse
Fetches DCIDs for entities by their Wikidata IDs.

-Args:
-    wikidata_ids (str | list[str]): One or more Wikidata IDs to resolve.
-    entity_type (Optional[str]): Optional type of the entities.

-Returns:
-    ResolveResponse: The response object containing the resolved DCIDs.
- -
-Methods inherited from datacommons_client.endpoints.base.Endpoint:
-
__repr__(self) -> str
Returns a readable representation of the Endpoint object.

-Shows the endpoint and underlying API configuration.

-Returns:
-    str: A string representation of the Endpoint object.
- -
post(self, payload: dict[str, typing.Any], all_pages: bool = True, next_token: Optional[str] = None) -> Dict[str, Any]
Makes a POST request to the specified endpoint using the API instance.

-Args:
-    payload: The JSON payload for the POST request.
-    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
-        Defaults to True. Set to False to only fetch the first page. In that case, a
-        `next_token` key in the response will indicate if more pages are available.
-        That token can be used to fetch the next page.
-    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

-Returns:
-    A dictionary with the merged API response data.

-Raises:
-    ValueError: If the payload is not a valid dictionary.
- -
-Data descriptors inherited from datacommons_client.endpoints.base.Endpoint:
-
__dict__
-
dictionary for instance variables
-
-
__weakref__
-
list of weak references to the object
-
-

- - - - - -
 
Data
       __all__ = ['DataCommonsClient', 'API', 'NodeEndpoint', 'ObservationEndpoint', 'ResolveEndpoint']
- \ No newline at end of file

From 156cff02ee76d98017b363dea88e409bc0cd6f8a Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Tue, 23 Sep 2025 10:32:31 -0700
Subject: [PATCH 004/121] Create placeholders for LLM pages

---
 index.md     | 2 ++
 llm/index.md | 0
 2 files changed, 2 insertions(+)
 create mode 100644 llm/index.md

diff --git a/index.md b/index.md
index 8abdf1c66..9c58af748 100644
--- a/index.md
+++ b/index.md
@@ -38,6 +38,8 @@ There are several options for directly querying the data, without accessing the
 
 Data Commons also provides ideal training data for developing machine learning models and other data science applications. We have developed a [Data science curriculum](/courseware/intro_data_science.html) featuring the Python APIs and data, currently in use at MIT.
 
+- **LLMs**: Data Commons provides a [Model Context Protocol](https://modelcontextprotocol.io/docs/getting-started/intro) Server so you can use any MCP-enabled AI agent/client to query the Data Commons data using [interactive, natural-language queries](/llm/index.html).
+
 - **Google Sheets Add-on**: You can load Data Commons data into Google Sheets for analysis and charting, using a familiar spreadsheet interface. Install and run the Data Commons Google [Sheets add-on](/api/sheets/index.html).
 ## Embed Data Commons visualizations in your website {#embed}

diff --git a/llm/index.md b/llm/index.md
new file mode 100644
index 000000000..e69de29bb

From a4e40a442e531b59561fe5b4ba79407b268bea05 Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Tue, 23 Sep 2025 12:16:35 -0700
Subject: [PATCH 005/121] rename to MCP

---
 mcp/index.md | 14 ++++++++++++++
 1 file changed, 14 insertions(+)
 create mode 100644 mcp/index.md

diff --git a/mcp/index.md b/mcp/index.md
new file mode 100644
index 000000000..10a271489
--- /dev/null
+++ b/mcp/index.md
@@ -0,0 +1,14 @@
+---
+layout: default
+title: MCP - Query data interactively with an AI agent
+nav_order: 20
+has_children: true
+---
+
+# MCP overview
+
+Data Commons has recently launched a [Model Context Protocol](https://modelcontextprotocol.io/docs/getting-started/intro) server, so you can use any Large Language Model (LLM), such as Google Gemini, and an MCP-enabled agent to interactively query Data Commons data. See the following pages for details:
+
+- [Quickstart: Use the Data Commons MCP Server with Gemini CLI](https://github.com/datacommonsorg/agent-toolkit/blob/main/docs/quickstart.md){: target="_blank"}
+- [User Guide](https://github.com/datacommonsorg/agent-toolkit/blob/main/docs/user_guide.md){: target="_blank"}
+

From 1fbd18761109cae4199427149b72e91af3082199 Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Tue, 23 Sep 2025 12:18:26 -0700
Subject: [PATCH 006/121] complete rename

---
 index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/index.md b/index.md
index bed898019..6270b3753 100644
--- a/index.md
+++ b/index.md
@@ -38,7 +38,7 @@ There are several options for directly querying the data, without accessing the
 
 Data Commons also provides ideal training data for developing machine learning models and other data science applications. We have developed a [Data science curriculum](/courseware/intro_data_science.html) featuring the Python APIs and data, currently in use at MIT.
-- **LLMs**: Data Commons provides a [Model Context Protocol](https://modelcontextprotocol.io/docs/getting-started/intro) Server so you can use any MCP-enabled AI agent/client to query the Data Commons data using [interactive, natural-language queries](/llm/index.html).
+- **LLMs**: Data Commons provides a [Model Context Protocol](https://modelcontextprotocol.io/docs/getting-started/intro) so you can use any Large Language Model (LLM), such as Google Gemini, and an MCP-enabled agent to [interactively query](/mcp/index.html) Data Commons data.
 
 - **Google Sheets Add-on**: You can load Data Commons data into Google Sheets for analysis and charting, using a familiar spreadsheet interface. Install and run the Data Commons Google [Sheets add-on](/api/sheets/index.html).

From 82228b1c4419c9ee9c798278d07a936493e6b1a7 Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Tue, 23 Sep 2025 13:07:45 -0700
Subject: [PATCH 007/121] remove old file

---
 llm/index.md | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 delete mode 100644 llm/index.md

diff --git a/llm/index.md b/llm/index.md
deleted file mode 100644
index e69de29bb..000000000

From e6565525f1ac68895bd29a11ac4db9cf7e67b234 Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Tue, 23 Sep 2025 13:44:44 -0700
Subject: [PATCH 008/121] Changes from Keyur

---
 index.md     | 2 +-
 mcp/index.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/index.md b/index.md
index 6270b3753..55103c6ae 100644
--- a/index.md
+++ b/index.md
@@ -38,7 +38,7 @@ There are several options for directly querying the data, without accessing the
 
 Data Commons also provides ideal training data for developing machine learning models and other data science applications. We have developed a [Data science curriculum](/courseware/intro_data_science.html) featuring the Python APIs and data, currently in use at MIT.
-- **LLMs**: Data Commons provides a [Model Context Protocol](https://modelcontextprotocol.io/docs/getting-started/intro) so you can use any Large Language Model (LLM), such as Google Gemini, and an MCP-enabled agent to [interactively query](/mcp/index.html) Data Commons data.
+- **LLMs**: Data Commons provides a [Model Context Protocol](https://modelcontextprotocol.io/docs/getting-started/intro) server so you can use any Large Language Model (LLM), such as Google Gemini, and an MCP-enabled agent to [interactively query](/mcp/index.html) Data Commons data.
 
 - **Google Sheets Add-on**: You can load Data Commons data into Google Sheets for analysis and charting, using a familiar spreadsheet interface. Install and run the Data Commons Google [Sheets add-on](/api/sheets/index.html).

diff --git a/mcp/index.md b/mcp/index.md
index 10a271489..9d934200c 100644
--- a/mcp/index.md
+++ b/mcp/index.md
@@ -7,7 +7,7 @@ has_children: true
 
 # MCP overview
 
-Data Commons has recently launched a [Model Context Protocol](https://modelcontextprotocol.io/docs/getting-started/intro) server, so you can use any Large Language Model (LLM), such as Google Gemini, and an MCP-enabled agent to interactively query Data Commons data. See the following pages for details:
+Data Commons has recently launched a [Model Context Protocol](https://github.com/datacommonsorg/agent-toolkit) Server, so you can use any Large Language Model (LLM), such as Google Gemini, and an MCP-enabled agent to interactively query Data Commons data. See the following pages for details:
 
 - [Quickstart: Use the Data Commons MCP Server with Gemini CLI](https://github.com/datacommonsorg/agent-toolkit/blob/main/docs/quickstart.md){: target="_blank"}
 - [User Guide](https://github.com/datacommonsorg/agent-toolkit/blob/main/docs/user_guide.md){: target="_blank"}

From 91013251d29539b2fff91bb6786e7ac749d6e420 Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Tue, 23 Sep 2025 13:46:39 -0700
Subject: [PATCH 009/121] Add target tag

---
 mcp/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mcp/index.md b/mcp/index.md
index 9d934200c..5f8bb7bf3 100644
--- a/mcp/index.md
+++ b/mcp/index.md
@@ -7,7 +7,7 @@ has_children: true
 
 # MCP overview
 
-Data Commons has recently launched a [Model Context Protocol](https://github.com/datacommonsorg/agent-toolkit) Server, so you can use any Large Language Model (LLM), such as Google Gemini, and an MCP-enabled agent to interactively query Data Commons data. See the following pages for details:
+Data Commons has recently launched a [Model Context Protocol](https://github.com/datacommonsorg/agent-toolkit){: target="_blank"} Server, so you can use any Large Language Model (LLM), such as Google Gemini, and an MCP-enabled agent to interactively query Data Commons data. See the following pages for details:
 
 - [Quickstart: Use the Data Commons MCP Server with Gemini CLI](https://github.com/datacommonsorg/agent-toolkit/blob/main/docs/quickstart.md){: target="_blank"}
 - [User Guide](https://github.com/datacommonsorg/agent-toolkit/blob/main/docs/user_guide.md){: target="_blank"}

From d003cc14e817d6c9f05a2b8f1759ab2337a3be3a Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Tue, 23 Sep 2025 13:52:13 -0700
Subject: [PATCH 010/121] Rewording suggested by Keyur

---
 index.md     | 4 +++-
 mcp/index.md | 2 +-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/index.md b/index.md
index 55103c6ae..a2830370d 100644
--- a/index.md
+++ b/index.md
@@ -38,7 +38,9 @@ There are several options for directly querying the data, without accessing the
 
 Data Commons also provides ideal training data for developing machine learning models and other data science applications. We have developed a [Data science curriculum](/courseware/intro_data_science.html) featuring the Python APIs and data, currently in use at MIT.
 
-- **LLMs**: Data Commons provides a [Model Context Protocol](https://modelcontextprotocol.io/docs/getting-started/intro) server so you can use any Large Language Model (LLM), such as Google Gemini, and an MCP-enabled agent to [interactively query](/mcp/index.html) Data Commons data.
+- **LLMs**: Data Commons provides a [Model Context Protocol](https://modelcontextprotocol.io/docs/getting-started/intro) server. This allows you to use any MCP-enabled agent, powered by a Large Language Model (LLM) such as Google Gemini, to [interactively query](/mcp/index.html) Data Commons data.
+
+Data Commons has recently launched a Model Context Protocol server. This allows you to use any MCP-enabled agent, powered by a Large Language Model (LLM) like Google Gemini, to interactively query Data Commons data.
 
 - **Google Sheets Add-on**: You can load Data Commons data into Google Sheets for analysis and charting, using a familiar spreadsheet interface. Install and run the Data Commons Google [Sheets add-on](/api/sheets/index.html).

diff --git a/mcp/index.md b/mcp/index.md
index 5f8bb7bf3..ba27ca0c3 100644
--- a/mcp/index.md
+++ b/mcp/index.md
@@ -7,7 +7,7 @@ has_children: true
 
 # MCP overview
 
-Data Commons has recently launched a [Model Context Protocol](https://github.com/datacommonsorg/agent-toolkit){: target="_blank"} Server, so you can use any Large Language Model (LLM), such as Google Gemini, and an MCP-enabled agent to interactively query Data Commons data. See the following pages for details:
+Data Commons has recently launched a [Model Context Protocol](https://github.com/datacommonsorg/agent-toolkit){: target="_blank"} server. This allows you to use any MCP-enabled agent, powered by a Large Language Model (LLM) like Google Gemini, to interactively query Data Commons data. See the following pages for details:
 
 - [Quickstart: Use the Data Commons MCP Server with Gemini CLI](https://github.com/datacommonsorg/agent-toolkit/blob/main/docs/quickstart.md){: target="_blank"}
 - [User Guide](https://github.com/datacommonsorg/agent-toolkit/blob/main/docs/user_guide.md){: target="_blank"}

From 6b949480e4cec041800d70966fbd71499e277774 Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Tue, 23 Sep 2025 13:53:10 -0700
Subject: [PATCH 011/121] Remove extraneous text

---
 index.md | 2 --
 1 file changed, 2 deletions(-)

diff --git a/index.md b/index.md
index a2830370d..d62640693 100644
--- a/index.md
+++ b/index.md
@@ -40,8 +40,6 @@ There are several options for directly querying the data, without accessing the
 
 - **LLMs**: Data Commons provides a [Model Context Protocol](https://modelcontextprotocol.io/docs/getting-started/intro) server. This allows you to use any MCP-enabled agent, powered by a Large Language Model (LLM) such as Google Gemini, to [interactively query](/mcp/index.html) Data Commons data.
 
-Data Commons has recently launched a Model Context Protocol server. This allows you to use any MCP-enabled agent, powered by a Large Language Model (LLM) like Google Gemini, to interactively query Data Commons data.
-
 - **Google Sheets Add-on**: You can load Data Commons data into Google Sheets for analysis and charting, using a familiar spreadsheet interface. Install and run the Data Commons Google [Sheets add-on](/api/sheets/index.html).

From 4b26adc211cdb24aff37b76151b83e216ee29c8b Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Wed, 11 Jun 2025 20:37:01 -0700
Subject: [PATCH 012/121] Fix a copy-paste error.

---
 api/python/v2/datacommons_client.html | 644 ++++++++++++++++++++++++++
 1 file changed, 644 insertions(+)
 create mode 100644 api/python/v2/datacommons_client.html

diff --git a/api/python/v2/datacommons_client.html b/api/python/v2/datacommons_client.html
new file mode 100644
index 000000000..c76f71b31
--- /dev/null
+++ b/api/python/v2/datacommons_client.html
@@ -0,0 +1,644 @@
+
+
+
+Python: package datacommons_client
+
+
+
+
 
datacommons_client (version 2.1.0)
index
/usr/local/google/home/kmoscoe/api-python/datacommons_client/__init__.py
+

+

+ + + + + +
 
Package Contents
       
client
+
endpoints (package)
+
models (package)
+
utils (package)
+

+ + + + + +
 
Classes
       
+
builtins.object +
+
+
datacommons_client.client.DataCommonsClient +
datacommons_client.endpoints.base.API +
+
+
datacommons_client.endpoints.base.Endpoint(builtins.object) +
+
+
datacommons_client.endpoints.node.NodeEndpoint +
datacommons_client.endpoints.observation.ObservationEndpoint +
datacommons_client.endpoints.resolve.ResolveEndpoint +
+
+
+

+ + + + + + + +
 
class API(builtins.object)
   API(api_key: Optional[str] = None, dc_instance: Optional[str] = None, url: Optional[str] = None)

+Represents a configured API interface to the Data Commons API.

+This class handles environment setup, resolving the base URL, building headers,
+or optionally using a fully qualified URL directly. It can be used standalone
+to interact with the API or in combination with Endpoint classes.
 
 Methods defined here:
+
__init__(self, api_key: Optional[str] = None, dc_instance: Optional[str] = None, url: Optional[str] = None)
Initializes the API instance.

+Args:
+    api_key: The API key for authentication. Defaults to None.
+    dc_instance: The Data Commons instance domain. Ignored if `url` is provided.
+                 Defaults to 'datacommons.org' if both `url` and `dc_instance` are None.
+    url: A fully qualified URL for the base API. This may be useful if more granular control
+        of the API is required (for local development, for example). If provided, `dc_instance`
+        should not be provided.

+Raises:
+    ValueError: If both `dc_instance` and `url` are provided.
+ +
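The `dc_instance`/`url` rule documented above can be sketched as follows. This is a hypothetical helper for illustration only, not the library's implementation, which also handles headers and authentication:

```python
# Illustrative sketch of the documented rule: `dc_instance` and `url` are
# mutually exclusive, and the instance defaults to "datacommons.org" when
# neither is given.
from typing import Optional

def resolve_instance(dc_instance: Optional[str] = None,
                     url: Optional[str] = None) -> str:
    if dc_instance is not None and url is not None:
        raise ValueError("Provide either `dc_instance` or `url`, not both.")
    if url is not None:
        return url  # a fully qualified base URL is used as-is
    return dc_instance or "datacommons.org"
```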
__repr__(self) -> str
Returns a readable representation of the API object.

+Indicates the base URL and if it's authenticated.

+Returns:
+    str: A string representation of the API object.
+ +
post(self, payload: dict[str, typing.Any], endpoint: Optional[str] = None, *, all_pages: bool = True, next_token: Optional[str] = None) -> Dict[str, Any]
Makes a POST request using the configured API environment.

+If `endpoint` is provided, it will be appended to the base_url. Otherwise,
+it will just POST to the base URL.

+Args:
+    payload: The JSON payload for the POST request.
+    endpoint: An optional endpoint path to append to the base URL.
+    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
+        Defaults to True. Set to False to only fetch the first page. In that case, a
+        `next_token` key in the response will indicate if more pages are available.
+        That token can be used to fetch the next page.

+Returns:
+    A dictionary containing the merged response data.

+Raises:
+    ValueError: If the payload is not a valid dictionary.
+ +
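The `all_pages`/`next_token` contract described above can be sketched as a loop. Here `fetch_page` is a hypothetical stand-in for a single POST request; the client's actual merge logic may differ:

```python
# Hypothetical sketch of the documented pagination behavior: keep requesting
# pages until a response arrives without a `next_token`, merging as we go.
def fetch_all_pages(fetch_page, payload):
    merged = {}
    token = None
    while True:
        page = fetch_page(payload, next_token=token)
        for key, values in page.items():
            if key == "next_token":
                continue
            merged.setdefault(key, []).extend(values)
        token = page.get("next_token")
        if not token:  # no token means this was the last page
            return merged
```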
+Data descriptors defined here:
+
__dict__
+
dictionary for instance variables
+
+
__weakref__
+
list of weak references to the object
+
+

+ + + + + + + +
 
class DataCommonsClient(builtins.object)
   DataCommonsClient(api_key: Optional[str] = None, *, dc_instance: Optional[str] = 'datacommons.org', url: Optional[str] = None)

+A client for interacting with the Data Commons API.

+This class provides convenient access to the V2 Data Commons API endpoints.

+Attributes:
+    api (API): An instance of the API class that handles requests.
+    node (NodeEndpoint): Provides access to node-related queries, such as fetching property labels
+        and values for individual or multiple nodes in the Data Commons knowledge graph.
+    observation (ObservationEndpoint): Handles observation-related queries, allowing retrieval of
+        statistical observations associated with entities, variables, and dates (e.g., GDP of California in 2010).
+    resolve (ResolveEndpoint): Manages resolution queries to find different DCIDs for entities.
 
 Methods defined here:
+
__init__(self, api_key: Optional[str] = None, *, dc_instance: Optional[str] = 'datacommons.org', url: Optional[str] = None)
Initializes the DataCommonsClient.

+Args:
+    api_key (Optional[str]): The API key for authentication. Defaults to None. Note that
+        custom DC instances do not currently require an API key.
+    dc_instance (Optional[str]): The Data Commons instance to use. Defaults to "datacommons.org".
+    url (Optional[str]): A custom, fully resolved URL for the Data Commons API. Defaults to None.
+ +
observations_dataframe(self, variable_dcids: str | list[str], date: datacommons_client.endpoints.payloads.ObservationDate | str, entity_dcids: Union[Literal['all'], list[str]] = 'all', entity_type: Optional[str] = None, parent_entity: Optional[str] = None, property_filters: Optional[dict[str, str | list[str]]] = None)
Fetches statistical observations and returns them as a Pandas DataFrame.

+The Observation API fetches statistical observations linked to entities and variables
+at a particular date (e.g., "population of USA in 2020", "GDP of California in 2010").

+Args:
+    variable_dcids (str | list[str]): One or more variable DCIDs for the observation.
+    date (ObservationDate | str): The date for which observations are requested. It can be
+        a specific date, "all" to retrieve all observations, or "latest" to get the most recent observations.
+    entity_dcids (Literal["all"] | list[str], optional): The entity DCIDs for which to retrieve data.
+        Defaults to "all".
+    entity_type (Optional[str]): The type of entities to filter by when `entity_dcids="all"`.
+        Required if `entity_dcids="all"`. Defaults to None.
+    parent_entity (Optional[str]): The parent entity under which the target entities fall.
+        Required if `entity_dcids="all"`. Defaults to None.
+    property_filters (Optional[dict[str, str | list[str]]]): An optional dictionary used to filter
+        the data by using observation properties like `measurementMethod`, `unit`, or `observationPeriod`.

+Returns:
+    pd.DataFrame: A DataFrame containing the requested observations.
+ +
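The precondition documented above, that `entity_type` and `parent_entity` are required when `entity_dcids="all"`, can be sketched as a small check. This is a hypothetical helper for illustration, not library code:

```python
# Sketch of the documented argument rule for observations_dataframe:
# when entity_dcids="all", both entity_type and parent_entity must be given.
def check_entity_args(entity_dcids="all", entity_type=None, parent_entity=None):
    if entity_dcids == "all" and not (entity_type and parent_entity):
        raise ValueError(
            "entity_type and parent_entity are required when entity_dcids='all'")
    return True
```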
+Data descriptors defined here:
+
__dict__
+
dictionary for instance variables
+
+
__weakref__
+
list of weak references to the object
+
+

+ + + + + + + +
 
class NodeEndpoint(datacommons_client.endpoints.base.Endpoint)
   NodeEndpoint(api: datacommons_client.endpoints.base.API)

+Initializes the NodeEndpoint with a given API configuration.

+Args:
+    api (API): The API instance providing the environment configuration
+        (base URL, headers, authentication) to be used for requests.
 
 
Method resolution order:
+
NodeEndpoint
+
datacommons_client.endpoints.base.Endpoint
+
builtins.object
+
+
+Methods defined here:
+
__getattr__(self, name)
+ +
__init__(self, api: datacommons_client.endpoints.base.API)
Initializes the NodeEndpoint with a given API configuration.
+ +
fetch(self, node_dcids: str | list[str], expression: str, *, all_pages: bool = True, next_token: Optional[str] = None) -> datacommons_client.endpoints.response.NodeResponse
Fetches properties or arcs for given nodes and properties.

+Args:
+    node_dcids (str | List[str]): The DCID(s) of the nodes to query.
+    expression (str): The property or relation expression(s) to query.
+    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
+      Defaults to True. Set to False to only fetch the first page. In that case, a
+      `next_token` key in the response will indicate if more pages are available.
+      That token can be used to fetch the next page.
+    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

+Returns:
+    NodeResponse: The response object containing the queried data.

+Example:
+    ```python
+    response = node.fetch(
+        node_dcids=["geoId/06"],
+        expression="<-"
+    )
+    print(response)
+    ```
+ +
fetch_all_classes(self, *, all_pages: bool = True, next_token: Optional[str] = None) -> datacommons_client.endpoints.response.NodeResponse
Fetches all Classes available in the Data Commons knowledge graph.

+Args:
+  all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
+      Defaults to True. Set to False to only fetch the first page. In that case, a
+      `next_token` key in the response will indicate if more pages are available.
+      That token can be used to fetch the next page.
+  next_token: Optionally, the token to fetch the next page of results. Defaults to None.


+Returns:
+    NodeResponse: The response object containing all statistical variables.

+Example:
+    ```python
+    response = node.fetch_all_classes()
+    print(response)
+    ```
+ +
fetch_entity_names(self, entity_dcids: str | list[str], language: Optional[str] = 'en', fallback_language: Optional[str] = None) -> dict[str, datacommons_client.models.node.Name]
Fetches entity names in the specified language, with optional fallback to English.
+Args:
+  entity_dcids: A single DCID or a list of DCIDs to fetch names for.
+  language: Language code (e.g., "en", "es"). Defaults to "en" (DEFAULT_NAME_LANGUAGE).
+  fallback_language: If provided, this language will be used as a fallback if the requested
+    language is not available. If not provided, no fallback will be used.
+Returns:
+  A dictionary mapping each DCID to a dictionary with the mapped name, language, and
+    the property used.
+ +
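The language-fallback rule described above can be sketched as follows. The flat `names_by_language` mapping is an assumed data shape for illustration; the library returns richer objects:

```python
# Sketch of the documented name selection: prefer the requested language,
# otherwise fall back to fallback_language if one was provided.
def pick_name(names_by_language, language="en", fallback_language=None):
    if language in names_by_language:
        return names_by_language[language], language
    if fallback_language and fallback_language in names_by_language:
        return names_by_language[fallback_language], fallback_language
    return None, None
```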
fetch_place_ancestors(self, place_dcids: str | list[str], as_tree: bool = False, *, max_concurrent_requests: Optional[int] = 10) -> dict[str, list[dict[str, str]] | dict]
Fetches the full ancestry (flat or nested) for one or more entities.
+For each input DCID, this method builds the complete ancestry graph using a
+breadth-first traversal and parallel fetching.
+It returns either a flat list of unique parents or a nested tree structure for
+each entity, depending on the `as_tree` flag. The flat list matches the structure
+of the `/api/place/parent` endpoint of the DC website.
+Args:
+    place_dcids (str | list[str]): One or more DCIDs of the entities whose ancestry
+       will be fetched.
+    as_tree (bool): If True, returns a nested tree structure; otherwise, returns a flat list.
+        Defaults to False.
+    max_concurrent_requests (Optional[int]): The maximum number of concurrent requests to make.
+        Defaults to PLACES_MAX_WORKERS.
+Returns:
+    dict[str, list[dict[str, str]] | dict]: A dictionary mapping each input DCID to either:
+        - A flat list of parent dictionaries (if `as_tree` is False), or
+        - A nested ancestry tree (if `as_tree` is True). Each parent is represented by
+          a dict with 'dcid', 'name', and 'type'.
+ +
fetch_place_children(self, place_dcids: str | list[str], *, children_type: Optional[str] = None, as_dict: bool = True) -> dict[str, list[datacommons_client.models.node.Node | dict]]
Fetches the direct children of one or more entities using the 'containedInPlace' property.

+Args:
+    place_dcids (str | list[str]): A single place DCID or a list of DCIDs to query.
+    children_type (str, optional): The type of the child entities to
+        fetch (e.g., 'Country', 'State', 'IPCCPlace_50'). If None, fetches all child types.
+    as_dict (bool): If True, returns a dictionary mapping each input DCID to its
+        immediate children entities. If False, returns a dictionary of Node objects.

+Returns:
+    dict[str, list[Node | dict]]: A dictionary mapping each input DCID to a list of its
+    immediate children. Each child is represented as a Node object or as a dictionary with
+    the same data.
+ +
fetch_place_descendants(self, place_dcids: str | list[str], descendants_type: Optional[str] = None, as_tree: bool = False, *, max_concurrent_requests: Optional[int] = 10) -> dict[str, list[dict[str, str]] | dict]
Fetches the full descendants (flat or nested) for one or more entities.
+For each input DCID, this method builds the complete descendants graph using a
+breadth-first traversal and parallel fetching.

+It returns either a flat list of unique children or a nested tree structure for
+each entity, depending on the `as_tree` flag.

+Args:
+    place_dcids (str | list[str]): One or more DCIDs of the entities whose descendants
+       will be fetched.
+    descendants_type (Optional[str]): The type of the descendants to fetch (e.g., 'Country', 'State').
+        If None, fetches all descendant types.
+    as_tree (bool): If True, returns a nested tree structure; otherwise, returns a flat list.
+        Defaults to False.
+    max_concurrent_requests (Optional[int]): The maximum number of concurrent requests to make.
+        Defaults to PLACES_MAX_WORKERS.
+Returns:
+    dict[str, list[dict[str, str]] | dict]: A dictionary mapping each input DCID to either:
+        - A flat list of Node dictionaries (if `as_tree` is False), or
+        - A nested ancestry tree (if `as_tree` is True). Each child is represented by
+          a dict.
+ +
fetch_place_parents(self, place_dcids: str | list[str], *, as_dict: bool = True) -> dict[str, list[datacommons_client.models.node.Node | dict]]
Fetches the direct parents of one or more entities using the 'containedInPlace' property.

+Args:
+    place_dcids (str | list[str]): A single place DCID or a list of DCIDs to query.
+    as_dict (bool): If True, returns a dictionary mapping each input DCID to its
+        immediate parent entities. If False, returns a dictionary of Node objects.

+Returns:
+    dict[str, list[Node | dict]]: A dictionary mapping each input DCID to a list of its
+    immediate parent entities. Each parent is represented as a Node object or
+    as a dictionary with the same data.
+ +
fetch_property_labels(self, node_dcids: str | list[str], out: bool = True, *, all_pages: bool = True, next_token: Optional[str] = None) -> datacommons_client.endpoints.response.NodeResponse
Fetches all property labels for the given nodes.

+Args:
+    node_dcids (str | list[str]): The DCID(s) of the nodes to query.
+    out (bool): Whether to fetch outgoing properties (`->`). Defaults to True.
+    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
+      Defaults to True. Set to False to only fetch the first page. In that case, a
+      `next_token` key in the response will indicate if more pages are available.
+      That token can be used to fetch the next page.
+    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

+Returns:
+    NodeResponse: The response object containing the property labels.

+Example:
+    ```python
+    response = node.fetch_property_labels(node_dcids="geoId/06")
+    print(response)
+    ```
+ +
fetch_property_values(self, node_dcids: str | list[str], properties: str | list[str], constraints: Optional[str] = None, out: bool = True, *, all_pages: bool = True, next_token: Optional[str] = None) -> datacommons_client.endpoints.response.NodeResponse
Fetches the values of specific properties for given nodes.

+Args:
+    node_dcids (str | List[str]): The DCID(s) of the nodes to query.
+    properties (str | List[str]): The property or relation expression(s) to query.
+    constraints (Optional[str]): Additional constraints for the query. Defaults to None.
+    out (bool): Whether to fetch outgoing properties. Defaults to True.
+    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
+      Defaults to True. Set to False to only fetch the first page. In that case, a
+      `next_token` key in the response will indicate if more pages are available.
+      That token can be used to fetch the next page.
+    next_token: Optionally, the token to fetch the next page of results. Defaults to None.


+Returns:
+    NodeResponse: The response object containing the property values.

+Example:
+    ```python
+    response = node.fetch_property_values(
+        node_dcids=["geoId/06"],
+        properties="name",
+        out=True
+    )
+    print(response)
+    ```
+ +
+Methods inherited from datacommons_client.endpoints.base.Endpoint:
+
__repr__(self) -> str
Returns a readable representation of the Endpoint object.

+Shows the endpoint and underlying API configuration.

+Returns:
+    str: A string representation of the Endpoint object.
+ +
post(self, payload: dict[str, typing.Any], all_pages: bool = True, next_token: Optional[str] = None) -> Dict[str, Any]
Makes a POST request to the specified endpoint using the API instance.

+Args:
+    payload: The JSON payload for the POST request.
+    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
+        Defaults to True. Set to False to only fetch the first page. In that case, a
+        `next_token` key in the response will indicate if more pages are available.
+        That token can be used to fetch the next page.
+    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

+Returns:
+    A dictionary with the merged API response data.

+Raises:
+    ValueError: If the payload is not a valid dictionary.
+ +
+Data descriptors inherited from datacommons_client.endpoints.base.Endpoint:
+
__dict__
+
dictionary for instance variables
+
+
__weakref__
+
list of weak references to the object
+
+

+ + + + + + + +
 
class ObservationEndpoint(datacommons_client.endpoints.base.Endpoint)
   ObservationEndpoint(api: datacommons_client.endpoints.base.API)

+A class to interact with the observation API endpoint.

+Args:
+    api (API): The API instance providing the environment configuration
+        (base URL, headers, authentication) to be used for requests.
 
 
Method resolution order:
+
ObservationEndpoint
+
datacommons_client.endpoints.base.Endpoint
+
builtins.object
+
+
+Methods defined here:
+
__init__(self, api: datacommons_client.endpoints.base.API)
Initializes the ObservationEndpoint instance.
+ +
fetch(self, variable_dcids: str | list[str], date: datacommons_client.endpoints.payloads.ObservationDate | str = <ObservationDate.LATEST: 'LATEST'>, select: Optional[list[datacommons_client.endpoints.payloads.ObservationSelect | str]] = None, entity_dcids: Union[str, list[str], NoneType] = None, entity_expression: Optional[str] = None, filter_facet_domains: Union[str, list[str], NoneType] = None, filter_facet_ids: Union[str, list[str], NoneType] = None) -> datacommons_client.endpoints.response.ObservationResponse
Fetches data from the observation endpoint.

+Args:
+    variable_dcids (str | list[str]): One or more variable IDs for the data.
+    date (str | ObservationDate): The date for which data is being requested.
+        Defaults to the latest observation.
+    select (list[ObservationSelect]): Fields to include in the response.
+        Defaults to ["date", "variable", "entity", "value"].
+    entity_dcids (Optional[str | list[str]]): One or more entity IDs to filter the data.
+    entity_expression (Optional[str]): A string expression to filter entities.
+    filter_facet_domains (Optional[str | list[str]]): One or more domain names to filter the data.
+    filter_facet_ids (Optional[str | list[str]]): One or more facet IDs to filter the data.

+Returns:
+    ObservationResponse: The response object containing observations for the specified query.
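As noted above, `select` defaults to the four standard fields when not given. A minimal sketch of that defaulting rule (the `normalize_select` helper is hypothetical, added only for illustration):

```python
# The documented default for fetch()'s `select` argument.
DEFAULT_SELECT = ["date", "variable", "entity", "value"]

def normalize_select(select=None):
    # Hypothetical helper: use the caller's fields if provided,
    # otherwise fall back to the documented default.
    return list(select) if select is not None else list(DEFAULT_SELECT)

assert normalize_select() == ["date", "variable", "entity", "value"]
assert normalize_select(["entity", "value"]) == ["entity", "value"]
```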
+ +
fetch_available_statistical_variables(self, entity_dcids: str | list[str]) -> dict[str, list[str]]
Fetches available statistical variables (which have observations) for given entities.
+Args:
+    entity_dcids (str | list[str]): One or more entity DCID(s) to fetch variables for.
+Returns:
+    dict[str, list[str]]: A dictionary mapping entity DCIDs to their available statistical variables.
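Because the result is a plain mapping from entity DCID to variable DCIDs, it is easy to post-process. The sketch below uses illustrative data (not live results; `Count_Person` and `Median_Age_Person` are example variable DCIDs) to find the variables every entity has observations for:

```python
# Illustrative mapping in the shape this method returns:
# entity DCID -> list of statistical-variable DCIDs.
available = {
    "country/USA": ["Count_Person", "Median_Age_Person"],
    "country/CAN": ["Count_Person"],
}

# Intersect the per-entity lists to keep only variables covered
# by every entity in the mapping.
common = set.intersection(*(set(v) for v in available.values()))
assert common == {"Count_Person"}
```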
+ +
fetch_observations_by_entity_dcid(self, date: datacommons_client.endpoints.payloads.ObservationDate | str, entity_dcids: str | list[str], variable_dcids: str | list[str], *, select: Optional[list[datacommons_client.endpoints.payloads.ObservationSelect | str]] = None, filter_facet_domains: Union[str, list[str], NoneType] = None, filter_facet_ids: Union[str, list[str], NoneType] = None) -> datacommons_client.endpoints.response.ObservationResponse
Fetches all observations for the given entities.

+Args:
+    date (ObservationDate | str): The date option for the observations.
+        Use 'all' for all dates, 'latest' for the most recent data,
+        or provide a date as a string (e.g., "2024").
+    entity_dcids (str | list[str]): One or more entity IDs to filter the data.
+    variable_dcids (str | list[str]): The variable(s) to fetch observations for.
+        This can be a single variable ID or a list of IDs.
+    select (Optional[list[ObservationSelect | str]]): Fields to include in the response.
+        If not provided, defaults to ["date", "variable", "entity", "value"].
+    filter_facet_domains (Optional[str | list[str]]): One or more domain names to filter the data.
+    filter_facet_ids (Optional[str | list[str]]): One or more facet IDs to filter the data.

+Returns:
+    ObservationResponse: The response object containing observations for the specified entities.

+Example:
+    To fetch all observations for Nigeria for a specific variable:

+    ```python
+    api = API()
+    ObservationEndpoint(api).fetch_observations_by_entity_dcid(
+        date="all",
+        entity_dcids="country/NGA",
+        variable_dcids="sdg/SI_POV_DAY1"
+    )
+    ```
+ +
fetch_observations_by_entity_type(self, date: datacommons_client.endpoints.payloads.ObservationDate | str, parent_entity: str, entity_type: str, variable_dcids: str | list[str], *, select: Optional[list[datacommons_client.endpoints.payloads.ObservationSelect | str]] = None, filter_facet_domains: Union[str, list[str], NoneType] = None, filter_facet_ids: Union[str, list[str], NoneType] = None) -> datacommons_client.endpoints.response.ObservationResponse
Fetches all observations for a given entity type.

+Args:
+    date (ObservationDate | str): The date option for the observations.
+        Use 'all' for all dates, 'latest' for the most recent data,
+        or provide a date as a string (e.g., "2024").
+    parent_entity (str): The parent entity under which the target entities fall.
+        For example, "africa" for African countries, or "Earth" for all countries.
+    entity_type (str): The type of entities for which to fetch observations.
+        For example, "Country" or "Region".
+    variable_dcids (str | list[str]): The variable(s) to fetch observations for.
+        This can be a single variable ID or a list of IDs.
+    select (Optional[list[ObservationSelect | str]]): Fields to include in the response.
+        If not provided, defaults to ["date", "variable", "entity", "value"].
+    filter_facet_domains (Optional[str | list[str]]): One or more domain names to filter the data.
+    filter_facet_ids (Optional[str | list[str]]): One or more facet IDs to filter the data.

+Returns:
+    ObservationResponse: The response object containing observations for the specified entity type.

+Example:
+    To fetch all observations for African countries for a specific variable:

+    ```python
+    api = API()
+    ObservationEndpoint(api).fetch_observations_by_entity_type(
+        date="all",
+        parent_entity="africa",
+        entity_type="Country",
+        variable_dcids="sdg/SI_POV_DAY1"
+    )
+    ```
+ +
+Methods inherited from datacommons_client.endpoints.base.Endpoint:
+
__repr__(self) -> str
Returns a readable representation of the Endpoint object.

+Shows the endpoint and underlying API configuration.

+Returns:
+    str: A string representation of the Endpoint object.
+ +
post(self, payload: dict[str, typing.Any], all_pages: bool = True, next_token: Optional[str] = None) -> Dict[str, Any]
Makes a POST request to the specified endpoint using the API instance.

+Args:
+    payload: The JSON payload for the POST request.
+    all_pages: If True (the default), fetch all pages of the response. If False,
+        fetch only the first page; in that case, a `next_token` key in the
+        response will indicate whether more pages are available, and that token
+        can be used to fetch the next page.
+    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

+Returns:
+    A dictionary with the merged API response data.

+Raises:
+    ValueError: If the payload is not a valid dictionary.
+ +
+Data descriptors inherited from datacommons_client.endpoints.base.Endpoint:
+
__dict__
+
dictionary for instance variables
+
+
__weakref__
+
list of weak references to the object
+
+

 
class ResolveEndpoint(datacommons_client.endpoints.base.Endpoint)
   ResolveEndpoint(api: datacommons_client.endpoints.base.API)

+A class to interact with the resolve API endpoint.

+Args:
+    api (API): The API instance providing the environment configuration
+        (base URL, headers, authentication) to be used for requests.
 
 
Method resolution order:
+
ResolveEndpoint
+
datacommons_client.endpoints.base.Endpoint
+
builtins.object
+
+
+Methods defined here:
+
__init__(self, api: datacommons_client.endpoints.base.API)
Initializes the ResolveEndpoint instance.
+ +
fetch(self, node_ids: str | list[str], expression: str | list[str]) -> datacommons_client.endpoints.response.ResolveResponse
Fetches resolved data for the given nodes and expressions, where nodes are
+    identified by name, coordinates, or Wikidata ID.

+Args:
+    node_ids (str | list[str]): One or more node IDs to resolve.
+    expression (str | list[str]): The relation expression(s) to query.

+Returns:
+    ResolveResponse: The response object containing the resolved data.
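A sketch of building a relation expression for this method. The syntax shown (`<-description{typeOf:City}->dcid`) follows the Data Commons v2 resolve REST API; treat it as an assumption here, and prefer the convenience methods (`fetch_dcids_by_name`, etc.) when they fit:

```python
# Hypothetical helper that builds a resolve relation expression for
# mapping entity names to DCIDs, optionally constrained to a type.
def name_to_dcid_expression(entity_type=None):
    type_filter = f"{{typeOf:{entity_type}}}" if entity_type else ""
    return f"<-description{type_filter}->dcid"

assert name_to_dcid_expression() == "<-description->dcid"
assert name_to_dcid_expression("City") == "<-description{typeOf:City}->dcid"
```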
+ +
fetch_dcid_by_coordinates(self, latitude: str, longitude: str, entity_type: Optional[str] = None) -> datacommons_client.endpoints.response.ResolveResponse
Fetches DCIDs for entities by their geographic coordinates.

+Args:
+    latitude (str): Latitude of the entity.
+    longitude (str): Longitude of the entity.
+    entity_type (Optional[str]): Optional type of the entities to refine results
+        (e.g., "City", "State", "Country").

+Returns:
+    ResolveResponse: The response object containing the resolved DCIDs.

+Example:
+    To find the DCID for "Mountain View" using its latitude and longitude:
+    ```python
+    latitude = "37.42"
+    longitude = "-122.08"
+    response = client.fetch_dcid_by_coordinates(latitude=latitude, longitude=longitude)
+    print(response.entities)
+    ```
+    Note:
+        - For ambiguous results, providing an entity type (e.g., "City") can help disambiguate.
+        - The coordinates should be passed as strings in decimal format (e.g., "37.42", "-122.08").
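Since the method takes decimal strings rather than floats, a small formatting helper can bridge numeric coordinates to the expected form. The helper name and its two-decimal default are hypothetical, chosen only to match the example values above:

```python
# Hypothetical helper: format numeric coordinates as the decimal
# strings this method expects (e.g., "37.42", "-122.08").
def as_coordinate_strings(lat, lng, precision=2):
    return f"{lat:.{precision}f}", f"{lng:.{precision}f}"

latitude, longitude = as_coordinate_strings(37.4219, -122.0840)
assert (latitude, longitude) == ("37.42", "-122.08")
```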
+ +
fetch_dcids_by_name(self, names: str | list[str], entity_type: Optional[str] = None) -> datacommons_client.endpoints.response.ResolveResponse
Fetches DCIDs for entities by their names.

+Args:
+    names (str | list[str]): One or more entity names to resolve.
+    entity_type (Optional[str]): Optional type of the entities.

+Returns:
+    ResolveResponse: The response object containing the resolved DCIDs.
+ +
fetch_dcids_by_wikidata_id(self, wikidata_ids: str | list[str], entity_type: Optional[str] = None) -> datacommons_client.endpoints.response.ResolveResponse
Fetches DCIDs for entities by their Wikidata IDs.

+Args:
+    wikidata_ids (str | list[str]): One or more Wikidata IDs to resolve.
+    entity_type (Optional[str]): Optional type of the entities.

+Returns:
+    ResolveResponse: The response object containing the resolved DCIDs.
+ +
+Methods inherited from datacommons_client.endpoints.base.Endpoint:
+
__repr__(self) -> str
Returns a readable representation of the Endpoint object.

+Shows the endpoint and underlying API configuration.

+Returns:
+    str: A string representation of the Endpoint object.
+ +
post(self, payload: dict[str, typing.Any], all_pages: bool = True, next_token: Optional[str] = None) -> Dict[str, Any]
Makes a POST request to the specified endpoint using the API instance.

+Args:
+    payload: The JSON payload for the POST request.
+    all_pages: If True (the default), fetch all pages of the response. If False,
+        fetch only the first page; in that case, a `next_token` key in the
+        response will indicate whether more pages are available, and that token
+        can be used to fetch the next page.
+    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

+Returns:
+    A dictionary with the merged API response data.

+Raises:
+    ValueError: If the payload is not a valid dictionary.
+ +
+Data descriptors inherited from datacommons_client.endpoints.base.Endpoint:
+
__dict__
+
dictionary for instance variables
+
+
__weakref__
+
list of weak references to the object
+
+

 
Data
       __all__ = ['DataCommonsClient', 'API', 'NodeEndpoint', 'ObservationEndpoint', 'ResolveEndpoint']
+
\ No newline at end of file

From 7d15bbeb39db4a4b2a23172b64d1775a20d7e790 Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Wed, 15 Oct 2025 15:42:29 +0000
Subject: [PATCH 013/121] Migrate MCP docs to docsite

---
 assets/images/mcp.png | Bin 0 -> 43998 bytes
 mcp/develop_agent.md  |  39 +++++++++
 mcp/index.md          |  47 ++++++++++-
 mcp/run_tools.md      | 188 ++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 270 insertions(+), 4 deletions(-)
 create mode 100644 assets/images/mcp.png
 create mode 100644 mcp/develop_agent.md
 create mode 100644 mcp/run_tools.md

diff --git a/assets/images/mcp.png b/assets/images/mcp.png
new file mode 100644
index 0000000000000000000000000000000000000000..f83eb482b9167ee969b0c62c99939195413a7569
GIT binary patch
literal 43998
[base85-encoded PNG image data omitted]
z75i<&DZ8!aS9MsDp@kumwAF`X>AFn1%BhT55jv9g)1OVw8Pf2lV!7+S!;>{$a3n5o zRFf1mK<@ENbUen~q)p?@s+I0g)m>4 zjvGi_VyNp% ziR#7D5+Qz|-7IJlweY%9%b({BuD5u&*0pM$JnkT;_ynd%T<}R2CWrWZ^^9~tgo1$* zD&8Kz<*%XkK9fH`IQnNqW~_~~{)8Tya;?ocr{Jt-y+~y#fE4Da?U*Y;!rskyCFAGW z1S)7bxW3Xq0_axiK3sjSCOC-x*seql?7#Lq_t+XEfE9+!QLDuc2`$q5?Km9YmG6g( zO>e5QnjDE?jn9vX1)aYh@@NFou6WTXJ2K2k>=z63z_WB3C>&W>kYl9+R~J9Z$n(*E zo0!Y(_r8V}5=NJ?CQv|tkCG*f*z5UK`!0hZPSQzd_6>8Pr%1hQ1tpK@G?JdDl@6V_ zsA)W3sy5UDn)DlI?59OfXv)3dg9r49AA|*zWT@gEWEk$6AIBT6uxBD$56E~H?a%LR zPByT(mXqT?bw6avvbys{va+%^99J!{{7^F(P?la?EYqLBN@d!3SXY4Vk}J89T5?CI z!@J@*cR8UE-IykRX<9cqulF6-fEG3Q^X)UlTTnO41GXHwcB`E=A@t$qjyRBS5pJ^v z?U-bX(BkElE#?Ny7IL6TUI2T!3ENlNHeK@ObEhQyQ*`6(i!br*XPYfGzr_B3b(M%s^5J}S9OcG z;mHi9K>PHqb|cAhZXB5-X5V&DRX2=~)IXX1_kgxZ7{#{vJF^kphC=&ojMu@8(Cmru z&$3o6PJlo?j!gV9eL~A#x)QM53zSv0mG{9F>bLlOz(_?Y&HX$P;`*M&a(N7M^X1|M z0!(P1Zi)@!ZUen~#C*EJ5@C=Y@2rpK01qY1j^Wg;8GN<2$0HAG3uNh~vx%B&S#3KX&R^nO2=M;?QSxSprEBZ%neR&a;q>8Fm#SuC z?_+da7y|Hk5riBL03(whfWpM%*oMKCEB{?|ePeVT6yk3swSnYk2N3Z&wGH9`$1TkmCg~YI3aIXpc^9uOGk&TD)sbx<@YDHugT%a- z)S6r_H=EYLa#SNSi*AJuAllZ*^9(BtPJG`?&}n3}BQ1}r?RH8qaA9o-F7u9){d|4S zYS#6~ojZV2p#*nJ*8qH`aX?|WjyuK&w7}VwnF1WLwYX;fGfG;yQ|@d zzA|9zt!CADxzQ=28@{HvciQ>YJdGi`o&!??`TNvm*SkCY+9LceKr+h#mnnM9IZ#HW zA1eYk1{Gb8%PbA!SmiOGe9VPQZq5L4OO+2W(oGiOqKj>bjXP&Rn+)qR12?L$X&V9Z zo+=pL3Joxl_ROSvs{!wBGG+iZY@F==kC=VQ9pL<^z^}WQzOd^eHEo{;cb|XiQlRTm z+hcjUY%m}&@w{@MP<;}aXqF)IqHNT`c$nJn?RMT&hAVmo&MklYK54D zW5MI0i#@U(4FzW+>lc-u@F6#|@{sKx(Vu#qLkjcqeo4daH+Fz0K1&)GV6~lrJ2BSd zz`eO6Hw}QgKMrR-o%ow`1G}Ixvb&>zIR|d}*RZyuQwOa2vAZWEyy&7ZA1-88*jqq~ zfM4jUZk-73b{JS)g&H2aqb6=OIE+ej<;_E9``twMKEgWiH;@oe<2GD@Ye?4I72Nje z_z!LpWk||;ed%r&@^i45tFY?Y9Nu6hn)}DyccL}$$&1rD#Z=Q-2Q;rzRitEu9k$o8 z_o0bN!I;v%Ur%*tzBFN|?=HQC&8RrW!5G`t)tN5ceOIC%dtaap|A{T1;y&qPxBm_! 
z)GhD%r4$#|D|n(K9t>UmJvBg_5d$zL6eDi=NDXlsR zB!>38H2`@D#jC`0c8ZoW>j+{&hTmoY-|Hk}w=^UhtP6{bS7q?X>)lo-Eho+3F^?K| z2q~Az=HmU`=KiZxOTAt~ z6YjQUJuOy{{;1oQYr;#MR8ZUGdr)H^tXU+A_2vpddAkyj;A6(x33nhd^ZMEV7&$q$ zzl>Vy78>wT5PanK9;ALvyIG^n&kNdn7}6}{)7@1dSk1`#27hI)zHkbDInFp>mqu$s zfk>H99sz#QJ<*6YypC^Uu#kz>DW<=6HJ z*sc@LL_qHjS)Sfrox};#9KgHx6CMPvVkTlW-UpIE#qn_^c2xcDw^U_Vg7sc(8ojlf z$2Ujmrqf^snEDThR@VV&Csu=BS6*E>N0&84#6g?=$an>cteyX-18n%084e{!*oEd zo5G%<0!0vH)a?pSW9jn16p8O+2DDL60yXAm8(yIBP(oj$0jfL8d-3dBOW{4G7g_j_ z{^LF+cS72_$<_B_?J~v`e?Na*ac_zZi51K1kTKNsSD8@}fAJ=8Dvtro%GL)NjNA9h z?K11~7O>HwxNPrr=YzAiLqLt74yK0hLqW}%ruxR#M=`Ia%*$yF(TPjmNH9Y2I~8!w z^tIM>A`ns}@FAh;!3(OxpPu@_srR5gMq@1?Z57Z-N2XG25bzu8O4r78tkMOz$jmF( z`5IZ^(2nRlm6BkFj;zK8rg&&_2azhUIYqe!lR)0xUk*yY$WIldw*P&A4ybRSi2{oy znxMdb$SDd&79XOdR~6a52P~l8e0cDyW)vqk@DA)a6DfRp0Mi6fyFJU1`(`UAzimL9 zbV+!K>1SIr`LW}7hd`x$HIoYWKV5p|p*{Rm2~fPz9`~yNuCCUzHx}1&+=+d=pI9 z4kAsT7zeLeTHMNEovJon#X5H@@Ozt&Y*3leJ--4gb071(c!E64$_grE3gj z#%VQ7`uXjrtT9mX-h1v&NL(|y0v~WOn9rs|URE3%Wq?%o9PJ!2I3c+%#%w%Jh|3oM z8?Jal$JOd=mF#|MPPP4J4%kvz+@4Kxt*drR47i8Ty(akE-;}ApX>l`;i31Hb1F6F0 zfMZYjC5g2BpeD^3%n1jnni!24)0moJYkqZ96bi$yadH0?s0McIeJh4&B70lSOT$FXbg* z?3FYN*-w`ZFL+4M1E;p>pic6HY({~Pm9l$xWIeBKCbMFVj443q>CgHbYQkU4V>5d`5b|sKOdM) zj?W>?J;gUzfCD2(z!1y_z1qK4z0%PZoB~Ai?TB|Eid^47@*#L~egcR_`JjD$4xS)6 zf|w!pV|3>MA;&&_+={NuX4T_Aj&B#$4+0g`k~qH#^X2` zN{_5?#`O`dmb>0P(LZ_q_3Ual4O;T6MUXm#^ikx#yIcfxQ-jPIdj>>HJ zRnBrInTAZvFhx)J88XCgf^sPH8fQ&=4{GvD>b*i{+Rw>#y^OCNGSrk{EPEmWyQ{$Z z3NGc;)@;S^{nW6?c&8)siK*y0P6WP;#qjCp+J zp{0$M5@#+GD#HqY5MX4r938bkBT|m-THY=SLSLsCumzv+|L-S$IUF6%#3?P{AfLDY zPjT1%59ju+1vw;o4?>6@MAQ*P@4W@lGZ;}5Oc0{?km!Pl-lJqh?`4pKh)ysPBI@X! z;O;l)d+xoT`wv{7`S@wS@9Z{vul20;JnK5YXK{nVj&{tjj{MhEZLGD;Qjg})?&8`- zwC%Y?(5>P}j^rs3&*+Ka5yN4#2J92I$8|%Z(CiWq9WA_ZkWr<4i4~LW{wN>C#N6I? 
zhS}1r$be*DUFtBNg(6)gMVX}wZt8chMcaV9zjuX`YWaS4qg4CdkMjq$S$5I78`au zi~Tn+-Fr2!=lJD>A{QXG)!tsF$+k zzZkEg=`KRFKKKr%^8i$!Lzq`A8`0(j-E@e%5QO zX~8rA{N|}o2m%$~+iG6`vfq~n-Vo_SF_=)m@rfH*{p2M&ifSCF$XIWFFt&Wb0n0jz z*m~mV*oR33)}OoU{chEU?^9g8DV45I!O^M<7i#tSSjv;;#+VH^1yciYrxNv^yOBTr z2&il610ST2fRX0=shoy}Zw5FY<3M;J4=OpCIFVUaF~Ek0C6&Zxlipt>|8zGk(QS1v zyuWpuCB8OOeZ|(8lL`yEm_;Qe7572|QoVFJR1)lhq>{@!#BpKGI)$d8@-41RjTczReCd?Qnb z0?%(9O~FEB08D9Y1Xx~*ZZB2|1{pahk*dNq{KjIf$bEW@Lw8JB6B zwuyZ(a6Zt|RSQuT9}P-fkK28al>MXr1?#E+wADgae4nYE3(aZfiq}!xDWs9dLS=^5 z02|5OzAgl&vbO$155=CSwr5??&b6)E;{lhW)|>3@`K_6HPmaH4OuvUMWa<$j%}%q_ zr#&x3eBtra22i)@%ZwnTm|Lh@9QIRA97W24JeKZ<)K}1yx|s`r{{EFe57&2WJyF1i z&2xM02WHg)JDId57D^glKGQV0WNpv#iiyF`ZzY=FCD=Z39SQE~h8MF`gzoAeSUjg< zX_gPArGOLezCI)25jZvPUpon#LHUi@d;(Vq{Zm%}#+bU)jE@1XUYeP**UV6k!k4}F zSMNc>sd(LQNJ&RPT-N-SB}YG?EVyT#g+wyRakHK>&+;T+W0+Mgo-!gtb%i8U6m4c7 zb9VGssQxJ5r)xV7JBgV(0em+1{*}zmQTY6PpUxv=V|z4dW+u0{%^X;qS&<5lId!)# zwgl%EJ|Z&P;F2pS1txwX3)7c$!7+7C3}nqJ$b$J10sIyU$s2$F~L{HhMGtP4N z+AtHeO;#EZJpMCGNGb|}EkH@l}t&JMYQ(BOax;L#6|w#E#No7>yl2Bs(lnmz9CXLCnm1h-zs zz3Z_Fotl?xSxnGaGT)+|Gpd_)*Vv;+DU2gE46!&E3qlD7ibgu%(Y4Grp5)fSBM!_c z(@6L7#5AH=`H_T$rzl@7qw8RtI2PrEDVy25; zj%b?|Pq_xV_i@}5BbhdXK=~hW*mrraNr$5MDO8fPm9z^HWm8uUzY^hNoM+pp@i+h7 z&)Z@2&vz0QC!)adKpGT=u13<&R@{q10lX{h?EeIR1Vf*gu6yp9sS zcmlOF>R24PPANrtIi-(Vz|1RHTVJ0kS0G;ilt9;%y|`%3o3<-2kbby)^c{AnF1sju zmJ}9=7@8ZRG)%4PvL4k)J?$WmTvxNM>gwaCb&pr>BUs;rpRrueEPNs>!px~`H2nMkpdL#jYEwDO!_;Kn z+1T^0%XSU2OS+k5g_XenoDiCCh1%ZKfJMj&*jK{pO62mdzGDwDUCgIi4MZI}HBDM^ z2oC`xPV&zpU)$Hv577U;+^>jD5&(UtCOkN2$x|AuD(|x@{>S}SSIzztT$U6u>=pxto`x0hU^l2%s`GrmTx0#$u9M$? za2)%Mh>8dkUOM&1MEd|-X}|EMn=6(c@{#}ASmb_WnUh25>A%GO=VczZ6|{mBFizTp z|AGdTxmz9YNBwnw{;yUVQVOsMf!Y7viVELxZE>wO0ij2WGHBg@h(#xGB9?KEe5m9B znnC*?Yi2v4X98J1V?r~k&f}>nt&F)@8f#!Kd_65zO-vjVt`r5zqO&h4GvXhn{Tlmm z3hoE3Tp?Hq!?#!

zt+$q*l=rmzdJji~vU;gE8<6B{6s^>>`67^EI&f&s8FQvwTZ zV(Y8-wMja#iOkId&QM}c)Lti%Kzx4NyH_m z`NnkC>(#?@uS)P(bv)>9p()pO?bnZMPxOob7+1Q%zPVWc8Fzyg=Pvudh8c;B?w6m- zZ(R+IO3&%M~pPNTs_09$0?{;D()5jZ9 z!KB(oWL9ojK9&D$O)E(l3kp`BtwF8@5CLjzIHZtD^2$b|fds)5_506S40c0JP1_%Y zLB-S7U~MO_d5Wrlq-rfMT20yAOgZ?>20{ytsK32wP~o}6wWYDQD(ipniaVE@$c}@X zQQ;BY<|jCWIrZUKmGr;+nU*?ZP1CQ=;mJMvZ0d2uuR>7c(RJ?Tiy6gYAbBWVN6e~l z&ci$vnF4&ov83FOx++|zcJB$+KxDx|dhbVUVehgpsD2ICUl7jfTR#jUqrS)C*BVAi zRgfzHkH`-`weS2=6Y5)jjWy&ERySa9q6F9Xe4${T`PCe3tXTrYRtD_RQM|JCMY#_r z$biuM0bq_;d;wZ#{B6H?8%)guRU(740;h6}-ukKixRGM&$Ya1wot~aPaBd8tWAgOb zTitdqWl~{XJaZ;|yvb8?1KS9-99{Ia+xXtTidbFhv1yD81 zREZdY>N@W_IP15%7#T~?qosZJ{RjW+If5DIPp=z{aZn0+gWQDdyw>F6)Z{MC&W4ax zR^rNxp*uodIB3-gz?T8-=*vcgFXenS(6lOrs-(7ne+_osAYJN@M>QJ{e_fy|WO4_E zM?J|Nj`O$!kAZ9uf}a{wlw)oTcaSqnRj(d-&6iW>$Y;-~?r45S-%E$F*r6nbT zaC;DUard&DGl7(F4N&yPQcrUWvIXGwo__LnxCa@&udhYl7yz`CONiw(4P>ImfDvj3 zM|jJ%&#cZ7pqsA%lIlI`$ekRYQ%Dx8v6p*=2tDfzyeg){u=)2&*rAqRyAXom>eSe_ zGIHn7F%Xk|=eZD7ezb*rt|5FGhzr2TRQjQ!esdSaBAcH4g0b0;SJXkE6H*g+4t$zY zpiqSw$dkWMwhl!h>zHZYK6>-ay_@u|f3QS3EMg-{l*RiKH8_PudPccFC1Tb!OwiUK zjSA9_GW(Jy%mv%Z4j~n?&}%l_rX)Ww+=a_#%OE*D1CKL-AjmK>2&%w|U>3mh)^8Bt z$3HWeziE!@@&Qbyr>+bF@zNxvIXSGb$~Xw#63PP*t7!5yq3-={5L_5is)oD`*Z3QSu=dCA{|2 zimAtQ5Fxs;1&Xu^t7Ac&xnh->_bNF4>19~B6_}eby3Q!!@(OP^NJb*1fecc}cn&CR z>Fac{*E`RPWX?;b0rWi%E?PTfIm%||kXL{@OUQ+r43X_?-+et~bgj>!&~Gl@PV=QV zM;TaHxHOjS{174xq@!#uoPFAD?b%fg1)d?f1|K!PRjc?x{EKR75u_jL)Z~nHrW^k& zDVe?UYef}<4uDGK00E7gHbAn=(N$XED}B`*J+YO&cHS~=t=WfCo>ghaP@%`>0tS?0b%kd8 zh__9Xi9?zCk?(Ce=QAyH2=Rj|IVWQ_o=NYzKu3M-7;lZcht!^$XLYds6|gy zsEj(ywiLx*ej``AYe~%0Pqdjp{v;>NHNs>3J%F$Bg@b&)wD^aiZvOC^6QC;ewU2^rc8kKKX^U3L_8S=Y> zQ`+1DOLsMq;yM=e3$ULA$j2^QmXh)+>a3n0y%~Dn>GF;Et7}y-P0mYEcS6Ub%KpWrhmq3a@&uH`DR^Ee-Ue3g2T>c5)o0l&*u9)#t|}V%lBFs zCzX{9{V1j~!B%emU|n(3ME=vGyDzP)M1gZZq_e;i;3*iMLq)`%lxfJh@=(*rtdC20 zCT#OL8&Zp>dvI|Yb(*1;Va}>%m*D8QaZK9g#|ouA?c@14qOoWUPQZ2)jdb|+)}-o@7LUGcl@qj6ZdG#% z3mn=Z9|jP27|h&e&V(lnm)bdMef`9t^TDAXKPb0gcDe--+HF7z=+Xtm=>Ace?WF;y 
zjBoRG{Dn})mw%k<$*9jL;nFB#qFd{Cq^`;5BH>oLw1lF`?om@lJ*)@p$cMs58H8Az zK2@)jJT2FYLYkih5!0@asy}6U#xT7$Kv)O=>$aEOpAN0SE=*7#vQ=Iaw|b`}I8Zz| zDddP>O@81@)5wBZ$t7tf_+&FdvQm*$+b`W`!o8+?cTY z@qm+RX`@LWyI4Moc76JD@@Trb2qsPo=l%!joSuDf0t6dx+2qV|jNd4@sVGBF2v>2o zP6LMW3R|O|SqGWnw97I2uW-F4)^x%u0p;IgvtfVNIy1j_s^RKSPu zQ-nD?0lwR&SRfp)Ya6>?1Al;t0H~{=-v#5otMt%h=hEf@>pl~OB>V&zz`;W*NW8iX z*A;zN1#Y7Y<;gs0uT@2fW98_S_`GFw!lpz;yy`QAh1A?sO4@5KF5R`Oek9lhOFH?> z|NhQ%S(qn%qtI-)9!QVv#8nr^-#a*w)DFun(2CA#h%&s_h~fv;b@2yvK zh5bLuRw@Kl!9djA1@Fj|d^aMJ3H98bFv<33 zF04x5C3;|b49l9!c`{j7zYqVPCM$=kw!u8EHg}9#m1B}`6^n74JnN(VpI0D58p>)< zQG2F#h7&j_K;^l!LGTmf!IMPS81$-%Bqi0v%4NE<(#Wk+5D>|NXeG%es@dB-86To{ ztJ6~qJB`;IkEE{u3Ukd?Zh%_fatZSFyB0j8Q9bodP`A`^EW_m8%*R*BSq0~T&b>2@ z%X-`boa1lk=hX55M}n@Yx1|+dZUwJ#=Ve(A|9KA7^sQE90lbD`BU2{lcPd0}YK$*k z&7TJ>kJsIW6S3ch{&~a{TUJ@_>Hcutq%Met_f~Y#OXbH*o8Td-G!seB?`o7yY!hA@ zIu*>mp}rQNM`TF0({8*s>Qa`#w6tqq7?(p z+&8TxsET*?%q01EVWVwUcYgBH``m$Ke!>E+OxxG=%)_9`- zkdZl~7SJz$5y<6&8)`YZGF3K4bkzpA{5Em*yYT1Z4(jtWf#pm_Ij9@EF|N<*COyMy z>F8?;Za-l)EON=e@NTOOsbQ*QTn%@B6P*)X&cFKM+d&j*e_zMvlDvQ}<#V%&txTdn z&-dYARITWB&YtHjA)P<>cO%9b<>XpQ+AWOKyEF=8d2<4~-ebN${4-)9dDsM5UBT~Z zJ&(SxUBcyJt_z+F(?U9qEu&I3g;Myj(xjo_>w}bo3gSFpWE$%8nl#v<21q zYO+xq+{@x#lfgv}{x&0Cb-dLSGKS@`(YLeIcy@W9@4=(u=Nu%)9lux3$?U;+2EWlU z!oL&qisN_sRg#^Y&7C0W9+DBcZ7sfd2VSFR2jCWUv^?1{+-|tI7{}O9LdrsqadH^< zT5t=NM5_@9>Q81)Mt-=JQQxOh{4M!8i3IycTLbVgY5?l!Szp)`xFx0>&C{%T>Z|W^ z?NaJU9D0M?jqE3asOxT0{@IS;+W25j5vSATG?MW7I7(BOF}5fG%dFwGVl#U;ne^iA zAii37(Xz(AqM|lu%0P2Yo4v(R6C`6KcLRS#+26{=YE3yH1ypr4*BiR^u0A_?pv$3U z%S31+k@1Pc(V6e7IZrx%pVpBx^~qj+L^U-^X4A{l-qsmp@lz_u4Bh66f03uPy9`Z^ zyj^xTT7cq#TsAFVOhGFHVCMdK6_#mYq-#5^?;moj__VsRV))Er?!2`%_o$|oR(p~| zxPus9L*sUyqpAKM$)nt>LrH5&%JQ;nLE{c?)EbDGb_cCf8~C+L%9bn|j2|Qb6s&ut z_@2U^%k+}dyl|0d+6hw5U^m*KX2%W0#qLI4XQZUjE^~Csd~_@8;S3N1SH1cPx4`tk zvvPHVOfNdBe^tQe(&`. +1. Click on the link to open the browser. The tool is prepopulated with all relevant variables. +1. In the far left pane, click **Connect**. +1. 
Click the **Tools** button to display the Data Commons tools and prompts.
+1. In the left pane, select a tool.
+1. In the right pane, scroll below the prompts to view the input form.
+1. Enter values for required fields and click **Run Tool**. Data are shown in the **Tool Result** box.
+

diff --git a/mcp/index.md b/mcp/index.md
index ba27ca0c3..8e9c74c01 100644
--- a/mcp/index.md
+++ b/mcp/index.md
@@ -5,10 +5,49 @@ nav_order: 20
 has_children: true
 ---
 
-# MCP overview
+# Query data interactively with an AI agent
 
-Data Commons has recently launched a [Model Context Protocol](https://github.com/datacommonsorg/agent-toolkit){: target="_blank"} server. This allows you to use any MCP-enabled agent, powered by a Large Language Model (LLM) like Google Gemini, to interactively query Data Commons data. See the following pages for details:
+## Overview
 
-- [Quickstart: Use the Data Commons MCP Server with Gemini CLI](https://github.com/datacommonsorg/agent-toolkit/blob/main/docs/quickstart.md){: target="_blank"}
-- [User Guide](https://github.com/datacommonsorg/agent-toolkit/blob/main/docs/user_guide.md){: target="_blank"}
+The [Data Commons](https://datacommons.org) [Model Context Protocol (MCP)](https://modelcontextprotocol.io/docs/getting-started/intro) service gives AI agents access to the Data Commons knowledge graph and returns data related to statistical variables, topics, and observations. It allows end users to formulate complex natural-language queries interactively, get data in textual, structured, or unstructured formats, and download the data as desired. For example, depending on the agent, a user can get answers to high-level questions such as "give me the economic indicators of the BRICS countries", view simple tables, and download a CSV file of the data in tabular format.
+
+The MCP server returns data from datacommons.org by default or can be configured for a Custom Data Commons instance.
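To make the default-versus-custom choice concrete: the server reads its target from environment variables, which are described in detail on the Run MCP tools page. A `.env` file for a custom instance might look like this sketch; the values shown are placeholders, not real credentials or URLs.

```bash
# Sketch of a .env for a Custom Data Commons instance (placeholder values).
# Omit DC_TYPE and CUSTOM_DC_URL to target datacommons.org, the default.
DC_API_KEY=your-api-key-here
DC_TYPE=custom
CUSTOM_DC_URL=https://your-instance.example.com
```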
+
+The server is a Python binary based on the [FastMCP 2.0 framework](https://gofastmcp.com). A prebuilt package is available at .
+
+At this time, there is no centrally deployed server; you run your own server, and any client you want to connect to it.
+
+![alt text](mcp.png)
+
+## Tools
+
+The server currently supports the following tools:
+
+- `search_indicators`: Searches for available variables and/or topics (a hierarchy of sub-topics and member variables) for a given place or metric. Topics are only relevant for Custom Data Commons instances that have implemented them.
+- `get_observations`: Fetches statistical data for a given variable and place.
+
+> Tip: If you want a deeper understanding of how the tools work, you may use the [MCP Inspector](https://modelcontextprotocol.io/legacy/tools/inspector) to make tool calls directly; see [Test with MCP Inspector](/mcp/develop_agent.html#test-with-mcp-inspector) for details.
+
+## Clients
+
+To connect to the Data Commons MCP Server, you can use any available AI application that supports MCP, or your own custom agent.
+
+The server supports both standard MCP [transport protocols](https://modelcontextprotocol.io/docs/learn/architecture#transport-layer):
+- Stdio: For clients that connect directly using local processes
+- Streamable HTTP: For clients that connect remotely or otherwise require HTTP (e.g. TypeScript)
+
+See [Run MCP tools](run_tools.md) for how to use the server with Google-based clients over Stdio.
+
+For an end-to-end tutorial using a server and agent over HTTP in the cloud, see the sample Data Commons Colab notebook, [Try Data Commons MCP Tools with a Custom Agent](https://github.com/datacommonsorg/agent-toolkit/blob/main/notebooks/datacommons_mcp_tools_with_custom_agent.ipynb).
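Whichever transport a client uses, each tool invocation ultimately travels as a standard MCP `tools/call` JSON-RPC message. The sketch below builds such a request for `get_observations`; note that the argument names (`variable`, `place`) are illustrative assumptions only — the server advertises its real input schema through the standard `tools/list` method, and you can browse it in MCP Inspector.

```python
import json

# A hypothetical tools/call request for the get_observations tool.
# The argument names here are assumptions for illustration; discover the
# actual schema with the standard tools/list method or MCP Inspector.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_observations",
        "arguments": {"variable": "life expectancy", "place": "Brazil"},
    },
}

# Serialize the message as it would be sent over either transport.
payload = json.dumps(request)
print(payload)
```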
+ +## Unsupported features + +At the current time, the following are not supported: +- Non-geographical ("custom") entities +- Events +- Exploring nodes and relationships in the graph + +## Disclaimer + +AI applications using the MCP server can make mistakes, so please double-check responses. diff --git a/mcp/run_tools.md b/mcp/run_tools.md new file mode 100644 index 000000000..bd3e4f43d --- /dev/null +++ b/mcp/run_tools.md @@ -0,0 +1,188 @@ +--- +layout: default +title: Run MCP tools +nav_order: 2 +parent: MCP - Query data interactively with an AI agent +--- + +# Run MCP tools + +This page shows you how to run a local agent that kicks off the server in a subprocess. + +We provide specific instructions for the following agents: + +- [Gemini CLI](https://github.com/google-gemini/gemini-cli) + - Can be used for datacommons.org or a Custom Data Commons instance + - Requires minimal setup + See [Use Gemini CLI](#use-gemini-cli) for this option. +- A sample basic agent based on the Google [Agent Development Kit](https://google.github.io/adk-docs/) and [Gemini Flash 2.5](https://deepmind.google/models/gemini/flash/) + - Best for interacting with a sample ADK-based web agent + - Can be used for datacommons.org or a Custom Data Commons instance + - Requires some additional setup + See [Use the sample agent](#use-the-sample-agent) for this option + +For other clients/agents, see the relevant documentation; you should be able to reuse the commands and arguments detailed below. + +## Prerequisites + +For all instances: + +- A (free) Data Commons API key. To obtain an API key, go to and request a key for the `api.datacommons.org` domain. +- For running the sample agent or the Colab notebook, a GCP project and a Google AI API key. For details on supported keys, see . +- For running the sample agent locally, or running the server locally in standalone mode, install `uv` for managing and installing Python packages; see the instructions at . 
+- For running the sample agent locally, install [Git](https://git-scm.com/).
+
+> **Important**: Additionally, for custom Data Commons instances:
+
+> If you have not rebuilt your Data Commons image since the stable release of 2025-09-08, you must [sync to the latest stable release](/custom_dc/build_image.html#sync-code-to-the-stable-branch), [rebuild your image](/custom_dc/build_image.html#build-package) and [redeploy](/custom_dc/deploy_cloud.html#manage-your-service).
+
+
+## Configure environment variables
+
+### Base Data Commons (datacommons.org)
+
+For basic usage against datacommons.org, set the required `DC_API_KEY` in your shell/startup script (e.g. `.bashrc`).
+```bash
+export DC_API_KEY=
+```
+
+### Custom Data Commons
+
+If you're running against a custom Data Commons instance, we recommend using a `.env` file, which the server locates automatically, to keep all the settings in one place. All supported options are documented in .
+
+To set variables using a `.env` file:
+
+1. From Github, download the file [`.env.sample`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/.env.sample) to the desired directory. Or, if you plan to run the sample agent, clone the repo https://github.com/datacommonsorg/agent-toolkit/.
+
+1. From the directory where you saved the sample file, copy it to a new file called `.env`. For example:
+   ```bash
+   cd ~/agent-toolkit/packages/datacommons-mcp
+   cp .env.sample .env
+   ```
+1. Set the following variables:
+   - `DC_API_KEY`: Set to your Data Commons API key.
+   - `DC_TYPE`: Set to `custom`.
+   - `CUSTOM_DC_URL`: Uncomment and set to the URL of your instance.
+1. Optionally, set other variables.
+1. Save the file.
+
+## Use Gemini CLI
+
+1. Install Gemini CLI: see instructions at https://github.com/google-gemini/gemini-cli#quick-install.
+2. To configure Gemini CLI to recognize the Data Commons server, edit your `~/.gemini/settings.json` file to add the following:
+
+```jsonc
+{
+// ...
+  "mcpServers": {
+    "datacommons-mcp": {
+      "command": "uvx",
+      "args": [
+        "datacommons-mcp@latest",
+        "serve",
+        "stdio"
+      ],
+      "env": {
+        "DC_API_KEY": ""
+      },
+      "trust": true
+    }
+  }
+// ...
+}
+```
+
+## Use the sample agent
+
+We provide a basic agent for interacting with the MCP Server in [packages/datacommons-mcp/examples/sample_agents/basic_agent](https://github.com/datacommonsorg/agent-toolkit/tree/main/packages/datacommons-mcp/examples/sample_agents/basic_agent). To run the agent locally:
+
+1. If not already installed, install `uv` for managing and installing Python packages; see the instructions at .
+1. From the desired directory, clone the `agent-toolkit` repo:
+   ```bash
+   git clone https://github.com/datacommonsorg/agent-toolkit.git
+   ```
+1. Set the following environment variables in your shell or startup script:
+   ```bash
+   export DC_API_KEY=
+   export GEMINI_API_KEY=
+   ```
+1. Go to the root directory of the repo:
+   ```bash
+   cd agent-toolkit
+   ```
+1. Run the agent using one of the following methods.
+
+### Web UI (recommended)
+
+1. Run the following command:
+   ```bash
+   uvx --from google-adk adk web ./packages/datacommons-mcp/examples/sample_agents/
+   ```
+1. Point your browser to the address and port displayed on the screen (e.g. `http://127.0.0.1:8000/`). The Agent Development Kit Dev UI is displayed.
+1. From the **Type a message** box, type your query for Data Commons or select another action.
+
+### Command line interface
+
+1. Run the following command:
+   ```bash
+   uvx --from google-adk adk run ./packages/datacommons-mcp/examples/sample_agents/basic_agent
+   ```
+1. Enter your queries at the `User` prompt in the terminal.
+
+## Use a remote server/client
+
+### Run a standalone server
+
+1. Ensure you've set up the relevant server [environment variables](#configure-environment-variables). If you're using a `.env` file, go to the directory where the file is stored.
+1.
Run:
+   ```bash
+   uvx datacommons-mcp serve http [--port <port number>]
+   ```
+By default, the port is 8080 if you don't set it explicitly.
+
+The server is addressable with the endpoint `mcp`. For example, `http://my-mcp-server:8080/mcp`.
+
+### Connect to an already-running server from a remote client
+
+Below we provide instructions for Gemini CLI and a sample ADK agent. If you're using a different client, consult its documentation to determine how to specify an HTTP URL.
+
+#### Gemini CLI
+
+To configure Gemini CLI to connect to a remote Data Commons server over HTTP, replace the `mcpServers` section in `~/.gemini/settings.json` (or other `settings.json` file) with the following:
+
+```jsonc
+{
+  // ... (additional configuration)
+  "mcpServers": {
+    "datacommons-mcp": {
+      "httpUrl": "http://<host>:<port>/mcp"
+    }
+    // ... (other mcpServers entries)
+  }
+}
+```
+
+#### Sample agent
+
+To configure the sample agent to connect to a remote Data Commons MCP server over HTTP, you need to modify the code in [`basic_agent/agent.py`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py). Set the import statements and agent initialization parameters as follows:
+
+```python
+from google.adk.tools.mcp_tool.mcp_toolset import (
+    MCPToolset,
+    StreamableHTTPConnectionParams
+)
+
+root_agent = LlmAgent(
+    # ...
+    tools=[MCPToolset(
+        connection_params=StreamableHTTPConnectionParams(
+            url="http://<host>:<port>/mcp"
+        )
+    )],
+)
+```
+Run the agent as described in [Use the sample agent](#use-the-sample-agent) above.
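Putting the remote pieces together, a minimal local end-to-end run uses two terminals. This sketch reuses the commands from the sections above, with `localhost` and the default port 8080 standing in for your actual host and port:

```bash
# Terminal 1: start the standalone server over streamable HTTP
# (8080 is the default port if --port is omitted)
uvx datacommons-mcp serve http --port 8080

# Terminal 2: run a client pointed at http://localhost:8080/mcp,
# e.g. the sample agent edited as shown above
uvx --from google-adk adk web ./packages/datacommons-mcp/examples/sample_agents/
```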
+
+

From f481c4083db48e8e00a72b4bcf58184ca79809ac Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Wed, 15 Oct 2025 18:51:36 +0000
Subject: [PATCH 014/121] Restructuring

---
 mcp/develop_agent.md |  2 +-
 mcp/index.md         | 10 +++++-----
 mcp/run_tools.md     | 39 +++++++++++++++++++++++++++------------
 3 files changed, 33 insertions(+), 18 deletions(-)

diff --git a/mcp/develop_agent.md b/mcp/develop_agent.md
index 98527e6be..7603b490a 100644
--- a/mcp/develop_agent.md
+++ b/mcp/develop_agent.md
@@ -1,7 +1,7 @@
 ---
 layout: default
 title: Develop an ADK agent
-nav_order: 2
+nav_order: 3
 parent: MCP - Query data interactively with an AI agent
 ---
 
diff --git a/mcp/index.md b/mcp/index.md
index 8e9c74c01..14b95ac8d 100644
--- a/mcp/index.md
+++ b/mcp/index.md
@@ -5,8 +5,12 @@ nav_order: 20
 has_children: true
 ---
 
+{:.no_toc}
 # Query data interactively with an AI agent
 
+* TOC
+{:toc}
+
 ## Overview
 
 The [Data Commons](https://datacommons.org) [Model Context Protocol (MCP)](https://modelcontextprotocol.io/docs/getting-started/intro) service gives AI agents access to the Data Commons knowledge graph and returns data related to statistical variables, topics, and observations. It allows end users to formulate complex natural-language queries interactively, get data in textual, structured, or unstructured formats, and download the data as desired. For example, depending on the agent, a user can get answers to high-level questions such as "give me the economic indicators of the BRICS countries", view simple tables, and download a CSV file of the data in tabular format.
@@ -17,7 +21,7 @@ The server is a Python binary based on the [FastMCP 2.0 framework](https://gofas
 
 At this time, there is no centrally deployed server; you run your own server, and any client you want to connect to it.
-![alt text](mcp.png) +![alt text](/assets/images/mcp.png) ## Tools @@ -36,10 +40,6 @@ The server supports both standard MCP [transport protocols](https://modelcontext - Stdio: For clients that connect directly using local processes - Streamable HTTP: For clients that connect remotely or otherwise require HTTP (e.g. Typescript) -See [Run MCP tools](run_tools.md) for how to use the server with Google-based clients over Stdio. - -For an end-to-end tutorial using a server and agent over HTTP in the cloud, see the sample Data Commons Colab notebook, [Try Data Commons MCP Tools with a Custom Agent](https://github.com/datacommonsorg/agent-toolkit/blob/main/notebooks/datacommons_mcp_tools_with_custom_agent.ipynb). - ## Unsupported features At the current time, the following are not supported: diff --git a/mcp/run_tools.md b/mcp/run_tools.md index bd3e4f43d..411eae20a 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -7,23 +7,26 @@ parent: MCP - Query data interactively with an AI agent # Run MCP tools -This page shows you how to run a local agent that kicks off the server in a subprocess. +This page shows you how to run a local agent and connect to a server running locally or remotely. We provide specific instructions for the following agents: - + - [Gemini CLI](https://github.com/google-gemini/gemini-cli) - Can be used for datacommons.org or a Custom Data Commons instance - Requires minimal setup - See [Use Gemini CLI](#use-gemini-cli) for this option. + - Uses [Gemini Flash 2.5](https://deepmind.google/models/gemini/flash/) + + See [Use Gemini CLI](#use-gemini-cli) for this option. 
- A sample basic agent based on the Google [Agent Development Kit](https://google.github.io/adk-docs/) and [Gemini Flash 2.5](https://deepmind.google/models/gemini/flash/) - - Best for interacting with a sample ADK-based web agent + - Best for interacting with a Web UI - Can be used for datacommons.org or a Custom Data Commons instance + - Can be customized to run other LLMs - Requires some additional setup + See [Use the sample agent](#use-the-sample-agent) for this option +For an end-to-end tutorial using a server and agent over HTTP in the cloud, see the sample Data Commons Colab notebook, [Try Data Commons MCP Tools with a Custom Agent](https://github.com/datacommonsorg/agent-toolkit/blob/main/notebooks/datacommons_mcp_tools_with_custom_agent.ipynb). + For other clients/agents, see the relevant documentation; you should be able to reuse the commands and arguments detailed below. ## Prerequisites @@ -36,7 +39,6 @@ For all instances: - For running the sample agent locally, install [Git](https://git-scm.com/). > **Important**: Additionally, for custom Data Commons instances: - > If you have not rebuilt your Data Commons image since the stable release of 2025-09-08, you must [sync to the latest stable release](/custom_dc/build_image.html#sync-code-to-the-stable-branch), [rebuild your image](/custom_dc/build_image.html#build-package) and [redeploy](/custom_dc/deploy_cloud.html#manage-your-service). @@ -55,7 +57,7 @@ If you're running a against a custom Data Commons instance, we recommend using a To set variables using a `.env` file: -1. From Github, download the file [`.env.sample`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/.env.sample) to the desired directory. Or, if you plan to run the sample agent, clone the repo https://github.com/datacommonsorg/agent-toolkit/. +1. 
From Github, download the file [`.env.sample`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/.env.sample) to the desired directory. Or, if you plan to run the sample agent, clone the repo . 1. From the directory where you saved the sample file, copy it to a new file called `.env`. For example: ```bash @@ -71,7 +73,7 @@ To set variables using a `.env` file: ## Use Gemini CLI -1. Install Gemini CLI: see instructions at https://github.com/google-gemini/gemini-cli#quick-install. +1. Install Gemini CLI: see instructions at . 2. To configure Gemini CLI to recognize the Data Commons server, edit your `~/.gemini/settings.json` file to add the following: ```jsonc @@ -94,6 +96,11 @@ To set variables using a `.env` file: // ... } ``` +1. From any directory, run `gemini`. +1. To see the Data Commons tools, use `/mcp tools`. +1. Start sending [natural-language queries](#sample-data-commons-queries). + +> **Tip**: To ensure that Gemini CLI uses the Data Commons MCP tools, and not its own `GoogleSearch` tool, include a prompt to use Data Commons in your query. For example, use a query like "Use Data Commons tools to answer the following: ..." You can also add such a prompt to a [`GEMINI.md` file](https://codelabs.developers.google.com/gemini-cli-hands-on#9) so that it's persisted across sessions. ## Use the sample agent @@ -122,7 +129,7 @@ We provide a basic agent for interacting with the MCP Server in [packages/dataco uvx --from google-adk adk web ./packages/datacommons-mcp/examples/sample_agents/ ``` 1. Point your browser to the address and port displayed on the screen (e.g. `http://127.0.0.1:8000/`). The Agent Development Kit Dev UI is displayed. -1. From the **Type a message** box, type your query for Data Commons or select another action. +1. From the **Type a message** box, type your [query for Data Commons](#sample-data-commons-queries) or select another action. 
### Command line interface @@ -130,7 +137,15 @@ We provide a basic agent for interacting with the MCP Server in [packages/dataco ```bash uvx --from google-adk adk run ./packages/datacommons-mcp/examples/sample_agents/basic_agent ``` -1. Enter your queries at the `User` prompt in the terminal. +1. Enter your [queries](#sample-data-commons-queries) at the `User` prompt in the terminal. + +## Sample Data Commons queries + +The MCP tools excel at natural-language queries that involve comparisons between two or more entities, such as countries or metrics. Here are some examples of such queries: + +- "What health data do you have for Africa?" +- "Compare the life expectancy, economic inequality, and GDP growth for BRICS nations." +- "Generate a concise report on income vs diabetes in US counties." ## Use a remote server/client From 5d70c4a52cb7ffccdc224ce599f63a0b379d71e5 Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Wed, 15 Oct 2025 18:53:55 +0000 Subject: [PATCH 015/121] Remove unused file --- api/python/v2/datacommons_client.html | 644 -------------------------- 1 file changed, 644 deletions(-) delete mode 100644 api/python/v2/datacommons_client.html diff --git a/api/python/v2/datacommons_client.html b/api/python/v2/datacommons_client.html deleted file mode 100644 index c76f71b31..000000000 --- a/api/python/v2/datacommons_client.html +++ /dev/null @@ -1,644 +0,0 @@ - - - - -Python: package datacommons_client - - - - - -
 
datacommons_client (version 2.1.0)
index
/usr/local/google/home/kmoscoe/api-python/datacommons_client/__init__.py
-

-

- - - - - -
 
Package Contents
       
client
-
endpoints (package)
-
models (package)
-
utils (package)
-

- - - - - -
 
Classes
       
-
builtins.object -
-
-
datacommons_client.client.DataCommonsClient -
datacommons_client.endpoints.base.API -
-
-
datacommons_client.endpoints.base.Endpoint(builtins.object) -
-
-
datacommons_client.endpoints.node.NodeEndpoint -
datacommons_client.endpoints.observation.ObservationEndpoint -
datacommons_client.endpoints.resolve.ResolveEndpoint -
-
-
-

- - - - - - - -
 
class API(builtins.object)
   API(api_key: Optional[str] = None, dc_instance: Optional[str] = None, url: Optional[str] = None)

-Represents a configured API interface to the Data Commons API.

-This class handles environment setup, resolving the base URL, building headers,
-or optionally using a fully qualified URL directly. It can be used standalone
-to interact with the API or in combination with Endpoint classes.
 
 Methods defined here:
-
__init__(self, api_key: Optional[str] = None, dc_instance: Optional[str] = None, url: Optional[str] = None)
Initializes the API instance.

-Args:
-    api_key: The API key for authentication. Defaults to None.
-    dc_instance: The Data Commons instance domain. Ignored if `url` is provided.
-                 Defaults to 'datacommons.org' if both `url` and `dc_instance` are None.
-    url: A fully qualified URL for the base API. This may be useful if more granular control
-        of the API is required (for local development, for example). If provided, `dc_instance`
-        should not be provided.

-Raises:
-    ValueError: If both `dc_instance` and `url` are provided.
- -
__repr__(self) -> str
Returns a readable representation of the API object.

-Indicates the base URL and if it's authenticated.

-Returns:
-    str: A string representation of the API object.
- -
post(self, payload: dict[str, typing.Any], endpoint: Optional[str] = None, *, all_pages: bool = True, next_token: Optional[str] = None) -> Dict[str, Any]
Makes a POST request using the configured API environment.

-If `endpoint` is provided, it will be appended to the base_url. Otherwise,
-it will just POST to the base URL.

-Args:
-    payload: The JSON payload for the POST request.
-    endpoint: An optional endpoint path to append to the base URL.
-    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
-        Defaults to True. Set to False to only fetch the first page. In that case, a
-        `next_token` key in the response will indicate if more pages are available.
-        That token can be used to fetch the next page.

-Returns:
-    A dictionary containing the merged response data.

-Raises:
-    ValueError: If the payload is not a valid dictionary.
- -
-Data descriptors defined here:
-
__dict__
-
dictionary for instance variables
-
-
__weakref__
-
list of weak references to the object
-
-
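The `all_pages`/`next_token` contract documented for `post` above is a standard cursor-pagination loop: request a page, merge it into the accumulated result, and repeat while the response carries a `nextToken`. A toy sketch of that merging behavior (the `pages` dict stands in for the real HTTP POST; none of these helper names come from the library):

```python
def fetch_all_pages(fetch_page, merge, next_token=None):
    """Keep requesting pages until no nextToken is returned, merging as we go."""
    merged = {}
    while True:
        page = fetch_page(next_token)
        merged = merge(merged, page)
        next_token = page.get("nextToken")
        if not next_token:
            return merged

# A fake two-page endpoint standing in for the HTTP POST:
pages = {
    None: {"data": [1, 2], "nextToken": "page-2"},
    "page-2": {"data": [3]},
}
merged = fetch_all_pages(
    pages.get,
    lambda acc, page: {"data": acc.get("data", []) + page["data"]},
)
# merged now holds the data from both pages
```

Calling `post(..., all_pages=False)` corresponds to a single iteration of this loop, surfacing `next_token` to the caller instead of following it.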

- - - - - - - -
 
class DataCommonsClient(builtins.object)
   DataCommonsClient(api_key: Optional[str] = None, *, dc_instance: Optional[str] = 'datacommons.org', url: Optional[str] = None)

-A client for interacting with the Data Commons API.

-This class provides convenient access to the V2 Data Commons API endpoints.

-Attributes:
-    api (API): An instance of the API class that handles requests.
-    node (NodeEndpoint): Provides access to node-related queries, such as fetching property labels
-        and values for individual or multiple nodes in the Data Commons knowledge graph.
-    observation (ObservationEndpoint): Handles observation-related queries, allowing retrieval of
-        statistical observations associated with entities, variables, and dates (e.g., GDP of California in 2010).
-    resolve (ResolveEndpoint): Manages resolution queries to find different DCIDs for entities.
 
 Methods defined here:
-
__init__(self, api_key: Optional[str] = None, *, dc_instance: Optional[str] = 'datacommons.org', url: Optional[str] = None)
Initializes the DataCommonsClient.

-Args:
-    api_key (Optional[str]): The API key for authentication. Defaults to None. Note that
-        custom DC instances do not currently require an API key.
-    dc_instance (Optional[str]): The Data Commons instance to use. Defaults to "datacommons.org".
-    url (Optional[str]): A custom, fully resolved URL for the Data Commons API. Defaults to None.
- -
observations_dataframe(self, variable_dcids: str | list[str], date: datacommons_client.endpoints.payloads.ObservationDate | str, entity_dcids: Union[Literal['all'], list[str]] = 'all', entity_type: Optional[str] = None, parent_entity: Optional[str] = None, property_filters: Optional[dict[str, str | list[str]]] = None)
Fetches statistical observations and returns them as a Pandas DataFrame.

-The Observation API fetches statistical observations linked to entities and variables
-at a particular date (e.g., "population of USA in 2020", "GDP of California in 2010").

-Args:
-variable_dcids (str | list[str]): One or more variable DCIDs for the observation.
-date (ObservationDate | str): The date for which observations are requested. It can be
-    a specific date, "all" to retrieve all observations, or "latest" to get the most recent observations.
-entity_dcids (Literal["all"] | list[str], optional): The entity DCIDs for which to retrieve data.
-    Defaults to "all".
-entity_type (Optional[str]): The type of entities to filter by when `entity_dcids="all"`.
-    Required if `entity_dcids="all"`. Defaults to None.
-parent_entity (Optional[str]): The parent entity under which the target entities fall.
-    Required if `entity_dcids="all"`. Defaults to None.
-property_filters (Optional[dict[str, str | list[str]]]): An optional dictionary used to filter
-    the data by using observation properties like `measurementMethod`, `unit`, or `observationPeriod`.

-Returns:
-    pd.DataFrame: A DataFrame containing the requested observations.
- -
-Data descriptors defined here:
-
__dict__
-
dictionary for instance variables
-
-
__weakref__
-
list of weak references to the object
-
-
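`observations_dataframe` essentially flattens nested per-variable, per-entity time series into one row per observation before handing them to Pandas. The reshaping idea, sketched with plain dictionaries and made-up toy data (the real response layout carries more fields, such as facet metadata):

```python
def flatten_observations(nested):
    """Turn {variable: {entity: [(date, value), ...]}} into flat rows."""
    rows = []
    for variable, entities in nested.items():
        for entity, series in entities.items():
            for date, value in series:
                rows.append(
                    {"variable": variable, "entity": entity,
                     "date": date, "value": value}
                )
    return rows

# Toy data shaped like an observation response (values are illustrative):
rows = flatten_observations(
    {"Count_Person": {"country/NGA": [("2019", 200e6), ("2020", 206e6)]}}
)
```

`pd.DataFrame(rows)` would then yield the tabular form the method returns.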

- - - - - - - -
 
class NodeEndpoint(datacommons_client.endpoints.base.Endpoint)
   NodeEndpoint(api: datacommons_client.endpoints.base.API)

-Initializes the NodeEndpoint with a given API configuration.

-Args:
-    api (API): The API instance providing the environment configuration
-        (base URL, headers, authentication) to be used for requests.
 
 
Method resolution order:
-
NodeEndpoint
-
datacommons_client.endpoints.base.Endpoint
-
builtins.object
-
-
-Methods defined here:
-
__getattr__(self, name)
- -
__init__(self, api: datacommons_client.endpoints.base.API)
Initializes the NodeEndpoint with a given API configuration.
- -
fetch(self, node_dcids: str | list[str], expression: str, *, all_pages: bool = True, next_token: Optional[str] = None) -> datacommons_client.endpoints.response.NodeResponse
Fetches properties or arcs for given nodes and properties.

-Args:
-    node_dcids (str | List[str]): The DCID(s) of the nodes to query.
-    expression (str): The property or relation expression(s) to query.
-    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
-      Defaults to True. Set to False to only fetch the first page. In that case, a
-      `next_token` key in the response will indicate if more pages are available.
-      That token can be used to fetch the next page.
-    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

-Returns:
-    NodeResponse: The response object containing the queried data.

-Example:
-    ```python
-    response = node.fetch(
-        node_dcids=["geoId/06"],
-        expression="<-"
-    )
-    print(response)
-    ```
- -
fetch_all_classes(self, *, all_pages: bool = True, next_token: Optional[str] = None) -> datacommons_client.endpoints.response.NodeResponse
Fetches all Classes available in the Data Commons knowledge graph.

-Args:
-  all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
-      Defaults to True. Set to False to only fetch the first page. In that case, a
-      `next_token` key in the response will indicate if more pages are available.
-      That token can be used to fetch the next page.
-  next_token: Optionally, the token to fetch the next page of results. Defaults to None.


-Returns:
-    NodeResponse: The response object containing all statistical variables.

-Example:
-    ```python
-    response = node.fetch_all_classes()
-    print(response)
-    ```
- -
fetch_entity_names(self, entity_dcids: str | list[str], language: Optional[str] = 'en', fallback_language: Optional[str] = None) -> dict[str, datacommons_client.models.node.Name]
Fetches entity names in the specified language, with optional fallback to English.
-Args:
-  entity_dcids: A single DCID or a list of DCIDs to fetch names for.
-  language: Language code (e.g., "en", "es"). Defaults to "en" (DEFAULT_NAME_LANGUAGE).
-  fallback_language: If provided, this language will be used as a fallback if the requested
-    language is not available. If not provided, no fallback will be used.
-Returns:
-  A dictionary mapping each DCID to a dictionary with the mapped name, language, and
-    the property used.
- -
fetch_place_ancestors(self, place_dcids: str | list[str], as_tree: bool = False, *, max_concurrent_requests: Optional[int] = 10) -> dict[str, list[dict[str, str]] | dict]
Fetches the full ancestry (flat or nested) for one or more entities.
-For each input DCID, this method builds the complete ancestry graph using a
-breadth-first traversal and parallel fetching.
-It returns either a flat list of unique parents or a nested tree structure for
-each entity, depending on the `as_tree` flag. The flat list matches the structure
-of the `/api/place/parent` endpoint of the DC website.
-Args:
-    place_dcids (str | list[str]): One or more DCIDs of the entities whose ancestry
-       will be fetched.
-    as_tree (bool): If True, returns a nested tree structure; otherwise, returns a flat list.
-        Defaults to False.
-    max_concurrent_requests (Optional[int]): The maximum number of concurrent requests to make.
-        Defaults to PLACES_MAX_WORKERS.
-Returns:
-    dict[str, list[dict[str, str]] | dict]: A dictionary mapping each input DCID to either:
-        - A flat list of parent dictionaries (if `as_tree` is False), or
-        - A nested ancestry tree (if `as_tree` is True). Each parent is represented by
-          a dict with 'dcid', 'name', and 'type'.
- -
fetch_place_children(self, place_dcids: str | list[str], *, children_type: Optional[str] = None, as_dict: bool = True) -> dict[str, list[datacommons_client.models.node.Node | dict]]
Fetches the direct children of one or more entities using the 'containedInPlace' property.

-Args:
-    place_dcids (str | list[str]): A single place DCID or a list of DCIDs to query.
-    children_type (str, optional): The type of the child entities to
-        fetch (e.g., 'Country', 'State', 'IPCCPlace_50'). If None, fetches all child types.
-    as_dict (bool): If True, returns a dictionary mapping each input DCID to its
-        immediate children entities. If False, returns a dictionary of Node objects.

-Returns:
-    dict[str, list[Node | dict]]: A dictionary mapping each input DCID to a list of its
-    immediate children. Each child is represented as a Node object or as a dictionary with
-    the same data.
- -
fetch_place_descendants(self, place_dcids: str | list[str], descendants_type: Optional[str] = None, as_tree: bool = False, *, max_concurrent_requests: Optional[int] = 10) -> dict[str, list[dict[str, str]] | dict]
Fetches the full descendants (flat or nested) for one or more entities.
-For each input DCID, this method builds the complete descendants graph using a
-breadth-first traversal and parallel fetching.

-It returns either a flat list of unique children or a nested tree structure for
-each entity, depending on the `as_tree` flag.

-Args:
-    place_dcids (str | list[str]): One or more DCIDs of the entities whose descendants
-       will be fetched.
-    descendants_type (Optional[str]): The type of the descendants to fetch (e.g., 'Country', 'State').
-        If None, fetches all descendant types.
-    as_tree (bool): If True, returns a nested tree structure; otherwise, returns a flat list.
-        Defaults to False.
-    max_concurrent_requests (Optional[int]): The maximum number of concurrent requests to make.
-        Defaults to PLACES_MAX_WORKERS.
-Returns:
-    dict[str, list[dict[str, str]] | dict]: A dictionary mapping each input DCID to either:
-        - A flat list of Node dictionaries (if `as_tree` is False), or
-        - A nested ancestry tree (if `as_tree` is True). Each child is represented by
-          a dict.
- -
fetch_place_parents(self, place_dcids: str | list[str], *, as_dict: bool = True) -> dict[str, list[datacommons_client.models.node.Node | dict]]
Fetches the direct parents of one or more entities using the 'containedInPlace' property.

-Args:
-    place_dcids (str | list[str]): A single place DCID or a list of DCIDs to query.
-    as_dict (bool): If True, returns a dictionary mapping each input DCID to its
-        immediate parent entities. If False, returns a dictionary of Node objects.

-Returns:
-    dict[str, list[Node | dict]]: A dictionary mapping each input DCID to a list of its
-    immediate parent entities. Each parent is represented as a Node object or
-    as a dictionary with the same data.
- -
fetch_property_labels(self, node_dcids: str | list[str], out: bool = True, *, all_pages: bool = True, next_token: Optional[str] = None) -> datacommons_client.endpoints.response.NodeResponse
Fetches all property labels for the given nodes.

-Args:
-    node_dcids (str | list[str]): The DCID(s) of the nodes to query.
-    out (bool): Whether to fetch outgoing properties (`->`). Defaults to True.
-    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
-      Defaults to True. Set to False to only fetch the first page. In that case, a
-      `next_token` key in the response will indicate if more pages are available.
-      That token can be used to fetch the next page.
-    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

-Returns:
-    NodeResponse: The response object containing the property labels.

-Example:
-    ```python
-    response = node.fetch_property_labels(node_dcids="geoId/06")
-    print(response)
-    ```
- -
fetch_property_values(self, node_dcids: str | list[str], properties: str | list[str], constraints: Optional[str] = None, out: bool = True, *, all_pages: bool = True, next_token: Optional[str] = None) -> datacommons_client.endpoints.response.NodeResponse
Fetches the values of specific properties for given nodes.

-Args:
-    node_dcids (str | List[str]): The DCID(s) of the nodes to query.
-    properties (str | List[str]): The property or relation expression(s) to query.
-    constraints (Optional[str]): Additional constraints for the query. Defaults to None.
-    out (bool): Whether to fetch outgoing properties. Defaults to True.
-    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
-      Defaults to True. Set to False to only fetch the first page. In that case, a
-      `next_token` key in the response will indicate if more pages are available.
-      That token can be used to fetch the next page.
-    next_token: Optionally, the token to fetch the next page of results. Defaults to None.


-Returns:
-    NodeResponse: The response object containing the property values.

-Example:
-    ```python
-    response = node.fetch_property_values(
-        node_dcids=["geoId/06"],
-        properties="name",
-        out=True
-    )
-    print(response)
-    ```
- -
-Methods inherited from datacommons_client.endpoints.base.Endpoint:
-
__repr__(self) -> str
Returns a readable representation of the Endpoint object.

-Shows the endpoint and underlying API configuration.

-Returns:
-    str: A string representation of the Endpoint object.
- -
post(self, payload: dict[str, typing.Any], all_pages: bool = True, next_token: Optional[str] = None) -> Dict[str, Any]
Makes a POST request to the specified endpoint using the API instance.

-Args:
-    payload: The JSON payload for the POST request.
-    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
-        Defaults to True. Set to False to only fetch the first page. In that case, a
-        `next_token` key in the response will indicate if more pages are available.
-        That token can be used to fetch the next page.
-    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

-Returns:
-    A dictionary with the merged API response data.

-Raises:
-    ValueError: If the payload is not a valid dictionary.
- -
-Data descriptors inherited from datacommons_client.endpoints.base.Endpoint:
-
__dict__
-
dictionary for instance variables
-
-
__weakref__
-
list of weak references to the object
-
-
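The breadth-first ancestry traversal that `fetch_place_ancestors` describes can be illustrated with a toy `containedInPlace` parent map (the DCIDs below are examples; this is not the library's implementation, which fetches parents over the network in parallel):

```python
from collections import deque

def ancestors(parent_map, dcid):
    """Collect all unique ancestors of `dcid` via breadth-first traversal."""
    seen, order = set(), []
    queue = deque(parent_map.get(dcid, []))
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        queue.extend(parent_map.get(node, []))
    return order

parents = {
    "geoId/0644000": ["geoId/06"],   # Los Angeles -> California
    "geoId/06": ["country/USA"],     # California -> United States
    "country/USA": ["northamerica", "Earth"],
    "northamerica": ["Earth"],
}
chain = ancestors(parents, "geoId/0644000")
```

The `as_tree=True` variant would instead nest each parent's own ancestors rather than deduplicating into a flat list.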

- - - - - - - -
 
class ObservationEndpoint(datacommons_client.endpoints.base.Endpoint)
   ObservationEndpoint(api: datacommons_client.endpoints.base.API)

-A class to interact with the observation API endpoint.

-Args:
-    api (API): The API instance providing the environment configuration
-        (base URL, headers, authentication) to be used for requests.
 
 
Method resolution order:
-
ObservationEndpoint
-
datacommons_client.endpoints.base.Endpoint
-
builtins.object
-
-
-Methods defined here:
-
__init__(self, api: datacommons_client.endpoints.base.API)
Initializes the ObservationEndpoint instance.
- -
fetch(self, variable_dcids: str | list[str], date: datacommons_client.endpoints.payloads.ObservationDate | str = <ObservationDate.LATEST: 'LATEST'>, select: Optional[list[datacommons_client.endpoints.payloads.ObservationSelect | str]] = None, entity_dcids: Union[str, list[str], NoneType] = None, entity_expression: Optional[str] = None, filter_facet_domains: Union[str, list[str], NoneType] = None, filter_facet_ids: Union[str, list[str], NoneType] = None) -> datacommons_client.endpoints.response.ObservationResponse
Fetches data from the observation endpoint.

-Args:
-    variable_dcids (str | list[str]): One or more variable IDs for the data.
-    date (str | ObservationDate): The date for which data is being requested.
-        Defaults to the latest observation.
-    select (list[ObservationSelect]): Fields to include in the response.
-        Defaults to ["date", "variable", "entity", "value"].
-    entity_dcids (Optional[str | list[str]]): One or more entity IDs to filter the data.
-    entity_expression (Optional[str]): A string expression to filter entities.
-    filter_facet_domains (Optional[str | list[str]]): One or more domain names to filter the data.
-    filter_facet_ids (Optional[str | list[str]]): One or more facet IDs to filter the data.

-Returns:
-    ObservationResponse: The response object containing observations for the specified query.
- -
fetch_available_statistical_variables(self, entity_dcids: str | list[str]) -> dict[str, list[str]]
Fetches available statistical variables (which have observations) for given entities.
-Args:
-    entity_dcids (str | list[str]): One or more entity DCID(s) to fetch variables for.
-Returns:
-    dict[str, list[str]]: A dictionary mapping entity DCIDs to their available statistical variables.
- -
fetch_observations_by_entity_dcid(self, date: datacommons_client.endpoints.payloads.ObservationDate | str, entity_dcids: str | list[str], variable_dcids: str | list[str], *, select: Optional[list[datacommons_client.endpoints.payloads.ObservationSelect | str]] = None, filter_facet_domains: Union[str, list[str], NoneType] = None, filter_facet_ids: Union[str, list[str], NoneType] = None) -> datacommons_client.endpoints.response.ObservationResponse
Fetches all observations for one or more given entities.

-Args:
-    date (ObservationDate | str): The date option for the observations.
-        Use 'all' for all dates, 'latest' for the most recent data,
-        or provide a date as a string (e.g., "2024").
-    entity_dcids (str | list[str]): One or more entity IDs to filter the data.
-    variable_dcids (str | list[str]): The variable(s) to fetch observations for.
-        This can be a single variable ID or a list of IDs.
-    select (Optional[list[ObservationSelect | str]]): Fields to include in the response.
-        If not provided, defaults to ["date", "variable", "entity", "value"].
-    filter_facet_domains (Optional[str | list[str]]): One or more domain names to filter the data.
-    filter_facet_ids (Optional[str | list[str]]): One or more facet IDs to filter the data.

-Returns:
-    ObservationResponse: The response object containing observations for the specified entities.

-Example:
-    To fetch all observations for Nigeria for a specific variable:

-    ```python
-    api = API()
-    ObservationEndpoint(api).fetch_observations_by_entity_dcid(
-        date="all",
-        entity_dcids="country/NGA",
-        variable_dcids="sdg/SI_POV_DAY1"
-    )
-    ```
- -
fetch_observations_by_entity_type(self, date: datacommons_client.endpoints.payloads.ObservationDate | str, parent_entity: str, entity_type: str, variable_dcids: str | list[str], *, select: Optional[list[datacommons_client.endpoints.payloads.ObservationSelect | str]] = None, filter_facet_domains: Union[str, list[str], NoneType] = None, filter_facet_ids: Union[str, list[str], NoneType] = None) -> datacommons_client.endpoints.response.ObservationResponse
Fetches all observations for a given entity type.

-Args:
-    date (ObservationDate | str): The date option for the observations.
-        Use 'all' for all dates, 'latest' for the most recent data,
-        or provide a date as a string (e.g., "2024").
-    parent_entity (str): The parent entity under which the target entities fall.
-        For example, "africa" for African countries, or "Earth" for all countries.
-    entity_type (str): The type of entities for which to fetch observations.
-        For example, "Country" or "Region".
-    variable_dcids (str | list[str]): The variable(s) to fetch observations for.
-        This can be a single variable ID or a list of IDs.
-    select (Optional[list[ObservationSelect | str]]): Fields to include in the response.
-        If not provided, defaults to ["date", "variable", "entity", "value"].
-    filter_facet_domains (Optional[str | list[str]]): One or more domain names to filter the data.
-    filter_facet_ids (Optional[str | list[str]]): One or more facet IDs to filter the data.

-Returns:
-    ObservationResponse: The response object containing observations for the specified entity type.

-Example:
-    To fetch all observations for African countries for a specific variable:

-    ```python
-    api = API()
-    ObservationEndpoint(api).fetch_observations_by_entity_type(
-        date="all",
-        parent_entity="africa",
-        entity_type="Country",
-        variable_dcids="sdg/SI_POV_DAY1"
-    )
-    ```
- -
-Methods inherited from datacommons_client.endpoints.base.Endpoint:
-
__repr__(self) -> str
Returns a readable representation of the Endpoint object.

-Shows the endpoint and underlying API configuration.

-Returns:
-    str: A string representation of the Endpoint object.
- -
post(self, payload: dict[str, typing.Any], all_pages: bool = True, next_token: Optional[str] = None) -> Dict[str, Any]
Makes a POST request to the specified endpoint using the API instance.

-Args:
-    payload: The JSON payload for the POST request.
-    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
-        Defaults to True. Set to False to only fetch the first page. In that case, a
-        `next_token` key in the response will indicate if more pages are available.
-        That token can be used to fetch the next page.
-    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

-Returns:
-    A dictionary with the merged API response data.

-Raises:
-    ValueError: If the payload is not a valid dictionary.
- -
-Data descriptors inherited from datacommons_client.endpoints.base.Endpoint:
-
__dict__
-
dictionary for instance variables
-
-
__weakref__
-
list of weak references to the object
-
-
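The `date` argument accepted throughout this endpoint has three modes: `'all'`, `'latest'`, or a specific date string. Viewed as a filter over a time series, the semantics look roughly like this (toy sketch, not library code):

```python
def select_dates(series, date):
    """series: {date: value}; date: 'all', 'latest', or a specific date string."""
    if date == "all":
        return dict(series)
    if date == "latest":
        latest = max(series)  # ISO-style dates sort lexicographically
        return {latest: series[latest]}
    return {date: series[date]} if date in series else {}

# Illustrative values only:
gdp = {"2018": 19.5, "2019": 21.4, "2020": 21.1}
```

For example, `select_dates(gdp, "latest")` keeps only the most recent point, mirroring the endpoint's default behavior.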

- - - - - - - -
 
class ResolveEndpoint(datacommons_client.endpoints.base.Endpoint)
   ResolveEndpoint(api: datacommons_client.endpoints.base.API)

-A class to interact with the resolve API endpoint.

-Args:
-    api (API): The API instance providing the environment configuration
-        (base URL, headers, authentication) to be used for requests.
 
 
Method resolution order:
-
ResolveEndpoint
-
datacommons_client.endpoints.base.Endpoint
-
builtins.object
-
-
-Methods defined here:
-
__init__(self, api: datacommons_client.endpoints.base.API)
Initializes the ResolveEndpoint instance.
- -
fetch(self, node_ids: str | list[str], expression: str | list[str]) -> datacommons_client.endpoints.response.ResolveResponse
Fetches resolved data for the given nodes and expressions, identified by name,
- coordinates, or wiki ID.

-Args:
-    node_ids (str | list[str]): One or more node IDs to resolve.
-    expression (str): The relation expression to query.

-Returns:
-    ResolveResponse: The response object containing the resolved data.
- -
fetch_dcid_by_coordinates(self, latitude: str, longitude: str, entity_type: Optional[str] = None) -> datacommons_client.endpoints.response.ResolveResponse
Fetches DCIDs for entities by their geographic coordinates.

-Args:
-    latitude (str): Latitude of the entity.
-    longitude (str): Longitude of the entity.
-    entity_type (Optional[str]): Optional type of the entities to refine results
-    (e.g., "City", "State", "Country").

-Returns:
-    ResolveResponse: The response object containing the resolved DCIDs.

-Example:
-    To find the DCID for "Mountain View" using its latitude and longitude:
-    ```python
-    latitude = "37.42"
-    longitude = "-122.08"
-    response = client.fetch_dcid_by_coordinates(latitude=latitude, longitude=longitude)
-    print(response.entities)
-    ```
-    Note:
-     - For ambiguous results, providing an entity type (e.g., "City") can help disambiguate.
-     - The coordinates should be passed as strings in decimal format (e.g., "37.42", "-122.08").
- -
fetch_dcids_by_name(self, names: str | list[str], entity_type: Optional[str] = None) -> datacommons_client.endpoints.response.ResolveResponse
Fetches DCIDs for entities by their names.

-Args:
-    names (str | list[str]): One or more entity names to resolve.
-    entity_type (Optional[str]): Optional type of the entities.

-Returns:
-    ResolveResponse: The response object containing the resolved DCIDs.
- -
fetch_dcids_by_wikidata_id(self, wikidata_ids: str | list[str], entity_type: Optional[str] = None) -> datacommons_client.endpoints.response.ResolveResponse
Fetches DCIDs for entities by their Wikidata IDs.

-Args:
-    wikidata_ids (str | list[str]): One or more Wikidata IDs to resolve.
-    entity_type (Optional[str]): Optional type of the entities.

-Returns:
-    ResolveResponse: The response object containing the resolved DCIDs.
- -
-Methods inherited from datacommons_client.endpoints.base.Endpoint:
-
__repr__(self) -> str
Returns a readable representation of the Endpoint object.

-Shows the endpoint and underlying API configuration.

-Returns:
-    str: A string representation of the Endpoint object.
- -
post(self, payload: dict[str, typing.Any], all_pages: bool = True, next_token: Optional[str] = None) -> Dict[str, Any]
Makes a POST request to the specified endpoint using the API instance.

-Args:
-    payload: The JSON payload for the POST request.
-    all_pages: If True, fetch all pages of the response. If False, fetch only the first page.
-        Defaults to True. Set to False to only fetch the first page. In that case, a
-        `next_token` key in the response will indicate if more pages are available.
-        That token can be used to fetch the next page.
-    next_token: Optionally, the token to fetch the next page of results. Defaults to None.

-Returns:
-    A dictionary with the merged API response data.

-Raises:
-    ValueError: If the payload is not a valid dictionary.
- -
-Data descriptors inherited from datacommons_client.endpoints.base.Endpoint:
-
__dict__
-
dictionary for instance variables
-
-
__weakref__
-
list of weak references to the object
-
-
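Resolution by name can be ambiguous ("Georgia" is both a US state and a country), which is why these methods accept an optional `entity_type` to narrow the candidates. A toy version of that filtering (illustrative data and helper names only; the real endpoint resolves against the knowledge graph):

```python
def resolve_name(index, name, entity_type=None):
    """index maps names to candidate (dcid, type) pairs."""
    candidates = index.get(name, [])
    if entity_type is not None:
        candidates = [c for c in candidates if c[1] == entity_type]
    return [dcid for dcid, _ in candidates]

index = {
    "Georgia": [("geoId/13", "State"), ("country/GEO", "Country")],
}
```

Without `entity_type`, both candidates come back; passing `entity_type="Country"` narrows the result to the country.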

- - - - - -
 
Data
       __all__ = ['DataCommonsClient', 'API', 'NodeEndpoint', 'ObservationEndpoint', 'ResolveEndpoint']
-
\ No newline at end of file

From b241392b2fc2edb3581d94e1e008333946108ccc Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Wed, 15 Oct 2025 18:59:07 +0000
Subject: [PATCH 016/121] Add TOCs etc.

---
 mcp/run_tools.md | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/mcp/run_tools.md b/mcp/run_tools.md
index 411eae20a..b27ffac49 100644
--- a/mcp/run_tools.md
+++ b/mcp/run_tools.md
@@ -5,10 +5,15 @@ nav_order: 2
parent: MCP - Query data interactively with an AI agent
---

+{:.no_toc}
# Run MCP tools

This page shows you how to run a local agent and connect to a server running locally or remotely.

+* TOC
+{:toc}
+
+
We provide specific instructions for the following agents:

- [Gemini CLI](https://github.com/google-gemini/gemini-cli)
@@ -98,7 +103,7 @@ To set variables using a `.env` file:
   ```
1. From any directory, run `gemini`.
1. To see the Data Commons tools, use `/mcp tools`.
-1. Start sending [natural-language queries](#sample-data-commons-queries).
+1. Start sending [natural-language queries](#sample-queries).

> **Tip**: To ensure that Gemini CLI uses the Data Commons MCP tools, and not its own `GoogleSearch` tool, include a prompt to use Data Commons in your query. For example, use a query like "Use Data Commons tools to answer the following: ..." You can also add such a prompt to a [`GEMINI.md` file](https://codelabs.developers.google.com/gemini-cli-hands-on#9) so that it's persisted across sessions.

## Use the sample agent

@@ -122,7 +127,7 @@ We provide a basic agent for interacting with the MCP Server in [packages/dataco
   ```
1. Run the agent using one of the following methods.

-### Web UI (recommended):
+### Web UI (recommended)

1. Run the following command:
   ```bash
   uvx --from google-adk adk web ./packages/datacommons-mcp/examples/sample_agents/
   ```
1. Point your browser to the address and port displayed on the screen (e.g. `http://127.0.0.1:8000/`). The Agent Development Kit Dev UI is displayed.
-1. From the **Type a message** box, type your [query for Data Commons](#sample-data-commons-queries) or select another action.
+1. From the **Type a message** box, type your [query for Data Commons](#sample-queries) or select another action.

### Command line interface

@@ -137,11 +142,11 @@ We provide a basic agent for interacting with the MCP Server in [packages/dataco
   ```bash
   uvx --from google-adk adk run ./packages/datacommons-mcp/examples/sample_agents/basic_agent
   ```
-1. Enter your [queries](#sample-data-commons-queries) at the `User` prompt in the terminal.
+1. Enter your [queries](#sample-queries) at the `User` prompt in the terminal.

-## Sample Data Commons queries
+## Sample queries

-The MCP tools excel at natural-language queries that involve comparisons between two or more entities, such as countries or metrics. Here are some examples of such queries:
+The Data Commons MCP tools excel at natural-language queries that involve comparisons between two or more entities, such as countries or metrics. Here are some examples of such queries:

- "What health data do you have for Africa?"
- "Compare the life expectancy, economic inequality, and GDP growth for BRICS nations."
- "Generate a concise report on income vs diabetes in US counties."

From f78ccd30c9f90cd819b079fffdbaa298d40c8ccc Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Wed, 15 Oct 2025 19:06:18 +0000
Subject: [PATCH 017/121] Update var formatting

---
 mcp/run_tools.md | 33 +++++++++++++++++----------------
 1 file changed, 17 insertions(+), 16 deletions(-)

diff --git a/mcp/run_tools.md b/mcp/run_tools.md
index b27ffac49..8c6342e0b 100644
--- a/mcp/run_tools.md
+++ b/mcp/run_tools.md
@@ -52,9 +52,9 @@ For all instances:

### Base Data Commons (datacommons.org)

For basic usage against datacommons.org, set the required `DC_API_KEY` in your shell/startup script (e.g. `.bashrc`).
-```bash
-export DC_API_KEY=
-```

+export DC_API_KEY=YOUR API KEY
+
### Custom Data Commons @@ -81,7 +81,7 @@ To set variables using a `.env` file: 1. Install Gemini CLI: see instructions at . 2. To configure Gemini CLI to recognize the Data Commons server, edit your `~/.gemini/settings.json` file to add the following: -```jsonc +
 {
 // ...
     "mcpServers": {
@@ -93,14 +93,14 @@ To set variables using a `.env` file:
                 "stdio"
             ],
             "env": {
-                "DC_API_KEY": ""
+                "DC_API_KEY": "YOUR DATA COMMONS API KEY"
             },
             "trust": true
         }
     }
 // ...
 }
-```
+</pre>
 1. From any directory, run `gemini`.
 1. To see the Data Commons tools, use `/mcp tools`.
 1. Start sending [natural-language queries](#sample-queries).
@@ -117,10 +117,10 @@ We provide a basic agent for interacting with the MCP Server in [packages/dataco
    git clone https://github.com/datacommonsorg/agent-toolkit.git
    ```
 1. Set the following environment variables in your shell or startup script:
-   ```bash
-   export DC_API_KEY=
-   export GEMINI_API_KEY=
-   ```
+   <pre>
+   export DC_API_KEY=<var>YOUR DATA COMMONS API KEY</var>
+   export GEMINI_API_KEY=<var>YOUR GOOGLE AI API KEY</var>
+   </pre>
 1. Go to the root directory of the repo:
    ```bash
    cd agent-toolkit
    ```
 1. Run the agent using one of the following methods.
@@ -158,9 +158,9 @@ The Data Commons MCP tools excel at natural-language queries that involve compar
 ### Run a standalone server
 
 1. Ensure you've set up the relevant server [environment variables](#environment-variables). If you're using a `.env` file, go to the directory where the file is stored.
 1. Run:
-   ```bash
-   uvx datacommons-mcp serve http [--port ]
-   ```
+   <pre>
+   uvx datacommons-mcp serve http [--port <var>PORT</var>]
+   </pre>
 
 By default, the port is 8080 if you don't set it explicitly. The server is addressable with the endpoint `mcp`. For example, `http://my-mcp-server:8080/mcp`.
@@ -173,17 +173,18 @@ Below we provide instructions for Gemini CLI and a sample ADK agent. If you're u
 
 To configure Gemini CLI to connect to a remote Data Commons server over HTTP, replace the `mcpServers` section in `~/.gemini/settings.json` (or other `settings.json` file) with the following:
 
-```jsonc
+<pre>
 {
 // ... (additional configuration)
 "mcpServers": {
     "datacommons-mcp": {
-      "httpUrl": "http://:/mcp"
+      "httpUrl": "http://HOST:PORT/mcp"
     }
     // ... (other mcpServers entries)
   }
 }
-```
+
+</pre>
 
 #### Sample agent
 
 To configure the sample agent to connect to a remote Data Commons MCP server over HTTP, you need to modify the code in [`basic_agent/agent.py`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py). Set import modules and agent initialization parameters as follows:

From 923f7c525bbc2e8980facd67a44f81310e9b59ca Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Thu, 16 Oct 2025 16:31:15 +0000
Subject: [PATCH 018/121] comments from Keyur

---
 mcp/run_tools.md | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mcp/run_tools.md b/mcp/run_tools.md
index 8c6342e0b..41a3c8741 100644
--- a/mcp/run_tools.md
+++ b/mcp/run_tools.md
@@ -19,7 +19,6 @@ We provide specific instructions for the following agents:
 - [Gemini CLI](https://github.com/google-gemini/gemini-cli)
   - Can be used for datacommons.org or a Custom Data Commons instance
   - Requires minimal setup
-  - Uses [Gemini Flash 2.5](https://deepmind.google/models/gemini/flash/)
 
   See [Use Gemini CLI](#use-gemini-cli) for this option.
 
 - A sample basic agent based on the Google [Agent Development Kit](https://google.github.io/adk-docs/) and [Gemini Flash 2.5](https://deepmind.google/models/gemini/flash/)
@@ -28,7 +27,7 @@ We provide specific instructions for the following agents:
   - Can be customized to run other LLMs
   - Requires some additional setup
 
-  See [Use the sample agent](#use-the-sample-agent) for this option
+  See [Use the sample agent](#use-the-sample-agent) for this option.
 
 For an end-to-end tutorial using a server and agent over HTTP in the cloud, see the sample Data Commons Colab notebook, [Try Data Commons MCP Tools with a Custom Agent](https://github.com/datacommonsorg/agent-toolkit/blob/main/notebooks/datacommons_mcp_tools_with_custom_agent.ipynb).

From 7ec3018384cd6437ba7456aa5d0abbb49deb3adc Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Thu, 16 Oct 2025 16:44:41 +0000
Subject: [PATCH 019/121] another change from Keyur

---
 mcp/run_tools.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mcp/run_tools.md b/mcp/run_tools.md
index 41a3c8741..b3b4e85f2 100644
--- a/mcp/run_tools.md
+++ b/mcp/run_tools.md
@@ -15,7 +15,7 @@ This page shows you how to run a local agent and connect to a server running loc
 
 We provide specific instructions for the following agents:
 
-
+
 - [Gemini CLI](https://github.com/google-gemini/gemini-cli)
   - Can be used for datacommons.org or a Custom Data Commons instance
   - Requires minimal setup

From 3fe9f4fbf0d73ca3d560245b8daee05e0c119e0f Mon Sep 17 00:00:00 2001
From: kmoscoe <165203920+kmoscoe@users.noreply.github.com>
Date: Thu, 16 Oct 2025 17:08:15 -0400
Subject: [PATCH 020/121] Update mcp/index.md

Co-authored-by: Christie Ellks
---
 mcp/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mcp/index.md b/mcp/index.md
index 14b95ac8d..05db32f40 100644
--- a/mcp/index.md
+++ b/mcp/index.md
@@ -30,7 +30,7 @@ The server currently supports the following tools:
 - `search_indicators`: Searches for available variables and/or topics (a hierarchy of sub-topics and member variables) for a given place or metric. Topics are only relevant for Custom Data Commons instances that have implemented them.
 - `get_observations`: Fetches statistical data for a given variable and place.
 
-> Tip: If you want a deeper understanding of how the tools work, you may use the [MCP Inspector](https://modelcontextprotocol.io/legacy/tools/inspector) to make tool calls directly; see [Test with MCP Inspector](/mcp/develop_agent.html#test-with-mcp-inspector) for details.
+> Tip: If you want a deeper understanding of the tools' inputs and outputs, follow [Test with MCP Inspector](/mcp/develop_agent.html#test-with-mcp-inspector) to try out some manual tool calls.
 
 ## Clients

From 93db32f7ded87cd5abfc50d3e88650baf74d7509 Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Thu, 16 Oct 2025 21:30:49 +0000
Subject: [PATCH 021/121] Changes from Christie

---
 mcp/index.md     | 2 +-
 mcp/run_tools.md | 6 +++---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/mcp/index.md b/mcp/index.md
index 05db32f40..b946f8b64 100644
--- a/mcp/index.md
+++ b/mcp/index.md
@@ -13,7 +13,7 @@ has_children: true
 
 ## Overview
 
-The [Data Commons](https://datacommons.org) [Model Context Protocol (MCP)](https://modelcontextprotocol.io/docs/getting-started/intro) service gives AI agents access to the Data Commons knowledge graph and returns data related to statistical variables, topics, and observations. It allows end users to formulate complex natural-language queries interactively, get data in textual, structured or unstructured formats, and download the data as desired. For example, depending on the agent, a user can answer high-level questions such as "give me the economic indicators of the BRICS countries", view simple tables, and download a CSV file of the data in tabular format.
+The Data Commons [Model Context Protocol (MCP)](https://modelcontextprotocol.io/docs/getting-started/intro) service gives AI agents access to the Data Commons knowledge graph and returns data related to statistical variables, topics, and observations. It allows end users to formulate complex natural-language queries interactively, get data in textual, structured or unstructured formats, and download the data as desired. For example, depending on the agent, a user can answer high-level questions such as "give me the economic indicators of the BRICS countries", view simple tables, and download a CSV file of the data in tabular format.
 
 The MCP server returns data from datacommons.org by default or can be configured for a Custom Data Commons instance.

diff --git a/mcp/run_tools.md b/mcp/run_tools.md
index b3b4e85f2..d5dfb677a 100644
--- a/mcp/run_tools.md
+++ b/mcp/run_tools.md
@@ -29,7 +29,7 @@ We provide specific instructions for the following agents:
 
   See [Use the sample agent](#use-the-sample-agent) for this option.
 
-For an end-to-end tutorial using a server and agent over HTTP in the cloud, see the sample Data Commons Colab notebook, [Try Data Commons MCP Tools with a Custom Agent](https://github.com/datacommonsorg/agent-toolkit/blob/main/notebooks/datacommons_mcp_tools_with_custom_agent.ipynb).
+For an end-to-end tutorial using a server and agent over HTTP, see the sample Data Commons Colab notebook, [Try Data Commons MCP Tools with a Custom Agent](https://github.com/datacommonsorg/agent-toolkit/blob/main/notebooks/datacommons_mcp_tools_with_custom_agent.ipynb).
 
 For other clients/agents, see the relevant documentation; you should be able to
@@ -38,8 +38,8 @@ For all instances:
 
 - A (free) Data Commons API key. To obtain an API key, go to  and request a key for the `api.datacommons.org` domain.
+- Install `uv` for managing and installing Python packages; see the instructions at .
 - For running the sample agent or the Colab notebook, a GCP project and a Google AI API key. For details on supported keys, see .
-- For running the sample agent locally, or running the server locally in standalone mode, install `uv` for managing and installing Python packages; see the instructions at .
 - For running the sample agent locally, install [Git](https://git-scm.com/).
 
 > **Important**: Additionally, for custom Data Commons instances:
@@ -155,7 +155,7 @@ The Data Commons MCP tools excel at natural-language queries that involve compar
 
 ### Run a standalone server
 
-1. Ensure you've set up the relevant server [environment variables](#environment-variables). If you're using a `.env` file, go to the directory where the file is stored.
+1. Ensure you've set up the relevant server [environment variables](#configure-environment-variables). If you're using a `.env` file, go to the directory where the file is stored.
 1. Run:
    <pre>
    uvx datacommons-mcp serve http [--port <var>PORT</var>]
    </pre>
From 5cd3a31cc909a509d4c650f56f3e8ce6508648e1 Mon Sep 17 00:00:00 2001
From: kmoscoe <165203920+kmoscoe@users.noreply.github.com>
Date: Thu, 16 Oct 2025 17:35:03 -0400
Subject: [PATCH 022/121] Apply suggestion from @clincoln8

Co-authored-by: Christie Ellks 
---
 mcp/run_tools.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mcp/run_tools.md b/mcp/run_tools.md
index d5dfb677a..90dd04c6f 100644
--- a/mcp/run_tools.md
+++ b/mcp/run_tools.md
@@ -6,7 +6,7 @@ parent: MCP - Query data interactively with an AI agent
 ---
 
 {:.no_toc}
-# Run MCP tools
+# Run and connect to the server
 
 This page shows you how to run a local agent and connect to a server running locally or remotely.
 

From 655aece77987abc749c9ffb67f2861d4d4115c79 Mon Sep 17 00:00:00 2001
From: Kara Moscoe 
Date: Thu, 16 Oct 2025 21:36:41 +0000
Subject: [PATCH 023/121] More changes from Christie

---
 mcp/run_tools.md | 2 --
 1 file changed, 2 deletions(-)

diff --git a/mcp/run_tools.md b/mcp/run_tools.md
index 90dd04c6f..78aade02f 100644
--- a/mcp/run_tools.md
+++ b/mcp/run_tools.md
@@ -35,8 +35,6 @@ For other clients/agents, see the relevant documentation; you should be able to
 
 ## Prerequisites
 
-For all instances:
-
 - A (free) Data Commons API key. To obtain an API key, go to  and request a key for the `api.datacommons.org` domain.
 - Install `uv` for managing and installing Python packages; see the instructions at . 
 - For running the sample agent or the Colab notebook, a GCP project and a Google AI API key. For details on supported keys, see .
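The prerequisites in this hunk center on two environment variables, `DC_API_KEY` and `GEMINI_API_KEY`. A minimal sketch of a pre-launch check; the variable names come from the docs being edited above, but the helper itself is hypothetical and not part of the toolkit:

```python
import os

# Variable names taken from the prerequisites documented above.
REQUIRED_VARS = ("DC_API_KEY", "GEMINI_API_KEY")

def missing_vars(env=None, required=REQUIRED_VARS):
    """Return the required variables that are unset or empty."""
    if env is None:
        env = os.environ
    return [name for name in required if not env.get(name)]

# Checking an explicit dict instead of the real environment:
print(missing_vars({"DC_API_KEY": "abc123"}))  # -> ['GEMINI_API_KEY']
```

Running the check against `os.environ` before launching an agent gives a clearer failure than a missing-key error deep inside a tool call.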

From 1944f146d4f8f6a96694d62a410157b2cd97ea99 Mon Sep 17 00:00:00 2001
From: Kara Moscoe 
Date: Mon, 20 Oct 2025 15:06:57 +0000
Subject: [PATCH 024/121] Changes from Dan

---
 mcp/develop_agent.md |  4 ++--
 mcp/index.md         |  2 ++
 mcp/run_tools.md     | 12 ++++++------
 3 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/mcp/develop_agent.md b/mcp/develop_agent.md
index 7603b490a..cf7598ccc 100644
--- a/mcp/develop_agent.md
+++ b/mcp/develop_agent.md
@@ -13,7 +13,7 @@ We provide two sample Google Agent Development Kit-based agents you can use as i
 - The sample [basic agent](https://github.com/datacommonsorg/agent-toolkit/tree/main/packages/datacommons-mcp/examples/sample_agents/basic_agent) is a simple Python ADK agent you can use to develop locally. At the most basic level, you can modify its configuration, including:
    - The [AGENT_INSTRUCTIONS](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/instructions.py)
    - The [AGENT_MODEL](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py#L23)
-   - The transport layer protocol: see [Connect to a remote server](/mcp/run_tools.html#connect-to-an-already-running-server) for details.
+   - The transport layer protocol: see [Connect to a remote server](/mcp/run_tools.html#connect-to-an-already-running-server-from-a-remote-client) for details.
 
    To run the custom code, see [Use the sample agent](/mcp/run_tools.html#use-the-sample-agent).
 
@@ -23,7 +23,7 @@ If you're interested in getting a deeper understanding of Data Commons tools and
 
 To use it:
 
-1. If not already installed on your system, install [`node.js`](https://nodejs.org/en/download) and [`uv`](https://docs.astral.sh/uv/getting-started/installation/).
+1. If not already installed on your system, install [`Node.js`](https://nodejs.org/en/download) and [`uv`](https://docs.astral.sh/uv/getting-started/installation/).
 1. Ensure you've set up the relevant server [environment variables](/mcp/run_tools.html#environment-variables). If you're using a `.env` file, go to the directory where the file is stored.
 1. Run:
    ```
diff --git a/mcp/index.md b/mcp/index.md
index b946f8b64..82951910a 100644
--- a/mcp/index.md
+++ b/mcp/index.md
@@ -40,6 +40,8 @@ The server supports both standard MCP [transport protocols](https://modelcontext
 - Stdio: For clients that connect directly using local processes
 - Streamable HTTP: For clients that connect remotely or otherwise require HTTP (e.g. Typescript)
 
+See [Run and connect to the server](run_tools.md) for procedures for using [Gemini CLI](https://github.com/google-gemini/gemini-cli).
+
 ## Unsupported features
 
 At the current time, the following are not supported:
diff --git a/mcp/run_tools.md b/mcp/run_tools.md
index 78aade02f..ab3f927bb 100644
--- a/mcp/run_tools.md
+++ b/mcp/run_tools.md
@@ -8,7 +8,7 @@ parent: MCP - Query data interactively with an AI agent
 {:.no_toc}
 # Run and connect to the server
 
-This page shows you how to run a local agent and connect to a server running locally or remotely.
+This page shows you how to run a local agent and connect to a Data Commons MCP server running locally or remotely.
 
 * TOC
 {:toc}
@@ -173,12 +173,12 @@ To configure Gemini CLI to connect to a remote Data Commons server over HTTP, re
 
 {
 // ... (additional configuration)
-"mcpServers": {
-    "datacommons-mcp": {
-      "httpUrl": "http://HOST:PORT/mcp"
-    }
+  "mcpServers": {
+     "datacommons-mcp": {
+       "httpUrl": "http://HOST:PORT/mcp"
+      }
     // ... (other mcpServers entries)
-  }
+   }
 }
 
From 21d43ee652fd493bf3097c8091a76ca7fa0a842d Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Mon, 20 Oct 2025 15:10:02 +0000
Subject: [PATCH 025/121] Link fixes

---
 mcp/develop_agent.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mcp/develop_agent.md b/mcp/develop_agent.md
index cf7598ccc..62b218812 100644
--- a/mcp/develop_agent.md
+++ b/mcp/develop_agent.md
@@ -24,7 +24,7 @@ To use it:
 
 1. If not already installed on your system, install [`Node.js`](https://nodejs.org/en/download) and [`uv`](https://docs.astral.sh/uv/getting-started/installation/).
-1. Ensure you've set up the relevant server [environment variables](/mcp/run_tools.html#environment-variables). If you're using a `.env` file, go to the directory where the file is stored.
+1. Ensure you've set up the relevant server [environment variables](/mcp/run_tools.html#configure-environment-variables). If you're using a `.env` file, go to the directory where the file is stored.
 1. Run:
    ```
    npx @modelcontextprotocol/inspector uvx datacommons-mcp serve stdio
    ```

From ab810ea96907036821b1f9a065aa5f283f2eb7c1 Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Tue, 21 Oct 2025 18:25:34 +0000
Subject: [PATCH 026/121] Remove MCP inspector info

---
 mcp/develop_agent.md | 20 --------------------
 mcp/index.md         |  2 --
 2 files changed, 22 deletions(-)

diff --git a/mcp/develop_agent.md b/mcp/develop_agent.md
index 62b218812..3b1eddaf9 100644
--- a/mcp/develop_agent.md
+++ b/mcp/develop_agent.md
@@ -17,23 +17,3 @@ We provide two sample Google Agent Development Kit-based agents you can use as i
 
    To run the custom code, see [Use the sample agent](/mcp/run_tools.html#use-the-sample-agent).
-## Test with MCP Inspector
-
-If you're interested in getting a deeper understanding of Data Commons tools and API, the [MCP Inspector](https://modelcontextprotocol.io/legacy/tools/inspector) is a useful web UI for interactively sending tool calls to the server using JSON messages. It runs locally and spawns a server. It uses token-based OAuth for authentication, which it generates itself, so you don't need to specify any keys.
-
-To use it:
-
-1. If not already installed on your system, install [`Node.js`](https://nodejs.org/en/download) and [`uv`](https://docs.astral.sh/uv/getting-started/installation/).
-1. Ensure you've set up the relevant server [environment variables](/mcp/run_tools.html#configure-environment-variables). If you're using a `.env` file, go to the directory where the file is stored.
-1. Run:
-   ```
-   npx @modelcontextprotocol/inspector uvx datacommons-mcp serve stdio
-   ```
-1. Open the Inspector via the pre-filled session token URL which is printed to terminal on server startup. It should look like `http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=`.
-1. Click on the link to open the browser. The tool is prepopulated with all relevant variables.
-1. In the far left pane, click **Connect**.
-1. Click the **Tools** button to display the Data Commons tools and prompts.
-1. In the left pane, select a tool.
-1. In the right pane, scroll below the prompts to view the input form.
-1. Enter values for required fields and click **Run Tool**. Data are shown in the **Tool Result** box.
-

diff --git a/mcp/index.md b/mcp/index.md
index 82951910a..8d227920d 100644
--- a/mcp/index.md
+++ b/mcp/index.md
@@ -30,8 +30,6 @@ The server currently supports the following tools:
 
 - `search_indicators`: Searches for available variables and/or topics (a hierarchy of sub-topics and member variables) for a given place or metric. Topics are only relevant for Custom Data Commons instances that have implemented them.
 - `get_observations`: Fetches statistical data for a given variable and place.
 
-> Tip: If you want a deeper understanding of the tools' inputs and outputs, follow [Test with MCP Inspector](/mcp/develop_agent.html#test-with-mcp-inspector) to try out some manual tool calls.
-
 ## Clients
 
 To connect to the Data Commons MCP Server, you can use any available AI application that supports MCP, or your own custom agent.

From 7f8521f3600d74c2de7d869ea2c16bc1bb374d Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Tue, 21 Oct 2025 19:06:20 +0000
Subject: [PATCH 027/121] Add more context to agent dev doc

---
 mcp/develop_agent.md | 21 ++++++++++++++++-----
 mcp/index.md         |  2 +-
 mcp/run_tools.md     |  7 ++++++-
 3 files changed, 23 insertions(+), 7 deletions(-)

diff --git a/mcp/develop_agent.md b/mcp/develop_agent.md
index 3b1eddaf9..7a092f840 100644
--- a/mcp/develop_agent.md
+++ b/mcp/develop_agent.md
@@ -10,10 +10,21 @@ We provide two sample Google Agent Development Kit-based agents you can use as i
 
 - [Try Data Commons MCP Tools with a Custom Agent](https://github.com/datacommonsorg/agent-toolkit/blob/main/notebooks/datacommons_mcp_tools_with_custom_agent.ipynb) is a Google Colab tutorial that shows how to build an ADK Python agent step by step.
-- The sample [basic agent](https://github.com/datacommonsorg/agent-toolkit/tree/main/packages/datacommons-mcp/examples/sample_agents/basic_agent) is a simple Python ADK agent you can use to develop locally. At the most basic level, you can modify its configuration, including:
-   - The [AGENT_INSTRUCTIONS](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/instructions.py)
-   - The [AGENT_MODEL](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py#L23)
-   - The transport layer protocol: see [Connect to a remote server](/mcp/run_tools.html#connect-to-an-already-running-server-from-a-remote-client) for details.
+- The sample [basic agent](https://github.com/datacommonsorg/agent-toolkit/tree/main/packages/datacommons-mcp/examples/sample_agents/basic_agent) is a simple Python [Google ADK](https://google.github.io/adk-docs/) agent you can use to develop locally.
 
-   To run the custom code, see [Use the sample agent](/mcp/run_tools.html#use-the-sample-agent).
+## Customize the sample agent
+
+You can make changes directly to the Python files in . You'll need to [restart the agent](/mcp/run_tools.html#use-the-sample-agent) any time you make changes.
+
+> Tip: Because the agent is configured to download the ADK dependencies at run time, you do not need to install the Google ADK.
+
+### Customize the model
+
+To change to a different LLM, edit the [`AGENT_MODEL`] constant in [/packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py#L23){target="_blank"}.
+
+### Customize agent behavior
+
+The agent's behavior is determined by prompts provided in the [`AGENT_INSTRUCTIONS`] in [packages/datacommons-mcp/examples/sample_agents/basic_agent/instructions.py](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/instructions.py).
+
+You can add your own prompts to modify how the agent handles tool results. For example, you might want to give a prompt to "build a report for every response" or "always save tabular results to a CSV file". See the Google ADK page on [LLM agent instructions](https://google.github.io/adk-docs/agents/llm-agents/#guiding-the-agent-instructions-instruction) for tips on how to write good prompts.

diff --git a/mcp/index.md b/mcp/index.md
index 8d227920d..264616e82 100644
--- a/mcp/index.md
+++ b/mcp/index.md
@@ -27,7 +27,7 @@ At this time, there is no centrally deployed server; you run your own server, an
 
 The server currently supports the following tools:
 
-- `search_indicators`: Searches for available variables and/or topics (a hierarchy of sub-topics and member variables) for a given place or metric. Topics are only relevant for Custom Data Commons instances that have implemented them.
+- `search_indicators`: Searches for available variables and/or topics (a hierarchy of sub-topics and member variables) for a given place or metric.
 - `get_observations`: Fetches statistical data for a given variable and place.

diff --git a/mcp/run_tools.md b/mcp/run_tools.md
index ab3f927bb..58b417847 100644
--- a/mcp/run_tools.md
+++ b/mcp/run_tools.md
@@ -143,9 +143,14 @@ We provide a basic agent for interacting with the MCP Server in [packages/dataco
 
 ## Sample queries
 
-The Data Commons MCP tools excel at natural-language queries that involve comparisons between two or more entities, such as countries or metrics. Here are some examples of such queries:
+The Data Commons MCP tools excel at natural-language queries that involve:
+- Comparisons between two or more entities, such as countries or metrics
+- Exploring data available for a given topic
+
+Here are some examples of such queries:
 
 - "What health data do you have for Africa?"
+- "What data do you have on water quality in Zimbabwe?"
 - "Compare the life expectancy, economic inequality, and GDP growth for BRICS nations."
 - "Generate a concise report on income vs diabetes in US counties."
From 4f880c101909932c9fdf0481ada21727db3b73a9 Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Tue, 21 Oct 2025 19:09:23 +0000
Subject: [PATCH 028/121] fix formatting

---
 mcp/develop_agent.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mcp/develop_agent.md b/mcp/develop_agent.md
index 7a092f840..2d9e1e313 100644
--- a/mcp/develop_agent.md
+++ b/mcp/develop_agent.md
@@ -20,11 +20,11 @@ You can make changes directly to the Python files in
Date: Tue, 23 Sep 2025 10:32:31 -0700
Subject: [PATCH 029/121] Create placeholders for LLM pages

---
 llm/index.md | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 llm/index.md

diff --git a/llm/index.md b/llm/index.md
new file mode 100644
index 000000000..e69de29bb

From 74a7ab209a7ffabbc9d628d939bbca8b12f79902 Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Fri, 24 Oct 2025 14:16:22 +0000
Subject: [PATCH 030/121] merge

---
 mcp/index.md | 1 -
 1 file changed, 1 deletion(-)

diff --git a/mcp/index.md b/mcp/index.md
index 264616e82..4a75836db 100644
--- a/mcp/index.md
+++ b/mcp/index.md
@@ -50,4 +50,3 @@ At the current time, the following are not supported:
 
 ## Disclaimer
 
 AI applications using the MCP server can make mistakes, so please double-check responses.
-

From 1b768e98e9ad098944e6da2163d16c858d545190 Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Fri, 24 Oct 2025 14:23:15 +0000
Subject: [PATCH 031/121] Slight rewording based on comment from Christie

---
 mcp/develop_agent.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mcp/develop_agent.md b/mcp/develop_agent.md
index 2d9e1e313..1fd7cd9d4 100644
--- a/mcp/develop_agent.md
+++ b/mcp/develop_agent.md
@@ -16,7 +16,7 @@ We provide two sample Google Agent Development Kit-based agents you can use as i
 
 You can make changes directly to the Python files in . You'll need to [restart the agent](/mcp/run_tools.html#use-the-sample-agent) any time you make changes.
-> Tip: Because the agent is configured to download the ADK dependencies at run time, you do not need to install the Google ADK.
+> Tip: You do not need to install the Google ADK; when you use the [command we provide](run_tools.md#use-the-sample-agent) to start the agent, it downloads the ADK dependencies at run time.

From c22bdbaf26c584834f90dedba5254c13aed7337c Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Wed, 29 Oct 2025 10:46:54 -0700
Subject: [PATCH 032/121] Lots of restructuring

---
 mcp/run_tools.md | 212 ++++++++++++++++++++++++++++++++---------
 1 file changed, 146 insertions(+), 66 deletions(-)

diff --git a/mcp/run_tools.md b/mcp/run_tools.md
index 58b417847..34784b660 100644
--- a/mcp/run_tools.md
+++ b/mcp/run_tools.md
@@ -13,19 +13,33 @@ This page shows you how to run a local agent and connect to a Data Commons MCP s
 * TOC
 {:toc}
 
-
 We provide specific instructions for the following agents:
 
-- [Gemini CLI](https://github.com/google-gemini/gemini-cli)
+- [Gemini CLI extension](https://geminicli.com/extensions/)
+  - Best for querying datacommons.org
+  - Provides a built-in Data Commons "agent" and prompts
+  - Downloads extension files locally
+  - Minimal setup
+
+  See [Use the Gemini CLI extension](#use-the-gemini-cli-extension) for this option.
+
+  > Tip: If you would like to use this option with a Custom Data Commons instance, we recommend that you develop your own extension. See xxx for details.
+
+- [Gemini CLI](https://geminicli.com/)
   - Can be used for datacommons.org or a Custom Data Commons instance
-  - Requires minimal setup
+  - No additional downloads
+  - Server may be run remotely
+  - Minimal setup
 
   See [Use Gemini CLI](#use-gemini-cli) for this option.
+
 - A sample basic agent based on the Google [Agent Development Kit](https://google.github.io/adk-docs/) and [Gemini Flash 2.5](https://deepmind.google/models/gemini/flash/)
   - Best for interacting with a Web UI
   - Can be used for datacommons.org or a Custom Data Commons instance
-  - Can be customized to run other LLMs
-  - Requires some additional setup
+  - Can be customized to run other LLMs and prompts
+  - Downloads agent code locally
+  - Server may be run remotely
+  - Some additional setup
 
   See [Use the sample agent](#use-the-sample-agent) for this option.
 
@@ -35,15 +49,16 @@ For other clients/agents, see the relevant documentation; you should be able to
 
 ## Prerequisites
 
+These are required for all agents:
+
 - A (free) Data Commons API key. To obtain an API key, go to  and request a key for the `api.datacommons.org` domain.
 - Install `uv` for managing and installing Python packages; see the instructions at .
+
+Other requirements for specific agents are given in their respective sections.
 
 > **Important**: Additionally, for custom Data Commons instances:
 > If you have not rebuilt your Data Commons image since the stable release of 2025-09-08, you must [sync to the latest stable release](/custom_dc/build_image.html#sync-code-to-the-stable-branch), [rebuild your image](/custom_dc/build_image.html#build-package) and [redeploy](/custom_dc/deploy_cloud.html#manage-your-service).
 
 ## Configure environment variables
 
 ### Base Data Commons (datacommons.org)
@@ -73,14 +88,59 @@ To set variables using a `.env` file:
 1. Optionally, set other variables.
 1. Save the file.
 
+## Use the Gemini CLI extension
+
+**Additional prerequisite**: In addition to the [standard prerequisites](#prerequisites), you must have [Git](https://git-scm.com/) installed.
+
+### Install
+
+1. Install Gemini CLI: see instructions at .
+1. Install the extension directly from GitHub:
+   ```sh
+   gemini extensions install https://github.com/gemini-cli-extensions/datacommons
+   ```
+> Note: If you have previously configured Gemini CLI to use the Data Commons MCP Server and want to use the extension instead, be sure to delete the `datacommons-mcp` section from the relevant `settings.json` file (e.g. `~/.gemini/settings.json`).
+
+### Run
+
+1. From any directory, run `gemini`.
+1. To verify that the extension is running, enter `/extensions list`. You should see `datacommons` as an active extension.
+1. Start sending [natural-language queries](#sample-queries).
+
+### Update
+
+After starting up Gemini CLI, if you see a message that a newer version of the extension is available, run the following command:
+```
+gemini extensions update datacommons
+```
+
+### Troubleshoot
+
+You can diagnose common errors, such as invalid API keys, by using the debug flag:
+```
+gemini -d
+```
+
+### Uninstall
+
+To uninstall the extension, run:
+```
+gemini extensions uninstall datacommons
+```
+
 ## Use Gemini CLI
 
+### Install
+
 1. Install Gemini CLI: see instructions at .
 
+### Configure to run a local server
+
+To configure Gemini CLI to recognize the Data Commons server, edit the relevant `settings.json` file (e.g. `~/.gemini/settings.json`) to add the following:
 {
-// ...
+   // ...
     "mcpServers": {
        "datacommons-mcp": {
            "command": "uvx",
@@ -90,14 +150,34 @@ To set variables using a `.env` file:
                 "stdio"
             ],
             "env": {
-                "DC_API_KEY": "YOUR DATA COMMONS API KEY"
+                "DC_API_KEY": "YOUR_DATA_COMMONS_API_KEY",
+                // Include the following line only if you are using a Google API key
+                "GEMINI_API_KEY": "YOUR_GOOGLE_API_KEY"
             },
             "trust": true
         }
     }
-// ...
+   // ...
 }
 
+ +### Configure to connect to a remote server + +1. Start up the MCP server in standalone mode, as described in [Run a standalone server](#run-a-standalone-server). +1. In the `settings.json` file, replace the `datacommons-mcp` specification as follows: +
+   {
+      "mcpServers": {
+         "datacommons-mcp": {
+            "httpUrl": "http://HOST:PORT/mcp"
+            // other settings as above
+         }
+      }
+   }
+   
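+
+   For example, a filled-in version of this configuration, pointing at a server running on the default host and port (the hostname and port here are illustrative), might look like:
+   <pre>
+   {
+      "mcpServers": {
+         "datacommons-mcp": {
+            "httpUrl": "http://localhost:8080/mcp",
+            "trust": true
+         }
+      }
+   }
+   </pre>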
+ +### Usage + 1. From any directory, run `gemini`. 1. To see the Data Commons tools, use `/mcp tools`. 1. Start sending [natural-language queries](#sample-queries). @@ -106,25 +186,60 @@ To set variables using a `.env` file: ## Use the sample agent -We provide a basic agent for interacting with the MCP Server in [packages/datacommons-mcp/examples/sample_agents/basic_agent](https://github.com/datacommonsorg/agent-toolkit/tree/main/packages/datacommons-mcp/examples/sample_agents/basic_agent). To run the agent locally: +We provide a basic agent for interacting with the MCP Server in [packages/datacommons-mcp/examples/sample_agents/basic_agent](https://github.com/datacommonsorg/agent-toolkit/tree/main/packages/datacommons-mcp/examples/sample_agents/basic_agent). + +**Additional prerequisites**: In addition to the [standard prerequisites](#prerequisites), you will need: +- A GCP project and a Google AI API key. For details on supported keys, see . +- [Git](https://git-scm.com/) installed. + +### Set the API key environment variable + +Set `GEMINI_API_KEY` (or `GOOGLE_API_KEY`) in your shell/startup script (e.g. `.bashrc`). +
+export GEMINI_API_KEY=YOUR_API_KEY
+
+ +### Installation + +From the desired directory, clone the `agent-toolkit` repo: +```bash +git clone https://github.com/datacommonsorg/agent-toolkit.git +``` + +### Connect to a remote server (optional) + +If you want to connect to a remote MCP server, follow this procedure before starting the agent: + +1. Start up the MCP server in standalone mode, as described in [Run a standalone server](#run-a-standalone-server). +1. Modify the code in [`basic_agent/agent.py`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py) to set import modules and agent initialization parameters as follows: + +```python +from google.adk.tools.mcp_tool.mcp_toolset import ( + MCPToolset, + StreamableHTTPConnectionParams +) + +root_agent = LlmAgent( + # ... + tools=[McpToolset( + connection_params=StreamableHTTPConnectionParams( + url=f"http://:/mcp" + ) + )], + ) +``` + +### Usage -1. If not already installed, install `uv` for managing and installing Python packages; see the instructions at . -1. From the desired directory, clone the `agent-toolkit` repo: - ```bash - git clone https://github.com/datacommonsorg/agent-toolkit.git - ``` -1. Set the following environment variables in your shell or startup script: -
-   export DC_API_KEY=YOUR DATA COMMONS API KEY
-   export GEMINI_API_KEY=YOUR GOOGLE AI API KEY
-   
1. Go to the root directory of the repo: ```bash cd agent-toolkit ``` 1. Run the agent using one of the following methods. -### Web UI (recommended) +By default, the agent will spawn a local server and connect to it over Stdio. If you want to connect to a remote server, modify the code as described in + +#### Web UI (recommended) 1. Run the following command: ```bash @@ -133,7 +248,7 @@ We provide a basic agent for interacting with the MCP Server in [packages/dataco 1. Point your browser to the address and port displayed on the screen (e.g. `http://127.0.0.1:8000/`). The Agent Development Kit Dev UI is displayed. 1. From the **Type a message** box, type your [query for Data Commons](#sample-queries) or select another action. -### Command line interface +#### Command line interface 1. Run the following command: ```bash @@ -141,6 +256,7 @@ We provide a basic agent for interacting with the MCP Server in [packages/dataco ``` 1. Enter your [queries](#sample-queries) at the `User` prompt in the terminal. + ## Sample queries The Data Commons MCP tools excel at natural-language queries that involve: @@ -154,58 +270,22 @@ Here are some examples of such queries: - "Compare the life expectancy, economic inequality, and GDP growth for BRICS nations." - "Generate a concise report on income vs diabetes in US counties." -## Use a remote server/client - -### Run a standalone server +## Run a standalone server 1. Ensure you've set up the relevant server [environment variables](#configure-environment-variables). If you're using a `.env` file, go to the directory where the file is stored. 1. Run:
-   uvx datacommons-mcp serve http [--port PORT]
+   uvx datacommons-mcp serve http [--host HOSTNAME] [--port PORT]
    
-By default, the port is 8080 if you don't set it explicitly. +By default, the host is `localhost` and the port is `8080` if you don't set these flags explicitly. The server is addressable with the endpoint `mcp`. For example, `http://my-mcp-server:8080/mcp`. -### Connect to an already-running server from a remote client +Above we provide instructions for connecting to the server over HTTP with [Gemini CLI]() and a [sample ADK agent](). If you're using a different client, consult its documentation to determine how to specify an HTTP URL. -Below we provide instructions for Gemini CLI and a sample ADK agent. If you're using a different client, consult its documentation to determine how to specify an HTTP URL. - -#### Gemini CLI - -To configure Gemini CLI to connect to a remote Data Commons server over HTTP, replace the `mcpServers` section in `~/.gemini/settings.json` (or other `settings.json` file) with the following: - -
-{
-// ... (additional configuration)
-  "mcpServers": {
-     "datacommons-mcp": {
-       "httpUrl": "http://HOST:PORT/mcp"
-      }
-    // ... (other mcpServers entries)
-   }
-}
-
+### Connect to an already-running server from a remote client -#### Sample agent -To configure the sample agent to connect to a remote Data Commons MCP server over HTTP, you need to modify the code in [`basic_agent/agent.py`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py). Set import modules and agent initialization parameters as follows: -```python -from google.adk.tools.mcp_tool.mcp_toolset import ( - MCPToolset, - StreamableHTTPConnectionParams -) - -root_agent = LlmAgent( - # ... - tools=[McpToolset( - connection_params=StreamableHTTPConnectionParams( - url=f"http://:/mcp" - ) - )], - ) -``` -Run the agent as described in [Use the sample agent](#use-the-sample-agent) above. From 5faf6efa7c19d1142a57f818eeae3b6273e11929 Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Mon, 3 Nov 2025 10:04:57 -0800 Subject: [PATCH 033/121] More work on Gemini CLI extension --- mcp/index.md | 5 +-- mcp/run_tools.md | 83 +++++++++++++++++++++++++++++------------------- 2 files changed, 53 insertions(+), 35 deletions(-) diff --git a/mcp/index.md b/mcp/index.md index 4a75836db..d563d6c90 100644 --- a/mcp/index.md +++ b/mcp/index.md @@ -15,7 +15,7 @@ has_children: true The Data Commons [Model Context Protocol (MCP)](https://modelcontextprotocol.io/docs/getting-started/intro) service gives AI agents access to the Data Commons knowledge graph and returns data related to statistical variables, topics, and observations. It allows end users to formulate complex natural-language queries interactively, get data in textual, structured or unstructured formats, and download the data as desired. For example, depending on the agent, a user can answer high-level questions such as "give me the economic indicators of the BRICS countries", view simple tables, and download a CSV file of the data in tabular format. 
-The MCP server returns data from datacommons.org by default or can be configured for a Custom Data Commons instance. +The MCP server returns data from datacommons.org by default or can be configured to query a Custom Data Commons instance. The server is a Python binary based on the [FastMCP 2.0 framework](https://gofastmcp.com). A prebuilt package is available at . @@ -38,7 +38,7 @@ The server supports both standard MCP [transport protocols](https://modelcontext - Stdio: For clients that connect directly using local processes - Streamable HTTP: For clients that connect remotely or otherwise require HTTP (e.g. Typescript) -See [Run and connect to the server](run_tools.md) for procedures for using [Gemini CLI](https://github.com/google-gemini/gemini-cli). +See [Run MCP tools](run_tools.md) for procedures for using [Gemini CLI](https://github.com/google-gemini/gemini-cli) and the [Gemini CLI Data Commons Extension](https://geminicli.com/extensions/). ## Unsupported features @@ -46,6 +46,7 @@ At the current time, the following are not supported: - Non-geographical ("custom") entities - Events - Exploring nodes and relationships in the graph +- Returning data formatted for graphic visualizations ## Disclaimer diff --git a/mcp/run_tools.md b/mcp/run_tools.md index 34784b660..87179dca5 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -6,7 +6,7 @@ parent: MCP - Query data interactively with an AI agent --- {:.no_toc} -# Run and connect to the server +# Run MCP tools This page shows you how to run a local agent and connect to a Data Commons MCP server running locally or remotely. @@ -95,7 +95,7 @@ To set variables using a `.env` file: ### Install 1. Install Gemini CLI: see instructions at . -1. Install the extension directly from GitHub: +1. In a new terminal, install the extension directly from GitHub: ```sh gemini extensions install https://github.com/gemini-cli-extensions/datacommons ``` @@ -104,12 +104,15 @@ To set variables using a `.env` file: ### Run 1. 
From any directory, run `gemini`. +1. To verify that the Data commons tools are running, enter `/mcp list`. You should see `datacommons-mcp` as connected. 1. To verify that the extension is running, enter `/extensions list`. You should see `datacommons` as an active extension. 1. Start sending [natural-language queries](#sample-queries). ### Update -After starting up Gemini CLI, if you see a message that a newer version of the extension is available, run the following command: +After starting up Gemini CLI, you may see the message `You have one extension with an update available`. + +In this case, run `/extensions list`. If `datacommons` is displayed with `update available`, run the following command: ``` gemini extensions update datacommons ``` @@ -120,12 +123,28 @@ You can diagnose common errors, such as invalid API keys, by using the debug fla ``` gemini -d ``` +You can also use the `Ctrl-o` option from inside the Gemini input field. + +Here are solutions to some commonly experienced problems. + +#### Install/update/uninstall hangs and does not complete + +1. Check that you are not running the `gemini extensions` command from inside the Gemini input field. Start a new terminal and run it from the command line. +1. Check that you've spelled commands correctly, e.g. `extensions` and not `extension`. + +#### datacommons-mcp is disconnected + +This is usually due to a missing [Data Commons API key](#prerequisites). Be sure to obtain a key and export it on the command line or in a startup script (e.g. `.bashrc`). + +#### Failed to clone Git repository + +Make sure you have installed [Git](https://git-scm.com/) on your system. ### Uninstall To uninstall the extension, run: ``` -gemini extension uninstall datacommons +gemini extensions uninstall datacommons ``` ## Use Gemini CLI @@ -176,7 +195,7 @@ To configure Gemini CLI to recognize the Data Commons server, edit the relevant }
-### Usage +### Send queries 1. From any directory, run `gemini`. 1. To see the Data Commons tools, use `/mcp tools`. @@ -199,37 +218,14 @@ Set `GEMINI_API_KEY` (or `GOOGLE_API_KEY`) in your shell/startup script (e.g. `. export GEMINI_API_KEY=YOUR API KEY
-### Installation +### Install From the desired directory, clone the `agent-toolkit` repo: ```bash git clone https://github.com/datacommonsorg/agent-toolkit.git ``` -### Connect to a remote server (optional) - -If you want to connect to a remote MCP server, follow this procedure before starting the agent: - -1. Start up the MCP server in standalone mode, as described in [Run a standalone server](#run-a-standalone-server). -1. Modify the code in [`basic_agent/agent.py`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py) to set import modules and agent initialization parameters as follows: - -```python -from google.adk.tools.mcp_tool.mcp_toolset import ( - MCPToolset, - StreamableHTTPConnectionParams -) - -root_agent = LlmAgent( - # ... - tools=[McpToolset( - connection_params=StreamableHTTPConnectionParams( - url=f"http://:/mcp" - ) - )], - ) -``` - -### Usage +### Run 1. Go to the root directory of the repo: ```bash @@ -237,7 +233,7 @@ root_agent = LlmAgent( ``` 1. Run the agent using one of the following methods. -By default, the agent will spawn a local server and connect to it over Stdio. If you want to connect to a remote server, modify the code as described in +By default, the agent will spawn a local server and connect to it over Stdio. If you want to connect to a remote server, modify the code as described in before using this procedure. #### Web UI (recommended) @@ -256,6 +252,28 @@ By default, the agent will spawn a local server and connect to it over Stdio. If ``` 1. Enter your [queries](#sample-queries) at the `User` prompt in the terminal. +### Connect to a remote server (optional) + +If you want to connect to a remote MCP server, follow this procedure before starting the agent: + +1. Start up the MCP server in standalone mode, as described in [Run a standalone server](#run-a-standalone-server). +1. 
Modify the code in [`basic_agent/agent.py`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py) to set import modules and agent initialization parameters as follows: + +```python +from google.adk.tools.mcp_tool.mcp_toolset import ( + MCPToolset, + StreamableHTTPConnectionParams +) + +root_agent = LlmAgent( + # ... + tools=[McpToolset( + connection_params=StreamableHTTPConnectionParams( + url=f"http://:/mcp" + ) + )], + ) +``` ## Sample queries @@ -281,9 +299,8 @@ By default, the host is `localhost` and the port is `8080` if you don't set thes The server is addressable with the endpoint `mcp`. For example, `http://my-mcp-server:8080/mcp`. -Above we provide instructions for connecting to the server over HTTP with [Gemini CLI]() and a [sample ADK agent](). If you're using a different client, consult its documentation to determine how to specify an HTTP URL. +You can connect to the server using [Gemini CLI](#use-gemini-cli) or the [sample ADK agent](#use-the-sample-agent). If you're using a different client from the ones documented on this page, consult its documentation to determine how to specify an HTTP URL. 
-### Connect to an already-running server from a remote client From 432bbb9ea78ff0645cd583951508580c68aff065 Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Mon, 3 Nov 2025 10:22:03 -0800 Subject: [PATCH 034/121] More changes --- mcp/run_tools.md | 31 +++++++++++++++++-------------- 1 file changed, 17 insertions(+), 14 deletions(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index 87179dca5..1b3c8c74f 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -17,7 +17,7 @@ We provide specific instructions for the following agents: - [Gemini CLI extension](https://geminicli.com/extensions/) - Best for querying datacommons.org - - Provides a built-in Data Commons "agent" and prompts + - Provides a built-in "agent" and prompts for Data Commons - Downloads extension files locally - Minimal setup @@ -90,16 +90,19 @@ To set variables using a `.env` file: ## Use the Gemini CLI extension -**Additional prerequisite**: In addition to the [standard prerequisites](#prerequisites), you must have [Git](https://git-scm.com/) installed. +**Additional prerequisite**: In addition to the [standard prerequisites](#prerequisites), you must have the following installed: +- [Git](https://git-scm.com/) +- [Google Gemini CLI](https://github.com/google-gemini/gemini-cli#quick-install). -### Install +When you install the extension, it clones the [Data Commons extension Github repo](https://github.com/gemini-cli-extensions/datacommons) to your local system. -1. Install Gemini CLI: see instructions at . -1. In a new terminal, install the extension directly from GitHub: +### Install + +1. Open a new terminal and install the extension directly from GitHub: ```sh gemini extensions install https://github.com/gemini-cli-extensions/datacommons ``` -> Note: If you have previously configured Gemini CLI to use the Data Commons MCP Server and want to use the extension instead, be sure to delete the `datacommons-mcp` section from the relevant `settings.json` file (e.g. ~/.gemini/settings.json`). 
+> Note: If you have previously configured Gemini CLI to use the Data Commons MCP Server and want to use the extension instead, be sure to delete the `datacommons-mcp` section from the relevant `settings.json` file (e.g. `~/.gemini/settings.json`). ### Run @@ -127,15 +130,18 @@ You can also use the `Ctrl-o` option from inside the Gemini input field. Here are solutions to some commonly experienced problems. +{:.no_toc} #### Install/update/uninstall hangs and does not complete 1. Check that you are not running the `gemini extensions` command from inside the Gemini input field. Start a new terminal and run it from the command line. 1. Check that you've spelled commands correctly, e.g. `extensions` and not `extension`. +{:.no_toc} #### datacommons-mcp is disconnected This is usually due to a missing [Data Commons API key](#prerequisites). Be sure to obtain a key and export it on the command line or in a startup script (e.g. `.bashrc`). +{:.no_toc} #### Failed to clone Git repository Make sure you have installed [Git](https://git-scm.com/) on your system. @@ -233,8 +239,9 @@ git clone https://github.com/datacommonsorg/agent-toolkit.git ``` 1. Run the agent using one of the following methods. -By default, the agent will spawn a local server and connect to it over Stdio. If you want to connect to a remote server, modify the code as described in before using this procedure. +By default, the agent will spawn a local server and connect to it over Stdio. If you want to connect to a remote server, modify the code as described in [Connect to a remote server](#remote) before using this procedure. +{:.no_toc} #### Web UI (recommended) 1. Run the following command: @@ -244,6 +251,7 @@ By default, the agent will spawn a local server and connect to it over Stdio. If 1. Point your browser to the address and port displayed on the screen (e.g. `http://127.0.0.1:8000/`). The Agent Development Kit Dev UI is displayed. 1. 
From the **Type a message** box, type your [query for Data Commons](#sample-queries) or select another action. +{:.no_toc} #### Command line interface 1. Run the following command: @@ -252,7 +260,8 @@ By default, the agent will spawn a local server and connect to it over Stdio. If ``` 1. Enter your [queries](#sample-queries) at the `User` prompt in the terminal. -### Connect to a remote server (optional) +{: #remote} +### Configure to connect to a remote server If you want to connect to a remote MCP server, follow this procedure before starting the agent: @@ -300,9 +309,3 @@ By default, the host is `localhost` and the port is `8080` if you don't set thes The server is addressable with the endpoint `mcp`. For example, `http://my-mcp-server:8080/mcp`. You can connect to the server using [Gemini CLI](#use-gemini-cli) or the [sample ADK agent](#use-the-sample-agent). If you're using a different client from the ones documented on this page, consult its documentation to determine how to specify an HTTP URL. - - - - - - From 1154f250e0953e7918ed793c90c46d23a8b9180b Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Mon, 3 Nov 2025 10:36:31 -0800 Subject: [PATCH 035/121] more changes --- mcp/run_tools.md | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index 1b3c8c74f..76935f873 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -70,7 +70,7 @@ export DC_API_KEY=YOUR API KEY ### Custom Data Commons -If you're running a against a custom Data Commons instance, we recommend using a `.env` file, which the server locates automatically, to keep all the settings in one place. All supported options are documented in . +If you're running a against a custom Data Commons instance, we recommend using a `.env` file, which the server locates automatically, to keep all the settings in one place. 
All supported options are documented in [packages/datacommons-mcp/.env.sample](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/.env.sample). To set variables using a `.env` file: @@ -92,23 +92,23 @@ To set variables using a `.env` file: **Additional prerequisite**: In addition to the [standard prerequisites](#prerequisites), you must have the following installed: - [Git](https://git-scm.com/) -- [Google Gemini CLI](https://github.com/google-gemini/gemini-cli#quick-install). +- [Google Gemini CLI](https://github.com/google-gemini/gemini-cli#quick-install) When you install the extension, it clones the [Data Commons extension Github repo](https://github.com/gemini-cli-extensions/datacommons) to your local system. ### Install -1. Open a new terminal and install the extension directly from GitHub: - ```sh - gemini extensions install https://github.com/gemini-cli-extensions/datacommons - ``` +Open a new terminal and install the extension directly from GitHub: +```sh +gemini extensions install https://github.com/gemini-cli-extensions/datacommons +``` > Note: If you have previously configured Gemini CLI to use the Data Commons MCP Server and want to use the extension instead, be sure to delete the `datacommons-mcp` section from the relevant `settings.json` file (e.g. `~/.gemini/settings.json`). ### Run 1. From any directory, run `gemini`. -1. To verify that the Data commons tools are running, enter `/mcp list`. You should see `datacommons-mcp` as connected. -1. To verify that the extension is running, enter `/extensions list`. You should see `datacommons` as an active extension. +1. To verify that the Data commons tools are running, enter `/mcp list`. You should see `datacommons-mcp` listed as `Ready`. If you don't, see the [Troubleshoot](#troubleshoot) section. +1. To verify that the extension is running, enter `/extensions list`. You should see `datacommons` listed as `active`. 1. 
Start sending [natural-language queries](#sample-queries). ### Update @@ -139,7 +139,7 @@ Here are solutions to some commonly experienced problems. {:.no_toc} #### datacommons-mcp is disconnected -This is usually due to a missing [Data Commons API key](#prerequisites). Be sure to obtain a key and export it on the command line or in a startup script (e.g. `.bashrc`). +This is usually due to a missing [Data Commons API key](#prerequisites). Be sure to obtain a key and export it on the command line or in a startup script (e.g. `.bashrc`). If you've exported it in a startup script, be sure to start a new terminal. {:.no_toc} #### Failed to clone Git repository @@ -157,7 +157,7 @@ gemini extensions uninstall datacommons ### Install -1. Install Gemini CLI: see instructions at . +To install Gemini CLI, see the instructions at . ### Configure to run a local server @@ -219,7 +219,7 @@ We provide a basic agent for interacting with the MCP Server in [packages/dataco ### Set the API key environment variable -Set `GEMINI_API_KEY` (or `GOOGLE_API_KEY`) in your shell/startup script (e.g. `.bashrc`). +Set `GEMINI_API_KEY` (or `GOOGLE_API_KEY`) in your shell/startup script (e.g. `.bashrc`):
 export GEMINI_API_KEY=YOUR_API_KEY
 
@@ -233,14 +233,14 @@ git clone https://github.com/datacommonsorg/agent-toolkit.git ### Run +By default, the agent will spawn a local server and connect to it over Stdio. If you want to connect to a remote server, modify the code as described in [Connect to a remote server](#remote) before using this procedure. + 1. Go to the root directory of the repo: ```bash cd agent-toolkit ``` 1. Run the agent using one of the following methods. -By default, the agent will spawn a local server and connect to it over Stdio. If you want to connect to a remote server, modify the code as described in [Connect to a remote server](#remote) before using this procedure. - {:.no_toc} #### Web UI (recommended) From ece88822e5593e5d5043109993ac619897c9523d Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Mon, 3 Nov 2025 10:41:59 -0800 Subject: [PATCH 036/121] more changes --- mcp/run_tools.md | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index 76935f873..272be1372 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -33,8 +33,8 @@ We provide specific instructions for the following agents: See [Use Gemini CLI](#use-gemini-cli) for this option. -- A sample basic agent based on the Google [Agent Development Kit](https://google.github.io/adk-docs/) and [Gemini Flash 2.5](https://deepmind.google/models/gemini/flash/) - - Best for interacting with a Web UI +- A sample basic agent based on the Google [Agent Development Kit](https://google.github.io/adk-docs/) + - Best for interacting with a Web GUI - Can be used for datacommons.org or a Custom Data Commons instance - Can be customized to run other LLMs and prompts - Downloads agent code locally @@ -56,7 +56,7 @@ These are required for all agents: Other requirements for specific agents are given in their respective sections. 
-> **Important**: Additionally, for custom Data Commons instances: +> **Important**: Additionally, for Custom Data Commons instances: > If you have not rebuilt your Data Commons image since the stable release of 2025-09-08, you must [sync to the latest stable release](/custom_dc/build_image.html#sync-code-to-the-stable-branch), [rebuild your image](/custom_dc/build_image.html#build-package) and [redeploy](/custom_dc/deploy_cloud.html#manage-your-service). ## Configure environment variables @@ -90,7 +90,8 @@ To set variables using a `.env` file: ## Use the Gemini CLI extension -**Additional prerequisite**: In addition to the [standard prerequisites](#prerequisites), you must have the following installed: +**Additional prerequisites** +In addition to the [standard prerequisites](#prerequisites), you must have the following installed: - [Git](https://git-scm.com/) - [Google Gemini CLI](https://github.com/google-gemini/gemini-cli#quick-install) @@ -213,7 +214,8 @@ To configure Gemini CLI to recognize the Data Commons server, edit the relevant We provide a basic agent for interacting with the MCP Server in [packages/datacommons-mcp/examples/sample_agents/basic_agent](https://github.com/datacommonsorg/agent-toolkit/tree/main/packages/datacommons-mcp/examples/sample_agents/basic_agent). -**Additional prerequisites**: In addition to the [standard prerequisites](#prerequisites), you will need: +**Additional prerequisites** +In addition to the [standard prerequisites](#prerequisites), you will need: - A GCP project and a Google AI API key. For details on supported keys, see . - [Git](https://git-scm.com/) installed. 
From f81946bff283e5f31af2f8fb3612fb6a95402fb9 Mon Sep 17 00:00:00 2001 From: kmoscoe <165203920+kmoscoe@users.noreply.github.com> Date: Mon, 3 Nov 2025 15:01:49 -0500 Subject: [PATCH 037/121] Update mcp/run_tools.md Co-authored-by: Christie Ellks --- mcp/run_tools.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index 272be1372..eec9e58c6 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -93,7 +93,7 @@ To set variables using a `.env` file: **Additional prerequisites** In addition to the [standard prerequisites](#prerequisites), you must have the following installed: - [Git](https://git-scm.com/) -- [Google Gemini CLI](https://github.com/google-gemini/gemini-cli#quick-install) +- [Google Gemini CLI](https://geminicli.com/docs/get-started/) When you install the extension, it clones the [Data Commons extension Github repo](https://github.com/gemini-cli-extensions/datacommons) to your local system. From 1fb16409bf92121022bbef4ca7179100c45a4efa Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Mon, 3 Nov 2025 12:14:13 -0800 Subject: [PATCH 038/121] implement suggestions from Christie --- mcp/run_tools.md | 19 +++++++++++++++++-- 1 file changed, 17 insertions(+), 2 deletions(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index 272be1372..d8d1e74f4 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -23,7 +23,7 @@ We provide specific instructions for the following agents: See [Use the Gemini CLI extension](#use-the-gemini-cli-extension) for this option. - > Tip: If you would like to use this option with a Custom Data Commons instance, we recommend that you develop your own extension. See xxx for details. 
+ - [Gemini CLI](https://geminicli.com/) - Can be used for datacommons.org or a Custom Data Commons instance @@ -97,14 +97,16 @@ In addition to the [standard prerequisites](#prerequisites), you must have the f When you install the extension, it clones the [Data Commons extension Github repo](https://github.com/gemini-cli-extensions/datacommons) to your local system. +{:.no_toc} ### Install Open a new terminal and install the extension directly from GitHub: ```sh -gemini extensions install https://github.com/gemini-cli-extensions/datacommons +gemini extensions install https://github.com/gemini-cli-extensions/datacommons [--auto-update] ``` > Note: If you have previously configured Gemini CLI to use the Data Commons MCP Server and want to use the extension instead, be sure to delete the `datacommons-mcp` section from the relevant `settings.json` file (e.g. `~/.gemini/settings.json`). +{:.no_toc} ### Run 1. From any directory, run `gemini`. @@ -112,6 +114,7 @@ gemini extensions install https://github.com/gemini-cli-extensions/datacommons 1. To verify that the extension is running, enter `/extensions list`. You should see `datacommons` listed as `active`. 1. Start sending [natural-language queries](#sample-queries). +{:.no_toc} ### Update After starting up Gemini CLI, you may see the message `You have one extension with an update available`. @@ -121,6 +124,7 @@ In this case, run `/extensions list`. If `datacommons` is displayed with `update gemini extensions update datacommons ``` +{:.no_toc} ### Troubleshoot You can diagnose common errors, such as invalid API keys, by using the debug flag: @@ -147,6 +151,7 @@ This is usually due to a missing [Data Commons API key](#prerequisites). Be sure Make sure you have installed [Git](https://git-scm.com/) on your system. 
+{:.no_toc} ### Uninstall To uninstall the extension, run: @@ -156,10 +161,14 @@ gemini extensions uninstall datacommons ## Use Gemini CLI +Before installing, be sure to check the [Prerequisites](#prerequisites) above. + +{:.no_toc} ### Install To install Gemini CLI, see the instructions at . +{:.no_toc} ### Configure to run a local server To configure Gemini CLI to recognize the Data Commons server, edit the relevant `settings.json` file (e.g. `~/.gemini/settings.json`) to add the following: @@ -187,6 +196,7 @@ To configure Gemini CLI to recognize the Data Commons server, edit the relevant }
+{:.no_toc} ### Configure to connect to a remote server 1. Start up the MCP server in standalone mode, as described in [Run a standalone server](#run-a-standalone-server). @@ -202,6 +212,7 @@ To configure Gemini CLI to recognize the Data Commons server, edit the relevant } +{:.no_toc} ### Send queries 1. From any directory, run `gemini`. @@ -219,6 +230,7 @@ In addition to the [standard prerequisites](#prerequisites), you will need: - A GCP project and a Google AI API key. For details on supported keys, see . - [Git](https://git-scm.com/) installed. +{:.no_toc} ### Set the API key environment variable Set `GEMINI_API_KEY` (or `GOOGLE_API_KEY`) in your shell/startup script (e.g. `.bashrc`): @@ -226,6 +238,7 @@ Set `GEMINI_API_KEY` (or `GOOGLE_API_KEY`) in your shell/startup script (e.g. `. export GEMINI_API_KEY=YOUR API KEY +{:.no_toc} ### Install From the desired directory, clone the `agent-toolkit` repo: @@ -233,6 +246,7 @@ From the desired directory, clone the `agent-toolkit` repo: git clone https://github.com/datacommonsorg/agent-toolkit.git ``` +{:.no_toc} ### Run By default, the agent will spawn a local server and connect to it over Stdio. If you want to connect to a remote server, modify the code as described in [Connect to a remote server](#remote) before using this procedure. @@ -263,6 +277,7 @@ By default, the agent will spawn a local server and connect to it over Stdio. If 1. Enter your [queries](#sample-queries) at the `User` prompt in the terminal. 
{: #remote} +{:.no_toc} ### Configure to connect to a remote server If you want to connect to a remote MCP server, follow this procedure before starting the agent: From 90ddfc3c2cc20728447418871a6942c0ec49736a Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Mon, 3 Nov 2025 14:00:10 -0800 Subject: [PATCH 039/121] Add some bullets to comparison section --- mcp/run_tools.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index db98e834b..fbaeb738b 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -17,8 +17,9 @@ We provide specific instructions for the following agents: - [Gemini CLI extension](https://geminicli.com/extensions/) - Best for querying datacommons.org - - Provides a built-in "agent" and prompts for Data Commons + - Provides a built-in "agent" and prompt file for Data Commons - Downloads extension files locally + - Uses `uv` to run the MCP server locally - Minimal setup See [Use the Gemini CLI extension](#use-the-gemini-cli-extension) for this option. @@ -28,7 +29,8 @@ We provide specific instructions for the following agents: - [Gemini CLI](https://geminicli.com/) - Can be used for datacommons.org or a Custom Data Commons instance - No additional downloads - - Server may be run remotely + - MCP server can be run locally or remotely + - You can create your own prompt file - Minimal setup See [Use Gemini CLI](#use-gemini-cli) for this option. 
From a9ad87f59c1b62a6e96c8633ddeed676a5cb02b6 Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Mon, 3 Nov 2025 14:05:53 -0800 Subject: [PATCH 040/121] Fix update section --- Gemfile.lock | 156 +++++++++++++++++++++++++---------------------- mcp/run_tools.md | 4 +- 2 files changed, 84 insertions(+), 76 deletions(-) diff --git a/Gemfile.lock b/Gemfile.lock index 04a3bd150..bcb2794db 100644 --- a/Gemfile.lock +++ b/Gemfile.lock @@ -1,64 +1,72 @@ GEM remote: https://rubygems.org/ specs: - activesupport (7.1.1) + activesupport (7.2.3) base64 + benchmark (>= 0.3) bigdecimal - concurrent-ruby (~> 1.0, >= 1.0.2) + concurrent-ruby (~> 1.0, >= 1.3.1) connection_pool (>= 2.2.5) drb i18n (>= 1.6, < 2) + logger (>= 1.4.2) minitest (>= 5.1) - mutex_m - tzinfo (~> 2.0) - addressable (2.8.5) - public_suffix (>= 2.0.2, < 6.0) - base64 (0.1.1) - bigdecimal (3.1.4) + securerandom (>= 0.3) + tzinfo (~> 2.0, >= 2.0.5) + addressable (2.8.7) + public_suffix (>= 2.0.2, < 7.0) + base64 (0.3.0) + benchmark (0.5.0) + bigdecimal (3.3.1) coffee-script (2.4.1) coffee-script-source execjs - coffee-script-source (1.11.1) + coffee-script-source (1.12.2) colorator (1.1.0) - commonmarker (0.23.10) - concurrent-ruby (1.2.2) - connection_pool (2.4.1) - dnsruby (1.70.0) + commonmarker (0.23.12) + concurrent-ruby (1.3.5) + connection_pool (2.5.4) + dnsruby (1.73.0) + base64 (>= 0.2) + logger (~> 1.6) simpleidn (~> 0.2.1) - drb (2.1.1) - ruby2_keywords + drb (2.2.3) em-websocket (0.5.3) eventmachine (>= 0.12.9) http_parser.rb (~> 0) - ethon (0.16.0) + ethon (0.18.0) ffi (>= 1.15.0) + logger eventmachine (1.2.7) - execjs (2.9.1) - faraday (2.7.11) - base64 - faraday-net_http (>= 2.0, < 3.1) - ruby2_keywords (>= 0.0.4) - faraday-net_http (3.0.2) - ffi (1.16.3) + execjs (2.10.0) + faraday (2.14.0) + faraday-net_http (>= 2.0, < 3.5) + json + logger + faraday-net_http (3.4.1) + net-http (>= 0.5.0) + ffi (1.17.2) + ffi (1.17.2-arm64-darwin) + ffi (1.17.2-x86_64-darwin) forwardable-extended (2.6.0) - gemoji 
(3.0.1) - github-pages (228) - github-pages-health-check (= 1.17.9) - jekyll (= 3.9.3) - jekyll-avatar (= 0.7.0) - jekyll-coffeescript (= 1.1.1) + gemoji (4.1.0) + github-pages (230) + github-pages-health-check (= 1.18.2) + jekyll (= 3.9.5) + jekyll-avatar (= 0.8.0) + jekyll-coffeescript (= 1.2.2) jekyll-commonmark-ghpages (= 0.4.0) - jekyll-default-layout (= 0.1.4) - jekyll-feed (= 0.15.1) + jekyll-default-layout (= 0.1.5) + jekyll-feed (= 0.17.0) jekyll-gist (= 1.5.0) - jekyll-github-metadata (= 2.13.0) + jekyll-github-metadata (= 2.16.1) jekyll-include-cache (= 0.2.1) jekyll-mentions (= 1.6.0) jekyll-optional-front-matter (= 0.3.2) jekyll-paginate (= 1.1.0) jekyll-readme-index (= 0.3.0) jekyll-redirect-from (= 0.16.0) - jekyll-relative-links (= 0.6.1) + jekyll-relative-links (= 0.7.0) jekyll-remote-theme (= 0.4.3) jekyll-sass-converter (= 1.5.2) jekyll-seo-tag (= 2.8.0) @@ -78,28 +86,28 @@ GEM jekyll-theme-tactile (= 0.2.0) jekyll-theme-time-machine (= 0.2.0) jekyll-titles-from-headings (= 0.5.3) - jemoji (= 0.12.0) - kramdown (= 2.3.2) + jemoji (= 0.13.0) + kramdown (= 2.4.0) kramdown-parser-gfm (= 1.1.0) liquid (= 4.0.4) mercenary (~> 0.3) minima (= 2.5.1) nokogiri (>= 1.13.6, < 2.0) - rouge (= 3.26.0) + rouge (= 3.30.0) terminal-table (~> 1.4) - github-pages-health-check (1.17.9) + github-pages-health-check (1.18.2) addressable (~> 2.3) dnsruby (~> 1.60) - octokit (~> 4.0) - public_suffix (>= 3.0, < 5.0) + octokit (>= 4, < 8) + public_suffix (>= 3.0, < 6.0) typhoeus (~> 1.3) html-pipeline (2.14.3) activesupport (>= 2) nokogiri (>= 1.4) http_parser.rb (0.8.0) - i18n (1.14.1) + i18n (1.14.7) concurrent-ruby (~> 1.0) - jekyll (3.9.3) + jekyll (3.9.5) addressable (~> 2.4) colorator (~> 1.0) em-websocket (~> 0.5) @@ -112,11 +120,11 @@ GEM pathutil (~> 0.9) rouge (>= 1.7, < 4) safe_yaml (~> 1.0) - jekyll-avatar (0.7.0) + jekyll-avatar (0.8.0) jekyll (>= 3.0, < 5.0) - jekyll-coffeescript (1.1.1) + jekyll-coffeescript (1.2.2) coffee-script (~> 2.2) - 
coffee-script-source (~> 1.11.1) + coffee-script-source (~> 1.12) jekyll-commonmark (1.4.0) commonmarker (~> 0.22) jekyll-commonmark-ghpages (0.4.0) @@ -124,15 +132,15 @@ GEM jekyll (~> 3.9.0) jekyll-commonmark (~> 1.4.0) rouge (>= 2.0, < 5.0) - jekyll-default-layout (0.1.4) - jekyll (~> 3.0) - jekyll-feed (0.15.1) + jekyll-default-layout (0.1.5) + jekyll (>= 3.0, < 5.0) + jekyll-feed (0.17.0) jekyll (>= 3.7, < 5.0) jekyll-gist (1.5.0) octokit (~> 4.2) - jekyll-github-metadata (2.13.0) + jekyll-github-metadata (2.16.1) jekyll (>= 3.4, < 5.0) - octokit (~> 4.0, != 4.4.0) + octokit (>= 4, < 7, != 4.4.0) jekyll-include-cache (0.2.1) jekyll (>= 3.7, < 5.0) jekyll-last-modified-at (1.3.2) @@ -147,7 +155,7 @@ GEM jekyll (>= 3.0, < 5.0) jekyll-redirect-from (0.16.0) jekyll (>= 3.3, < 5.0) - jekyll-relative-links (0.6.1) + jekyll-relative-links (0.7.0) jekyll (>= 3.3, < 5.0) jekyll-remote-theme (0.4.3) addressable (~> 2.0) @@ -161,7 +169,7 @@ GEM jekyll-sitemap (1.4.0) jekyll (>= 3.7, < 5.0) jekyll-swiss (1.0.0) - jekyll-tabs (1.1.1) + jekyll-tabs (1.2.1) jekyll (>= 3.0, < 5.0) jekyll-theme-architect (0.2.0) jekyll (> 3.5, < 5.0) @@ -207,69 +215,69 @@ GEM jekyll (>= 3.3, < 5.0) jekyll-watch (2.2.1) listen (~> 3.0) - jemoji (0.12.0) - gemoji (~> 3.0) + jemoji (0.13.0) + gemoji (>= 3, < 5) html-pipeline (~> 2.2) jekyll (>= 3.0, < 5.0) - kramdown (2.3.2) + json (2.15.2) + kramdown (2.4.0) rexml kramdown-parser-gfm (1.1.0) kramdown (~> 2.0) liquid (4.0.4) - listen (3.8.0) + listen (3.9.0) rb-fsevent (~> 0.10, >= 0.10.3) rb-inotify (~> 0.9, >= 0.9.10) + logger (1.7.0) mercenary (0.3.6) mini_portile2 (2.8.9) minima (2.5.1) jekyll (>= 3.5, < 5.0) jekyll-feed (~> 0.9) jekyll-seo-tag (~> 2.1) - minitest (5.20.0) - mutex_m (0.1.2) - nokogiri (1.18.9) + minitest (5.26.0) + net-http (0.7.0) + uri + nokogiri (1.18.10) mini_portile2 (~> 2.8.2) racc (~> 1.4) - nokogiri (1.18.9-arm64-darwin) + nokogiri (1.18.10-arm64-darwin) racc (~> 1.4) - nokogiri (1.18.9-x86_64-darwin) + nokogiri 
(1.18.10-x86_64-darwin) racc (~> 1.4) octokit (4.25.1) faraday (>= 1, < 3) sawyer (~> 0.9) pathutil (0.16.2) forwardable-extended (~> 2.6) - public_suffix (4.0.7) + public_suffix (5.1.1) racc (1.8.1) rb-fsevent (0.11.2) - rb-inotify (0.10.1) + rb-inotify (0.11.1) ffi (~> 1.0) - rexml (3.3.9) - rouge (3.26.0) - ruby2_keywords (0.0.5) - rubyzip (2.3.2) + rexml (3.4.4) + rouge (3.30.0) + rubyzip (2.4.1) safe_yaml (1.0.5) sass (3.7.4) sass-listen (~> 4.0.0) sass-listen (4.0.0) rb-fsevent (~> 0.9, >= 0.9.4) rb-inotify (~> 0.9, >= 0.9.7) - sawyer (0.9.2) + sawyer (0.9.3) addressable (>= 2.3.5) faraday (>= 0.17.3, < 3) - simpleidn (0.2.1) - unf (~> 0.1.4) + securerandom (0.4.1) + simpleidn (0.2.3) terminal-table (1.8.0) unicode-display_width (~> 1.1, >= 1.1.1) - typhoeus (1.4.0) + typhoeus (1.4.1) ethon (>= 0.9.0) tzinfo (2.0.6) concurrent-ruby (~> 1.0) - unf (0.1.4) - unf_ext - unf_ext (0.0.8.2) unicode-display_width (1.8.0) - webrick (1.8.2) + uri (1.1.0) + webrick (1.9.1) PLATFORMS ruby diff --git a/mcp/run_tools.md b/mcp/run_tools.md index fbaeb738b..e35236ffe 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -121,9 +121,9 @@ gemini extensions install https://github.com/gemini-cli-extensions/datacommons [ After starting up Gemini CLI, you may see the message `You have one extension with an update available`. -In this case, run `/extensions list`. If `datacommons` is displayed with `update available`, run the following command: +In this case, run `/extensions list`. 
If `datacommons` is displayed with `update available`, enter the following in the Gemini input field: ``` -gemini extensions update datacommons +/extensions update datacommons ``` {:.no_toc} From 4930a5b512c10070efa57272c72acd5fe65c2f2a Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Tue, 4 Nov 2025 11:10:44 -0800 Subject: [PATCH 041/121] rename "prompt" to "context" --- mcp/run_tools.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index e35236ffe..cffc22911 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -17,7 +17,7 @@ We provide specific instructions for the following agents: - [Gemini CLI extension](https://geminicli.com/extensions/) - Best for querying datacommons.org - - Provides a built-in "agent" and prompt file for Data Commons + - Provides a built-in "agent" and context file for Data Commons - Downloads extension files locally - Uses `uv` to run the MCP server locally - Minimal setup @@ -30,7 +30,7 @@ We provide specific instructions for the following agents: - Can be used for datacommons.org or a Custom Data Commons instance - No additional downloads - MCP server can be run locally or remotely - - You can create your own prompt file + - You can create your own context file - Minimal setup See [Use Gemini CLI](#use-gemini-cli) for this option. From 9fa9f0fcafd7ab0542a4900e8203981e93ca70b4 Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Wed, 12 Nov 2025 10:00:08 -0800 Subject: [PATCH 042/121] add mention of Gemini CLI extension for custom DC --- mcp/run_tools.md | 17 ++++++++++------- 1 file changed, 10 insertions(+), 7 deletions(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index cffc22911..4d69655f4 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -13,7 +13,7 @@ This page shows you how to run a local agent and connect to a Data Commons MCP s * TOC {:toc} -We provide specific instructions for the following agents: +We provide specific instructions for the following agents. 
All may be used to query datacommons.org or a Custom Data Commons instance. - [Gemini CLI extension](https://geminicli.com/extensions/) - Best for querying datacommons.org @@ -24,10 +24,7 @@ We provide specific instructions for the following agents: See [Use the Gemini CLI extension](#use-the-gemini-cli-extension) for this option. - - - [Gemini CLI](https://geminicli.com/) - - Can be used for datacommons.org or a Custom Data Commons instance - No additional downloads - MCP server can be run locally or remotely - You can create your own context file @@ -37,7 +34,6 @@ We provide specific instructions for the following agents: - A sample basic agent based on the Google [Agent Development Kit](https://google.github.io/adk-docs/) - Best for interacting with a Web GUI - - Can be used for datacommons.org or a Custom Data Commons instance - Can be customized to run other LLMs and prompts - Downloads agent code locally - Server may be run remotely @@ -76,7 +72,7 @@ If you're running a against a custom Data Commons instance, we recommend using a To set variables using a `.env` file: -1. From Github, download the file [`.env.sample`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/.env.sample) to the desired directory. Or, if you plan to run the sample agent, clone the repo . +1. From Github, download the file [`.env.sample`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/.env.sample) to the desired directory. Alternatively, if you plan to run the sample agent, clone the repo . 1. From the directory where you saved the sample file, copy it to a new file called `.env`. For example: ```bash @@ -106,12 +102,19 @@ Open a new terminal and install the extension directly from GitHub: ```sh gemini extensions install https://github.com/gemini-cli-extensions/datacommons [--auto-update] ``` +The installation creates a local `.gemini/extensions/datacommons` directory with the required files. 
+ > Note: If you have previously configured Gemini CLI to use the Data Commons MCP Server and want to use the extension instead, be sure to delete the `datacommons-mcp` section from the relevant `settings.json` file (e.g. `~/.gemini/settings.json`). {:.no_toc} ### Run -1. From any directory, run `gemini`. +1. If you are using a `.env` file, switch to the extensions directory: + ``` + cd ./gemini/extensions/datacommons + ``` + Copy your `.env` file to this directory, and run `gemini` from here. +1. If you're not using a `.env` file, run `gemini` from any directory. 1. To verify that the Data commons tools are running, enter `/mcp list`. You should see `datacommons-mcp` listed as `Ready`. If you don't, see the [Troubleshoot](#troubleshoot) section. 1. To verify that the extension is running, enter `/extensions list`. You should see `datacommons` listed as `active`. 1. Start sending [natural-language queries](#sample-queries). From 6c2bd5ce0e677c93a611e3a5aa12f161d383480d Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Mon, 17 Nov 2025 10:35:43 -0800 Subject: [PATCH 043/121] Update custom DC vars procedure --- mcp/run_tools.md | 15 +++++++++++---- 1 file changed, 11 insertions(+), 4 deletions(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index 4d69655f4..7d417ad19 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -63,14 +63,21 @@ Other requirements for specific agents are given in their respective sections. For basic usage against datacommons.org, set the required `DC_API_KEY` in your shell/startup script (e.g. `.bashrc`).
-export DC_API_KEY=YOUR API KEY
+export DC_API_KEY="YOUR API KEY"
 
### Custom Data Commons -If you're running a against a custom Data Commons instance, we recommend using a `.env` file, which the server locates automatically, to keep all the settings in one place. All supported options are documented in [packages/datacommons-mcp/.env.sample](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/.env.sample). +To run against a Custom Data Commons instance, you must set additional variables. All supported options are documented in [packages/datacommons-mcp/.env.sample](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/.env.sample). -To set variables using a `.env` file: +The following variables are required: +- export DC_API_KEY="YOUR API KEY" +- `export DC_TYPE="custom"` +- export CUSTOM_DC_URL="YOUR_INSTANCE_URL" + +If you're using the Gemini CLI extension, just set these in your shell/startup script. + +If you're not using the extension, you may wish to use a `.env` file, which the server locates automatically, to keep all the settings in one place. To set all variables using a `.env` file: 1. From Github, download the file [`.env.sample`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/.env.sample) to the desired directory. Alternatively, if you plan to run the sample agent, clone the repo . @@ -79,7 +86,7 @@ To set variables using a `.env` file: cd ~/agent-toolkit/packages/datacommons-mcp cp .env.sample .env ``` -1. Set the following variables: +1. Set the following variables, without quotes: - `DC_API_KEY`: Set to your Data Commons API key - `DC_TYPE`: Set to `custom`. - `CUSTOM_DC_URL`: Uncomment and set to the URL of your instance. 
From 4d67fc6eb92b9cf727fd2088693ed97b9fa5ae13 Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Fri, 21 Nov 2025 09:02:10 -0800 Subject: [PATCH 044/121] Remove reference to .env file in extension run --- mcp/run_tools.md | 7 +------ 1 file changed, 1 insertion(+), 6 deletions(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index 7d417ad19..1371188b7 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -116,12 +116,7 @@ The installation creates a local `.gemini/extensions/datacommons` directory with {:.no_toc} ### Run -1. If you are using a `.env` file, switch to the extensions directory: - ``` - cd ./gemini/extensions/datacommons - ``` - Copy your `.env` file to this directory, and run `gemini` from here. -1. If you're not using a `.env` file, run `gemini` from any directory. +1. Run `gemini` from any directory. 1. To verify that the Data commons tools are running, enter `/mcp list`. You should see `datacommons-mcp` listed as `Ready`. If you don't, see the [Troubleshoot](#troubleshoot) section. 1. To verify that the extension is running, enter `/extensions list`. You should see `datacommons` listed as `active`. 1. Start sending [natural-language queries](#sample-queries). From 2bb5ca38d719a15340325b18215810218ef30ff3 Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Mon, 24 Nov 2025 13:33:43 -0800 Subject: [PATCH 045/121] slight fix --- api/python/v2/migration.md | 1520 ++++++++++++++++++++++++++++++++++++ 1 file changed, 1520 insertions(+) create mode 100644 api/python/v2/migration.md diff --git a/api/python/v2/migration.md b/api/python/v2/migration.md new file mode 100644 index 000000000..e6b54fa59 --- /dev/null +++ b/api/python/v2/migration.md @@ -0,0 +1,1520 @@ +--- +layout: default +title: Migrate from V1 to V2 +nav_order: 7 +parent: Python (V2) +grand_parent: API - Query data programmatically +published: true +--- + +{: .no_toc} +# Migrate from Python API V1 to V2 + + +Version V1 of the Data Commons Python API will be deprecated in early 2026. 
The [V2](index.md) APIs are significantly different from V1. This document summarizes the important differences that you should be aware of and provides examples of translating queries from V1 to V2.

* TOC
{:toc}

## Summary of changes

| Feature | V1 | V2 |
|---------|----|----|
| API key | Not required | Required: get one from the [Data Commons API key portal](https://apikeys.datacommons.org) |
| Custom Data Commons supported | No | Yes: see details in [Create a client](index.md#create-a-client) |
| Pandas support | Separate package | Module in the same package: see details in [Install](index.md#install) |
| Sessions | Managed by the `datacommons` package object | Managed by a `datacommons_client` object that you must create: see details in [Create a client](index.md#create-a-client) |
| Classes/methods | 7 methods, members of `datacommons` class | 3 classes representing REST endpoints `node`, `observation` and `resolve`; several member functions for each endpoint class. Variations of methods in V1 are represented as function parameters in V2. See [Request endpoints and responses](index.md#request-endpoints-and-responses) |
| Pandas classes/methods | 3 methods, all members of `datacommons_pandas` class | 1 method, member of `datacommons_client` class. Variations of the Pandas methods in V1 are represented as parameters in V2. See [Observations DataFrame](pandas.md) |
| Pagination | Required for queries resulting in large data volumes | Optional: see [Pagination](node.md#pagination) |
| DCID lookup method | No | Yes: [`resolve`](resolve.md) endpoint methods |
| Statistical facets | With the `get_stat_value` and `get_stat_series` methods, Data Commons chooses the most "relevant" facet to answer the query; typically this is the facet that has the most recent data. 
| For all Observation methods, results from all available facets are returned by default (if you don't apply a filter); for details, see [Observation response](/observation.html#response) |
| Statistical facet filtering | The `get_stat_value`, `get_stat_series` and Pandas `build_time_series` methods allow you to filter results by specific facet fields, such as measurement method, unit, observation period, etc. | The `observations_dataframe` method allows you to filter results by specific facet fields. Observation methods only allow filtering results by the facet domain or ID; for details, see [Observation fetch](observation.md#fetch). |
| Response contents | Simple structures mostly containing values only | Nested structures containing values and additional properties and metadata |
| Different response formats | No | Yes: for details, see [Response formatting](index.md#response-formatting). |

## V1 function equivalences in V2

This section shows you how to translate from a given V1 function to the equivalent code in V2. Examples of both versions are given in the [Examples](#examples) section.

| `datacommons` V1 function | V2 equivalent |
|-------------|------------------|
| `get_triples` | No direct equivalent; triples are not returned. Instead you indicate the directionality of the relationship in the triple, i.e. 
incoming or outgoing edges, using [`node.fetch`](node.md#fetch) and a [relation expression](/api/rest/v2/index.html#relation-expressions) |
| `get_places_in` | [`node.fetch_place_descendants`](node.md#fetch_place_descendants) |
| `get_stat_value` | [`observation.fetch_observations_by_entity_dcid`](observation.md#fetch_observations_by_entity_dcid) with a single place and variable |
| `get_stat_series` | [`observation.fetch_observations_by_entity_dcid`](observation.md#fetch_observations_by_entity_dcid) with a single place and variable, and the `date` parameter set to `all` |
| `get_stat_all` | [`observation.fetch_observations_by_entity_dcid`](observation.md#fetch_observations_by_entity_dcid) with an array of places and/or variables and `date` parameter set to `all` |
| `get_property_labels` | [`node.fetch_property_labels`](node.md#fetch_property_labels) |
| `get_property_values` | [`node.fetch_property_values`](node.md#fetch_property_values) |

| `datacommons_pandas` V1 function | V2 equivalent |
|----------------------------------|------------------|
| `build_time_series` | [`observations_dataframe`](pandas.md) with a single place and variable and the `date` parameter set to `all` |
| `build_time_series_dataframe` | [`observations_dataframe`](pandas.md) with an array of places, a single variable and the `date` parameter set to `all` |
| `build_multivariate_dataframe` | [`observations_dataframe`](pandas.md) with an array of places and/or variables and the `date` parameter set to `latest` |

## Examples

### datacommons package examples

The following examples show equivalent API requests and responses using the V1 `datacommons` package and V2.

{: .no_toc}
#### Example 1: Get triples associated with a single place

This example retrieves triples associated with zip code 94043. In V1, the `get_triples` method returns all triples in which the zip code is the subject or the object. 
In V2, you cannot get both directions in a single request; you must send one request for the outgoing relationships and one for the incoming relationships. + +
+ +{% tabs request %} + +{% tab request V1 request %} + +```python +datacommons.get_triples(["zip/94043"]) +``` +{% endtab %} + +{% tab request V2 request %} +Request 1: +```python +client.node.fetch(node_dcids=["zip/94043"], expression="->*") +``` +Request 2: +```python +client.node.fetch(node_dcids=["zip/94043"], expression="<-*") +``` +{% endtab %} + +{% endtabs %} + +
+ +
+ +{% tabs response %} + +{% tab response V1 response %} + +```python +{ "zip/94043": [ + // Outgoing relations + ("zip/94043", "containedInPlace", "country/USA"), + ("zip/94043", "containedInPlace", "geoId/06085"), + ("zip/94043", "containedInPlace", "geoId/0608592830"), + ("zip/94043", "containedInPlace", "geoId/0616"), + ("zip/94043", "geoId", "zip/94043"), + //... + ("zip/94043", "landArea", "SquareMeter21906343"), + ("zip/94043", "latitude", "37.411913"), + ("zip/94043", "longitude", "-122.068919"), + ("zip/94043", "name", "94043"), + ("zip/94043", "provenance", "dc/base/BaseGeos"), + ("zip/94043", "typeOf", "CensusZipCodeTabulationArea"), + ("zip/94043", "usCensusGeoId", "860Z200US94043"), + ("zip/94043", "waterArea", "SquareMeter0"), + // Incoming relations + ("EpaParentCompany/AlphabetInc", "locatedIn", "zip/94043"), + ("EpaParentCompany/Google", "locatedIn", "zip/94043"), + ("epaGhgrpFacilityId/1005910", "containedInPlace", "zip/94043"), + ("epaSuperfundSiteId/CA2170090078", "containedInPlace", "zip/94043"), + ("epaSuperfundSiteId/CAD009111444", "containedInPlace", "zip/94043"), + ("epaSuperfundSiteId/CAD009138488", "containedInPlace", "zip/94043"), + ("epaSuperfundSiteId/CAD009205097", "containedInPlace", "zip/94043"), + ("epaSuperfundSiteId/CAD009212838", "containedInPlace", "zip/94043"), + ("epaSuperfundSiteId/CAD061620217", "containedInPlace", "zip/94043"), + ("epaSuperfundSiteId/CAD095989778", "containedInPlace", "zip/94043"), + //... 
+ ] +} +``` +{% endtab %} + +{% tab response V2 response %} +Response 1 (outgoing relations): +```python +{"data": {"zip/94043": {"arcs": { + "longitude": {"nodes": [{"provenanceId": "dc/base/BaseGeos", + "value": "-122.068919"}]}, + "name": {"nodes": [{"provenanceId": "dc/base/BaseGeos", + "value": "94043"}]}, + "typeOf": {"nodes": [{"dcid": "CensusZipCodeTabulationArea", + "name": "CensusZipCodeTabulationArea", + "provenanceId": "dc/base/BaseGeos", + "types": ["Class"]}]}, + "usCensusGeoId": {"nodes": [{"provenanceId": "dc/base/BaseGeos", + "value": "860Z200US94043"}]}, + "containedInPlace": {"nodes": [{"dcid": "country/USA", + "name": "United States", + "provenanceId": "dc/base/BaseGeos", + "types": ["Country"]}, + {"dcid": "geoId/06085", + "name": "Santa Clara County", + "provenanceId": "dc/base/BaseGeos", + "types": ["AdministrativeArea2", "County"]}, + {"dcid": "geoId/0608592830", + "name": "San Jose CCD", + "provenanceId": "dc/base/BaseGeos", + "types": ["CensusCountyDivision"]}, + {"dcid": "geoId/0616", + "name": "Congressional District 16 (113th Congress), California", + "provenanceId": "dc/base/BaseGeos", + "types": ["CongressionalDistrict"]}]}, + //... + "geoOverlaps": {"nodes": [{"dcid": "geoId/06085504601", + "name": "Census Tract 5046.01, Santa Clara County, California", + "provenanceId": "dc/base/BaseGeos", + "types": ["CensusTract"]}, + {"dcid": "geoId/06085504700", + "name": "Census Tract 5047, Santa Clara County, California", + "provenanceId": "dc/base/BaseGeos", + "types": ["CensusTract"]}, + {"dcid": "geoId/06085509108", + "name": "Census Tract 5091.08, Santa Clara County, California", + "provenanceId": "dc/base/BaseGeos", + "types": ["CensusTract"]}, + //... 
+ "landArea": {"nodes": [{"dcid": "SquareMeter21906343", + "name": "SquareMeter 21906343", + "provenanceId": "dc/base/BaseGeos", + "types": ["Quantity"]}]}, + "latitude": {"nodes": [{"provenanceId": "dc/base/BaseGeos", + "value": "37.411913"}]}, + "provenance": {"nodes": [{"dcid": "dc/base/BaseGeos", + "name": "BaseGeos", + "provenanceId": "dc/base/BaseGeos", + "types": ["Provenance"]}]}}}}} +``` +Response 2 (incoming relations): + +```python +{"data": {"zip/94043": {"arcs": { + "locatedIn": {"nodes": [ + {"dcid": "EpaParentCompany/AlphabetInc", + "name": "AlphabetInc", + "provenanceId": "dc/base/EPA_ParentCompanies", + "types": ["EpaParentCompany"]}, + {"dcid": "EpaParentCompany/Google", + "name": "Google", + "provenanceId": "dc/base/EPA_ParentCompanies", + "types": ["EpaParentCompany"]}]}, + "containedInPlace": {"nodes": [ + {"dcid": "epaGhgrpFacilityId/1005910", + "name": "City Of Mountain View (Shoreline Landfill)", + "provenanceId": "dc/base/EPA_GHGRPFacilities", + "types": ["EpaReportingFacility"]}, + {"dcid": "epaSuperfundSiteId/CA2170090078", + "name": "Moffett Naval Air Station", + "provenanceId": "dc/base/EPA_Superfund_Sites", + "types": ["SuperfundSite"]}, + {"dcid": "epaSuperfundSiteId/CAD009111444", + "name": "Teledyne Semiconductor", + "provenanceId": "dc/base/EPA_Superfund_Sites", + "types": ["SuperfundSite"]}, + {"dcid": "epaSuperfundSiteId/CAD009138488", + "name": "Spectra-Physics Inc.", + "provenanceId": "dc/base/EPA_Superfund_Sites", + "types": ["SuperfundSite"]}, + //... + ] + } + } +} +``` +{% endtab %} + +{% endtabs %} + +
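
If existing code expects V1-style triples, you can flatten the nested V2 structure yourself once it has been converted with `to_dict()`. The sketch below is illustrative only: the truncated dictionary is hand-copied from the response above, and the flattening loop is not part of the client library.

```python
# Rebuild V1-style (subject, predicate, object) tuples from a V2 node
# response dict. The dictionary is a hand-truncated copy of Response 1 above.
v2_outgoing = {
    "data": {
        "zip/94043": {
            "arcs": {
                "name": {"nodes": [{"value": "94043"}]},
                "containedInPlace": {"nodes": [
                    {"dcid": "country/USA", "name": "United States"},
                    {"dcid": "geoId/06085", "name": "Santa Clara County"},
                ]},
            }
        }
    }
}

triples = []
for subject, node in v2_outgoing["data"].items():
    for predicate, arc in node["arcs"].items():
        for obj in arc["nodes"]:
            # Linked nodes carry a "dcid"; terminal values carry a "value".
            triples.append((subject, predicate, obj.get("dcid", obj.get("value"))))

print(triples)
# → [('zip/94043', 'name', '94043'), ('zip/94043', 'containedInPlace', 'country/USA'), ...]
```

The same loop works on the incoming-relations response; only the direction you assign to the tuple changes.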
+ +{: .no_toc} +#### Example 2: Get a list of places in another place + +This example retrieves a list of counties in the U.S. state of Delaware. + +
+ +{% tabs request %} + +{% tab request V1 request %} + +```python +datacommons.get_places_in(["geoId/10"], "County") +``` + +{% endtab %} + +{% tab request V2 request %} + +```python +client.node.fetch_place_children(place_dcids="geoId/10", children_type="County") +``` +{% endtab %} + +{% endtabs %} + +
+ +
+ +{% tabs response %} + +{% tab response V1 response %} + +```python +{"geoId/10": ["geoId/10001", "geoId/10003", "geoId/10005"]} +``` +{% endtab %} + +{% tab response V2 response %} + +```python +{"geoId/10": [ + {"dcid": "geoId/10001", "name": "Kent County"}, + {"dcid": "geoId/10003", "name": "New Castle County"}, + {"dcid": "geoId/10005", "name": "Sussex County"}]} +``` + +{% endtab %} + +{% endtabs %} + +
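
If your code expects the flat list of DCIDs that V1 returned, extracting it from the V2 result is a one-line comprehension. The dictionary literal below is hand-copied from the V2 response above rather than fetched live.

```python
# Extract a flat, V1-style DCID list from the V2 place-children result.
v2_children = {"geoId/10": [
    {"dcid": "geoId/10001", "name": "Kent County"},
    {"dcid": "geoId/10003", "name": "New Castle County"},
    {"dcid": "geoId/10005", "name": "Sussex County"}]}

dcids = [child["dcid"] for child in v2_children["geoId/10"]]
print(dcids)  # ['geoId/10001', 'geoId/10003', 'geoId/10005']
```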

{: .no_toc}
#### Example 3: Get the latest value of a single statistical variable for a single place

This example gets the latest count of men in the state of Arkansas (DCID `geoId/05`). Note that the V1 method `get_stat_value` returns a single value, automatically selecting the most "relevant" data source, while the V2 method returns all data sources ("facets"), i.e. multiple values for the same variable, as well as metadata for all the sources. Comparing the results, you can see that the V1 method has selected facet 3999249536, which has the most recent date, and comes from the U.S. Census PEP survey.

+ +{% tabs request %} + +{% tab request V1 request %} + +```python +datacommons.get_stat_value("geoId/05", "Count_Person_Male") +``` +{% endtab %} + +{% tab request V2 request %} + +```python +client.observation.fetch_observations_by_entity_dcid(date="latest", entity_dcids="geoId/05", variable_dcids="Count_Person_Male") +``` +{% endtab %} + +{% endtabs %} + +
+ +
+ +{% tabs response %} + +{% tab response V1 response %} + +```python +1524533 +``` +{% endtab %} + +{% tab response V2 response %} + +```python +{"byVariable": {"Count_Person_Male": {"byEntity": {"geoId/05": {"orderedFacets": [ + {"earliestDate": "2023", + "facetId": "1145703171", + "latestDate": "2023", + "obsCount": 1, + "observations": [{"date": "2023", "value": 1495958.0}]}, + {"earliestDate": "2024", + "facetId": "3999249536", + "latestDate": "2024", + "obsCount": 1, + "observations": [{"date": "2024", "value": 1524533.0}]}, + {"earliestDate": "2023", + "facetId": "1964317807", + "latestDate": "2023", + "obsCount": 1, + "observations": [{"date": "2023", "value": 1495958.0}]}, + {"earliestDate": "2023", + "facetId": "10983471", + "latestDate": "2023", + "obsCount": 1, + "observations": [{"date": "2023", "value": 1495096.943}]}, + {"earliestDate": "2023", + "facetId": "196790193", + "latestDate": "2023", + "obsCount": 1, + "observations": [{"date": "2023", "value": 1495096.943}]}, + {"earliestDate": "2021", + "facetId": "4181918134", + "latestDate": "2021", + "obsCount": 1, + "observations": [{"date": "2021", "value": 1493178.0}]}, + {"earliestDate": "2020", + "facetId": "2825511676", + "latestDate": "2020", + "obsCount": 1, + "observations": [{"date": "2020", "value": 1486856.0}]}, + {"earliestDate": "2019", + "facetId": "1226172227", + "latestDate": "2019", + "obsCount": 1, + "observations": [{"date": "2019", "value": 1474705.0}]}]}}}}, + "facets": {"2825511676": {"importName": "CDC_Mortality_UnderlyingCause", + "provenanceUrl": "https://wonder.cdc.gov/ucd-icd10.html"}, + "1226172227": {"importName": "CensusACS1YearSurvey", + "measurementMethod": "CensusACS1yrSurvey", + "provenanceUrl": "https://www.census.gov/programs-surveys/acs/data/data-via-ftp.html"}, + "1145703171": {"importName": "CensusACS5YearSurvey", + "measurementMethod": "CensusACS5yrSurvey", + "provenanceUrl": "https://www.census.gov/programs-surveys/acs/data/data-via-ftp.html"}, + "3999249536": 
{"importName": "USCensusPEP_Sex", + "measurementMethod": "CensusPEPSurvey_PartialAggregate", + "observationPeriod": "P1Y", + "provenanceUrl": "https://www.census.gov/programs-surveys/popest.html"}, + "1964317807": {"importName": "CensusACS5YearSurvey_SubjectTables_S0101", + "measurementMethod": "CensusACS5yrSurveySubjectTable", + "provenanceUrl": "https://data.census.gov/table?q=S0101:+Age+and+Sex&tid=ACSST1Y2022.S0101"}, + "10983471": {"importName": "CensusACS5YearSurvey_SubjectTables_S2601A", + "measurementMethod": "CensusACS5yrSurveySubjectTable", + "provenanceUrl": "https://data.census.gov/cedsci/table?q=S2601A&tid=ACSST5Y2019.S2601A"}, + "196790193": {"importName": "CensusACS5YearSurvey_SubjectTables_S2602", + "measurementMethod": "CensusACS5yrSurveySubjectTable", + "provenanceUrl": "https://data.census.gov/cedsci/table?q=S2602&tid=ACSST5Y2019.S2602"}, + "4181918134": {"importName": "OECDRegionalDemography_Population", + "measurementMethod": "OECDRegionalStatistics", + "observationPeriod": "P1Y", + "provenanceUrl": "https://data-explorer.oecd.org/vis?fs[0]=Topic%2C0%7CRegional%252C%20rural%20and%20urban%20development%23GEO%23&pg=40&fc=Topic&bp=true&snb=117&df[ds]=dsDisseminateFinalDMZ&df[id]=DSD_REG_DEMO%40DF_POP_5Y&df[ag]=OECD.CFE.EDS&df[vs]=2.0&dq=A.......&to[TIME_PERIOD]=false&vw=tb&pd=%2C"}}} +``` +{% endtab %} + +{% endtabs %} + +
 + +{: #example-4} +{: .no_toc} +#### Example 4: Get all values of a single statistical variable for a single place + +This example retrieves the number of men in the state of Arkansas for all years available. As in example 3 above, V1 returns data from a single facet (which appears to be 1145703171, the U.S. Census ACS 5-year survey). V2 returns data for all available facets. + +
+ +{% tabs request %} + +{% tab request V1 request %} + +```python +datacommons.get_stat_series("geoId/05", "Count_Person_Male") +``` +{% endtab %} + +{% tab request V2 request %} + +```python +client.observation.fetch_observations_by_entity_dcid(date="all", entity_dcids="geoId/05", variable_dcids="Count_Person_Male") +``` +{% endtab %} + +{% endtabs %} + +
+ +
+ +{% tabs response %} + +{% tab response V1 response %} + +```python +{"2023": 1495958, + "2017": 1461651, + "2022": 1491622, + "2015": 1451913, + "2021": 1483520, + "2018": 1468412, + "2011": 1421287, + "2016": 1456694, + "2012": 1431252, + "2019": 1471760, + "2013": 1439862, + "2014": 1447235, + "2020": 1478511} +``` +{% endtab %} + +{% tab response V2 response %} + +```python +{"byVariable": {"Count_Person_Male": {"byEntity": {"geoId/05": {"orderedFacets": [ + {"earliestDate": "2011", + "facetId": "1145703171", + "latestDate": "2023", + "obsCount": 13, + "observations": [ + {"date": "2011", "value": 1421287.0}, + {"date": "2012", "value": 1431252.0}, + {"date": "2013", "value": 1439862.0}, + {"date": "2014", "value": 1447235.0}, + {"date": "2015", "value": 1451913.0}, + {"date": "2016", "value": 1456694.0}, + {"date": "2017", "value": 1461651.0}, + {"date": "2018", "value": 1468412.0}, + {"date": "2019", "value": 1471760.0}, + {"date": "2020", "value": 1478511.0}, + {"date": "2021", "value": 1483520.0}, + {"date": "2022", "value": 1491622.0}, + {"date": "2023", "value": 1495958.0}]}, + {"earliestDate": "1970", + "facetId": "3999249536", + "latestDate": "2024", + "obsCount": 55, + "observations": [ + {"date": "1970", "value": 937034.0}, + {"date": "1971", "value": 956802.0}, + {"date": "1972", "value": 979822.0}, + {"date": "1973", "value": 999264.0}, + {"date": "1974", "value": 1019259.0}, + {"date": "1975", "value": 1047112.0}, + {"date": "1976", "value": 1051166.0}, + {"date": "1977", "value": 1069003.0}, + {"date": "1978", "value": 1084374.0}, + {"date": "1979", "value": 1097123.0}, + {"date": "1980", "value": 1105739.0}, + {"date": "1981", "value": 1107249.0}, + {"date": "1982", "value": 1107142.0}, + {"date": "1983", "value": 1112460.0}, + {"date": "1984", "value": 1119061.0}, + {"date": "1985", "value": 1122425.0}, + {"date": "1986", "value": 1124357.0}, + {"date": "1987", "value": 1129353.0}, + {"date": "1988", "value": 1129014.0}, + {"date": "1989", 
"value": 1130916.0}, + {"date": "1990", "value": 1136163.0}, + //... + "facets": {"1964317807": {"importName": "CensusACS5YearSurvey_SubjectTables_S0101", + "measurementMethod": "CensusACS5yrSurveySubjectTable", + "provenanceUrl": "https://data.census.gov/table?q=S0101:+Age+and+Sex&tid=ACSST1Y2022.S0101"}, + "10983471": {"importName": "CensusACS5YearSurvey_SubjectTables_S2601A", + "measurementMethod": "CensusACS5yrSurveySubjectTable", + "provenanceUrl": "https://data.census.gov/cedsci/table?q=S2601A&tid=ACSST5Y2019.S2601A"}, + "196790193": {"importName": "CensusACS5YearSurvey_SubjectTables_S2602", + "measurementMethod": "CensusACS5yrSurveySubjectTable", + "provenanceUrl": "https://data.census.gov/cedsci/table?q=S2602&tid=ACSST5Y2019.S2602"}, + //... +}} +``` +{% endtab %} + +{% endtabs %} + +
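
V1 code that consumed the flat `{date: value}` mapping from `get_stat_series` can be adapted by flattening one facet of the V2 dictionary. The helper below is a minimal sketch (the function name and the abbreviated sample are illustrative, not part of the library); it assumes you have already converted the `ObservationResponse` with `to_dict()`:

```python
def facet_to_series(response_dict, variable, entity, facet_id):
    """Flatten one facet of a V2 observation dictionary (the output of
    to_dict() on an ObservationResponse) into a V1-style {date: value} map."""
    facets = response_dict["byVariable"][variable]["byEntity"][entity]["orderedFacets"]
    for facet in facets:
        if facet["facetId"] == facet_id:
            return {obs["date"]: obs["value"] for obs in facet["observations"]}
    return {}

# Abbreviated sample in the V2 response shape shown above.
sample = {"byVariable": {"Count_Person_Male": {"byEntity": {"geoId/05": {
    "orderedFacets": [{"facetId": "1145703171",
                       "observations": [{"date": "2011", "value": 1421287.0},
                                        {"date": "2012", "value": 1431252.0}]}]}}}}}
print(facet_to_series(sample, "Count_Person_Male", "geoId/05", "1145703171"))
# {'2011': 1421287.0, '2012': 1431252.0}
```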
 + +{: #example-5} +{: .no_toc} +#### Example 5: Get all values of a single statistical variable for a single place, selecting the facet to return + +This example gets the nominal GDP for Italy, filtering for facets that report results in U.S. dollars. In V1, this is done directly with the `unit` parameter. In V2, we use the `filter_facet_domains` parameter to select the same facet by its provenance domain (worldbank.org). + +
 + +{% tabs request %} + +{% tab request V1 request %} + +```python +datacommons.get_stat_series("country/ITA", "Amount_EconomicActivity_GrossDomesticProduction_Nominal", unit="USDollar") +``` +{% endtab %} + +{% tab request V2 request %} + +```python +client.observation.fetch_observations_by_entity_dcid(date="all", entity_dcids="country/ITA", variable_dcids="Amount_EconomicActivity_GrossDomesticProduction_Nominal", filter_facet_domains="worldbank.org") +``` +{% endtab %} + +{% endtabs %} + +
+ +
 + +{% tabs response %} + +{% tab response V1 response %} + +```python +{'2003': 1582930016538.82, + '2002': 1281746271196.04, + '1961': 46649487320.4225, + '1986': 641862313287.44, + '1974': 200024444775.231, + '2000': 1149661363439.38, + '2015': 1845428048839.1, + '2001': 1172041488805.87, + '1966': 76622444787.3696, + '1971': 124959712858.92598, + '1999': 1255004736463.98, + //... + '1979': 394584507107.9, + '2016': 1887111188176.93, + '1981': 431695533980.583, + '2024': 2372774547793.12, + '1985': 453259761687.456, + '1975': 228220643534.994, + '1960': 42012422612.3955, + '1991': 1249092439519.28} +``` +{% endtab %} + +{% tab response V2 response %} + +```python +{'byVariable': {'Amount_EconomicActivity_GrossDomesticProduction_Nominal': {'byEntity': {'country/ITA': {'orderedFacets': [{'earliestDate': '1960', + 'facetId': '3496587042', + 'latestDate': '2024', + 'obsCount': 65, + 'observations': [{'date': '1960', 'value': 42012422612.3955}, + {'date': '1961', 'value': 46649487320.4225}, + {'date': '1962', 'value': 52413872628.0045}, + {'date': '1963', 'value': 60035924617.9277}, + {'date': '1964', 'value': 65720771779.4768}, + {'date': '1965', 'value': 70717012186.1774}, + {'date': '1966', 'value': 76622444787.3696}, + {'date': '1967', 'value': 84401995573.2456}, + {'date': '1968', 'value': 91485448147.84}, + {'date': '1969', 'value': 100996667239.335}, + //... + {'date': '2022', 'value': 2104067630319.46}, + {'date': '2023', 'value': 2304605139862.79}, + {'date': '2024', 'value': 2372774547793.12}]}]}}}}, + 'facets': {'3496587042': {'importName': 'WorldDevelopmentIndicators', + 'observationPeriod': 'P1Y', + 'provenanceUrl': 'https://datacatalog.worldbank.org/dataset/world-development-indicators/', + 'unit': 'USDollar'}}} +``` +{% endtab %} + +{% endtabs %} + +
 + +{: #example-6} +{: .no_toc} +#### Example 6: Get all values of a single statistical variable for multiple places + +This example retrieves the number of people with doctoral degrees in the states of Minnesota and Wisconsin for all years available. Note that the V1 `get_stat_all` method behaves more like V2 and returns data for all facets (in this case, there is only one), as well as metadata for all facets. + +
+ +{% tabs request %} + +{% tab request V1 request %} + +```python +datacommons.get_stat_all(["geoId/27","geoId/55"], ["Count_Person_EducationalAttainmentDoctorateDegree"]) +``` +{% endtab %} + +{% tab request V2 request %} + +```python +client.observation.fetch_observations_by_entity_dcid(date="all", variable_dcids="Count_Person_EducationalAttainmentDoctorateDegree", entity_dcids=["geoId/27","geoId/55"]) +``` +{% endtab %} + +{% endtabs %} + +
+ +
+ +{% tabs response %} + +{% tab response V1 response %} + +```python +{"geoId/27": {"Count_Person_EducationalAttainmentDoctorateDegree": {"sourceSeries": [ + {"val": + {"2016": 50039, + "2017": 52737, + "2015": 47323, + "2013": 42511, + "2012": 40961, + "2022": 60300, + "2023": 63794, + "2014": 44713, + "2021": 58452, + "2019": 55185, + "2020": 56170, + "2018": 54303}, + "measurementMethod": "CensusACS5yrSurvey", + "importName": "CensusACS5YearSurvey", + "provenanceDomain": "census.gov", + "provenanceUrl": "https://www.census.gov/programs-surveys/acs/data/data-via-ftp.html"}]}}, + "geoId/55": {"Count_Person_EducationalAttainmentDoctorateDegree": {"sourceSeries": [ + {"val": + {"2020": 49385, + "2017": 43737, + "2022": 53667, + "2014": 40133, + "2021": 52306, + "2023": 55286, + "2016": 42590, + "2012": 38052, + "2013": 38711, + "2019": 47496, + "2018": 46071, + "2015": 41387}, + "measurementMethod": "CensusACS5yrSurvey", + "importName": "CensusACS5YearSurvey", + "provenanceDomain": "census.gov", + "provenanceUrl": "https://www.census.gov/programs-surveys/acs/data/data-via-ftp.html"}]}}} +``` +{% endtab %} + +{% tab response V2 response %} + +```python +{"byVariable": {"Count_Person_EducationalAttainmentDoctorateDegree": {"byEntity": { + "geoId/55": {"orderedFacets": [{"earliestDate": "2012", + "facetId": "1145703171", + "latestDate": "2023", + "obsCount": 12, + "observations": [ + {"date": "2012", "value": 38052.0}, + {"date": "2013", "value": 38711.0}, + {"date": "2014", "value": 40133.0}, + {"date": "2015", "value": 41387.0}, + {"date": "2016", "value": 42590.0}, + {"date": "2017", "value": 43737.0}, + {"date": "2018", "value": 46071.0}, + {"date": "2019", "value": 47496.0}, + {"date": "2020", "value": 49385.0}, + {"date": "2021", "value": 52306.0}, + {"date": "2022", "value": 53667.0}, + {"date": "2023", "value": 55286.0}]}]}, + "geoId/27": {"orderedFacets": [{"earliestDate": "2012", + "facetId": "1145703171", + "latestDate": "2023", + "obsCount": 12, + 
"observations": [ + {"date": "2012", "value": 40961.0}, + {"date": "2013", "value": 42511.0}, + {"date": "2014", "value": 44713.0}, + {"date": "2015", "value": 47323.0}, + {"date": "2016", "value": 50039.0}, + {"date": "2017", "value": 52737.0}, + {"date": "2018", "value": 54303.0}, + {"date": "2019", "value": 55185.0}, + {"date": "2020", "value": 56170.0}, + {"date": "2021", "value": 58452.0}, + {"date": "2022", "value": 60300.0}, + {"date": "2023", "value": 63794.0}]}]}}}}, + "facets": {"1145703171": {"importName": "CensusACS5YearSurvey", + "measurementMethod": "CensusACS5yrSurvey", + "provenanceUrl": "https://www.census.gov/programs-surveys/acs/data/data-via-ftp.html"}}} +``` +{% endtab %} + +{% endtabs %} + +
+ +{: #example-7} +{: .no_toc} +#### Example 7: Get all values of multiple statistical variables for a single place + +This example retrieves the total population as well as the male population of the state of Arkansas for all available years. + +
+ +{% tabs request %} + +{% tab request V1 request %} + +```python +datacommons.get_stat_all(["geoId/05"], ["Count_Person", "Count_Person_Male"]) +``` +{% endtab %} + +{% tab request V2 request %} + +```python +client.observation.fetch_observations_by_entity_dcid(date="all", entity_dcids="geoId/05", variable_dcids=["Count_Person","Count_Person_Male"]) +``` +{% endtab %} + +{% endtabs %} + +
+ +
+ +{% tabs response %} + +{% tab response V1 response %} + +```python +{"geoId/05": {"Count_Person": {"sourceSeries": [{"val": { + "2019": 3020985, + "1936": 1892000, + "2013": 2960459, + "1980": 2286435, + "1904": 1419000, + "2023": 3069463, + "2010": 2921998, + "1946": 1797000, + "1967": 1901000, + "1902": 1360000, + "1962": 1853000, + "1993": 2423743, + "1991": 2370666, + "1986": 2331984, + "2009": 2896843, + "2014": 2968759, + "1933": 1854000, + "1954": 1734000, + "1921": 1769000, + "1929": 1852000, + "1956": 1704000, + "1949": 1844000, + //... + "measurementMethod": "CensusPEPSurvey", + "observationPeriod": "P1Y", + "importName": "USCensusPEP_Annual_Population", + "provenanceDomain": "census.gov", + "provenanceUrl": "https://www.census.gov/programs-surveys/popest.html"}, + {"val": { + "2022": 3018669, + "2018": 2990671, + "2020": 3011873, + "2016": 2968472, + "2013": 2933369, + "2019": 2999370, + "2021": 3006309, + "2015": 2958208, + "2011": 2895928, + "2023": 3032651, + "2014": 2947036, + "2012": 2916372, + "2017": 2977944}, + "measurementMethod": "CensusACS5yrSurvey", + "importName": "CensusACS5YearSurvey", + "provenanceDomain": "census.gov", + "provenanceUrl": "https://www.census.gov/programs-surveys/acs/data/data-via-ftp.html"}, + {"val": {"2000": 2673400, "2020": 3011524, "2010": 2915918}, + "measurementMethod": "USDecennialCensus", + "importName": "USDecennialCensus_RedistrictingRelease", + "provenanceDomain": "census.gov", + "provenanceUrl": "https://www.census.gov/programs-surveys/decennial-census/about/rdo/summary-files.html"}, + //... 
+ "Count_Person_Male": {"sourceSeries": [{"val": { + "2015": 1451913, + "2021": 1483520, + "2020": 1478511, + "2023": 1495958, + "2016": 1456694, + "2022": 1491622, + "2019": 1471760, + "2013": 1439862, + "2018": 1468412, + "2014": 1447235, + "2011": 1421287, + "2012": 1431252, + "2017": 1461651}, + "measurementMethod": "CensusACS5yrSurvey", + "importName": "CensusACS5YearSurvey", + "provenanceDomain": "census.gov", + "provenanceUrl": "https://www.census.gov/programs-surveys/acs/data/data-via-ftp.html"}, + {"val": { + "1975": 1047112, + "1995": 1228626, + "2023": 1513837, + "1991": 1150369, + "2019": 1482909, + "1990": 1136163, + "1998": 1277869, + "1989": 1130916, + "2011": 1444411, + "2021": 1495032, + "2013": 1453888, + "1992": 1167203, + "2004": 1346638, + "2022": 1503494, + "1982": 1107142, + "1978": 1084374, + //... + "measurementMethod": "CensusPEPSurvey_PartialAggregate", + "observationPeriod": "P1Y", + "importName": "USCensusPEP_Sex", + "provenanceDomain": "census.gov", + "isDcAggregate": True, + "provenanceUrl": "https://www.census.gov/programs-surveys/popest.html"}, + {"val": {"2013": 1439862, + "2018": 1468412, + "2011": 1421287, + "2015": 1451913, + "2020": 1478511, + "2017": 1461651, + "2021": 1483520, + "2019": 1471760, + "2014": 1447235, + "2012": 1431252, + "2010": 1408945, + "2022": 1491622, + "2023": 1495958, + "2016": 1456694}, + "measurementMethod": "CensusACS5yrSurveySubjectTable", + "importName": "CensusACS5YearSurvey_SubjectTables_S0101", + "provenanceDomain": "census.gov", + "provenanceUrl": "https://data.census.gov/table?q=S0101:+Age+and+Sex&tid=ACSST1Y2022.S0101"}, + //... 
+]}}} +``` +{% endtab %} + +{% tab response V2 response %} + +```python +{"byVariable": {"Count_Person": {"byEntity": { + "geoId/05": {"orderedFacets": [ + {"earliestDate": "1900", + "facetId": "2176550201", + "latestDate": "2024", + "obsCount": 125, + "observations": [{"date": "1900", "value": 1314000.0}, + {"date": "1901", "value": 1341000.0}, + {"date": "1902", "value": 1360000.0}, + {"date": "1903", "value": 1384000.0}, + {"date": "1904", "value": 1419000.0}, + {"date": "1905", "value": 1447000.0}, + {"date": "1906", "value": 1465000.0}, + {"date": "1907", "value": 1484000.0}, + //... + {"earliestDate": "2011", + "facetId": "1145703171", + "latestDate": "2023", + "obsCount": 13, + "observations": [{"date": "2011", "value": 2895928.0}, + {"date": "2012", "value": 2916372.0}, + {"date": "2013", "value": 2933369.0}, + {"date": "2014", "value": 2947036.0}, + {"date": "2015", "value": 2958208.0}, + {"date": "2016", "value": 2968472.0}, + {"date": "2017", "value": 2977944.0}, + {"date": "2018", "value": 2990671.0}, + {"date": "2019", "value": 2999370.0}, + {"date": "2020", "value": 3011873.0}, + {"date": "2021", "value": 3006309.0}, + {"date": "2022", "value": 3018669.0}, + {"date": "2023", "value": 3032651.0}]}, + {"earliestDate": "2000", + "facetId": "1541763368", + "latestDate": "2020", + "obsCount": 3, + "observations": [{"date": "2000", "value": 2673400.0}, + {"date": "2010", "value": 2915918.0}, + {"date": "2020", "value": 3011524.0}]}, + //... 
+ "Count_Person_Male": {"byEntity": { + "geoId/05": {"orderedFacets": [{"earliestDate": "2011", + "facetId": "1145703171", + "latestDate": "2023", + "obsCount": 13, + "observations": [{"date": "2011", "value": 1421287.0}, + {"date": "2012", "value": 1431252.0}, + {"date": "2013", "value": 1439862.0}, + {"date": "2014", "value": 1447235.0}, + {"date": "2015", "value": 1451913.0}, + {"date": "2016", "value": 1456694.0}, + {"date": "2017", "value": 1461651.0}, + {"date": "2018", "value": 1468412.0}, + {"date": "2019", "value": 1471760.0}, + {"date": "2020", "value": 1478511.0}, + {"date": "2021", "value": 1483520.0}, + {"date": "2022", "value": 1491622.0}, + {"date": "2023", "value": 1495958.0}]}, + {"earliestDate": "1970", + "facetId": "3999249536", + "latestDate": "2024", + "obsCount": 55, + "observations": [{"date": "1970", "value": 937034.0}, + {"date": "1971", "value": 956802.0}, + {"date": "1972", "value": 979822.0}, + {"date": "1973", "value": 999264.0}, + {"date": "1974", "value": 1019259.0}, + {"date": "1975", "value": 1047112.0}, + {"date": "1976", "value": 1051166.0}, + {"date": "1977", "value": 1069003.0}, + {"date": "1978", "value": 1084374.0}, + {"date": "1979", "value": 1097123.0}, + {"date": "1980", "value": 1105739.0}, + //... + {"earliestDate": "2010", + "facetId": "1964317807", + "latestDate": "2023", + "obsCount": 14, + "observations": [{"date": "2010", "value": 1408945.0}, + {"date": "2011", "value": 1421287.0}, + {"date": "2012", "value": 1431252.0}, + {"date": "2013", "value": 1439862.0}, + {"date": "2014", "value": 1447235.0}, + {"date": "2015", "value": 1451913.0}, + {"date": "2016", "value": 1456694.0}, + {"date": "2017", "value": 1461651.0}, + //... 
+ {"earliestDate": "2010", + "facetId": "10983471", + "latestDate": "2023", + "obsCount": 14, + "observations": [{"date": "2010", "value": 1407615.16}, + {"date": "2011", "value": 1421900.648}, + {"date": "2012", "value": 1431938.652}, + {"date": "2013", "value": 1440284.179}, + {"date": "2014", "value": 1446994.676}, + {"date": "2015", "value": 1452480.128}, + {"date": "2016", "value": 1457519.752}, + {"date": "2017", "value": 1462170.504}, + //... + {"earliestDate": "2017", + "facetId": "196790193", + "latestDate": "2023", + "obsCount": 7, + "observations": [{"date": "2017", "value": 1462170.504}, + {"date": "2018", "value": 1468419.461}, + {"date": "2019", "value": 1472690.67}, + {"date": "2020", "value": 1478829.643}, + {"date": "2021", "value": 1482110.337}, + {"date": "2022", "value": 1491222.486}, + {"date": "2023", "value": 1495096.943}]}, + //... + "facets": {"10983471": {"importName": "CensusACS5YearSurvey_SubjectTables_S2601A", + "measurementMethod": "CensusACS5yrSurveySubjectTable", + "provenanceUrl": "https://data.census.gov/cedsci/table?q=S2601A&tid=ACSST5Y2019.S2601A"}, + "2176550201": {"importName": "USCensusPEP_Annual_Population", + "measurementMethod": "CensusPEPSurvey", + "observationPeriod": "P1Y", + "provenanceUrl": "https://www.census.gov/programs-surveys/popest.html"}, + "196790193": {"importName": "CensusACS5YearSurvey_SubjectTables_S2602", + "measurementMethod": "CensusACS5yrSurveySubjectTable", + "provenanceUrl": "https://data.census.gov/cedsci/table?q=S2602&tid=ACSST5Y2019.S2602"}, + //... +}} +``` +{% endtab %} + +{% endtabs %} + +
 + +{: .no_toc} +#### Example 8: Get all outgoing property labels for a single node + +This example retrieves the outgoing property labels (but not the values) of Wisconsin's eighth congressional district. + +
+ +{% tabs request %} + +{% tab request V1 request %} + +```python +datacommons.get_property_labels(["geoId/5508"]) +``` +{% endtab %} + +{% tab request V2 request %} + +```python +client.node.fetch_property_labels(node_dcids="geoId/5508") +``` +{% endtab %} + +{% endtabs %} + +
+ +
+ +{% tabs response %} + +{% tab response V1 response %} + +```python +{"geoId/5508": [ + "containedInPlace", + "geoId", + "geopythonCoordinates", + "geoOverlaps", + "kmlCoordinates", + "landArea", + "latitude", + "longitude", + "name", + "provenance", + "typeOf", + "usCensusGeoId", + "waterArea"]} +``` +{% endtab %} + +{% tab response V2 response %} + +```python +{"data": {"geoId/5508": {"properties": [ + "containedInPlace", + "geoId", + "geopythonCoordinates", + "geoOverlaps", + "kmlCoordinates", + "landArea", + "latitude", + "longitude", + "name", + "provenance", + "typeOf", + "usCensusGeoId", + "waterArea"]}}} +``` +{% endtab %} + +{% endtabs %} + +
 + +{: .no_toc} +#### Example 9: Get the value(s) of a single outgoing property of a node (place) + +This example retrieves the common names of the country of Côte d'Ivoire. + +
+ +{% tabs request %} + +{% tab request V1 request %} + +```python +datacommons.get_property_values(["country/CIV"],"name") +``` +{% endtab %} + +{% tab request V2 request %} + +```python +client.node.fetch_property_values(node_dcids="country/CIV", properties="name") +``` +{% endtab %} + +{% endtabs %} + +
+ +
 + +{% tabs response %} + +{% tab response V1 response %} + +```python +{"country/CIV": ["Côte d'Ivoire", "Ivory Coast"]} +``` +{% endtab %} + +{% tab response V2 response %} + +```python +{"data": {"country/CIV": {"arcs": {"name": {"nodes": [ + {"provenanceId": "dc/base/WikidataOtherIdGeos", + "value": "Côte d'Ivoire"}, + {"provenanceId": "dc/base/WikidataOtherIdGeos", + "value": "Ivory Coast"}]}}}}} +``` +{% endtab %} + +{% endtabs %} + +
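
To keep post-processing code written against the V1 `{dcid: [values]}` shape working, the V2 `NodeResponse` dictionary can be collapsed with a few lines. A minimal sketch (the helper name is illustrative; `to_dict()` is assumed to have been called on the response first):

```python
def arcs_to_values(response_dict, prop):
    """Collapse a V2 node dictionary (the output of to_dict() on a
    NodeResponse) into the V1-style {dcid: [values]} shape."""
    result = {}
    for dcid, node in response_dict["data"].items():
        nodes = node["arcs"][prop]["nodes"]
        # Terminal nodes carry "value"; linked entity nodes carry "dcid".
        result[dcid] = [n.get("value", n.get("dcid")) for n in nodes]
    return result

# Abbreviated sample in the V2 response shape shown above.
sample = {"data": {"country/CIV": {"arcs": {"name": {"nodes": [
    {"provenanceId": "dc/base/WikidataOtherIdGeos", "value": "Côte d'Ivoire"},
    {"provenanceId": "dc/base/WikidataOtherIdGeos", "value": "Ivory Coast"}]}}}}}
print(arcs_to_values(sample, "name"))
# {'country/CIV': ["Côte d'Ivoire", 'Ivory Coast']}
```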
 + +{: .no_toc} +#### Example 10: Retrieve the values of a single outgoing property for multiple nodes (places) + +This example gets the addresses of Stuyvesant High School in New York and Gunn High School in California. + +
+ +{% tabs request %} + +{% tab request V1 request %} + +```python +datacommons.get_property_values(["nces/360007702877","nces/062961004587"],"address") +``` +{% endtab %} + +{% tab request V2 request %} + +```python +client.node.fetch_property_values(node_dcids=["nces/360007702877","nces/062961004587"], properties="address") +``` +{% endtab %} + +{% endtabs %} + +
+ +
+ +{% tabs response %} + +{% tab response V1 response %} + +```python +{"nces/360007702877": ["345 Chambers St New York NY 10282-1099"], + "nces/062961004587": ["780 Arastradero Rd. Palo Alto 94306-3827"]} +``` +{% endtab %} + +{% tab response V2 response %} + +```python +{"data": {"nces/360007702877": {"arcs": {"address": {"nodes": [{"provenanceId": "dc/base/NCES_PublicSchool", + "value": "345 Chambers St New York NY 10282-1099"}]}}}, + "nces/062961004587": {"arcs": {"address": {"nodes": [{"provenanceId": "dc/base/NCES_PublicSchool", + "value": "780 Arastradero Rd. Palo Alto 94306-3827"}]}}}}} +``` +{% endtab %} + +{% endtabs %} + +
+ +### datacommons_pandas package examples + +The following examples show equivalent API requests and responses using the V1 `datacommons_pandas` package and V2. + +{: .no_toc} +#### Example 1: Get all values of a single statistical variable for a single place + +This example is the same as [example 4](#example-4) above, but returns a Pandas DataFrame object. Note that V1 selects a single facet, while V2 returns all facets. To restrict the V2 method to a single facet, you could use the `property_filters` parameter. + +
+ +{% tabs request %} + +{% tab request V1 request %} + +```python +datacommons_pandas.build_time_series("geoId/05", "Count_Person_Male") +``` +{% endtab %} + +{% tab request V2 request %} + +```python +client.observations_dataframe(variable_dcids="Count_Person_Male", date="all", entity_dcids="geoId/05") +``` +{% endtab %} + +{% endtabs %} + +
+ +
+ +{% tabs response %} + +{% tab response V1 response %} + +```python + 0 +2023 1495958 +2012 1431252 +2022 1491622 +2018 1468412 +2014 1447235 +2020 1478511 +2011 1421287 +2016 1456694 +2017 1461651 +2015 1451913 +2019 1471760 +2021 1483520 +2013 1439862 + +dtype: int64 +``` +{% endtab %} + +{% tab response V2 response %} + +```python + date entity entity_name variable variable_name facetId importName measurementMethod observationPeriod provenanceUrl unit value +0 2011 geoId/05 Arkansas Count_Person_Male Male population 1145703171 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1421287.0 +1 2012 geoId/05 Arkansas Count_Person_Male Male population 1145703171 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1431252.0 +2 2013 geoId/05 Arkansas Count_Person_Male Male population 1145703171 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1439862.0 +3 2014 geoId/05 Arkansas Count_Person_Male Male population 1145703171 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1447235.0 +4 2015 geoId/05 Arkansas Count_Person_Male Male population 1145703171 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1451913.0 +... ... ... ... ... ... ... ... ... ... ... ... ... +162 2015 geoId/05 Arkansas Count_Person_Male Male population 1226172227 CensusACS1YearSurvey CensusACS1yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1463576.0 +163 2016 geoId/05 Arkansas Count_Person_Male Male population 1226172227 CensusACS1YearSurvey CensusACS1yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1468782.0 +164 2017 geoId/05 Arkansas Count_Person_Male Male population 1226172227 CensusACS1YearSurvey CensusACS1yrSurvey None https://www.census.gov/programs-surveys/acs/da... 
None 1479682.0 +165 2018 geoId/05 Arkansas Count_Person_Male Male population 1226172227 CensusACS1YearSurvey CensusACS1yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1476680.0 +166 2019 geoId/05 Arkansas Count_Person_Male Male population 1226172227 CensusACS1YearSurvey CensusACS1yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1474705.0 +167 rows × 12 columns +``` +{% endtab %} + +{% endtabs %} + +
 + +{: .no_toc} +#### Example 2: Get all values of a single statistical variable for a single place, selecting the facet to return + +This example is the same as [example 5](#example-5) above, but returns a Pandas DataFrame object. + +
+{% tabs request %} + +{% tab request V1 request %} + +```python +datacommons_pandas.build_time_series("country/ITA", "Amount_EconomicActivity_GrossDomesticProduction_Nominal", unit="USDollar") +``` +{% endtab %} + +{% tab request V2 request %} + +```python +client.observations_dataframe(variable_dcids="Amount_EconomicActivity_GrossDomesticProduction_Nominal", date="all", entity_dcids="country/ITA", property_filters={"unit": ["USDollar"]}) +``` +{% endtab %} + +{% endtabs %} + +
+ +
+ +{% tabs response %} + +{% tab response V1 response %} + +```python + 0 +1988 8.936639e+11 +1990 1.183945e+12 +1970 1.136567e+11 +1966 7.662244e+10 +1992 1.323204e+12 +... ... +2007 2.222524e+12 +2022 2.104068e+12 +2021 2.179208e+12 +1977 2.581900e+11 +2020 1.907481e+12 +65 rows × 1 columns + + +dtype: float64 +``` +{% endtab %} + +{% tab response V2 response %} + +```python + date entity entity_name variable variable_name facetId importName measurementMethod observationPeriod provenanceUrl unit value +0 1960 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 4.201242e+10 +1 1961 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 4.664949e+10 +2 1962 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 5.241387e+10 +3 1963 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 6.003592e+10 +4 1964 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 6.572077e+10 +... ... ... ... ... ... ... ... ... ... ... ... ... +60 2020 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 1.907481e+12 +61 2021 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... 
Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 2.179208e+12 +62 2022 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 2.104068e+12 +63 2023 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 2.304605e+12 +64 2024 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 2.372775e+12 +65 rows × 12 columns +``` +{% endtab %} + +{% endtabs %} + +
+ +{: .no_toc} +#### Example 3: Get all values of a single statistical variable for multiple places + +This example compares the historic populations of Sudan and South Sudan. Note that V1 selects a single facet, while V2 returns all facets. To restrict the V2 method to a single facet, you could use the `property_filters` parameter. + +
+ +{% tabs request %} + +{% tab request V1 request %} + +```python +datacommons_pandas.build_time_series_dataframe(["country/SSD","country/SDN"], "Count_Person") +``` +{% endtab %} + +{% tab request V2 request %} + +```python +client.observations_dataframe(variable_dcids="Count_Person", date="all", entity_dcids=["country/SSD", "country/SDN"]) +``` +{% endtab %} + +{% endtabs %} + +
+ +
+ +{% tabs response %} + +{% tab response V1 response %} + +```python + 1960 1961 1962 1963 1964 1965 1966 1967 1968 1969 ... 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 +place +country/SDN 8364489 8634941 8919028 9218077 9531109 9858030 10197578 10550597 10917999 11298936 ... 40024431 41259892 42714306 44230596 45548175 46789231 48066924 49383346 50042791 50448963 +country/SSD 2931559 2976724 3024308 3072669 3129918 3189835 3236423 3277648 3321528 3365533 ... 11107561 10830102 10259154 10122977 10423384 10698467 10865780 11021177 11483374 11943408 +2 rows × 65 columns +``` +{% endtab %} + +{% tab response V2 response %} + +```python + date entity entity_name variable variable_name facetId importName measurementMethod observationPeriod provenanceUrl unit value +0 1960 country/SDN Sudan Count_Person Total population 3981252704 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... None 8364489.0 +1 1961 country/SDN Sudan Count_Person Total population 3981252704 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... None 8634941.0 +2 1962 country/SDN Sudan Count_Person Total population 3981252704 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... None 8919028.0 +3 1963 country/SDN Sudan Count_Person Total population 3981252704 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... None 9218077.0 +4 1964 country/SDN Sudan Count_Person Total population 3981252704 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... None 9531109.0 +... ... ... ... ... ... ... ... ... ... ... ... ... +167 2016 country/SSD South Sudan Count_Person Total population 473499523 Subnational_Demographics_Stats WorldBankSubnationalPopulationEstimate P1Y https://databank.worldbank.org/source/subnatio... 
None 12231000.0 +168 2024 country/SSD South Sudan Count_Person Total population 1456184638 WikipediaStatsData Wikipedia None https://www.wikipedia.org None 12703714.0 +169 2008 country/SSD South Sudan Count_Person Total population 2458695583 WikidataPopulation WikidataPopulation None https://www.wikidata.org/wiki/Wikidata:Main_Page None 8260490.0 +170 2015 country/SSD South Sudan Count_Person Total population 2458695583 WikidataPopulation WikidataPopulation None https://www.wikidata.org/wiki/Wikidata:Main_Page None 12340000.0 +171 2017 country/SSD South Sudan Count_Person Total population 2458695583 WikidataPopulation WikidataPopulation None https://www.wikidata.org/wiki/Wikidata:Main_Page None 12575714.0 +172 rows × 12 columns +``` +{% endtab %} + +{% endtabs %} + +
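A practical migration note: V1's `build_time_series_dataframe` returned the wide layout shown above (one row per place, one column per date), while V2's `observations_dataframe` returns a long-format table. If downstream code expects the V1 shape, you can recover it with an ordinary pandas pivot. The sketch below is not part of the client library; `to_v1_wide` is a hypothetical helper, and it assumes you have already narrowed `df` to a single facet (otherwise duplicate `(entity, date)` pairs are dropped arbitrarily):

```python
import pandas as pd

def to_v1_wide(df: pd.DataFrame) -> pd.DataFrame:
    """Reshape a long-format V2 observations DataFrame into the
    V1-style wide layout: one row per entity, one column per date."""
    # V2 may return several facets per (entity, date); keep one value each.
    deduped = df.drop_duplicates(subset=["entity", "date"])
    wide = deduped.pivot(index="entity", columns="date", values="value")
    # V1 labeled the index "place" rather than "entity".
    return wide.rename_axis(index="place")
```

Applied to the V2 response above, this yields a `place` × year table comparable to the V1 output.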
+
+{: .no_toc}
+#### Example 4: Get the latest values of multiple statistical variables for multiple places
+
+This example compares the current populations, median ages, and unemployment rates of the US, California, and Santa Clara County. To restrict the V2 method to a single facet, you could use the `property_filters` parameter.
+
+ +{% tabs request %} + +{% tab request V1 request %} + +```python +datacommons_pandas.build_multivariate_dataframe(["country/USA", "geoId/06", "geoId/06085"],["Count_Person", "Median_Age_Person", "UnemploymentRate_Person"]) +``` +{% endtab %} + +{% tab request V2 request %} + +```python +client.observations_dataframe(variable_dcids=["Count_Person", "Median_Age_Person", "UnemploymentRate_Person"], date="latest", entity_dcids=["country/USA", "geoId/06", "geoId/06085"]) +``` +{% endtab %} + +{% endtabs %} + +
+ +
+ +{% tabs response %} + +{% tab response V1 response %} + +```python + Median_Age_Person Count_Person UnemploymentRate_Person +place +country/USA 38.7 332387540 4.3 +geoId/06 37.6 39242785 5.5 +geoId/06085 37.9 1903297 NaN + +``` +{% endtab %} + +{% tab response V2 response %} + +```python + date entity entity_name variable variable_name facetId importName measurementMethod observationPeriod provenanceUrl unit value +0 2024 geoId/06085 Santa Clara County Count_Person Total population 2176550201 USCensusPEP_Annual_Population CensusPEPSurvey P1Y https://www.census.gov/programs-surveys/popest... None 1926325.0 +1 2023 geoId/06085 Santa Clara County Count_Person Total population 1145703171 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1903297.0 +2 2020 geoId/06085 Santa Clara County Count_Person Total population 1541763368 USDecennialCensus_RedistrictingRelease USDecennialCensus None https://www.census.gov/programs-surveys/decenn... None 1936259.0 +3 2024 geoId/06085 Santa Clara County Count_Person Total population 2390551605 USCensusPEP_AgeSexRaceHispanicOrigin CensusPEPSurvey_Race2000Onwards P1Y https://www2.census.gov/programs-surveys/popes... None 1926325.0 +4 2023 geoId/06085 Santa Clara County Count_Person Total population 1964317807 CensusACS5YearSurvey_SubjectTables_S0101 CensusACS5yrSurveySubjectTable None https://data.census.gov/table?q=S0101:+Age+and... None 1903297.0 +5 2022 geoId/06085 Santa Clara County Count_Person Total population 2564251937 CDC_Social_Vulnerability_Index None None https://www.atsdr.cdc.gov/place-health/php/svi... None 1916831.0 +6 2020 geoId/06085 Santa Clara County Count_Person Total population 2825511676 CDC_Mortality_UnderlyingCause None None https://wonder.cdc.gov/ucd-icd10.html None 1907105.0 +7 2019 geoId/06085 Santa Clara County Count_Person Total population 2517965213 CensusPEP CensusPEPSurvey None https://www.census.gov/programs-surveys/popest... 
None 1927852.0 +8 2019 geoId/06085 Santa Clara County Count_Person Total population 1226172227 CensusACS1YearSurvey CensusACS1yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1927852.0 +9 2024 country/USA United States of America Count_Person Total population 2176550201 USCensusPEP_Annual_Population CensusPEPSurvey P1Y https://www.census.gov/programs-surveys/popest... None 340110988.0 +10 2023 country/USA United States of America Count_Person Total population 2645850372 CensusACS5YearSurvey_AggCountry CensusACS5yrSurvey None https://www.census.gov/ None 335642425.0 +11 2023 country/USA United States of America Count_Person Total population 1145703171 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 332387540.0 +12 2020 country/USA United States of America Count_Person Total population 1541763368 USDecennialCensus_RedistrictingRelease USDecennialCensus None https://www.census.gov/programs-surveys/decenn... None 331449281.0 +13 2024 country/USA United States of America Count_Person Total population 3981252704 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... None 340110988.0 +14 2024 country/USA United States of America Count_Person Total population 2390551605 USCensusPEP_AgeSexRaceHispanicOrigin CensusPEPSurvey_Race2000Onwards P1Y https://www2.census.gov/programs-surveys/popes... None 340110988.0 +15 2023 country/USA United States of America Count_Person Total population 4181918134 OECDRegionalDemography_Population OECDRegionalStatistics P1Y https://data-explorer.oecd.org/vis?fs[0]=Topic... None 334914895.0 +16 2023 country/USA United States of America Count_Person Total population 1964317807 CensusACS5YearSurvey_SubjectTables_S0101 CensusACS5yrSurveySubjectTable None https://data.census.gov/table?q=S0101:+Age+and... 
None 332387540.0 +17 2023 country/USA United States of America Count_Person Total population 10983471 CensusACS5YearSurvey_SubjectTables_S2601A CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2601A&... None 332387540.0 +18 2023 country/USA United States of America Count_Person Total population 196790193 CensusACS5YearSurvey_SubjectTables_S2602 CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2602&t... None 332387540.0 +19 2023 country/USA United States of America Count_Person Total population 217147238 CensusACS5YearSurvey_SubjectTables_S2603 CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2603&t... None 332387540.0 +20 2020 country/USA United States of America Count_Person Total population 2825511676 CDC_Mortality_UnderlyingCause None None https://wonder.cdc.gov/ucd-icd10.html None 329484123.0 +21 2019 country/USA United States of America Count_Person Total population 2517965213 CensusPEP CensusPEPSurvey None https://www.census.gov/programs-surveys/popest... None 328239523.0 +22 2019 country/USA United States of America Count_Person Total population 1226172227 CensusACS1YearSurvey CensusACS1yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 328239523.0 +23 2024 geoId/06 California Count_Person Total population 2176550201 USCensusPEP_Annual_Population CensusPEPSurvey P1Y https://www.census.gov/programs-surveys/popest... None 39431263.0 +24 2023 geoId/06 California Count_Person Total population 1145703171 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 39242785.0 +25 2020 geoId/06 California Count_Person Total population 1541763368 USDecennialCensus_RedistrictingRelease USDecennialCensus None https://www.census.gov/programs-surveys/decenn... 
None 39538223.0 +26 2023 geoId/06 California Count_Person Total population 4181918134 OECDRegionalDemography_Population OECDRegionalStatistics P1Y https://data-explorer.oecd.org/vis?fs[0]=Topic... None 38965193.0 +27 2023 geoId/06 California Count_Person Total population 1964317807 CensusACS5YearSurvey_SubjectTables_S0101 CensusACS5yrSurveySubjectTable None https://data.census.gov/table?q=S0101:+Age+and... None 39242785.0 +28 2023 geoId/06 California Count_Person Total population 10983471 CensusACS5YearSurvey_SubjectTables_S2601A CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2601A&... None 39242785.0 +29 2023 geoId/06 California Count_Person Total population 196790193 CensusACS5YearSurvey_SubjectTables_S2602 CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2602&t... None 39242785.0 +30 2020 geoId/06 California Count_Person Total population 2825511676 CDC_Mortality_UnderlyingCause None None https://wonder.cdc.gov/ucd-icd10.html None 39368078.0 +31 2019 geoId/06 California Count_Person Total population 2517965213 CensusPEP CensusPEPSurvey None https://www.census.gov/programs-surveys/popest... None 39512223.0 +32 2019 geoId/06 California Count_Person Total population 1226172227 CensusACS1YearSurvey CensusACS1yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 39512223.0 +33 2023 geoId/06085 Santa Clara County Median_Age_Person Median age of population 3795540742 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... Year 37.9 +34 2023 geoId/06085 Santa Clara County Median_Age_Person Median age of population 815809675 CensusACS5YearSurvey_SubjectTables_S0101 CensusACS5yrSurveySubjectTable None https://data.census.gov/table?q=S0101:+Age+and... Years 37.9 +35 2023 country/USA United States of America Median_Age_Person Median age of population 3795540742 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... 
Year 38.7 +36 2023 country/USA United States of America Median_Age_Person Median age of population 815809675 CensusACS5YearSurvey_SubjectTables_S0101 CensusACS5yrSurveySubjectTable None https://data.census.gov/table?q=S0101:+Age+and... Years 38.7 +37 2023 country/USA United States of America Median_Age_Person Median age of population 2763329611 CensusACS5YearSurvey_SubjectTables_S2601A CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2601A&... Years 38.7 +38 2023 country/USA United States of America Median_Age_Person Median age of population 3690003977 CensusACS5YearSurvey_SubjectTables_S2602 CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2602&t... Years 38.7 +39 2023 country/USA United States of America Median_Age_Person Median age of population 4219092424 CensusACS5YearSurvey_SubjectTables_S2603 CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2603&t... Years 38.7 +40 2023 geoId/06 California Median_Age_Person Median age of population 3795540742 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... Year 37.6 +41 2023 geoId/06 California Median_Age_Person Median age of population 815809675 CensusACS5YearSurvey_SubjectTables_S0101 CensusACS5yrSurveySubjectTable None https://data.census.gov/table?q=S0101:+Age+and... Years 37.6 +42 2023 geoId/06 California Median_Age_Person Median age of population 2763329611 CensusACS5YearSurvey_SubjectTables_S2601A CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2601A&... Years 37.6 +43 2023 geoId/06 California Median_Age_Person Median age of population 3690003977 CensusACS5YearSurvey_SubjectTables_S2602 CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2602&t... 
Years 37.6 +44 2025-08 country/USA United States of America UnemploymentRate_Person Unemployment rate 3707913853 BLS_CPS BLSSeasonallyAdjusted P1M https://www.bls.gov/cps/ None 4.3 +45 2025-06 country/USA United States of America UnemploymentRate_Person Unemployment rate 1714978719 BLS_CPS BLSSeasonallyAdjusted P3M https://www.bls.gov/cps/ None 4.2 +46 2025-08 geoId/06 California UnemploymentRate_Person Unemployment rate 324358135 BLS_LAUS BLSSeasonallyUnadjusted P1M https://www.bls.gov/lau/ None 5.8 +47 2024 geoId/06 California UnemploymentRate_Person Unemployment rate 2978659163 BLS_LAUS BLSSeasonallyUnadjusted P1Y https://www.bls.gov/lau/ None 5.3 +48 2025-08 geoId/06 California UnemploymentRate_Person Unemployment rate 1249140336 BLS_LAUS BLSSeasonallyAdjusted P1M https://www.bls.gov/lau/ None 5.5 +49 2025-08 geoId/06085 Santa Clara County UnemploymentRate_Person Unemployment rate 324358135 BLS_LAUS BLSSeasonallyUnadjusted P1M https://www.bls.gov/lau/ None 4.6 +50 2024 geoId/06085 Santa Clara County UnemploymentRate_Person Unemployment rate 2978659163 BLS_LAUS BLSSeasonallyUnadjusted P1Y https://www.bls.gov/lau/ None 4.1 +51 2022 geoId/06085 Santa Clara County UnemploymentRate_Person Unemployment rate 2564251937 CDC_Social_Vulnerability_Index None None https://www.atsdr.cdc.gov/place-health/php/svi... None 4.4 +``` +{% endtab %} + +{% endtabs %} + +
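Because the V2 response keeps one row per facet, migrating code that relied on V1's single value per place and variable usually needs a collapsing step. The helper below is an illustrative sketch, not part of the client library: it keeps the most recent observation per `(entity, variable)` pair, relying on the ISO-style date strings in the response (`"2024"`, `"2025-08"`) sorting roughly chronologically as text; ties between facets with the same date are broken arbitrarily:

```python
import pandas as pd

def latest_per_variable(df: pd.DataFrame) -> pd.DataFrame:
    """Collapse a multi-facet V2 result to one row per (entity, variable),
    keeping the observation with the most recent date."""
    # Sort newest-first (lexicographic order works for ISO-style dates),
    # then keep the first row in each (entity, variable) group.
    ordered = df.sort_values("date", ascending=False)
    return ordered.drop_duplicates(subset=["entity", "variable"], keep="first")
```

Applied to the response above, this reduces the result to one row per place and variable, similar to V1's `build_multivariate_dataframe`.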
From e3a71931a8d84e94175624b7797fd1bb025315f8 Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Mon, 24 Nov 2025 15:43:54 -0800 Subject: [PATCH 046/121] Add section on creating a custom extension --- mcp/develop_agent.md | 89 +++++++++++++++++++++++++++++++++++++++++--- mcp/index.md | 2 +- mcp/run_tools.md | 6 ++- 3 files changed, 88 insertions(+), 9 deletions(-) diff --git a/mcp/develop_agent.md b/mcp/develop_agent.md index 1fd7cd9d4..ce7901404 100644 --- a/mcp/develop_agent.md +++ b/mcp/develop_agent.md @@ -1,20 +1,97 @@ --- layout: default -title: Develop an ADK agent +title: Develop a custom agent nav_order: 3 parent: MCP - Query data interactively with an AI agent --- -# Develop your own ADK agent +# Develop your own agent -We provide two sample Google Agent Development Kit-based agents you can use as inspiration for building your own agent: +This page shows you how to develop a custom Data Commons agent, using two approaches: + +- Write a [custom Gemini CLI extension]() + - Simple to set up, no code required + - Minimal customization possible, mostly LLM prompts + - Requires Gemini CLI as the client + +- Write a [custom Google ADK agent](#customize-the-sample-agent) + - Some code required + - Any customization possible + - Provides a UI client as part of the framework + +## Create a custom Gemini CLI extension + +Before sure you have installed the [required prerequisites](/mcp/run_tools.html#extension). + +### Create the extension + +To create your own Data Commons Gemini CLI extension: + +1. From the directory in which you want to create the extension, run the following command: +
+   gemini extensions new EXTENSION_NAME
+   
+ The extension name can be whatever you want; however, it must not collide with an existing extension name, so do not use `datacommons`. Gemini will create a subdirectory with the same name, with a skeleton configuration file `gemini-extension.json`. +1. Switch to the subdirectory that has been created: +
+   cd EXTENSION_NAME
+   
+1. Create a new Markdown file `GEMINI.md`. +1. Add prompts to specify how Gemini should handle user queries and tool results. See for a good example to get you started. +1. Modify `gemini-extension.json` to add the following configuration: +
+    {
+        "name": "EXTENSION_NAME",
+        "version": "1.0.0",
+        "description": "EXTENSION_DESCRIPTION",
+        "mcpServers": {
+            "datacommons-mcp": {
+                "command": "uvx",
+                "args": [
+                    "datacommons-mcp@latest",
+                    "serve",
+                    "stdio",
+                    "--skip-api-key-validation"
+                ],
+                "env": {
+                    "DC_API_KEY": "YOUR_DATA_COMMONS_API_KEY",
+                    // Set these if you are running against a Custom Data Commons instance
+                    "DC_TYPE": "custom",
+	                "CUSTOM_DC_URL": "INSTANCE_URL"
+               }
+            }
+        }
+    }
+    
+ The extension name is the one you created in step 1. In the `description` field, provide a brief description of your extension. If you release the extension publicly, this description will show up on . + For additional options, see . +1. Run the following command to install your new extension locally: + ``` + gemini extensions link . + ``` +### Run the extension locally + +1. From any directory, run `gemini`. +1. In the input box, enter `/extensions list` to verify that your extension is active. +1. Optionally, if you have already installed the Data Commons extension but do not want to use it, exit Gemini and from the command line, run: + ``` + gemini extensions disable datacommons + ``` +1. Restart `gemini`. +1. If you want to verify that `datacommons` is disabled, run `/extensions list` again. +1. Start sending queries! + +### Make your extension public + +If you would like to release your extension publicly for others to use, see for full details. -- [Try Data Commons MCP Tools with a Custom Agent](https://github.com/datacommonsorg/agent-toolkit/blob/main/notebooks/datacommons_mcp_tools_with_custom_agent.ipynb) is a Google Colab tutorial that shows how to build an ADK Python agent step by step. -- The sample [basic agent](https://github.com/datacommonsorg/agent-toolkit/tree/main/packages/datacommons-mcp/examples/sample_agents/basic_agent) is a simple Python [Google ADK](https://google.github.io/adk-docs/) agent you can use to develop locally. ## Customize the sample agent -You can make changes directly to the Python files in . You'll need to [restart the agent](/mcp/run_tools.html#use-the-sample-agent) any time you make changes. 
+We provide two sample Google Agent Development Kit-based agents you can use as inspiration for building your own agent: + +- [Try Data Commons MCP Tools with a Custom Agent](https://github.com/datacommonsorg/agent-toolkit/blob/main/notebooks/datacommons_mcp_tools_with_custom_agent.ipynb) is a Google Colab tutorial that shows how to build an ADK Python agent step by step. +- The sample [basic agent](https://github.com/datacommonsorg/agent-toolkit/tree/main/packages/datacommons-mcp/examples/sample_agents/basic_agent) is a simple Python [Google ADK](https://google.github.io/adk-docs/) agent you can use to develop locally. You can make changes directly to the Python files. You'll need to [restart the agent](/mcp/run_tools.html#use-the-sample-agent) any time you make changes. > Tip: You do not need to install the Google ADK; when you use the [command we provide](run_tools.md#use-the-sample-agent) to start the agent, it downloads the ADK dependencies at run time. diff --git a/mcp/index.md b/mcp/index.md index d563d6c90..2bd3e4c85 100644 --- a/mcp/index.md +++ b/mcp/index.md @@ -38,7 +38,7 @@ The server supports both standard MCP [transport protocols](https://modelcontext - Stdio: For clients that connect directly using local processes - Streamable HTTP: For clients that connect remotely or otherwise require HTTP (e.g. Typescript) -See [Run MCP tools](run_tools.md) for procedures for using [Gemini CLI](https://github.com/google-gemini/gemini-cli) and the [Gemini CLI Data Commons Extension](https://geminicli.com/extensions/). +See [Run MCP tools](run_tools.md) for procedures for using [Gemini CLI](https://github.com/google-gemini/gemini-cli) and the [Gemini CLI Data Commons extension](https://geminicli.com/extensions/). 
## Unsupported features diff --git a/mcp/run_tools.md b/mcp/run_tools.md index 1371188b7..0c2696a49 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -75,9 +75,9 @@ The following variables are required: - `export DC_TYPE="custom"` - export CUSTOM_DC_URL="YOUR_INSTANCE_URL" -If you're using the Gemini CLI extension, just set these in your shell/startup script. +You can set these in your shell/startup script, or use an `.env` file, which the server locates automatically, to keep all the settings in one place. (If you are using Gemini CLI, you can also set the `env` option in the [`settings.json` file](#gemini). -If you're not using the extension, you may wish to use a `.env` file, which the server locates automatically, to keep all the settings in one place. To set all variables using a `.env` file: +To set all variables using a `.env` file: 1. From Github, download the file [`.env.sample`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/.env.sample) to the desired directory. Alternatively, if you plan to run the sample agent, clone the repo . @@ -93,6 +93,7 @@ If you're not using the extension, you may wish to use a `.env` file, which the 1. Optionally, set other variables. 1. Save the file. +{: #extension} ## Use the Gemini CLI extension **Additional prerequisites** @@ -176,6 +177,7 @@ Before installing, be sure to check the [Prerequisites](#prerequisites) above. To install Gemini CLI, see the instructions at . {:.no_toc} +{: #gemini} ### Configure to run a local server To configure Gemini CLI to recognize the Data Commons server, edit the relevant `settings.json` file (e.g. 
`~/.gemini/settings.json`) to add the following: From 25cad3a81bdf1aed0f073fb2f7223b4bf77d57f9 Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Tue, 25 Nov 2025 09:59:24 -0800 Subject: [PATCH 047/121] Some rewording --- mcp/develop_agent.md | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/mcp/develop_agent.md b/mcp/develop_agent.md index ce7901404..35237b169 100644 --- a/mcp/develop_agent.md +++ b/mcp/develop_agent.md @@ -21,7 +21,7 @@ This page shows you how to develop a custom Data Commons agent, using two approa ## Create a custom Gemini CLI extension -Before sure you have installed the [required prerequisites](/mcp/run_tools.html#extension). +Before you start, be sure you have installed the [required prerequisites](/mcp/run_tools.html#extension). ### Create the extension @@ -36,14 +36,16 @@ To create your own Data Commons Gemini CLI extension:
    cd EXTENSION_NAME
    
-1. Create a new Markdown file `GEMINI.md`. -1. Add prompts to specify how Gemini should handle user queries and tool results. See for a good example to get you started. +1. Create a new Markdown file (with a `.md` suffix). You can name it however you want, or just use the default, `GEMINI.md`. +1. Write natural-language prompts to specify how Gemini should handle user queries and tool results. See for a good example to get you started. Also see the Google ADK page on [LLM agent instructions](https://google.github.io/adk-docs/agents/llm-agents/#guiding-the-agent-instructions-instruction){: target="_blank"} for tips on how to write good prompts. 1. Modify `gemini-extension.json` to add the following configuration:
     {
         "name": "EXTENSION_NAME",
         "version": "1.0.0",
         "description": "EXTENSION_DESCRIPTION",
+        // Only needed if the file name is not GEMINI.md
+        "contextFileName": "MARKDOWN_FILE_NAME"
         "mcpServers": {
             "datacommons-mcp": {
                 "command": "uvx",
@@ -64,7 +66,7 @@ To create your own Data Commons Gemini CLI extension:
     }
     
The extension name is the one you created in step 1. In the `description` field, provide a brief description of your extension. If you release the extension publicly, this description will show up on . - For additional options, see . + For additional options, see the [Gemini CLI extension documentation](https://geminicli.com/docs/extensions/#how-it-works){: target="_blank"}. 1. Run the following command to install your new extension locally: ``` gemini extensions link . @@ -83,7 +85,7 @@ To create your own Data Commons Gemini CLI extension: ### Make your extension public -If you would like to release your extension publicly for others to use, see for full details. +If you would like to release your extension publicly for others to use, we recommend using a Github repository. See the [Gemini CLI extension release documentation](https://geminicli.com/docs/extensions/extension-releasing/){: target="_blank"} for full details. ## Customize the sample agent @@ -103,5 +105,5 @@ To change to a different LLM, edit the `AGENT_MODEL` constant in [packages/datac The agent's behavior is determined by prompts provided in the `AGENT_INSTRUCTIONS` in [packages/datacommons-mcp/examples/sample_agents/basic_agent/instructions.py](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/instructions.py){: target="_blank"}. -You can add your own prompts to modify how the agent handles tool results. For example, you might want to give a prompt to "build a report for every response" or "always save tabular results to a CSV file". See the Google ADK page on [LLM agent instructions](https://google.github.io/adk-docs/agents/llm-agents/#guiding-the-agent-instructions-instruction){: target="_blank"} for tips on how to write good prompts. +You can add your own prompts to modify how the agent handles tool results. 
See the Google ADK page on [LLM agent instructions](https://google.github.io/adk-docs/agents/llm-agents/#guiding-the-agent-instructions-instruction){: target="_blank"} for tips on how to write good prompts. From 7c71b3ae1081a02ca85379e010c72bc85cf51ff5 Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Tue, 25 Nov 2025 10:06:10 -0800 Subject: [PATCH 048/121] Fix formatting --- mcp/develop_agent.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/mcp/develop_agent.md b/mcp/develop_agent.md index 35237b169..81f208131 100644 --- a/mcp/develop_agent.md +++ b/mcp/develop_agent.md @@ -45,7 +45,7 @@ To create your own Data Commons Gemini CLI extension: "version": "1.0.0", "description": "EXTENSION_DESCRIPTION", // Only needed if the file name is not GEMINI.md - "contextFileName": "MARKDOWN_FILE_NAME" + "contextFileName": "MARKDOWN_FILE_NAME", "mcpServers": { "datacommons-mcp": { "command": "uvx", @@ -59,18 +59,20 @@ To create your own Data Commons Gemini CLI extension: "DC_API_KEY": "YOUR_DATA_COMMONS_API_KEY" // Set these if you are running against a Custom Data Commons instance "DC_TYPE="custom", - "CUSTOM_DC_URL"="INSTANCE_URL" + "CUSTOM_DC_URL"="INSTANCE_URL" } } } } The extension name is the one you created in step 1. In the `description` field, provide a brief description of your extension. If you release the extension publicly, this description will show up on . + For additional options, see the [Gemini CLI extension documentation](https://geminicli.com/docs/extensions/#how-it-works){: target="_blank"}. 1. Run the following command to install your new extension locally: ``` gemini extensions link . ``` + ### Run the extension locally 1. From any directory, run `gemini`. 
From 8dd7f75fa80cf640457f13777774c2681d6703b7 Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Tue, 25 Nov 2025 10:29:10 -0800 Subject: [PATCH 049/121] More revisions and edits --- mcp/develop_agent.md | 4 ++-- mcp/run_tools.md | 37 +++++++++++++++++++++---------------- 2 files changed, 23 insertions(+), 18 deletions(-) diff --git a/mcp/develop_agent.md b/mcp/develop_agent.md index 81f208131..f4e93a1a0 100644 --- a/mcp/develop_agent.md +++ b/mcp/develop_agent.md @@ -72,7 +72,7 @@ To create your own Data Commons Gemini CLI extension: ``` gemini extensions link . ``` - + ### Run the extension locally 1. From any directory, run `gemini`. @@ -101,7 +101,7 @@ We provide two sample Google Agent Development Kit-based agents you can use as i ### Customize the model -To change to a different LLM, edit the `AGENT_MODEL` constant in [packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py#L23){: target="_blank"}. +To change to a different LLM or model version, edit the `AGENT_MODEL` constant in [packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py#L23){: target="_blank"}. ### Customize agent behavior diff --git a/mcp/run_tools.md b/mcp/run_tools.md index 0c2696a49..fd9503706 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -15,32 +15,26 @@ This page shows you how to run a local agent and connect to a Data Commons MCP s We provide specific instructions for the following agents. All may be used to query datacommons.org or a Custom Data Commons instance. 
-- [Gemini CLI extension](https://geminicli.com/extensions/) +- [Gemini CLI extension](#use-the-gemini-cli-extension) - Best for querying datacommons.org - Provides a built-in "agent" and context file for Data Commons - Downloads extension files locally - Uses `uv` to run the MCP server locally - Minimal setup - See [Use the Gemini CLI extension](#use-the-gemini-cli-extension) for this option. - -- [Gemini CLI](https://geminicli.com/) +- [Gemini CLI](#use-gemini-cli) - No additional downloads - MCP server can be run locally or remotely - - You can create your own context file + - You can create your own LLM context file - Minimal setup - See [Use Gemini CLI](#use-gemini-cli) for this option. - -- A sample basic agent based on the Google [Agent Development Kit](https://google.github.io/adk-docs/) +- A [sample basic agent](#use-the-sample-agent) based on the Google [Agent Development Kit](https://google.github.io/adk-docs/) - Best for interacting with a Web GUI - - Can be customized to run other LLMs and prompts + - Can be used to run other LLMs and prompts - Downloads agent code locally - Server may be run remotely - Some additional setup - See [Use the sample agent](#use-the-sample-agent) for this option. - For an end-to-end tutorial using a server and agent over HTTP, see the sample Data Commons Colab notebook, [Try Data Commons MCP Tools with a Custom Agent](https://github.com/datacommonsorg/agent-toolkit/blob/main/notebooks/datacommons_mcp_tools_with_custom_agent.ipynb). For other clients/agents, see the relevant documentation; you should be able to reuse the commands and arguments detailed below. @@ -59,6 +53,11 @@ Other requirements for specific agents are given in their respective sections. ## Configure environment variables +You can set these in the following ways: +1. In your shell/startup script (e.g. `.bashrc`). This is the recommended option for most use cases. +1. [Use an `.env` file](#env), which the server locates automatically. 
This is useful for Custom Data Commons with multiple options, to keep all settings in one place. +1. If you are using Gemini CLI (not the extension), you can use the `env` option in the [`settings.json` file](#gemini). + ### Base Data Commons (datacommons.org) For basic usage against datacommons.org, set the required `DC_API_KEY` in your shell/startup script (e.g. `.bashrc`). @@ -75,9 +74,9 @@ The following variables are required: - `export DC_TYPE="custom"` - export CUSTOM_DC_URL="YOUR_INSTANCE_URL" -You can set these in your shell/startup script, or use an `.env` file, which the server locates automatically, to keep all the settings in one place. (If you are using Gemini CLI, you can also set the `env` option in the [`settings.json` file](#gemini). - -To set all variables using a `.env` file: +{: #env} +{: .no_toc} +#### Set variables with an `.env` file: 1. From Github, download the file [`.env.sample`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/.env.sample) to the desired directory. Alternatively, if you plan to run the sample agent, clone the repo . @@ -97,6 +96,7 @@ To set all variables using a `.env` file: ## Use the Gemini CLI extension **Additional prerequisites** + In addition to the [standard prerequisites](#prerequisites), you must have the following installed: - [Git](https://git-scm.com/) - [Google Gemini CLI](https://geminicli.com/docs/get-started/) @@ -196,7 +196,11 @@ To configure Gemini CLI to recognize the Data Commons server, edit the relevant "env": { "DC_API_KEY": "YOUR_DATA_COMMONS_API_KEY" // If you are using a Google API key - "GEMINI_API_KEY": "YOUR_GOOGLE_API_KEY" + "GEMINI_API_KEY": "YOUR_GOOGLE_API_KEY", + + // Only use these to run against a Custom Data Commons instance + "DC_TYPE": "custom", + "CUSTOM_DC_URL": "INSTANCE_URL" }, "trust": true } @@ -222,7 +226,7 @@ To configure Gemini CLI to recognize the Data Commons server, edit the relevant {:.no_toc} -### Send queries +### Run 1. 
From any directory, run `gemini`. 1. To see the Data Commons tools, use `/mcp tools`. @@ -235,6 +239,7 @@ To configure Gemini CLI to recognize the Data Commons server, edit the relevant We provide a basic agent for interacting with the MCP Server in [packages/datacommons-mcp/examples/sample_agents/basic_agent](https://github.com/datacommonsorg/agent-toolkit/tree/main/packages/datacommons-mcp/examples/sample_agents/basic_agent). **Additional prerequisites** + In addition to the [standard prerequisites](#prerequisites), you will need: - A GCP project and a Google AI API key. For details on supported keys, see . - [Git](https://git-scm.com/) installed. From ab6fb59b8c475a6573156d7043f519c468129c8e Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Tue, 25 Nov 2025 10:35:00 -0800 Subject: [PATCH 050/121] Remove file from other branch --- api/python/v2/migration.md | 1520 ------------------------------------ 1 file changed, 1520 deletions(-) delete mode 100644 api/python/v2/migration.md diff --git a/api/python/v2/migration.md b/api/python/v2/migration.md deleted file mode 100644 index e6b54fa59..000000000 --- a/api/python/v2/migration.md +++ /dev/null @@ -1,1520 +0,0 @@ ---- -layout: default -title: Migrate from V1 to V2 -nav_order: 7 -parent: Python (V2) -grand_parent: API - Query data programmatically -published: true ---- - -{: .no_toc} -# Migrate from Python API V1 to V2 - - -Version V1 of the Data Commons Python API will be deprecated in early 2026. The [V2](index.md) APIs are significantly different from V1. This document summarizes the important differences that you should be aware of and provides examples of translating queries from V1 to V2. 
-
-* TOC
-{:toc}
-
-## Summary of changes
-
-| Feature | V1 | V2 |
-|---------|----|----|
-| API key | Not required | Required: get from |
-| Custom Data Commons supported | No | Yes: see details in [Create a client](index.md#create-a-client) |
-| Pandas support | Separate package | Module in the same package: see details in [Install](index.md#install) |
-| Sessions | Managed by the `datacommons` package object | Managed by a `datacommons_client` object that you must create: see details in [Create a client](index.md#create-a-client) |
-| Classes/methods | 7 methods, members of `datacommons` class | 3 classes representing REST endpoints `node`, `observation` and `resolve`; several member functions for each endpoint class. Variations of methods in V1 are represented as function parameters in V2. See [Request endpoints and responses](index.md#request-endpoints-and-responses) |
-| Pandas classes/methods | 3 methods, all members of `datacommons_pandas` class | 1 method, member of `datacommons_client` class. Variations of the Pandas methods in V1 are represented as parameters in V2. See [Observations DataFrame](pandas.md) |
-| Pagination | Required for queries resulting in large data volumes | Optional: see [Pagination](node.md#pagination) |
-| DCID lookup method | No | Yes: [`resolve`](resolve.md) endpoint methods |
-| Statistical facets | With the `get_stat_value` and `get_stat_series` methods, Data Commons chooses the most "relevant" facet to answer the query; typically this is the facet that has the most recent data. | For all Observation methods, results from all available facets are returned by default (if you don't apply a filter); for details, see [Observation response](/observation.html#response) |
-| Statistical facet filtering | The `get_stat_value`, `get_stat_series` and Pandas `build_time_series` methods allow you to filter results by specific facet fields, such as measurement method, unit, observation period, etc. | The `observations_dataframe` method allows you to filter results by specific facet fields. Observation methods only allow filtering results by the facet domain or ID; for details, see [Observation fetch](observation.md#fetch). |
-| Response contents | Simple structures mostly containing values only | Nested structures containing values and additional properties and metadata |
-| Different response formats | No | Yes: for details, see [Response formatting](index.md#response-formatting). |
-
-## V1 function equivalences in V2
-
-This section shows you how to translate from a given V1 function to the equivalent code in V2. Examples of both versions are given in the [Examples](#examples) section.
-
-| `datacommons` V1 function | V2 equivalent |
-|-------------|------------------|
-| `get_triples` | No direct equivalent; triples are not returned. Instead you indicate the directionality of the relationship in the triple, i.e. incoming or outgoing edges, using [`node.fetch`](node.md#fetch) and a [relation expression](/api/rest/v2/index.html#relation-expressions) |
-| `get_places_in` | [`node.fetch_place_descendants`](node.md#fetch_place_descendants) |
-| `get_stat_value` | [`observation.fetch_observations_by_entity_dcid`](observation.md#fetch_observations_by_entity_dcid) with a single place and variable |
-| `get_stat_series` | [`observation.fetch_observations_by_entity_dcid`](observation.md#fetch_observations_by_entity_dcid) with a single place and variable, and the `date` parameter set to `all` |
-| `get_stat_all` | [`observation.fetch_observations_by_entity_dcid`](observation.md#fetch_observations_by_entity_dcid) with an array of places and/or variables and the `date` parameter set to `all` |
-| `get_property_labels` | [`node.fetch_property_labels`](node.md#fetch_property_labels) |
-| `get_property_values` | [`node.fetch_property_values`](node.md#fetch_property_values) |
-
-| `datacommons_pandas` V1 function | V2 equivalent |
-|----------------------------------|------------------| -| `build_time_series` | [`observations_dataframe`](pandas.md) with a single place and variable and the `date` parameter set to `all` | -| `build_time_series_dataframe` | [`observations_dataframe`](pandas.md) with an array of places, a single variable and the `date` parameter set to `all` | -| `build_multivariate_dataframe` | [`observations_dataframe`](pandas.md) with an array of places and/or variables and the `date` parameter set to `latest` | - -## Examples - -### datacommons package examples - -The following examples show equivalent API requests and responses using the V1 `datacommons` package and V2. - -{: .no_toc} -#### Example 1: Get triples associated with a single place - -This example retrieves triples associated with zip code 94043. In V1, the `get_triples` method returns all triples, in which the zip code is the subject or the object. In V2, you cannot get both directions in a single request; you must send one request for the outgoing relationships and one for the incoming relationships. - -
- -{% tabs request %} - -{% tab request V1 request %} - -```python -datacommons.get_triples(["zip/94043"]) -``` -{% endtab %} - -{% tab request V2 request %} -Request 1: -```python -client.node.fetch(node_dcids=["zip/94043"], expression="->*") -``` -Request 2: -```python -client.node.fetch(node_dcids=["zip/94043"], expression="<-*") -``` -{% endtab %} - -{% endtabs %} - -
- -
- -{% tabs response %} - -{% tab response V1 response %} - -```python -{ "zip/94043": [ - // Outgoing relations - ("zip/94043", "containedInPlace", "country/USA"), - ("zip/94043", "containedInPlace", "geoId/06085"), - ("zip/94043", "containedInPlace", "geoId/0608592830"), - ("zip/94043", "containedInPlace", "geoId/0616"), - ("zip/94043", "geoId", "zip/94043"), - //... - ("zip/94043", "landArea", "SquareMeter21906343"), - ("zip/94043", "latitude", "37.411913"), - ("zip/94043", "longitude", "-122.068919"), - ("zip/94043", "name", "94043"), - ("zip/94043", "provenance", "dc/base/BaseGeos"), - ("zip/94043", "typeOf", "CensusZipCodeTabulationArea"), - ("zip/94043", "usCensusGeoId", "860Z200US94043"), - ("zip/94043", "waterArea", "SquareMeter0"), - // Incoming relations - ("EpaParentCompany/AlphabetInc", "locatedIn", "zip/94043"), - ("EpaParentCompany/Google", "locatedIn", "zip/94043"), - ("epaGhgrpFacilityId/1005910", "containedInPlace", "zip/94043"), - ("epaSuperfundSiteId/CA2170090078", "containedInPlace", "zip/94043"), - ("epaSuperfundSiteId/CAD009111444", "containedInPlace", "zip/94043"), - ("epaSuperfundSiteId/CAD009138488", "containedInPlace", "zip/94043"), - ("epaSuperfundSiteId/CAD009205097", "containedInPlace", "zip/94043"), - ("epaSuperfundSiteId/CAD009212838", "containedInPlace", "zip/94043"), - ("epaSuperfundSiteId/CAD061620217", "containedInPlace", "zip/94043"), - ("epaSuperfundSiteId/CAD095989778", "containedInPlace", "zip/94043"), - //... 
- ] -} -``` -{% endtab %} - -{% tab response V2 response %} -Response 1 (outgoing relations): -```python -{"data": {"zip/94043": {"arcs": { - "longitude": {"nodes": [{"provenanceId": "dc/base/BaseGeos", - "value": "-122.068919"}]}, - "name": {"nodes": [{"provenanceId": "dc/base/BaseGeos", - "value": "94043"}]}, - "typeOf": {"nodes": [{"dcid": "CensusZipCodeTabulationArea", - "name": "CensusZipCodeTabulationArea", - "provenanceId": "dc/base/BaseGeos", - "types": ["Class"]}]}, - "usCensusGeoId": {"nodes": [{"provenanceId": "dc/base/BaseGeos", - "value": "860Z200US94043"}]}, - "containedInPlace": {"nodes": [{"dcid": "country/USA", - "name": "United States", - "provenanceId": "dc/base/BaseGeos", - "types": ["Country"]}, - {"dcid": "geoId/06085", - "name": "Santa Clara County", - "provenanceId": "dc/base/BaseGeos", - "types": ["AdministrativeArea2", "County"]}, - {"dcid": "geoId/0608592830", - "name": "San Jose CCD", - "provenanceId": "dc/base/BaseGeos", - "types": ["CensusCountyDivision"]}, - {"dcid": "geoId/0616", - "name": "Congressional District 16 (113th Congress), California", - "provenanceId": "dc/base/BaseGeos", - "types": ["CongressionalDistrict"]}]}, - //... - "geoOverlaps": {"nodes": [{"dcid": "geoId/06085504601", - "name": "Census Tract 5046.01, Santa Clara County, California", - "provenanceId": "dc/base/BaseGeos", - "types": ["CensusTract"]}, - {"dcid": "geoId/06085504700", - "name": "Census Tract 5047, Santa Clara County, California", - "provenanceId": "dc/base/BaseGeos", - "types": ["CensusTract"]}, - {"dcid": "geoId/06085509108", - "name": "Census Tract 5091.08, Santa Clara County, California", - "provenanceId": "dc/base/BaseGeos", - "types": ["CensusTract"]}, - //... 
- "landArea": {"nodes": [{"dcid": "SquareMeter21906343", - "name": "SquareMeter 21906343", - "provenanceId": "dc/base/BaseGeos", - "types": ["Quantity"]}]}, - "latitude": {"nodes": [{"provenanceId": "dc/base/BaseGeos", - "value": "37.411913"}]}, - "provenance": {"nodes": [{"dcid": "dc/base/BaseGeos", - "name": "BaseGeos", - "provenanceId": "dc/base/BaseGeos", - "types": ["Provenance"]}]}}}}} -``` -Response 2 (incoming relations): - -```python -{"data": {"zip/94043": {"arcs": { - "locatedIn": {"nodes": [ - {"dcid": "EpaParentCompany/AlphabetInc", - "name": "AlphabetInc", - "provenanceId": "dc/base/EPA_ParentCompanies", - "types": ["EpaParentCompany"]}, - {"dcid": "EpaParentCompany/Google", - "name": "Google", - "provenanceId": "dc/base/EPA_ParentCompanies", - "types": ["EpaParentCompany"]}]}, - "containedInPlace": {"nodes": [ - {"dcid": "epaGhgrpFacilityId/1005910", - "name": "City Of Mountain View (Shoreline Landfill)", - "provenanceId": "dc/base/EPA_GHGRPFacilities", - "types": ["EpaReportingFacility"]}, - {"dcid": "epaSuperfundSiteId/CA2170090078", - "name": "Moffett Naval Air Station", - "provenanceId": "dc/base/EPA_Superfund_Sites", - "types": ["SuperfundSite"]}, - {"dcid": "epaSuperfundSiteId/CAD009111444", - "name": "Teledyne Semiconductor", - "provenanceId": "dc/base/EPA_Superfund_Sites", - "types": ["SuperfundSite"]}, - {"dcid": "epaSuperfundSiteId/CAD009138488", - "name": "Spectra-Physics Inc.", - "provenanceId": "dc/base/EPA_Superfund_Sites", - "types": ["SuperfundSite"]}, - //... - ] - } - } -} -``` -{% endtab %} - -{% endtabs %} - -
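If your existing code consumes the `(subject, predicate, object)` tuples that `get_triples` returned, you can approximate them by flattening the two V2 responses. The following is a minimal sketch, assuming both responses have been converted to plain dictionaries (for example with `to_dict()`); the sample data is trimmed from the responses shown above:

```python
# Rebuild V1-style (subject, predicate, object) triples from the
# dictionary forms of the two V2 node.fetch responses.
# Sample data trimmed from the responses in this example.
outgoing = {"data": {"zip/94043": {"arcs": {
    "name": {"nodes": [{"provenanceId": "dc/base/BaseGeos", "value": "94043"}]},
    "typeOf": {"nodes": [{"dcid": "CensusZipCodeTabulationArea"}]}}}}}
incoming = {"data": {"zip/94043": {"arcs": {
    "locatedIn": {"nodes": [{"dcid": "EpaParentCompany/Google"}]}}}}}

triples = []
for node, payload in outgoing["data"].items():
    for prop, arc in payload["arcs"].items():
        for n in arc["nodes"]:
            # Entity nodes carry a "dcid"; terminal values carry a "value".
            triples.append((node, prop, n.get("dcid", n.get("value"))))
for node, payload in incoming["data"].items():
    for prop, arc in payload["arcs"].items():
        for n in arc["nodes"]:
            triples.append((n.get("dcid", n.get("value")), prop, node))

print(triples)
```

Note that the V2 nodes also carry fields (`name`, `provenanceId`, `types`) that the V1 tuples did not expose.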
- -{: .no_toc} -#### Example 2: Get a list of places in another place - -This example retrieves a list of counties in the U.S. state of Delaware. - -
- -{% tabs request %} - -{% tab request V1 request %} - -```python -datacommons.get_places_in(["geoId/10"], "County") -``` - -{% endtab %} - -{% tab request V2 request %} - -```python -client.node.fetch_place_children(place_dcids="geoId/10", children_type="County") -``` -{% endtab %} - -{% endtabs %} - -
- -
- -{% tabs response %} - -{% tab response V1 response %} - -```python -{"geoId/10": ["geoId/10001", "geoId/10003", "geoId/10005"]} -``` -{% endtab %} - -{% tab response V2 response %} - -```python -{"geoId/10": [ - {"dcid": "geoId/10001", "name": "Kent County"}, - {"dcid": "geoId/10003", "name": "New Castle County"}, - {"dcid": "geoId/10005", "name": "Sussex County"}]} -``` - -{% endtab %} - -{% endtabs %} - -
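If V1 code expects the bare DCID lists that `get_places_in` returned, the V2 result is easy to reduce. A small sketch, assuming the response has been converted to a plain dictionary; the sample data is the response shown above:

```python
# Reduce the V2 {parent: [{dcid, name}, ...]} structure to the
# V1-style {parent: [dcid, ...]} mapping.
v2_result = {"geoId/10": [
    {"dcid": "geoId/10001", "name": "Kent County"},
    {"dcid": "geoId/10003", "name": "New Castle County"},
    {"dcid": "geoId/10005", "name": "Sussex County"}]}

v1_style = {parent: [child["dcid"] for child in children]
            for parent, children in v2_result.items()}
print(v1_style)  # {'geoId/10': ['geoId/10001', 'geoId/10003', 'geoId/10005']}
```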
- -{: .no_toc} -#### Example 3: Get the latest value of a single statistical variable for a single place - -This example gets the latest count of men in the state of California. Note that the V1 method `get_stat_value` returns a single value, automatically selecting the most "relevant" data source, while the V2 method returns all data sources ("facets"), i.e. multiple values for the same variable, as well as metadata for all the sources. Comparing the results, you can see that the V1 method has selected facet 3999249536, which has the most recent date, and comes from the U.S. Census PEP survey. - -
- -{% tabs request %} - -{% tab request V1 request %} - -```python -datacommons.get_stat_value("geoId/05", "Count_Person_Male") -``` -{% endtab %} - -{% tab request V2 request %} - -```python -client.observation.fetch_observations_by_entity_dcid(date="latest", entity_dcids="geoId/05", variable_dcids="Count_Person_Male") -``` -{% endtab %} - -{% endtabs %} - -
- -
- -{% tabs response %} - -{% tab response V1 response %} - -```python -1524533 -``` -{% endtab %} - -{% tab response V2 response %} - -```python -{"byVariable": {"Count_Person_Male": {"byEntity": {"geoId/05": {"orderedFacets": [ - {"earliestDate": "2023", - "facetId": "1145703171", - "latestDate": "2023", - "obsCount": 1, - "observations": [{"date": "2023", "value": 1495958.0}]}, - {"earliestDate": "2024", - "facetId": "3999249536", - "latestDate": "2024", - "obsCount": 1, - "observations": [{"date": "2024", "value": 1524533.0}]}, - {"earliestDate": "2023", - "facetId": "1964317807", - "latestDate": "2023", - "obsCount": 1, - "observations": [{"date": "2023", "value": 1495958.0}]}, - {"earliestDate": "2023", - "facetId": "10983471", - "latestDate": "2023", - "obsCount": 1, - "observations": [{"date": "2023", "value": 1495096.943}]}, - {"earliestDate": "2023", - "facetId": "196790193", - "latestDate": "2023", - "obsCount": 1, - "observations": [{"date": "2023", "value": 1495096.943}]}, - {"earliestDate": "2021", - "facetId": "4181918134", - "latestDate": "2021", - "obsCount": 1, - "observations": [{"date": "2021", "value": 1493178.0}]}, - {"earliestDate": "2020", - "facetId": "2825511676", - "latestDate": "2020", - "obsCount": 1, - "observations": [{"date": "2020", "value": 1486856.0}]}, - {"earliestDate": "2019", - "facetId": "1226172227", - "latestDate": "2019", - "obsCount": 1, - "observations": [{"date": "2019", "value": 1474705.0}]}]}}}}, - "facets": {"2825511676": {"importName": "CDC_Mortality_UnderlyingCause", - "provenanceUrl": "https://wonder.cdc.gov/ucd-icd10.html"}, - "1226172227": {"importName": "CensusACS1YearSurvey", - "measurementMethod": "CensusACS1yrSurvey", - "provenanceUrl": "https://www.census.gov/programs-surveys/acs/data/data-via-ftp.html"}, - "1145703171": {"importName": "CensusACS5YearSurvey", - "measurementMethod": "CensusACS5yrSurvey", - "provenanceUrl": "https://www.census.gov/programs-surveys/acs/data/data-via-ftp.html"}, - "3999249536": 
{"importName": "USCensusPEP_Sex", - "measurementMethod": "CensusPEPSurvey_PartialAggregate", - "observationPeriod": "P1Y", - "provenanceUrl": "https://www.census.gov/programs-surveys/popest.html"}, - "1964317807": {"importName": "CensusACS5YearSurvey_SubjectTables_S0101", - "measurementMethod": "CensusACS5yrSurveySubjectTable", - "provenanceUrl": "https://data.census.gov/table?q=S0101:+Age+and+Sex&tid=ACSST1Y2022.S0101"}, - "10983471": {"importName": "CensusACS5YearSurvey_SubjectTables_S2601A", - "measurementMethod": "CensusACS5yrSurveySubjectTable", - "provenanceUrl": "https://data.census.gov/cedsci/table?q=S2601A&tid=ACSST5Y2019.S2601A"}, - "196790193": {"importName": "CensusACS5YearSurvey_SubjectTables_S2602", - "measurementMethod": "CensusACS5yrSurveySubjectTable", - "provenanceUrl": "https://data.census.gov/cedsci/table?q=S2602&tid=ACSST5Y2019.S2602"}, - "4181918134": {"importName": "OECDRegionalDemography_Population", - "measurementMethod": "OECDRegionalStatistics", - "observationPeriod": "P1Y", - "provenanceUrl": "https://data-explorer.oecd.org/vis?fs[0]=Topic%2C0%7CRegional%252C%20rural%20and%20urban%20development%23GEO%23&pg=40&fc=Topic&bp=true&snb=117&df[ds]=dsDisseminateFinalDMZ&df[id]=DSD_REG_DEMO%40DF_POP_5Y&df[ag]=OECD.CFE.EDS&df[vs]=2.0&dq=A.......&to[TIME_PERIOD]=false&vw=tb&pd=%2C"}}} -``` -{% endtab %} - -{% endtabs %} - -
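To recover a single number like `get_stat_value` returned, you must pick a facet yourself. V1's "relevance" ranking is internal to the service, so a simple heuristic such as taking the facet with the most recent data only approximates it. A sketch over sample data trimmed from the response above, assuming the response has been converted to a dictionary:

```python
# Pick the observation with the most recent date across all facets.
# Date strings of the same granularity ("2023", "2024") compare
# correctly as strings.
ordered_facets = [
    {"facetId": "1145703171", "latestDate": "2023",
     "observations": [{"date": "2023", "value": 1495958.0}]},
    {"facetId": "3999249536", "latestDate": "2024",
     "observations": [{"date": "2024", "value": 1524533.0}]},
]

latest_facet = max(ordered_facets, key=lambda f: f["latestDate"])
value = latest_facet["observations"][-1]["value"]
print(value)  # 1524533.0
```

On this sample, the heuristic selects facet 3999249536 and reproduces the V1 result, but it may diverge from V1's choice when several facets share the latest date.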
- -{: #example-4} -{: .no_toc} -#### Example 4: Get all values of a single statistical variable for a single place - -This example retrieves the number of men in the state of California for all years available. As in example 3 above, V1 returns data from a single facet (which appears to be 1145703171, the U.S. Census ACS 5-year survey). V2 returns data for all available facets. - -
- -{% tabs request %} - -{% tab request V1 request %} - -```python -datacommons.get_stat_series("geoId/05", "Count_Person_Male") -``` -{% endtab %} - -{% tab request V2 request %} - -```python -client.observation.fetch_observations_by_entity_dcid(date="all", entity_dcids="geoId/05", variable_dcids="Count_Person_Male") -``` -{% endtab %} - -{% endtabs %} - -
- -
- -{% tabs response %} - -{% tab response V1 response %} - -```python -{"2023": 1495958, - "2017": 1461651, - "2022": 1491622, - "2015": 1451913, - "2021": 1483520, - "2018": 1468412, - "2011": 1421287, - "2016": 1456694, - "2012": 1431252, - "2019": 1471760, - "2013": 1439862, - "2014": 1447235, - "2020": 1478511} -``` -{% endtab %} - -{% tab response V2 response %} - -```python -{"byVariable": {"Count_Person_Male": {"byEntity": {"geoId/05": {"orderedFacets": [ - {"earliestDate": "2011", - "facetId": "1145703171", - "latestDate": "2023", - "obsCount": 13, - "observations": [ - {"date": "2011", "value": 1421287.0}, - {"date": "2012", "value": 1431252.0}, - {"date": "2013", "value": 1439862.0}, - {"date": "2014", "value": 1447235.0}, - {"date": "2015", "value": 1451913.0}, - {"date": "2016", "value": 1456694.0}, - {"date": "2017", "value": 1461651.0}, - {"date": "2018", "value": 1468412.0}, - {"date": "2019", "value": 1471760.0}, - {"date": "2020", "value": 1478511.0}, - {"date": "2021", "value": 1483520.0}, - {"date": "2022", "value": 1491622.0}, - {"date": "2023", "value": 1495958.0}]}, - {"earliestDate": "1970", - "facetId": "3999249536", - "latestDate": "2024", - "obsCount": 55, - "observations": [ - {"date": "1970", "value": 937034.0}, - {"date": "1971", "value": 956802.0}, - {"date": "1972", "value": 979822.0}, - {"date": "1973", "value": 999264.0}, - {"date": "1974", "value": 1019259.0}, - {"date": "1975", "value": 1047112.0}, - {"date": "1976", "value": 1051166.0}, - {"date": "1977", "value": 1069003.0}, - {"date": "1978", "value": 1084374.0}, - {"date": "1979", "value": 1097123.0}, - {"date": "1980", "value": 1105739.0}, - {"date": "1981", "value": 1107249.0}, - {"date": "1982", "value": 1107142.0}, - {"date": "1983", "value": 1112460.0}, - {"date": "1984", "value": 1119061.0}, - {"date": "1985", "value": 1122425.0}, - {"date": "1986", "value": 1124357.0}, - {"date": "1987", "value": 1129353.0}, - {"date": "1988", "value": 1129014.0}, - {"date": "1989", 
"value": 1130916.0}, - {"date": "1990", "value": 1136163.0}, - //... - "facets": {"1964317807": {"importName": "CensusACS5YearSurvey_SubjectTables_S0101", - "measurementMethod": "CensusACS5yrSurveySubjectTable", - "provenanceUrl": "https://data.census.gov/table?q=S0101:+Age+and+Sex&tid=ACSST1Y2022.S0101"}, - "10983471": {"importName": "CensusACS5YearSurvey_SubjectTables_S2601A", - "measurementMethod": "CensusACS5yrSurveySubjectTable", - "provenanceUrl": "https://data.census.gov/cedsci/table?q=S2601A&tid=ACSST5Y2019.S2601A"}, - "196790193": {"importName": "CensusACS5YearSurvey_SubjectTables_S2602", - "measurementMethod": "CensusACS5yrSurveySubjectTable", - "provenanceUrl": "https://data.census.gov/cedsci/table?q=S2602&tid=ACSST5Y2019.S2602"}, - //... -}} -``` -{% endtab %} - -{% endtabs %} - -
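To get back the flat `{date: value}` mapping that `get_stat_series` returned, select one facet and collapse its observations. A sketch over sample data trimmed from facet 1145703171 in the response above:

```python
# Collapse one facet's observations into a V1-style {date: value} dict.
ordered_facets = [
    {"facetId": "1145703171",
     "observations": [{"date": "2011", "value": 1421287.0},
                      {"date": "2012", "value": 1431252.0},
                      {"date": "2013", "value": 1439862.0}]},
    {"facetId": "3999249536",
     "observations": [{"date": "1970", "value": 937034.0}]},
]

facet = next(f for f in ordered_facets if f["facetId"] == "1145703171")
series = {obs["date"]: obs["value"] for obs in facet["observations"]}
print(series)  # {'2011': 1421287.0, '2012': 1431252.0, '2013': 1439862.0}
```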
-
-{: #example-5}
-{: .no_toc}
-#### Example 5: Get all values of a single statistical variable for a single place, selecting the facet to return
-
-This example gets the nominal GDP for Italy, filtering for facets that report the results in U.S. dollars. In V1, this is done directly with the `unit` parameter. In V2, we use the facet's domain (here, `worldbank.org`) to select the same facet.
-
- -{% tabs request %} - -{% tab request V1 request %} - -```python -datacommons.get_stat_series("country/ITA", "Amount_EconomicActivity_GrossDomesticProduction_Nominal", unit="USDollar") -``` -{% endtab %} - -{% tab request V2 request %} - -```python -client.observation.fetch_observations_by_entity_dcid(date="all", entity_dcids="country/ITA",variable_dcids="Amount_EconomicActivity_GrossDomesticProduction_Nominal", filter_facet_domains="worldbank.org") -``` -{% endtab %} - -{% endtabs %} - -
- -
-
-{% tabs response %}
-
-{% tab response V1 response %}
-
-```python
-{'2003': 1582930016538.82,
- '2002': 1281746271196.04,
- '1961': 46649487320.4225,
- '1986': 641862313287.44,
- '1974': 200024444775.231,
- '2000': 1149661363439.38,
- '2015': 1845428048839.1,
- '2001': 1172041488805.87,
- '1966': 76622444787.3696,
- '1971': 124959712858.92598,
- '1999': 1255004736463.98,
- //...
- '1979': 394584507107.9,
- '2016': 1887111188176.93,
- '1981': 431695533980.583,
- '2024': 2372774547793.12,
- '1985': 453259761687.456,
- '1975': 228220643534.994,
- '1960': 42012422612.3955,
- '1991': 1249092439519.28}
-```
-{% endtab %}
-
-{% tab response V2 response %}
-
-```python
-{'byVariable': {'Amount_EconomicActivity_GrossDomesticProduction_Nominal': {'byEntity': {'country/ITA': {'orderedFacets': [{'earliestDate': '1960',
- 'facetId': '3496587042',
- 'latestDate': '2024',
- 'obsCount': 65,
- 'observations': [{'date': '1960', 'value': 42012422612.3955},
- {'date': '1961', 'value': 46649487320.4225},
- {'date': '1962', 'value': 52413872628.0045},
- {'date': '1963', 'value': 60035924617.9277},
- {'date': '1964', 'value': 65720771779.4768},
- {'date': '1965', 'value': 70717012186.1774},
- {'date': '1966', 'value': 76622444787.3696},
- {'date': '1967', 'value': 84401995573.2456},
- {'date': '1968', 'value': 91485448147.84},
- {'date': '1969', 'value': 100996667239.335},
- //...
- {'date': '2022', 'value': 2104067630319.46},
- {'date': '2023', 'value': 2304605139862.79},
- {'date': '2024', 'value': 2372774547793.12}]}]}}}},
- 'facets': {'3496587042': {'importName': 'WorldDevelopmentIndicators',
- 'observationPeriod': 'P1Y',
- 'provenanceUrl': 'https://datacatalog.worldbank.org/dataset/world-development-indicators/',
- 'unit': 'USDollar'}}}
-```
-{% endtab %}
-
-{% endtabs %}
-
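If you are not sure which domain to filter on, you can first fetch without a filter and scan the top-level `facets` metadata for the unit you need. A sketch over metadata trimmed from the response above:

```python
# Find facet IDs whose metadata reports values in US dollars.
facets_meta = {"3496587042": {
    "importName": "WorldDevelopmentIndicators",
    "observationPeriod": "P1Y",
    "provenanceUrl": "https://datacatalog.worldbank.org/dataset/world-development-indicators/",
    "unit": "USDollar"}}

usd_facets = [fid for fid, meta in facets_meta.items()
              if meta.get("unit") == "USDollar"]
print(usd_facets)  # ['3496587042']
```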
-
-{: #example-6}
-{: .no_toc}
-#### Example 6: Get all values of a single statistical variable for multiple places
-
-This example retrieves the number of people with doctoral degrees in the states of Minnesota and Wisconsin for all years available. Note that the `get_stat_all` method behaves more like V2 and returns data for all facets (in this case, there is only one), as well as metadata for all facets.
-
- -{% tabs request %} - -{% tab request V1 request %} - -```python -datacommons.get_stat_all(["geoId/27","geoId/55"], ["Count_Person_EducationalAttainmentDoctorateDegree"]) -``` -{% endtab %} - -{% tab request V2 request %} - -```python -client.observation.fetch_observations_by_entity_dcid(date="all", variable_dcids="Count_Person_EducationalAttainmentDoctorateDegree", entity_dcids=["geoId/27","geoId/55"]) -``` -{% endtab %} - -{% endtabs %} - -
- -
- -{% tabs response %} - -{% tab response V1 response %} - -```python -{"geoId/27": {"Count_Person_EducationalAttainmentDoctorateDegree": {"sourceSeries": [ - {"val": - {"2016": 50039, - "2017": 52737, - "2015": 47323, - "2013": 42511, - "2012": 40961, - "2022": 60300, - "2023": 63794, - "2014": 44713, - "2021": 58452, - "2019": 55185, - "2020": 56170, - "2018": 54303}, - "measurementMethod": "CensusACS5yrSurvey", - "importName": "CensusACS5YearSurvey", - "provenanceDomain": "census.gov", - "provenanceUrl": "https://www.census.gov/programs-surveys/acs/data/data-via-ftp.html"}]}}, - "geoId/55": {"Count_Person_EducationalAttainmentDoctorateDegree": {"sourceSeries": [ - {"val": - {"2020": 49385, - "2017": 43737, - "2022": 53667, - "2014": 40133, - "2021": 52306, - "2023": 55286, - "2016": 42590, - "2012": 38052, - "2013": 38711, - "2019": 47496, - "2018": 46071, - "2015": 41387}, - "measurementMethod": "CensusACS5yrSurvey", - "importName": "CensusACS5YearSurvey", - "provenanceDomain": "census.gov", - "provenanceUrl": "https://www.census.gov/programs-surveys/acs/data/data-via-ftp.html"}]}}} -``` -{% endtab %} - -{% tab response V2 response %} - -```python -{"byVariable": {"Count_Person_EducationalAttainmentDoctorateDegree": {"byEntity": { - "geoId/55": {"orderedFacets": [{"earliestDate": "2012", - "facetId": "1145703171", - "latestDate": "2023", - "obsCount": 12, - "observations": [ - {"date": "2012", "value": 38052.0}, - {"date": "2013", "value": 38711.0}, - {"date": "2014", "value": 40133.0}, - {"date": "2015", "value": 41387.0}, - {"date": "2016", "value": 42590.0}, - {"date": "2017", "value": 43737.0}, - {"date": "2018", "value": 46071.0}, - {"date": "2019", "value": 47496.0}, - {"date": "2020", "value": 49385.0}, - {"date": "2021", "value": 52306.0}, - {"date": "2022", "value": 53667.0}, - {"date": "2023", "value": 55286.0}]}]}, - "geoId/27": {"orderedFacets": [{"earliestDate": "2012", - "facetId": "1145703171", - "latestDate": "2023", - "obsCount": 12, - 
"observations": [ - {"date": "2012", "value": 40961.0}, - {"date": "2013", "value": 42511.0}, - {"date": "2014", "value": 44713.0}, - {"date": "2015", "value": 47323.0}, - {"date": "2016", "value": 50039.0}, - {"date": "2017", "value": 52737.0}, - {"date": "2018", "value": 54303.0}, - {"date": "2019", "value": 55185.0}, - {"date": "2020", "value": 56170.0}, - {"date": "2021", "value": 58452.0}, - {"date": "2022", "value": 60300.0}, - {"date": "2023", "value": 63794.0}]}]}}}}, - "facets": {"1145703171": {"importName": "CensusACS5YearSurvey", - "measurementMethod": "CensusACS5yrSurvey", - "provenanceUrl": "https://www.census.gov/programs-surveys/acs/data/data-via-ftp.html"}}} -``` -{% endtab %} - -{% endtabs %} - -
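To compare the two states, you can flatten the V2 `byEntity` structure into one series per place, similar to the per-place data that `get_stat_all` exposed. A sketch over sample data trimmed from the response above:

```python
# Flatten byEntity into {entity: {date: value}} across all facets.
by_entity = {
    "geoId/55": {"orderedFacets": [{"facetId": "1145703171",
        "observations": [{"date": "2022", "value": 53667.0},
                         {"date": "2023", "value": 55286.0}]}]},
    "geoId/27": {"orderedFacets": [{"facetId": "1145703171",
        "observations": [{"date": "2022", "value": 60300.0},
                         {"date": "2023", "value": 63794.0}]}]},
}

series_by_place = {
    place: {obs["date"]: obs["value"]
            for facet in data["orderedFacets"]
            for obs in facet["observations"]}
    for place, data in by_entity.items()}
print(series_by_place["geoId/27"]["2023"])  # 63794.0
```

If multiple facets report the same date, the later facet in `orderedFacets` wins in this sketch; filter to a single facet ID first if that matters for your data.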
- -{: #example-7} -{: .no_toc} -#### Example 7: Get all values of multiple statistical variables for a single place - -This example retrieves the total population as well as the male population of the state of Arkansas for all available years. - -
- -{% tabs request %} - -{% tab request V1 request %} - -```python -datacommons.get_stat_all(["geoId/05"], ["Count_Person", "Count_Person_Male"]) -``` -{% endtab %} - -{% tab request V2 request %} - -```python -client.observation.fetch_observations_by_entity_dcid(date="all", entity_dcids="geoId/05", variable_dcids=["Count_Person","Count_Person_Male"]) -``` -{% endtab %} - -{% endtabs %} - -
- -
- -{% tabs response %} - -{% tab response V1 response %} - -```python -{"geoId/05": {"Count_Person": {"sourceSeries": [{"val": { - "2019": 3020985, - "1936": 1892000, - "2013": 2960459, - "1980": 2286435, - "1904": 1419000, - "2023": 3069463, - "2010": 2921998, - "1946": 1797000, - "1967": 1901000, - "1902": 1360000, - "1962": 1853000, - "1993": 2423743, - "1991": 2370666, - "1986": 2331984, - "2009": 2896843, - "2014": 2968759, - "1933": 1854000, - "1954": 1734000, - "1921": 1769000, - "1929": 1852000, - "1956": 1704000, - "1949": 1844000, - //... - "measurementMethod": "CensusPEPSurvey", - "observationPeriod": "P1Y", - "importName": "USCensusPEP_Annual_Population", - "provenanceDomain": "census.gov", - "provenanceUrl": "https://www.census.gov/programs-surveys/popest.html"}, - {"val": { - "2022": 3018669, - "2018": 2990671, - "2020": 3011873, - "2016": 2968472, - "2013": 2933369, - "2019": 2999370, - "2021": 3006309, - "2015": 2958208, - "2011": 2895928, - "2023": 3032651, - "2014": 2947036, - "2012": 2916372, - "2017": 2977944}, - "measurementMethod": "CensusACS5yrSurvey", - "importName": "CensusACS5YearSurvey", - "provenanceDomain": "census.gov", - "provenanceUrl": "https://www.census.gov/programs-surveys/acs/data/data-via-ftp.html"}, - {"val": {"2000": 2673400, "2020": 3011524, "2010": 2915918}, - "measurementMethod": "USDecennialCensus", - "importName": "USDecennialCensus_RedistrictingRelease", - "provenanceDomain": "census.gov", - "provenanceUrl": "https://www.census.gov/programs-surveys/decennial-census/about/rdo/summary-files.html"}, - //... 
- "Count_Person_Male": {"sourceSeries": [{"val": { - "2015": 1451913, - "2021": 1483520, - "2020": 1478511, - "2023": 1495958, - "2016": 1456694, - "2022": 1491622, - "2019": 1471760, - "2013": 1439862, - "2018": 1468412, - "2014": 1447235, - "2011": 1421287, - "2012": 1431252, - "2017": 1461651}, - "measurementMethod": "CensusACS5yrSurvey", - "importName": "CensusACS5YearSurvey", - "provenanceDomain": "census.gov", - "provenanceUrl": "https://www.census.gov/programs-surveys/acs/data/data-via-ftp.html"}, - {"val": { - "1975": 1047112, - "1995": 1228626, - "2023": 1513837, - "1991": 1150369, - "2019": 1482909, - "1990": 1136163, - "1998": 1277869, - "1989": 1130916, - "2011": 1444411, - "2021": 1495032, - "2013": 1453888, - "1992": 1167203, - "2004": 1346638, - "2022": 1503494, - "1982": 1107142, - "1978": 1084374, - //... - "measurementMethod": "CensusPEPSurvey_PartialAggregate", - "observationPeriod": "P1Y", - "importName": "USCensusPEP_Sex", - "provenanceDomain": "census.gov", - "isDcAggregate": True, - "provenanceUrl": "https://www.census.gov/programs-surveys/popest.html"}, - {"val": {"2013": 1439862, - "2018": 1468412, - "2011": 1421287, - "2015": 1451913, - "2020": 1478511, - "2017": 1461651, - "2021": 1483520, - "2019": 1471760, - "2014": 1447235, - "2012": 1431252, - "2010": 1408945, - "2022": 1491622, - "2023": 1495958, - "2016": 1456694}, - "measurementMethod": "CensusACS5yrSurveySubjectTable", - "importName": "CensusACS5YearSurvey_SubjectTables_S0101", - "provenanceDomain": "census.gov", - "provenanceUrl": "https://data.census.gov/table?q=S0101:+Age+and+Sex&tid=ACSST1Y2022.S0101"}, - //... 
-]}}} -``` -{% endtab %} - -{% tab response V2 response %} - -```python -{"byVariable": {"Count_Person": {"byEntity": { - "geoId/05": {"orderedFacets": [ - {"earliestDate": "1900", - "facetId": "2176550201", - "latestDate": "2024", - "obsCount": 125, - "observations": [{"date": "1900", "value": 1314000.0}, - {"date": "1901", "value": 1341000.0}, - {"date": "1902", "value": 1360000.0}, - {"date": "1903", "value": 1384000.0}, - {"date": "1904", "value": 1419000.0}, - {"date": "1905", "value": 1447000.0}, - {"date": "1906", "value": 1465000.0}, - {"date": "1907", "value": 1484000.0}, - //... - {"earliestDate": "2011", - "facetId": "1145703171", - "latestDate": "2023", - "obsCount": 13, - "observations": [{"date": "2011", "value": 2895928.0}, - {"date": "2012", "value": 2916372.0}, - {"date": "2013", "value": 2933369.0}, - {"date": "2014", "value": 2947036.0}, - {"date": "2015", "value": 2958208.0}, - {"date": "2016", "value": 2968472.0}, - {"date": "2017", "value": 2977944.0}, - {"date": "2018", "value": 2990671.0}, - {"date": "2019", "value": 2999370.0}, - {"date": "2020", "value": 3011873.0}, - {"date": "2021", "value": 3006309.0}, - {"date": "2022", "value": 3018669.0}, - {"date": "2023", "value": 3032651.0}]}, - {"earliestDate": "2000", - "facetId": "1541763368", - "latestDate": "2020", - "obsCount": 3, - "observations": [{"date": "2000", "value": 2673400.0}, - {"date": "2010", "value": 2915918.0}, - {"date": "2020", "value": 3011524.0}]}, - //... 
- "Count_Person_Male": {"byEntity": { - "geoId/05": {"orderedFacets": [{"earliestDate": "2011", - "facetId": "1145703171", - "latestDate": "2023", - "obsCount": 13, - "observations": [{"date": "2011", "value": 1421287.0}, - {"date": "2012", "value": 1431252.0}, - {"date": "2013", "value": 1439862.0}, - {"date": "2014", "value": 1447235.0}, - {"date": "2015", "value": 1451913.0}, - {"date": "2016", "value": 1456694.0}, - {"date": "2017", "value": 1461651.0}, - {"date": "2018", "value": 1468412.0}, - {"date": "2019", "value": 1471760.0}, - {"date": "2020", "value": 1478511.0}, - {"date": "2021", "value": 1483520.0}, - {"date": "2022", "value": 1491622.0}, - {"date": "2023", "value": 1495958.0}]}, - {"earliestDate": "1970", - "facetId": "3999249536", - "latestDate": "2024", - "obsCount": 55, - "observations": [{"date": "1970", "value": 937034.0}, - {"date": "1971", "value": 956802.0}, - {"date": "1972", "value": 979822.0}, - {"date": "1973", "value": 999264.0}, - {"date": "1974", "value": 1019259.0}, - {"date": "1975", "value": 1047112.0}, - {"date": "1976", "value": 1051166.0}, - {"date": "1977", "value": 1069003.0}, - {"date": "1978", "value": 1084374.0}, - {"date": "1979", "value": 1097123.0}, - {"date": "1980", "value": 1105739.0}, - //... - {"earliestDate": "2010", - "facetId": "1964317807", - "latestDate": "2023", - "obsCount": 14, - "observations": [{"date": "2010", "value": 1408945.0}, - {"date": "2011", "value": 1421287.0}, - {"date": "2012", "value": 1431252.0}, - {"date": "2013", "value": 1439862.0}, - {"date": "2014", "value": 1447235.0}, - {"date": "2015", "value": 1451913.0}, - {"date": "2016", "value": 1456694.0}, - {"date": "2017", "value": 1461651.0}, - //... 
- {"earliestDate": "2010", - "facetId": "10983471", - "latestDate": "2023", - "obsCount": 14, - "observations": [{"date": "2010", "value": 1407615.16}, - {"date": "2011", "value": 1421900.648}, - {"date": "2012", "value": 1431938.652}, - {"date": "2013", "value": 1440284.179}, - {"date": "2014", "value": 1446994.676}, - {"date": "2015", "value": 1452480.128}, - {"date": "2016", "value": 1457519.752}, - {"date": "2017", "value": 1462170.504}, - //... - {"earliestDate": "2017", - "facetId": "196790193", - "latestDate": "2023", - "obsCount": 7, - "observations": [{"date": "2017", "value": 1462170.504}, - {"date": "2018", "value": 1468419.461}, - {"date": "2019", "value": 1472690.67}, - {"date": "2020", "value": 1478829.643}, - {"date": "2021", "value": 1482110.337}, - {"date": "2022", "value": 1491222.486}, - {"date": "2023", "value": 1495096.943}]}, - //... - "facets": {"10983471": {"importName": "CensusACS5YearSurvey_SubjectTables_S2601A", - "measurementMethod": "CensusACS5yrSurveySubjectTable", - "provenanceUrl": "https://data.census.gov/cedsci/table?q=S2601A&tid=ACSST5Y2019.S2601A"}, - "2176550201": {"importName": "USCensusPEP_Annual_Population", - "measurementMethod": "CensusPEPSurvey", - "observationPeriod": "P1Y", - "provenanceUrl": "https://www.census.gov/programs-surveys/popest.html"}, - "196790193": {"importName": "CensusACS5YearSurvey_SubjectTables_S2602", - "measurementMethod": "CensusACS5yrSurveySubjectTable", - "provenanceUrl": "https://data.census.gov/cedsci/table?q=S2602&tid=ACSST5Y2019.S2602"}, - //... -}} -``` -{% endtab %} - -{% endtabs %} - -
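The nested V2 response shown above can be awkward to navigate by hand. As a minimal, hypothetical sketch (the helper name is ours, and we assume the response has already been converted to a plain dictionary, e.g. with `to_dict()`), you could pull the most recent value per variable and entity like this:

```python
def latest_values(resp: dict) -> dict:
    """Map (variable, entity) -> value of the most recent observation
    in the first (highest-ranked) facet of a V2 observation response.

    Assumes `resp` is a plain dict with the byVariable/byEntity/
    orderedFacets structure shown above.
    """
    out = {}
    for var, by_var in resp["byVariable"].items():
        for entity, by_ent in by_var["byEntity"].items():
            # orderedFacets is ranked; take the first (preferred) facet.
            facet = by_ent["orderedFacets"][0]
            # Dates are ISO-style strings, so lexicographic max works
            # for same-granularity series.
            latest = max(facet["observations"], key=lambda o: o["date"])
            out[(var, entity)] = latest["value"]
    return out
```

This is only one way to flatten the structure; for DataFrame-style access, the `observations_dataframe` method shown later may be more convenient.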
-
-{: .no_toc}
-#### Example 8: Get all outgoing property labels for a single node

-This example retrieves the outwardly directed property labels (but not the values) of Wisconsin's eighth congressional district.
-
-
- -{% tabs request %} - -{% tab request V1 request %} - -```python -datacommons.get_property_labels(["geoId/5508"]) -``` -{% endtab %} - -{% tab request V2 request %} - -```python -client.node.fetch_property_labels(node_dcids="geoId/5508") -``` -{% endtab %} - -{% endtabs %} - -
- -
-
-{% tabs response %}
-
-{% tab response V1 response %}
-
-```python
-{"geoId/5508": [
-  "containedInPlace",
-  "geoId",
-  "geoJsonCoordinates",
-  "geoOverlaps",
-  "kmlCoordinates",
-  "landArea",
-  "latitude",
-  "longitude",
-  "name",
-  "provenance",
-  "typeOf",
-  "usCensusGeoId",
-  "waterArea"]}
-```
-{% endtab %}
-
-{% tab response V2 response %}
-
-```python
-{"data": {"geoId/5508": {"properties": [
-  "containedInPlace",
-  "geoId",
-  "geoJsonCoordinates",
-  "geoOverlaps",
-  "kmlCoordinates",
-  "landArea",
-  "latitude",
-  "longitude",
-  "name",
-  "provenance",
-  "typeOf",
-  "usCensusGeoId",
-  "waterArea"]}}}
-```
-{% endtab %}
-
-{% endtabs %}
-
-
-
-{: .no_toc}
-#### Example 9: Get the value(s) of a single outgoing property of a node (place)

-This example retrieves the common names of the country of Côte d'Ivoire.
-
-
- -{% tabs request %} - -{% tab request V1 request %} - -```python -datacommons.get_property_values(["country/CIV"],"name") -``` -{% endtab %} - -{% tab request V2 request %} - -```python -client.node.fetch_property_values(node_dcids="country/CIV", properties="name") -``` -{% endtab %} - -{% endtabs %} - -
- -
-
-{% tabs response %}
-
-{% tab response V1 response %}
-
-```python
-{"country/CIV": ["Côte d'Ivoire", "Ivory Coast"]}
-```
-{% endtab %}
-
-{% tab response V2 response %}
-
-```python
-{"data": {"country/CIV": {"arcs": {"name": {"nodes": [
-  {"provenanceId": "dc/base/WikidataOtherIdGeos",
-   "value": "Côte d'Ivoire"},
-  {"provenanceId": "dc/base/WikidataOtherIdGeos",
-   "value": "Ivory Coast"}]}}}}}
-```
-{% endtab %}
-
-{% endtabs %}
-
-
-
-{: .no_toc}
-#### Example 10: Retrieve the values of a single outgoing property for multiple nodes (places)

-This example gets the addresses of Stuyvesant High School in New York and Gunn High School in California.
-
-
- -{% tabs request %} - -{% tab request V1 request %} - -```python -datacommons.get_property_values(["nces/360007702877","nces/062961004587"],"address") -``` -{% endtab %} - -{% tab request V2 request %} - -```python -client.node.fetch_property_values(node_dcids=["nces/360007702877","nces/062961004587"], properties="address") -``` -{% endtab %} - -{% endtabs %} - -
- -
- -{% tabs response %} - -{% tab response V1 response %} - -```python -{"nces/360007702877": ["345 Chambers St New York NY 10282-1099"], - "nces/062961004587": ["780 Arastradero Rd. Palo Alto 94306-3827"]} -``` -{% endtab %} - -{% tab response V2 response %} - -```python -{"data": {"nces/360007702877": {"arcs": {"address": {"nodes": [{"provenanceId": "dc/base/NCES_PublicSchool", - "value": "345 Chambers St New York NY 10282-1099"}]}}}, - "nces/062961004587": {"arcs": {"address": {"nodes": [{"provenanceId": "dc/base/NCES_PublicSchool", - "value": "780 Arastradero Rd. Palo Alto 94306-3827"}]}}}}} -``` -{% endtab %} - -{% endtabs %} - -
- -### datacommons_pandas package examples - -The following examples show equivalent API requests and responses using the V1 `datacommons_pandas` package and V2. - -{: .no_toc} -#### Example 1: Get all values of a single statistical variable for a single place - -This example is the same as [example 4](#example-4) above, but returns a Pandas DataFrame object. Note that V1 selects a single facet, while V2 returns all facets. To restrict the V2 method to a single facet, you could use the `property_filters` parameter. - -
- -{% tabs request %} - -{% tab request V1 request %} - -```python -datacommons_pandas.build_time_series("geoId/05", "Count_Person_Male") -``` -{% endtab %} - -{% tab request V2 request %} - -```python -client.observations_dataframe(variable_dcids="Count_Person_Male", date="all", entity_dcids="geoId/05") -``` -{% endtab %} - -{% endtabs %} - -
- -
- -{% tabs response %} - -{% tab response V1 response %} - -```python - 0 -2023 1495958 -2012 1431252 -2022 1491622 -2018 1468412 -2014 1447235 -2020 1478511 -2011 1421287 -2016 1456694 -2017 1461651 -2015 1451913 -2019 1471760 -2021 1483520 -2013 1439862 - -dtype: int64 -``` -{% endtab %} - -{% tab response V2 response %} - -```python - date entity entity_name variable variable_name facetId importName measurementMethod observationPeriod provenanceUrl unit value -0 2011 geoId/05 Arkansas Count_Person_Male Male population 1145703171 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1421287.0 -1 2012 geoId/05 Arkansas Count_Person_Male Male population 1145703171 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1431252.0 -2 2013 geoId/05 Arkansas Count_Person_Male Male population 1145703171 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1439862.0 -3 2014 geoId/05 Arkansas Count_Person_Male Male population 1145703171 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1447235.0 -4 2015 geoId/05 Arkansas Count_Person_Male Male population 1145703171 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1451913.0 -... ... ... ... ... ... ... ... ... ... ... ... ... -162 2015 geoId/05 Arkansas Count_Person_Male Male population 1226172227 CensusACS1YearSurvey CensusACS1yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1463576.0 -163 2016 geoId/05 Arkansas Count_Person_Male Male population 1226172227 CensusACS1YearSurvey CensusACS1yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1468782.0 -164 2017 geoId/05 Arkansas Count_Person_Male Male population 1226172227 CensusACS1YearSurvey CensusACS1yrSurvey None https://www.census.gov/programs-surveys/acs/da... 
None 1479682.0 -165 2018 geoId/05 Arkansas Count_Person_Male Male population 1226172227 CensusACS1YearSurvey CensusACS1yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1476680.0 -166 2019 geoId/05 Arkansas Count_Person_Male Male population 1226172227 CensusACS1YearSurvey CensusACS1yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1474705.0 -167 rows × 12 columns -``` -{% endtab %} - -{% endtabs %} - -
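As a sketch of the facet filtering mentioned above — the `importName` value used here is an assumption chosen for illustration; any facet property shown in the response (such as `measurementMethod` or `observationPeriod`) should work the same way:

```python
def fetch_single_facet_df(client):
    """Fetch all Count_Person_Male observations for Arkansas,
    keeping only the ACS 5-year survey facet.

    `client` is assumed to be an initialized DataCommonsClient.
    """
    return client.observations_dataframe(
        variable_dcids="Count_Person_Male",
        date="all",
        entity_dcids="geoId/05",
        # property_filters restricts the result to facets whose
        # properties match the given values.
        property_filters={"importName": ["CensusACS5YearSurvey"]},
    )
```

The returned DataFrame then contains rows from a single facet, comparable to the V1 behavior.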
-
-{: .no_toc}
-#### Example 2: Get all values of a single statistical variable for a single place, selecting the facet to return

-This example is the same as [example 5](#example-5) above, but returns a Pandas DataFrame object.
-
-
-{% tabs request %} - -{% tab request V1 request %} - -```python -datacommons_pandas.build_time_series("country/ITA", "Amount_EconomicActivity_GrossDomesticProduction_Nominal", unit="USDollar") -``` -{% endtab %} - -{% tab request V2 request %} - -```python -client.observations_dataframe(variable_dcids="Amount_EconomicActivity_GrossDomesticProduction_Nominal", date="all", entity_dcids="country/ITA", property_filters={"unit": ["USDollar"]}) -``` -{% endtab %} - -{% endtabs %} - -
- -
- -{% tabs response %} - -{% tab response V1 response %} - -```python - 0 -1988 8.936639e+11 -1990 1.183945e+12 -1970 1.136567e+11 -1966 7.662244e+10 -1992 1.323204e+12 -... ... -2007 2.222524e+12 -2022 2.104068e+12 -2021 2.179208e+12 -1977 2.581900e+11 -2020 1.907481e+12 -65 rows × 1 columns - - -dtype: float64 -``` -{% endtab %} - -{% tab response V2 response %} - -```python - date entity entity_name variable variable_name facetId importName measurementMethod observationPeriod provenanceUrl unit value -0 1960 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 4.201242e+10 -1 1961 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 4.664949e+10 -2 1962 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 5.241387e+10 -3 1963 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 6.003592e+10 -4 1964 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 6.572077e+10 -... ... ... ... ... ... ... ... ... ... ... ... ... -60 2020 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 1.907481e+12 -61 2021 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... 
Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 2.179208e+12 -62 2022 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 2.104068e+12 -63 2023 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 2.304605e+12 -64 2024 country/ITA Italy Amount_EconomicActivity_GrossDomesticProductio... Nominal gross domestic product 3496587042 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... USDollar 2.372775e+12 -65 rows × 12 columns -``` -{% endtab %} - -{% endtabs %} - -
- -{: .no_toc} -#### Example 3: Get all values of a single statistical variable for multiple places - -This example compares the historic populations of Sudan and South Sudan. Note that V1 selects a single facet, while V2 returns all facets. To restrict the V2 method to a single facet, you could use the `property_filters` parameter. - -
- -{% tabs request %} - -{% tab request V1 request %} - -```python -datacommons_pandas.build_time_series_dataframe(["country/SSD","country/SDN"], "Count_Person") -``` -{% endtab %} - -{% tab request V2 request %} - -```python -client.observations_dataframe(variable_dcids="Count_Person", date="all", entity_dcids=["country/SSD", "country/SDN"]) -``` -{% endtab %} - -{% endtabs %} - -
- -
- -{% tabs response %} - -{% tab response V1 response %} - -```python - 1960 1961 1962 1963 1964 1965 1966 1967 1968 1969 ... 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 -place -country/SDN 8364489 8634941 8919028 9218077 9531109 9858030 10197578 10550597 10917999 11298936 ... 40024431 41259892 42714306 44230596 45548175 46789231 48066924 49383346 50042791 50448963 -country/SSD 2931559 2976724 3024308 3072669 3129918 3189835 3236423 3277648 3321528 3365533 ... 11107561 10830102 10259154 10122977 10423384 10698467 10865780 11021177 11483374 11943408 -2 rows × 65 columns -``` -{% endtab %} - -{% tab response V2 response %} - -```python - date entity entity_name variable variable_name facetId importName measurementMethod observationPeriod provenanceUrl unit value -0 1960 country/SDN Sudan Count_Person Total population 3981252704 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... None 8364489.0 -1 1961 country/SDN Sudan Count_Person Total population 3981252704 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... None 8634941.0 -2 1962 country/SDN Sudan Count_Person Total population 3981252704 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... None 8919028.0 -3 1963 country/SDN Sudan Count_Person Total population 3981252704 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... None 9218077.0 -4 1964 country/SDN Sudan Count_Person Total population 3981252704 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... None 9531109.0 -... ... ... ... ... ... ... ... ... ... ... ... ... -167 2016 country/SSD South Sudan Count_Person Total population 473499523 Subnational_Demographics_Stats WorldBankSubnationalPopulationEstimate P1Y https://databank.worldbank.org/source/subnatio... 
None 12231000.0 -168 2024 country/SSD South Sudan Count_Person Total population 1456184638 WikipediaStatsData Wikipedia None https://www.wikipedia.org None 12703714.0 -169 2008 country/SSD South Sudan Count_Person Total population 2458695583 WikidataPopulation WikidataPopulation None https://www.wikidata.org/wiki/Wikidata:Main_Page None 8260490.0 -170 2015 country/SSD South Sudan Count_Person Total population 2458695583 WikidataPopulation WikidataPopulation None https://www.wikidata.org/wiki/Wikidata:Main_Page None 12340000.0 -171 2017 country/SSD South Sudan Count_Person Total population 2458695583 WikidataPopulation WikidataPopulation None https://www.wikidata.org/wiki/Wikidata:Main_Page None 12575714.0 -172 rows × 12 columns -``` -{% endtab %} - -{% endtabs %} - -
- -{: .no_toc} -#### Example 4: Get all values of multiple statistical variables for multiple places - -This example compares the current populations, median ages, and unemployment rates of the US, California, and Santa Clara County. To restrict the V2 method to a single facet, you could use the `property_filters` parameter. - -
- -{% tabs request %} - -{% tab request V1 request %} - -```python -datacommons_pandas.build_multivariate_dataframe(["country/USA", "geoId/06", "geoId/06085"],["Count_Person", "Median_Age_Person", "UnemploymentRate_Person"]) -``` -{% endtab %} - -{% tab request V2 request %} - -```python -client.observations_dataframe(variable_dcids=["Count_Person", "Median_Age_Person", "UnemploymentRate_Person"], date="latest", entity_dcids=["country/USA", "geoId/06", "geoId/06085"]) -``` -{% endtab %} - -{% endtabs %} - -
- -
- -{% tabs response %} - -{% tab response V1 response %} - -```python - Median_Age_Person Count_Person UnemploymentRate_Person -place -country/USA 38.7 332387540 4.3 -geoId/06 37.6 39242785 5.5 -geoId/06085 37.9 1903297 NaN - -``` -{% endtab %} - -{% tab response V2 response %} - -```python - date entity entity_name variable variable_name facetId importName measurementMethod observationPeriod provenanceUrl unit value -0 2024 geoId/06085 Santa Clara County Count_Person Total population 2176550201 USCensusPEP_Annual_Population CensusPEPSurvey P1Y https://www.census.gov/programs-surveys/popest... None 1926325.0 -1 2023 geoId/06085 Santa Clara County Count_Person Total population 1145703171 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1903297.0 -2 2020 geoId/06085 Santa Clara County Count_Person Total population 1541763368 USDecennialCensus_RedistrictingRelease USDecennialCensus None https://www.census.gov/programs-surveys/decenn... None 1936259.0 -3 2024 geoId/06085 Santa Clara County Count_Person Total population 2390551605 USCensusPEP_AgeSexRaceHispanicOrigin CensusPEPSurvey_Race2000Onwards P1Y https://www2.census.gov/programs-surveys/popes... None 1926325.0 -4 2023 geoId/06085 Santa Clara County Count_Person Total population 1964317807 CensusACS5YearSurvey_SubjectTables_S0101 CensusACS5yrSurveySubjectTable None https://data.census.gov/table?q=S0101:+Age+and... None 1903297.0 -5 2022 geoId/06085 Santa Clara County Count_Person Total population 2564251937 CDC_Social_Vulnerability_Index None None https://www.atsdr.cdc.gov/place-health/php/svi... None 1916831.0 -6 2020 geoId/06085 Santa Clara County Count_Person Total population 2825511676 CDC_Mortality_UnderlyingCause None None https://wonder.cdc.gov/ucd-icd10.html None 1907105.0 -7 2019 geoId/06085 Santa Clara County Count_Person Total population 2517965213 CensusPEP CensusPEPSurvey None https://www.census.gov/programs-surveys/popest... 
None 1927852.0 -8 2019 geoId/06085 Santa Clara County Count_Person Total population 1226172227 CensusACS1YearSurvey CensusACS1yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 1927852.0 -9 2024 country/USA United States of America Count_Person Total population 2176550201 USCensusPEP_Annual_Population CensusPEPSurvey P1Y https://www.census.gov/programs-surveys/popest... None 340110988.0 -10 2023 country/USA United States of America Count_Person Total population 2645850372 CensusACS5YearSurvey_AggCountry CensusACS5yrSurvey None https://www.census.gov/ None 335642425.0 -11 2023 country/USA United States of America Count_Person Total population 1145703171 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 332387540.0 -12 2020 country/USA United States of America Count_Person Total population 1541763368 USDecennialCensus_RedistrictingRelease USDecennialCensus None https://www.census.gov/programs-surveys/decenn... None 331449281.0 -13 2024 country/USA United States of America Count_Person Total population 3981252704 WorldDevelopmentIndicators None P1Y https://datacatalog.worldbank.org/dataset/worl... None 340110988.0 -14 2024 country/USA United States of America Count_Person Total population 2390551605 USCensusPEP_AgeSexRaceHispanicOrigin CensusPEPSurvey_Race2000Onwards P1Y https://www2.census.gov/programs-surveys/popes... None 340110988.0 -15 2023 country/USA United States of America Count_Person Total population 4181918134 OECDRegionalDemography_Population OECDRegionalStatistics P1Y https://data-explorer.oecd.org/vis?fs[0]=Topic... None 334914895.0 -16 2023 country/USA United States of America Count_Person Total population 1964317807 CensusACS5YearSurvey_SubjectTables_S0101 CensusACS5yrSurveySubjectTable None https://data.census.gov/table?q=S0101:+Age+and... 
None 332387540.0 -17 2023 country/USA United States of America Count_Person Total population 10983471 CensusACS5YearSurvey_SubjectTables_S2601A CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2601A&... None 332387540.0 -18 2023 country/USA United States of America Count_Person Total population 196790193 CensusACS5YearSurvey_SubjectTables_S2602 CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2602&t... None 332387540.0 -19 2023 country/USA United States of America Count_Person Total population 217147238 CensusACS5YearSurvey_SubjectTables_S2603 CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2603&t... None 332387540.0 -20 2020 country/USA United States of America Count_Person Total population 2825511676 CDC_Mortality_UnderlyingCause None None https://wonder.cdc.gov/ucd-icd10.html None 329484123.0 -21 2019 country/USA United States of America Count_Person Total population 2517965213 CensusPEP CensusPEPSurvey None https://www.census.gov/programs-surveys/popest... None 328239523.0 -22 2019 country/USA United States of America Count_Person Total population 1226172227 CensusACS1YearSurvey CensusACS1yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 328239523.0 -23 2024 geoId/06 California Count_Person Total population 2176550201 USCensusPEP_Annual_Population CensusPEPSurvey P1Y https://www.census.gov/programs-surveys/popest... None 39431263.0 -24 2023 geoId/06 California Count_Person Total population 1145703171 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 39242785.0 -25 2020 geoId/06 California Count_Person Total population 1541763368 USDecennialCensus_RedistrictingRelease USDecennialCensus None https://www.census.gov/programs-surveys/decenn... 
None 39538223.0 -26 2023 geoId/06 California Count_Person Total population 4181918134 OECDRegionalDemography_Population OECDRegionalStatistics P1Y https://data-explorer.oecd.org/vis?fs[0]=Topic... None 38965193.0 -27 2023 geoId/06 California Count_Person Total population 1964317807 CensusACS5YearSurvey_SubjectTables_S0101 CensusACS5yrSurveySubjectTable None https://data.census.gov/table?q=S0101:+Age+and... None 39242785.0 -28 2023 geoId/06 California Count_Person Total population 10983471 CensusACS5YearSurvey_SubjectTables_S2601A CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2601A&... None 39242785.0 -29 2023 geoId/06 California Count_Person Total population 196790193 CensusACS5YearSurvey_SubjectTables_S2602 CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2602&t... None 39242785.0 -30 2020 geoId/06 California Count_Person Total population 2825511676 CDC_Mortality_UnderlyingCause None None https://wonder.cdc.gov/ucd-icd10.html None 39368078.0 -31 2019 geoId/06 California Count_Person Total population 2517965213 CensusPEP CensusPEPSurvey None https://www.census.gov/programs-surveys/popest... None 39512223.0 -32 2019 geoId/06 California Count_Person Total population 1226172227 CensusACS1YearSurvey CensusACS1yrSurvey None https://www.census.gov/programs-surveys/acs/da... None 39512223.0 -33 2023 geoId/06085 Santa Clara County Median_Age_Person Median age of population 3795540742 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... Year 37.9 -34 2023 geoId/06085 Santa Clara County Median_Age_Person Median age of population 815809675 CensusACS5YearSurvey_SubjectTables_S0101 CensusACS5yrSurveySubjectTable None https://data.census.gov/table?q=S0101:+Age+and... Years 37.9 -35 2023 country/USA United States of America Median_Age_Person Median age of population 3795540742 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... 
Year 38.7 -36 2023 country/USA United States of America Median_Age_Person Median age of population 815809675 CensusACS5YearSurvey_SubjectTables_S0101 CensusACS5yrSurveySubjectTable None https://data.census.gov/table?q=S0101:+Age+and... Years 38.7 -37 2023 country/USA United States of America Median_Age_Person Median age of population 2763329611 CensusACS5YearSurvey_SubjectTables_S2601A CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2601A&... Years 38.7 -38 2023 country/USA United States of America Median_Age_Person Median age of population 3690003977 CensusACS5YearSurvey_SubjectTables_S2602 CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2602&t... Years 38.7 -39 2023 country/USA United States of America Median_Age_Person Median age of population 4219092424 CensusACS5YearSurvey_SubjectTables_S2603 CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2603&t... Years 38.7 -40 2023 geoId/06 California Median_Age_Person Median age of population 3795540742 CensusACS5YearSurvey CensusACS5yrSurvey None https://www.census.gov/programs-surveys/acs/da... Year 37.6 -41 2023 geoId/06 California Median_Age_Person Median age of population 815809675 CensusACS5YearSurvey_SubjectTables_S0101 CensusACS5yrSurveySubjectTable None https://data.census.gov/table?q=S0101:+Age+and... Years 37.6 -42 2023 geoId/06 California Median_Age_Person Median age of population 2763329611 CensusACS5YearSurvey_SubjectTables_S2601A CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2601A&... Years 37.6 -43 2023 geoId/06 California Median_Age_Person Median age of population 3690003977 CensusACS5YearSurvey_SubjectTables_S2602 CensusACS5yrSurveySubjectTable None https://data.census.gov/cedsci/table?q=S2602&t... 
Years 37.6 -44 2025-08 country/USA United States of America UnemploymentRate_Person Unemployment rate 3707913853 BLS_CPS BLSSeasonallyAdjusted P1M https://www.bls.gov/cps/ None 4.3 -45 2025-06 country/USA United States of America UnemploymentRate_Person Unemployment rate 1714978719 BLS_CPS BLSSeasonallyAdjusted P3M https://www.bls.gov/cps/ None 4.2 -46 2025-08 geoId/06 California UnemploymentRate_Person Unemployment rate 324358135 BLS_LAUS BLSSeasonallyUnadjusted P1M https://www.bls.gov/lau/ None 5.8 -47 2024 geoId/06 California UnemploymentRate_Person Unemployment rate 2978659163 BLS_LAUS BLSSeasonallyUnadjusted P1Y https://www.bls.gov/lau/ None 5.3 -48 2025-08 geoId/06 California UnemploymentRate_Person Unemployment rate 1249140336 BLS_LAUS BLSSeasonallyAdjusted P1M https://www.bls.gov/lau/ None 5.5 -49 2025-08 geoId/06085 Santa Clara County UnemploymentRate_Person Unemployment rate 324358135 BLS_LAUS BLSSeasonallyUnadjusted P1M https://www.bls.gov/lau/ None 4.6 -50 2024 geoId/06085 Santa Clara County UnemploymentRate_Person Unemployment rate 2978659163 BLS_LAUS BLSSeasonallyUnadjusted P1Y https://www.bls.gov/lau/ None 4.1 -51 2022 geoId/06085 Santa Clara County UnemploymentRate_Person Unemployment rate 2564251937 CDC_Social_Vulnerability_Index None None https://www.atsdr.cdc.gov/place-health/php/svi... None 4.4 -``` -{% endtab %} - -{% endtabs %} - -
From 9bc3f7ff720bd494ed59b87b64e2a17a898d2ff3 Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Tue, 25 Nov 2025 11:51:45 -0800 Subject: [PATCH 051/121] minor edits --- mcp/run_tools.md | 6 ------ 1 file changed, 6 deletions(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index fd9503706..3642dd92b 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -246,12 +246,6 @@ In addition to the [standard prerequisites](#prerequisites), you will need: {:.no_toc} ### Set the API key environment variable - -Set `GEMINI_API_KEY` (or `GOOGLE_API_KEY`) in your shell/startup script (e.g. `.bashrc`): -
-export GEMINI_API_KEY=YOUR API KEY
-
- {:.no_toc} ### Install From f99cd8183875ac7d1c6b1f234f2d83c8bdc2e65c Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Wed, 3 Dec 2025 15:08:16 -0800 Subject: [PATCH 052/121] Add Windows command --- mcp/run_tools.md | 76 ++++++++++++++++++++++++++++-------------------- 1 file changed, 45 insertions(+), 31 deletions(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index 3642dd92b..dc9b3b5ad 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -13,7 +13,7 @@ This page shows you how to run a local agent and connect to a Data Commons MCP s * TOC {:toc} -We provide specific instructions for the following agents. All may be used to query datacommons.org or a Custom Data Commons instance. +We provide specific instructions for the following agents. All may be used to query datacommons.org or a [Custom Data Commons instance](/custom_dc). - [Gemini CLI extension](#use-the-gemini-cli-extension) - Best for querying datacommons.org @@ -28,14 +28,14 @@ We provide specific instructions for the following agents. All may be used to qu - You can create your own LLM context file - Minimal setup -- A [sample basic agent](#use-the-sample-agent) based on the Google [Agent Development Kit](https://google.github.io/adk-docs/) +- A [sample basic agent](#use-the-sample-agent) based on the Google [Agent Development Kit](https://google.github.io/adk-docs/){: target="_blank"} - Best for interacting with a Web GUI - Can be used to run other LLMs and prompts - Downloads agent code locally - Server may be run remotely - Some additional setup -For an end-to-end tutorial using a server and agent over HTTP, see the sample Data Commons Colab notebook, [Try Data Commons MCP Tools with a Custom Agent](https://github.com/datacommonsorg/agent-toolkit/blob/main/notebooks/datacommons_mcp_tools_with_custom_agent.ipynb). 
+For an end-to-end tutorial using a server and agent over HTTP, see the sample Data Commons Colab notebook, [Try Data Commons MCP Tools with a Custom Agent](https://github.com/datacommonsorg/agent-toolkit/blob/main/notebooks/datacommons_mcp_tools_with_custom_agent.ipynb){: target="_blank"}. For other clients/agents, see the relevant documentation; you should be able to reuse the commands and arguments detailed below. @@ -43,8 +43,8 @@ For other clients/agents, see the relevant documentation; you should be able to These are required for all agents: -- A (free) Data Commons API key. To obtain an API key, go to and request a key for the `api.datacommons.org` domain. -- Install `uv` for managing and installing Python packages; see the instructions at . +- A (free) Data Commons API key. To obtain an API key, go to {: target="_blank"} and request a key for the `api.datacommons.org` domain. +- Install `uv` for managing and installing Python packages; see the instructions at {: target="_blank"}. Other requirements for specific agents are given in their respective sections. @@ -61,31 +61,47 @@ You can set these in the following ways: ### Base Data Commons (datacommons.org) For basic usage against datacommons.org, set the required `DC_API_KEY` in your shell/startup script (e.g. `.bashrc`). -
-export DC_API_KEY="YOUR API KEY"
-
+ +
+
    +
  • Linux or Mac shell
  • +
  • Windows Powershell
  • +
+
+
+
+   export DC_API_KEY="YOUR API KEY"
+
+
+
+   $env:DC_API_KEY="YOUR API KEY"
+
+
+
### Custom Data Commons -To run against a Custom Data Commons instance, you must set additional variables. All supported options are documented in [packages/datacommons-mcp/.env.sample](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/.env.sample). +To run against a Custom Data Commons instance, you must set additional variables. All supported options are documented in [packages/datacommons-mcp/.env.sample](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/.env.sample){: target="_blank"}. The following variables are required: -- export DC_API_KEY="YOUR API KEY" -- `export DC_TYPE="custom"` -- export CUSTOM_DC_URL="YOUR_INSTANCE_URL" +- DC_API_KEY="YOUR API KEY" +- `DC_TYPE="custom"` +- CUSTOM_DC_URL="YOUR_INSTANCE_URL" + +You can also set additional variables as described in the `.env.sample` file. {: #env} {: .no_toc} #### Set variables with an `.env` file: -1. From Github, download the file [`.env.sample`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/.env.sample) to the desired directory. Alternatively, if you plan to run the sample agent, clone the repo . +1. From Github, download the file [`.env.sample`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/.env.sample){: target="_blank"} to the desired directory. Alternatively, if you plan to run the sample agent, clone the repo {: target="_blank"}. 1. From the directory where you saved the sample file, copy it to a new file called `.env`. For example: ```bash cd ~/agent-toolkit/packages/datacommons-mcp cp .env.sample .env ``` -1. Set the following variables, without quotes: +1. Set the following required variables, without quotes: - `DC_API_KEY`: Set to your Data Commons API key - `DC_TYPE`: Set to `custom`. - `CUSTOM_DC_URL`: Uncomment and set to the URL of your instance. 
@@ -98,10 +114,11 @@ The following variables are required:

 **Additional prerequisites**

 In addition to the [standard prerequisites](#prerequisites), you must have the following installed:
-- [Git](https://git-scm.com/)
-- [Google Gemini CLI](https://geminicli.com/docs/get-started/)
+- [Git](https://git-scm.com/){: target="_blank"}
+- [Node.js](https://nodejs.org/en/download){: target="_blank"}
+- [Google Gemini CLI](https://geminicli.com/docs/get-started/installation/){: target="_blank"}

-When you install the extension, it clones the [Data Commons extension Github repo](https://github.com/gemini-cli-extensions/datacommons) to your local system.
+When you install the extension, it clones the [Data Commons extension Github repo](https://github.com/gemini-cli-extensions/datacommons){: target="_blank"} to your local system.

 {:.no_toc}
 ### Install
@@ -157,7 +174,7 @@ This is usually due to a missing [Data Commons API key](#prerequisites). Be sure

 {:.no_toc}
 #### Failed to clone Git repository

-Make sure you have installed [Git](https://git-scm.com/) on your system.
+Make sure you have installed [Git](https://git-scm.com/){: target="_blank"} on your system.

 {:.no_toc}
 ### Uninstall
@@ -169,12 +186,9 @@ gemini extensions uninstall datacommons

 ## Use Gemini CLI

-Before installing, be sure to check the [Prerequisites](#prerequisites) above.
-
-{:.no_toc}
-### Install
-
-To install Gemini CLI, see the instructions at .
+In addition to the [standard prerequisites](#prerequisites), you must have the following installed:
+- [Node.js](https://nodejs.org/en/download){: target="_blank"}
+- [Google Gemini CLI](https://geminicli.com/docs/get-started/installation/){: target="_blank"}

 {:.no_toc}
 {: #gemini}
@@ -232,20 +246,18 @@ To configure Gemini CLI to recognize the Data Commons server, edit the relevant

 1. To see the Data Commons tools, use `/mcp tools`.
 1. Start sending [natural-language queries](#sample-queries).
-> **Tip**: To ensure that Gemini CLI uses the Data Commons MCP tools, and not its own `GoogleSearch` tool, include a prompt to use Data Commons in your query. For example, use a query like "Use Data Commons tools to answer the following: ..." You can also add such a prompt to a [`GEMINI.md` file](https://codelabs.developers.google.com/gemini-cli-hands-on#9) so that it's persisted across sessions.
+> **Tip**: To ensure that Gemini CLI uses the Data Commons MCP tools, and not its own `GoogleSearch` tool, include a prompt to use Data Commons in your query. For example, use a query like "Use Data Commons tools to answer the following: ..." You can also add such a prompt to a [`GEMINI.md` file](https://codelabs.developers.google.com/gemini-cli-hands-on#9){: target="_blank"} so that it's persisted across sessions.

 ## Use the sample agent

-We provide a basic agent for interacting with the MCP Server in [packages/datacommons-mcp/examples/sample_agents/basic_agent](https://github.com/datacommonsorg/agent-toolkit/tree/main/packages/datacommons-mcp/examples/sample_agents/basic_agent).
+We provide a basic agent for interacting with the MCP Server in [packages/datacommons-mcp/examples/sample_agents/basic_agent](https://github.com/datacommonsorg/agent-toolkit/tree/main/packages/datacommons-mcp/examples/sample_agents/basic_agent){: target="_blank"}.

 **Additional prerequisites**

 In addition to the [standard prerequisites](#prerequisites), you will need:
-- A GCP project and a Google AI API key. For details on supported keys, see .
-- [Git](https://git-scm.com/) installed.
+- A GCP project and a Google AI API key. For details on supported keys, see {: target="_blank"}.
+- [Git](https://git-scm.com/){: target="_blank"} installed.

-{:.no_toc}
-### Set the API key environment variable
 {:.no_toc}
 ### Install
@@ -291,7 +303,7 @@ By default, the agent will spawn a local server and connect to it over Stdio.
If you want to connect to a remote MCP server, follow this procedure before starting the agent:

 1. Start up the MCP server in standalone mode, as described in [Run a standalone server](#run-a-standalone-server).
-1. Modify the code in [`basic_agent/agent.py`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py) to set import modules and agent initialization parameters as follows:
+1. Modify the code in [`basic_agent/agent.py`](https://github.com/datacommonsorg/agent-toolkit/blob/main/packages/datacommons-mcp/examples/sample_agents/basic_agent/agent.py){: target="_blank"} to set import modules and agent initialization parameters as follows:

    ```python
    from google.adk.tools.mcp_tool.mcp_toolset import (
@@ -334,3 +346,5 @@ By default, the host is `localhost` and the port is `8080` if you don't set thes

 The server is addressable with the endpoint `mcp`. For example, `http://my-mcp-server:8080/mcp`.

 You can connect to the server using [Gemini CLI](#use-gemini-cli) or the [sample ADK agent](#use-the-sample-agent). If you're using a different client from the ones documented on this page, consult its documentation to determine how to specify an HTTP URL.
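The addressing convention described above (a host, a port, and the fixed `mcp` endpoint) can be sketched in Python; the host name below is a hypothetical example:

```python
# Sketch: build the URL a client uses to reach a standalone
# Data Commons MCP server; the endpoint path is always /mcp.
def mcp_url(host: str, port: int) -> str:
    return f"http://{host}:{port}/mcp"

print(mcp_url("my-mcp-server", 8080))  # → http://my-mcp-server:8080/mcp
```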
+ + \ No newline at end of file From 03239539617a4d63e95ea6466542528d8ad8df94 Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Tue, 9 Dec 2025 10:50:44 -0800 Subject: [PATCH 053/121] Add warning about nvm --- mcp/run_tools.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index dc9b3b5ad..983a1c97c 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -118,6 +118,8 @@ In addition to the [standard prerequisites](#prerequisites), you must have the f - [Node.js](https://nodejs.org/en/download){: target="_blank"} - [Google Gemini CLI](https://geminicli.com/docs/get-started/installation/){: target="_blank"} +> Note: You may need administrator privileges, or may need to override security settings, to install Gemini, using `nvm`, installed with Node.js. + When you install the extension, it clones the [Data Commons extension Github repo](https://github.com/gemini-cli-extensions/datacommons){: target="_blank"} to your local system. {:.no_toc} From 5f9dcca107bc226091069e9ac05ddb0bb1dfd545 Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Tue, 9 Dec 2025 10:54:04 -0800 Subject: [PATCH 054/121] Remove warning --- mcp/run_tools.md | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index 983a1c97c..3e72c4ba1 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -116,9 +116,7 @@ You can also set additional variables as described in the `.env.sample` file. In addition to the [standard prerequisites](#prerequisites), you must have the following installed: - [Git](https://git-scm.com/){: target="_blank"} - [Node.js](https://nodejs.org/en/download){: target="_blank"} -- [Google Gemini CLI](https://geminicli.com/docs/get-started/installation/){: target="_blank"} - -> Note: You may need administrator privileges, or may need to override security settings, to install Gemini, using `nvm`, installed with Node.js. 
+- [Google Gemini CLI](https://geminicli.com/docs/get-started/installation/){: target="_blank"} When you install the extension, it clones the [Data Commons extension Github repo](https://github.com/gemini-cli-extensions/datacommons){: target="_blank"} to your local system. From b5ddb9cdcc5e50645ab5d9084939658a46b7643b Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Tue, 9 Dec 2025 11:52:08 -0800 Subject: [PATCH 055/121] Remove reference to node.js --- mcp/run_tools.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index 3e72c4ba1..1679f43da 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -115,7 +115,6 @@ You can also set additional variables as described in the `.env.sample` file. In addition to the [standard prerequisites](#prerequisites), you must have the following installed: - [Git](https://git-scm.com/){: target="_blank"} -- [Node.js](https://nodejs.org/en/download){: target="_blank"} - [Google Gemini CLI](https://geminicli.com/docs/get-started/installation/){: target="_blank"} When you install the extension, it clones the [Data Commons extension Github repo](https://github.com/gemini-cli-extensions/datacommons){: target="_blank"} to your local system. 
@@ -187,7 +186,6 @@ gemini extensions uninstall datacommons ## Use Gemini CLI In addition to the [standard prerequisites](#prerequisites), you must have the following installed: -- [Node.js](https://nodejs.org/en/download){: target="_blank"} - [Google Gemini CLI](https://geminicli.com/docs/get-started/installation/){: target="_blank"} {:.no_toc} From c09b44c5e063726f5df09e5da23fcf5e5d8899dc Mon Sep 17 00:00:00 2001 From: kmoscoe <165203920+kmoscoe@users.noreply.github.com> Date: Tue, 9 Dec 2025 11:52:51 -0800 Subject: [PATCH 056/121] Update mcp/run_tools.md Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> --- mcp/run_tools.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index 1679f43da..ffb65da61 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -345,4 +345,4 @@ The server is addressable with the endpoint `mcp`. For example, `http://my-mcp-s You can connect to the server using [Gemini CLI](#use-gemini-cli) or the [sample ADK agent](#use-the-sample-agent). If you're using a different client from the ones documented on this page, consult its documentation to determine how to specify an HTTP URL. - \ No newline at end of file + From 6bb745c734eb80b4c4a7268bd7df54c5ef70e0fe Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Tue, 16 Dec 2025 16:17:57 -0800 Subject: [PATCH 057/121] Add required header to connect to a remote server --- mcp/index.md | 2 ++ mcp/run_tools.md | 8 ++++++-- 2 files changed, 8 insertions(+), 2 deletions(-) diff --git a/mcp/index.md b/mcp/index.md index 2bd3e4c85..4d169254e 100644 --- a/mcp/index.md +++ b/mcp/index.md @@ -23,6 +23,8 @@ At this time, there is no centrally deployed server; you run your own server, an ![alt text](/assets/images/mcp.png) +You can run the server and client locally, or you can run the server and client on different machines. 
+ ## Tools The server currently supports the following tools: diff --git a/mcp/run_tools.md b/mcp/run_tools.md index ffb65da61..16ab37482 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -230,7 +230,10 @@ To configure Gemini CLI to recognize the Data Commons server, edit the relevant { "mcpServers": { "datacommons-mcp": { - "httpUrl": "http://HOST:PORT/mcp" + "httpUrl": "http://HOST:PORT/mcp", + "headers": { + "Accept": "application/json, text/event-stream" + }, // other settings as above } } @@ -313,7 +316,8 @@ root_agent = LlmAgent( # ... tools=[McpToolset( connection_params=StreamableHTTPConnectionParams( - url=f"http://:/mcp" + url=f"http://:/mcp", + headers={"Accept": "text/event-stream"} ) )], ) From 7ededf3fd2f3b46f37d9dd01ce74fa198d2ce605 Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Tue, 16 Dec 2025 16:43:57 -0800 Subject: [PATCH 058/121] Add additional accept type --- mcp/run_tools.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index 16ab37482..f7dd785f0 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -317,7 +317,7 @@ root_agent = LlmAgent( tools=[McpToolset( connection_params=StreamableHTTPConnectionParams( url=f"http://:/mcp", - headers={"Accept": "text/event-stream"} + headers={"Accept": "application/json, text/event-stream"} ) )], ) From 1550ed8fd719723d37196fc361e2331e517095da Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Tue, 16 Dec 2025 19:13:39 -0800 Subject: [PATCH 059/121] Add code for sample agent headers --- mcp/run_tools.md | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index f7dd785f0..8f98274eb 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -232,6 +232,7 @@ To configure Gemini CLI to recognize the Data Commons server, edit the relevant "datacommons-mcp": { "httpUrl": "http://HOST:PORT/mcp", "headers": { + "Content-Type": "application/json", "Accept": "application/json, 
text/event-stream" }, // other settings as above @@ -317,10 +318,14 @@ root_agent = LlmAgent( tools=[McpToolset( connection_params=StreamableHTTPConnectionParams( url=f"http://:/mcp", - headers={"Accept": "application/json, text/event-stream"} - ) - )], - ) + headers={ + "Content-Type": "application/json", + "Accept": "application/json, text/event-stream" + }, + ), + ) + ], +) ``` ## Sample queries From 75f51501dc14ef643d910ef5888329e551b2ad2a Mon Sep 17 00:00:00 2001 From: kmoscoe <165203920+kmoscoe@users.noreply.github.com> Date: Tue, 16 Dec 2025 19:17:08 -0800 Subject: [PATCH 060/121] Apply suggestion from @gemini-code-assist[bot] Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> --- mcp/run_tools.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index 8f98274eb..136122da8 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -324,7 +324,7 @@ root_agent = LlmAgent( }, ), ) - ], + ], ) ``` From 6395a674e05793375badf98d0be5e2972b0255fb Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Wed, 17 Dec 2025 09:59:46 -0800 Subject: [PATCH 061/121] Remove headers from basic agent --- mcp/run_tools.md | 9 ++------- 1 file changed, 2 insertions(+), 7 deletions(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index 8f98274eb..aa254456a 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -317,14 +317,9 @@ root_agent = LlmAgent( # ... 
tools=[McpToolset( connection_params=StreamableHTTPConnectionParams( - url=f"http://:/mcp", - headers={ - "Content-Type": "application/json", - "Accept": "application/json, text/event-stream" - }, + url=f"http://:/mcp" ), - ) - ], + )], ) ``` From 1c114f1dbcbb8a327108eb3dbd7bf284d33a8023 Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Wed, 17 Dec 2025 10:12:32 -0800 Subject: [PATCH 062/121] Revert last change --- mcp/run_tools.md | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index ee86e0346..136122da8 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -317,10 +317,14 @@ root_agent = LlmAgent( # ... tools=[McpToolset( connection_params=StreamableHTTPConnectionParams( - url=f"http://:/mcp" + url=f"http://:/mcp", + headers={ + "Content-Type": "application/json", + "Accept": "application/json, text/event-stream" + }, ), ) - ], + ], ) ``` From 4db9a2c89b6b3edd0755fa9a02c4e99fbed09fcd Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Mon, 29 Dec 2025 18:18:00 -0800 Subject: [PATCH 063/121] Start of deploying MCP server to GCP --- custom_dc/deploy_cloud.md | 9 ++- custom_dc/mcp_server_cloud.md | 118 ++++++++++++++++++++++++++++++++++ 2 files changed, 122 insertions(+), 5 deletions(-) create mode 100644 custom_dc/mcp_server_cloud.md diff --git a/custom_dc/deploy_cloud.md b/custom_dc/deploy_cloud.md index 76848d9ef..1bb94a0f8 100644 --- a/custom_dc/deploy_cloud.md +++ b/custom_dc/deploy_cloud.md @@ -294,7 +294,7 @@ Any time you make changes to the website and want to deploy your changes to the If you don't specify the --package option, the package name and tag will be the same as the source image.
  1. Build a local version of the Docker image, following the procedure in Build a local image.
-1. Generate credentials for the Docker package.
+1. Generate credentials for the Docker package:
    gcloud auth configure-docker REGION-docker.pkg.dev
  5. Create a package from the source image you created in step 1:
    docker tag SOURCE_IMAGE_NAME:SOURCE_IMAGE_TAG \
    @@ -302,12 +302,11 @@ Any time you make changes to the website and want to deploy your changes to the
        The artifact repo is PROJECT_ID-artifacts.
  6. Push the image to the registry:
    docker push CONTAINER_IMAGE_URL
    - The container image URL is the full name of the package you created in the previous step, including the tag.
7. + The container image URL is the full name of the package you created in the previous step, including the tag. For example: `us-central1-docker.pkg.dev/myproject/myrepo/datacommons:latest`.
-
-   - The target image name and tag can be the same as the source or different.
   - Docker package names must be in the format REGION-docker.pkg.dev. The default region in the Terraform scripts is `us-central1`.
@@ -346,8 +345,8 @@ You need to restart the services container every time you make changes to the co
  1. Go to the https://console.cloud.google.com/run/services page for your project.
-1. From the list of services, click the link of the service created by the Terraform scripts
-1. click Edit & Deploy Revision.
+1. From the list of services, click the link of the service created by the Terraform scripts.
+1. Click Edit & Deploy Revision.
  10. Under Container image URL, click Select.
  11. Expand the package name you created in the previous step.
  12. Expand the image name of the container, and select the tag you created in the previous step.
13. diff --git a/custom_dc/mcp_server_cloud.md b/custom_dc/mcp_server_cloud.md
new file mode 100644
index 000000000..d7de330aa
--- /dev/null
+++ b/custom_dc/mcp_server_cloud.md
@@ -0,0 +1,118 @@
+---
+layout: default
+title: Run an MCP server in Google Cloud
+nav_order: 9
+parent: Build your own Data Commons
+---
+
+If you have built a custom agent or Gemini CLI extension which you want to make publicly available, this page describes how to run the [Data Commons MCP server](https://pypi.org/project/datacommons-mcp/) in the cloud, using Google Cloud Run.
+
+Since setting up an MCP server is a simple, one-time setup, there's no need to use Terraform to manage it. You configure a Docker container and instruct Google Cloud Build to build and deploy the image.
+
+The following procedure assumes that you have set up all the necessary Google Cloud Platform services:
+- An Artifact Registry repository
+- A Secret Manager secret for storing your Data Commons API key
+-
+
+## Step 1: Create a Dockerfile
+
+In a local project directory, create a file `Dockerfile`. Add the following configuration:
+
<pre>
+FROM python:3.12-slim
    +
    +WORKDIR /workspace
    +
    +RUN python -m venv ./venv
    +ENV PATH="/workspace/venv/bin:$PATH"
    +RUN pip3 install --upgrade pip
+RUN pip install --no-cache-dir datacommons-mcp==VERSION_NUMBER
    +
    +ENV PORT=8080
    +
    +CMD ["datacommons-mcp", "serve", "http", "--host", "0.0.0.0", "--port", "8080"]
    +
    + +You should pin the package to a particular version number (which you can find on the [PyPi page](https://pypi.org/project/datacommons-mcp/)), so that any future changes that are released by the Data Commons do not break your application. + +## Step 2: Create a container image and upload it to the Artifact Registry + +In this step, you build a Docker image from the `datacommons-mcp` package hosted at and upload it to your Artifact Registry repository. You can perform this step in two ways: +- Build a local image using Docker and then upload it to the Artifact Registry. This method is useful if you want to test out that the package runs correctly before deploying it. +- Use Cloud Build to build an image remotely and automatically store it in the Artifact Registry. This method combines the build and upload steps in a single step. + +Before starting, be sure to refresh your [GCP credentials](deploy_cloud.md#gen-creds): + +```shell +gcloud auth application-default login +``` + +### Build locally with Docker and upload + +1. From the directory where your `Dockerfile` is stored, run the following command: +
    +    docker build --tag LOCATION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME:IMAGE_TAG  .
    +    
    + - The location is the region where you want the package to be stored. Typically this is `us-central1`. + - The repo must have previously been created in the Artifact Registry + - The image name is a meaningful name, such as `datacommons-mcp-server`. + - The image tag is a meaningful description of the version you are using. +1. Push the image to the registry: +
    docker push CONTAINER_IMAGE_URL
    + The container image URL is the full name of the package you created in the previous step, including the tag. For example: `us-central1-docker-pkg.dev/myproject/myrepo/datacommons:latest`. + +### Build remotely with Cloud Build + +gcloud builds submit --tag gcr.io/YOUR_PROJECT_ID/mcp-prod . + +## Step 3: Verify that the image is created in the repository + +1. Go to [https://console.cloud.google.com/artifacts](https://console.cloud.google.com/artifacts){: target="_blank"} for your project. +1. In the list of repositories, click on PROJECT_ID-artifacts. You should see your image in the list. You can click through to view revisions and tags. + +## Step 4: Create a Cloud Run Service + +
    +
      +
    • Cloud Console
    • +
    • gcloud CLI
    • +
    +
    +
    +
      +
    1. Go to the https://console.cloud.google.com/run/services page for your project.
    2. +
    3. Click Deploy container.
    4. +
    5. In the Container image URL field, click Select.
    6. +
    7. From the list of Artifact Registry repositories that appears, expand your repo, expand the image you created in step 1, and select the version.
    8. +
    9. Under Configure, provide a name for the new service and select the region you used when creating the image, e.g. `us-central1`.
    10. +
    11. Under Authentication, select Allow public access.
    12. +
    13. Under Service scaling, enter `10` for the maximum number of instances.
    14. +
    15. Click Add health check.
    16. +
    17. From the Select health check type drop-down, select Startup check and from Select probe type drop-down, select TCP.
    18. +
    19. Enter the following parameters: +
      • +
      • Click Deploy. It will take several minutes for the service to start. You can click the Logs tab to view the progress.
      • +
    +
    +
    +
  14. From any local directory, run the following command: +
    gcloud run deploy SERVICE_NAME --image CONTAINER_IMAGE_URL
  15. +
  16. To view the startup status, run the following command: +
    gcloud beta run jobs logs tail SERVICE_NAME
    +
  17. + The service name is NAMESPACE-datacommons-web-service. + The container image URL is the name of the package you created in the previous step. +gcloud run deploy mcp-server-prod \ + --image gcr.io/YOUR_PROJECT_ID/mcp-prod \ + --platform managed \ + --region us-central1 \ + --allow-unauthenticated \ + --timeout=10m \ + --set-secrets="DC_API_KEY=dc-api-key:latest" + +
+
+
+
+ + From 98b326531166f2f54311d8d7ad8bcdda2d6f01c5 Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Wed, 7 Jan 2026 15:25:28 -0800 Subject: [PATCH 064/121] Complete rewrite of hosting MCP server content --- custom_dc/build_image.md | 4 +- custom_dc/deploy_cloud.md | 2 +- custom_dc/mcp_server_cloud.md | 133 ++++++++++++++-------------------- 3 files changed, 58 insertions(+), 81 deletions(-) diff --git a/custom_dc/build_image.md b/custom_dc/build_image.md index 1e27cd0df..6a5d9c487 100644 --- a/custom_dc/build_image.md +++ b/custom_dc/build_image.md @@ -17,9 +17,11 @@ While you are just testing out data changes, you don't need to build the website Data Commons provides two prebuilt images in the Google Artifact Registry that you can download to run in a Docker container: -- `gcr.io/datcom-ci/datacommons-data:stable` and `gcr.io/datcom-ci/datacommons-services:stable`. These are tested, stable versions but may be several weeks old. +- `gcr.io/datcom-ci/datacommons-data:stable` and `gcr.io/datcom-ci/datacommons-services:stable`. These are tested, stable versions but may be several weeks old. - `gcr.io/datcom-ci/datacommons-data:latest` and `gcr.io/datcom-ci/datacommons-services:latest`. These are the latest versions built from head. +You can see the images with their tags at and . + If you want to pick up the latest prebuilt version, do the following: 1. From the root directory (e.g. `website`), run the following command: diff --git a/custom_dc/deploy_cloud.md b/custom_dc/deploy_cloud.md index 1bb94a0f8..f6a7ff4f3 100644 --- a/custom_dc/deploy_cloud.md +++ b/custom_dc/deploy_cloud.md @@ -262,7 +262,7 @@ To view the tables: 1. In the left panel, select **Cloud SQL Studio**. 1. In the **Sign in to SQL Studio** page, from the **Database** field, select the database created by the Terraform script. 1. In the **User** field, select the user created by the Terraform script. -1. In the **Password** field, enter the password you have retrieved from the Cloud Secret Manager +1. 
In the **Password** field, enter the password you have retrieved from the Cloud Secret Manager. 1. In the left Explorer pane that appears, expand the **Databases** icon, your database name, and **Tables**. The table of interest is **observations**. You can see column names and other metadata. 1. To view the actual data, in the main window, click **New SQL Editor tab**. This opens an environment in which you can enter and run SQL queries. 1. Enter a query and click **Run**. For example, for the sample OECD data, if you do `select * from observations limit 10;`, you should see output like this: diff --git a/custom_dc/mcp_server_cloud.md b/custom_dc/mcp_server_cloud.md index d7de330aa..3eec461b1 100644 --- a/custom_dc/mcp_server_cloud.md +++ b/custom_dc/mcp_server_cloud.md @@ -7,70 +7,30 @@ parent: Build your own Data Commons If you have built a custom agent or Gemini CLI extension which you want to make publicly available, this page describes how to run the [Data Commons MCP server](https://pypi.org/project/datacommons-mcp/) in the cloud, using Google Cloud Run. -Since setting up an MCP server is a simple, one-time setup, there's no need to use Terraform to manage it. You configure a Docker container and instruct Google Cloud Built to build and deploy the image. +Since setting up an MCP server is a simple, one-time setup, there's no need to use Terraform to manage it. Data Commons provides a prebuilt Docker image in the Artifact Registry, so you only need to set up a new Cloud Run service to point to it. There are several versions of the image available: -The following procedure assumes that you have set up all the necessary Google Cloud Platform services: -- An Artifact Registry repository -- A Secret Manager secret for storying your Data Commons API key -- - -## Step 1: Create a Dockerfile - -In a local project directory, create a file `Dockerfile`. Add the following configuration: - -
-ROM python:3.12-slim
-
-WORKDIR /workspace
-
-RUN python -m venv ./venv
-ENV PATH="/workspace/venv/bin:$PATH"
-RUN pip3 install --upgrade pip
-RUN pip install --no-cache-dir datacommons-mcp@VERSION_NUMBER
-
-ENV PORT=8080
-
-CMD ["datacommons-mcp", "serve", "http", "--host", "0.0.0.0", "--port", "8080"]
-
+- `gcr.io/datcom-ci/datacommons-mcp-server:latest`. This is the latest versions built from head. +-
gcr.io/datcom-ci/datacommons-mcp-server:VERSION
: These are specific versions. You will probably want to use one of these versions for production, so that any changes made by the Data Commons team don't break your application.

You can see all versions at .

## Before you start: Decide on a hosting model

There are several ways you can host the MCP server in Cloud Run, namely:

- As a standalone service. In this case, any client simply connects to it over HTTP, including your own MCP agent running as a separate Cloud Run service or locally. You can choose whether to make the internal Cloud Run app URL publicly available, or whether to put a load balancer in front of the service and map a domain name.
- As a ["sidecar"](https://docs.cloud.google.com/run/docs/deploying#sidecars) to an MCP client. If you are hosting your own MCP client in Cloud Run as well, this may be a useful option. In this case, the MCP server is not directly addressable.

On this page, we provide a procedure for running the Data Commons MCP server as a standalone container.
If you go with the sidecar option, see [Deploying multiple containers to a service (sidecars)](https://docs.cloud.google.com/run/docs/deploying#sidecars){: target="_blank"} for additional requirements (e.g. health checks) and steps.

## Prerequisites

The following procedures assume that you have set up the following Google Cloud Platform services:
- Service accounts
- A Secret Manager secret for storing your Data Commons API key
- 

## Create a Cloud Run Service for the MCP server
-    docker build --tag LOCATION-docker.pkg.dev/PROJECT_ID/REPO_NAME/IMAGE_NAME:IMAGE_TAG  .
-    
- - The location is the region where you want the package to be stored. Typically this is `us-central1`. - - The repo must have previously been created in the Artifact Registry - - The image name is a meaningful name, such as `datacommons-mcp-server`. - - The image tag is a meaningful description of the version you are using. -1. Push the image to the registry: -
docker push CONTAINER_IMAGE_URL
- The container image URL is the full name of the package you created in the previous step, including the tag. For example: `us-central1-docker-pkg.dev/myproject/myrepo/datacommons:latest`. - -### Build remotely with Cloud Build - -gcloud builds submit --tag gcr.io/YOUR_PROJECT_ID/mcp-prod . - -## Step 3: Verify that the image is created in the repository - -1. Go to [https://console.cloud.google.com/artifacts](https://console.cloud.google.com/artifacts){: target="_blank"} for your project. -1. In the list of repositories, click on PROJECT_ID-artifacts. You should see your image in the list. You can click through to view revisions and tags. +The following procedures assume that you have set up the following Google Cloud Platform services: +- Service accounts +- A Secret Manager secret for storying your Data Commons API key +- -## Step 4: Create a Cloud Run Service +## Create a Cloud Run Service for the MCP server
    @@ -79,40 +39,55 @@ gcloud builds submit --tag gcr.io/YOUR_PROJECT_ID/mcp-prod .
-
    -
  1. Go to the https://console.cloud.google.com/run/services page for your project.
  2. -
  3. Click Deploy container.
  4. +
      +
    1. Go to the https://console.cloud.google.com/run/services page for your project.
    2. +
    3. Click Deploy container.
    4. In the Container image URL field, click Select.
    5. -
    6. From the list of Artifact Registry repositories that appears, expand your repo, expand the image you created in step 1, and select the version.
    7. -
    8. Under Configure, provide a name for the new service and select the region you used when creating the image, e.g. `us-central1`.
    9. -
    10. Under Authentication, select Allow public access.
    11. -
    12. Under Service scaling, enter `10` for the maximum number of instances.
    13. -
    14. Click Add health check.
    15. +
16. In the Artifact Registry panel that appears on the right side of the window, click Change.
    17. In the project search bar enter datcom-ci and click on the link that appears.
    18. +
19. Expand gcr.io/datcom-ci and expand datacommons-mcp-server.
    20. +
    21. From the list of images, select the one you want.
    22. +
    23. Under Configure, select the desired region for the service, e.g.
      us-central1
      .
    24. +
    25. Under Service scaling, enter 10 for the maximum number of instances.
    26. +
    27. Expand Containers, Networking, Security.
    28. +
    29. Under Settings, click Add health check.
    30. From the Select health check type drop-down, select Startup check and from Select probe type drop-down, select TCP.
    31. -
    32. Enter the following parameters: -
      • -
      • Click Deploy. It will take several minutes for the service to start. You can click the Logs tab to view the progress.
      • -
    +
  5. Modify the following parameters: +
    • Period: set to 240.
    • +
    • Failure threshold: set to 1.
    • +
    • Timeout: set to 240.
    +
  6. Click Add and then click Done. +
  7. Under Requests, increase the request timeout to 600. +
  8. Under Revision scaling, enter 10 for the maximum number of instances.
  9. +
  10. Click the Variables & secrets tab. +
  11. Under Environment variables, click Add variable and set the following variables: +
    • name: DC_TYPE, value: custom
    • +
    • name: CUSTOM_DC_URL, value: YOUR_INSTANCE_URL
    +
  12. Under Secrets exposed as environment variables, click Reference a secret, and DC_API_KEY to the Secret Manager [secret previously created by Terraform](deploy_cloud.md#terraform).
  13. +
  14. Click Create. If correctly configured, the service will deploy automatically.
-
+
  • If you haven't recently refreshed your Google Cloud credentials, run
    gcloud auth application-default login
    and authenticate.
  • +
           
  • From any local directory, run the following command: -
    gcloud run deploy SERVICE_NAME --image CONTAINER_IMAGE_URL
  • +
    gcloud run deploy datacommons-mcp-server --image CONTAINER_IMAGE_URL \
    +        --region REGION--platform managed --allow-unauthenticated \
    +        --timeout=10m --set-secrets="DC_API_KEY=SECRET_NAME:latest"
    +        --set-env-vars="DC_TYPE=custom" --set-env-vars="CUSTOM_DC_URL=INSTANCE_URL" \
    +        --min-instances=0 
    +        
    +                 
      +
    • The container image URL is
      gcr.io/datcom-ci/datacommons-mcp-server:TAG
      . The tag is the tag or version number of the image you want to select from the Artifact Registry.
    • +
    • The region is the Cloud region where you want to run the service, e.g. us-central1.
    • +
    • The secret name is the one created when you ran the Terraform script, in the form NAMESPACE-datacommons-dc-api-key-FINGERPRINT. If you're not sure about the name or fingerprint, go to to {: target="_blank"} for your project and look it up.
    • +
  • To view the startup status, run the following command:
    gcloud beta run jobs logs tail SERVICE_NAME
  • - The service name is NAMESPACE-datacommons-web-service. - The container image URL is the name of the package you created in the previous step. -gcloud run deploy mcp-server-prod \ - --image gcr.io/YOUR_PROJECT_ID/mcp-prod \ - --platform managed \ - --region us-central1 \ - --allow-unauthenticated \ - --timeout=10m \ - --set-secrets="DC_API_KEY=dc-api-key:latest" -
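The multi-flag `gcloud run deploy` invocation above is easy to mistype. One way to reduce that risk is to assemble the command from shell variables and inspect it before running it. A minimal sketch — the image tag, region, secret name, and instance URL below are placeholders for illustration, not values from this repository:

```shell
#!/bin/sh
# Placeholder values -- substitute your own project's settings.
IMAGE="gcr.io/datcom-ci/datacommons-mcp-server:latest"
REGION="us-central1"
SECRET_NAME="dc-api-key"                  # hypothetical Secret Manager name
INSTANCE_URL="https://dc.example.com"     # hypothetical custom DC URL

# Build the command as a string so the flags can be reviewed
# (or echoed into a CI log) before anything is actually deployed.
DEPLOY_CMD="gcloud run deploy datacommons-mcp-server \
  --image ${IMAGE} \
  --region ${REGION} \
  --allow-unauthenticated \
  --timeout=10m \
  --set-secrets=DC_API_KEY=${SECRET_NAME}:latest \
  --set-env-vars=DC_TYPE=custom,CUSTOM_DC_URL=${INSTANCE_URL}"

echo "${DEPLOY_CMD}"
```

When the echoed flags look right, the string can be executed with `eval "${DEPLOY_CMD}"` or pasted into a terminal.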
+

From 2073fdba3d59bcfa50f8293a3b5eedf83a5030eb Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Wed, 7 Jan 2026 21:55:56 -0800
Subject: [PATCH 065/121] more changes

---
 custom_dc/mcp_server_cloud.md | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/custom_dc/mcp_server_cloud.md b/custom_dc/mcp_server_cloud.md
index 3eec461b1..e0a6e4938 100644
--- a/custom_dc/mcp_server_cloud.md
+++ b/custom_dc/mcp_server_cloud.md
@@ -7,19 +7,18 @@ parent: Build your own Data Commons
 
 If you have built a custom agent or Gemini CLI extension which you want to make publicly available, this page describes how to run the [Data Commons MCP server](https://pypi.org/project/datacommons-mcp/) in the cloud, using Google Cloud Run.
 
-Since setting up an MCP server is a simple, one-time setup, there's no need to use Terraform to manage it. Data Commons provides a prebuilt Docker image in the Artifact Registry, so you only need to set up a new Cloud Run service to point to it. There are several versions of the image available:
+Since setting up an MCP server is a simple, one-time setup, there's no need to use Terraform to manage it. Data Commons provides a prebuilt Docker image in the Artifact Registry, so you only need to set up a new Cloud Run service to point to it.
 
-- `gcr.io/datcom-ci/datacommons-mcp-server:latest`. This is the latest versions built from head.
-- `gcr.io/datcom-ci/datacommons-mcp-server:VERSION`: These are specific versions. You will probably want to use one of these versions for production, so that any changes made by Data Commons team don't break your application.
+## Prebuilt images
 
-You can see all versions at .
+There are several versions of the image available, viewable at . Most likely you will want to pick a specific version (rather than using the "latest" one) to ensure that changes introduced by the Data Commons team don't break your application.
 
-## Before you start: Decide on a hosting model
+## Before you start: decide on a hosting model
 
 There are several ways you can host the MCP server in Cloud Run, namely:
 
 - As a standalone service. In this case, any client simply connects to it over HTTP, including your own MCP agent running as a separate Cloud Run service or locally. You can choose whether to make the internal Cloud Run app URL publicly available, or whether to put a load balancer in front of the service and map a domain name.
-- As a ["sidecar"](https://docs.cloud.google.com/run/docs/deploying#sidecars) to an MCP client. If you are hosting your own MCP client in Cloud Run as well, this may be a useful option. In this case, the MCP server is not directly addressable.
+- As a ["sidecar"](https://docs.cloud.google.com/run/docs/deploying#sidecars) to an MCP client. If you are hosting your own MCP client in Cloud Run as well, this may be a useful option. In this case, the MCP server is not directly addressable; all external connections are managed by the client.
 
 In this page, we provide a procedure for running the Data Commons MCP server as a standalone container. If you would go with the sidecar option, please see (Deploying multiple containers to a service (sidecars))[https://docs.cloud.google.com/run/docs/deploying#sidecars]{: target="_blank"} for additional requirements (e.g. health-checks) and steps.
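The image-pinning advice in the patch above can be made concrete in shell. A small sketch of the two ways to reference the prebuilt image — the version tag shown is hypothetical; check the Artifact Registry listing for the tags that actually exist:

```shell
# "latest" floats with head; a version tag pins a known build.
PINNED_TAG="v1.2.0"   # hypothetical tag -- look up real ones in the registry
IMAGE_REPO="gcr.io/datcom-ci/datacommons-mcp-server"

LATEST_IMAGE="${IMAGE_REPO}:latest"         # tracks head; can change under you
PINNED_IMAGE="${IMAGE_REPO}:${PINNED_TAG}"  # recommended for production

echo "${LATEST_IMAGE}"
echo "${PINNED_IMAGE}"
```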
From c5f95b057da5e50c6e69229302adefd4054d5e1e Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Mon, 12 Jan 2026 10:55:37 -0800
Subject: [PATCH 066/121] More changes

---
 custom_dc/deploy_cloud.md     |  4 ++-
 custom_dc/mcp_server_cloud.md | 56 +++++++++++++++++++++++------------
 mcp/run_tools.md              |  5 ++--
 3 files changed, 42 insertions(+), 23 deletions(-)

diff --git a/custom_dc/deploy_cloud.md b/custom_dc/deploy_cloud.md
index f6a7ff4f3..e450eb9df 100644
--- a/custom_dc/deploy_cloud.md
+++ b/custom_dc/deploy_cloud.md
@@ -51,9 +51,10 @@ The first time you run it, may be prompted to specify a quota project for billing
     gcloud auth application-default set-quota-project PROJECT_ID
+{: #accounts}
 ## One-time setup: Create service accounts and enable all APIs
 
-`website/deploy/terraform-custom-datacommons/setup.sh` is a convenience script to set up service account roles and all necessary Cloud APIs. To run it:
+`website/deploy/terraform-custom-datacommons/setup.sh` is a convenience script to set up all necessary Cloud APIs. To run it:
      cd website/deploy/terraform-custom-datacommons
    @@ -80,6 +81,7 @@ We recommend using the Data Commons Terraform scripts to greatly simplify and au
     
     Terraform provisions and runs all the necessary Cloud Platform services:
     
+- Creates a service account for your project and namespace and assigns it various permissions ([IAM roles](https://docs.cloud.google.com/iam/docs/roles-overview){: target="_blank"}).
     - Creates a Cloud Storage bucket and top-level folder, which will store your data files. You will upload your input data in the subsequent steps.
     - Creates a Cloud SQL MySQL instance, with basic resources, a default database user and a random password.
     - Creates the Data Commons data management container as a Cloud Run job, with basic resources.
    diff --git a/custom_dc/mcp_server_cloud.md b/custom_dc/mcp_server_cloud.md
    index e0a6e4938..c343ceae8 100644
    --- a/custom_dc/mcp_server_cloud.md
    +++ b/custom_dc/mcp_server_cloud.md
    @@ -11,7 +11,7 @@ Since setting up an MCP server is a simple, one-time setup, there's no need to u
     
     ## Prebuilt images
     
    -There are several versions of the image available, viewable at . Most likely you will want to pick a specific version (rather than using the "latest" one) to ensure that changes introduced by the Data Commons team don't break your application.
    +There are several versions of the image available, viewable at . Most likely you will want to pick a specific version (identified by a version tag) to ensure that changes introduced by the Data Commons team don't break your application.
     
     ## Before you start: decide on a hosting model
     
    @@ -22,12 +22,11 @@ There are several ways you can host the MCP server in Cloud Run, namely:
     
     In this page, we provide a procedure for running the Data Commons MCP server as a standalone container. If you would go with the sidecar option, please see (Deploying multiple containers to a service (sidecars))[https://docs.cloud.google.com/run/docs/deploying#sidecars]{: target="_blank"} for additional requirements (e.g. health-checks) and steps.
     
    -## Prequisites
    +## Prerequisites
     
     The following procedures assume that you have set up the following Google Cloud Platform services:
    -- Service accounts
    -- A Secret Manager secret for storying your Data Commons API key
    -- 
    +- Service accounts. These are created when you run the [`website/deploy/terraform-custom-datacommons/setup.sh`](deploy_cloud.md#accounts) script
    +- A Google Cloud Secret Manager secret for storying your Data Commons API key. This is created the first time you [run Terraform](deploy_cloud#terraform).
     
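The prerequisites listed above can be checked with a small pre-flight script before you attempt a deployment. A sketch — the variable names here are our own conventions for the script, not names any Data Commons tooling reads:

```shell
# Pre-flight sketch: fail fast if the deployment inputs are not filled in.
API_KEY_SECRET=""   # e.g. NAMESPACE-datacommons-dc-api-key-FINGERPRINT
CUSTOM_DC_URL=""    # your custom Data Commons instance URL

missing=""
[ -n "${API_KEY_SECRET}" ] || missing="${missing} API_KEY_SECRET"
[ -n "${CUSTOM_DC_URL}" ]  || missing="${missing} CUSTOM_DC_URL"

if [ -n "${missing}" ]; then
  echo "Missing:${missing}"
else
  echo "ok"
fi
```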
     ## Create a Cloud Run Service for the MCP server
     
    @@ -42,40 +41,36 @@ The following procedures assume that you have set up the following Google Cloud
                
 1. Go to the https://console.cloud.google.com/run/services page for your project.
 1. Click Deploy container.
 1. In the Container image URL field, click Select.
-1. In the Artifact Registry panel that appears in the right side of the window, that appears, click Change .
+1. In the Artifact Registry panel that appears in the right side of the window, that appears, click Change.
 1. In the project search bar enter datcom-ci and click on the link that appears.
 1. Expand gcr.io/datcom-ci and expand datacommons-mcp=server.
-1. From the list of images, select the one you want.
+1. From the list of images, select an image with a version number tag of a production image.
 1. Under Configure, select the desired region for the service, e.g. `us-central1`.
 1. Under Service scaling, enter 10 for the maximum number of instances.
-1. Expand Containers, Networking, Security.
-1. Under Settings, click Add health check.
-1. From the Select health check type drop-down, select Startup check and from Select probe type drop-down, select TCP.
-1. Modify the following parameters:
-   - Period: set to 240.
-   - Failure threshold: set to 1.
-   - Timeout: set to 240.
-1. Click Add and then click Done.
 1. Under Requests, increase the request timeout to 600.
 1. Under Revision scaling, enter 10 for the maximum number of instances.
+1. Expand Containers, Networking, Security.
 1. Click the Variables & secrets tab.
 1. Under Environment variables, click Add variable and set the following variables:
    - name: DC_TYPE, value: custom
    - name: CUSTOM_DC_URL, value: YOUR_INSTANCE_URL
-1. Under Secrets exposed as environment variables, click Reference a secret, and DC_API_KEY to the Secret Manager [secret previously created by Terraform](deploy_cloud.md#terraform).
+1. Under Secrets exposed as environment variables, click Reference a secret, and DC_API_KEY to the Secret Manager [secret previously created by Terraform](deploy_cloud.md#terraform), in the form NAMESPACE-datacommons-dc-api-key-FINGERPRINT.
+1. Click Done.
+1. Click the Security tab. From the Service account field, select the service account for your namespace and project.
 1. Click Create. If correctly configured, the service will deploy automatically.
 
 1. If you haven't recently refreshed your Google Cloud credentials, run `gcloud auth application-default login` and authenticate.
 1. From any local directory, run the following command:
 gcloud run deploy datacommons-mcp-server --image CONTAINER_IMAGE_URL \
-        --region REGION--platform managed --allow-unauthenticated \
-        --timeout=10m --set-secrets="DC_API_KEY=SECRET_NAME:latest"
+        --service-account=SERVICE_ACCOUNT --region REGION --platform managed --allow-unauthenticated --timeout=10m \
+        --set-secrets="DC_API_KEY=SECRET_NAME:latest" \
         --set-env-vars="DC_TYPE=custom" --set-env-vars="CUSTOM_DC_URL=INSTANCE_URL" \
         --min-instances=0
 
-   - The container image URL is gcr.io/datcom-ci/datacommons-mcp-server:TAG. The tag is the tag or version number of the image you want to select from the Artifact Registry.
+   - The container image URL is gcr.io/datcom-ci/datacommons-mcp-server:TAG. The tag is the tag should be a version number of a production image, e.g. v1.3.3.
+   - The service account was created when you ran Terraform. It is in the form NAMESPACEdatacommons-sa@PROJECT_ID.iam.gserviceaccount.com.
    - The region is the Cloud region where you want to run the service, e.g. us-central1.
    - The secret name is the one created when you ran the Terraform script, in the form NAMESPACE-datacommons-dc-api-key-FINGERPRINT. If you're not sure about the name or fingerprint, go to to {: target="_blank"} for your project and look it up.
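The secret naming pattern described above can be spelled out in shell, which also shows how the name plugs into the `--set-secrets` flag. The namespace and fingerprint values below are made up for illustration:

```shell
# Terraform-created secret names follow the pattern (per this doc):
#   NAMESPACE-datacommons-dc-api-key-FINGERPRINT
NAMESPACE="dc"          # hypothetical namespace
FINGERPRINT="a1b2c3"    # hypothetical fingerprint

SECRET_NAME="${NAMESPACE}-datacommons-dc-api-key-${FINGERPRINT}"
SET_SECRETS_FLAG="--set-secrets=DC_API_KEY=${SECRET_NAME}:latest"

echo "${SET_SECRETS_FLAG}"
```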
    @@ -89,4 +84,27 @@ The following procedures assume that you have set up the following Google Cloud
+## Connect to the server from a remote client
+
+For details, see the following pages:
+- [Connect to the server from a local Gemini CLI client](/mcp/run_tools.html#gemini-cli-remote)
+- [Connect to the server from a local agent](/mcp/run_tools.html#remote)
+
+The HTTP URL parameter is the Cloud Run App URL, if you are exposing the service directly, or a custom domain URL if you are using a load balancer and domain mapping.
+
+## Troubleshoot deployment issues
+
+### Container fails to start
+
+If you see this error message:
+
+```
+The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable within the allocated timeout...
+```
+This is a generic message that could indicate a number of configuration problems. Check all of these:
+- Be sure you have specified the `DC_API_KEY` environment variable.
+- Be sure you have specified the correct service account.
+- Try increasing the health check timeout.

diff --git a/mcp/run_tools.md b/mcp/run_tools.md
index 136122da8..5cda4c158 100644
--- a/mcp/run_tools.md
+++ b/mcp/run_tools.md
@@ -222,7 +222,7 @@ To configure Gemini CLI to recognize the Data Commons server, edit the relevant
 
 {:.no_toc}
-### Configure to connect to a remote server
+### Configure to connect to a remote server {#gemini-cli-remote}
 
 1. Start up the MCP server in standalone mode, as described in [Run a standalone server](#run-a-standalone-server).
 1. In the `settings.json` file, replace the `datacommons-mcp` specification as follows:
@@ -298,9 +298,8 @@ By default, the agent will spawn a local server and connect to it over Stdio. If
 
 1. Enter your [queries](#sample-queries) at the `User` prompt in the terminal.
-{: #remote}
 {:.no_toc}
-### Configure to connect to a remote server
+### Configure to connect to a remote server {#remote}
 
 If you want to connect to a remote MCP server, follow this procedure before starting the agent:

From 32bacd966014e2f9e75a0be4ec9bd7908409ddd2 Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Mon, 12 Jan 2026 13:36:44 -0800
Subject: [PATCH 067/121] formatting fixes

---
 custom_dc/deploy_cloud.md     |  4 +-
 custom_dc/mcp_server_cloud.md | 97 +++++++++++++++++++----------------
 2 files changed, 53 insertions(+), 48 deletions(-)

diff --git a/custom_dc/deploy_cloud.md b/custom_dc/deploy_cloud.md
index e450eb9df..744360c22 100644
--- a/custom_dc/deploy_cloud.md
+++ b/custom_dc/deploy_cloud.md
@@ -367,9 +367,6 @@ You need to restart the services container every time you make changes to the co
 
-
-
-
 ### View your running application {#view-app}
 
@@ -404,6 +401,7 @@ To create additional deployments:
    cp terraform.tfvars terraform_prod.tfvars
    ```
    > Tip: You may wish to rename the original `terraform.tfvars` to something more descriptive as well.
+
 1. Do any of the following:
    - If you intend to run the new deployment in a different GCP project, edit the `project_id` variable and specify the project ID.
    - If you intend to run the new deployment in the same GCP project, edit the `namespace` variable to name it according to the environment you are creating, e.g. `-prod`. When you run the deployment, all created services will use the new namespace.

diff --git a/custom_dc/mcp_server_cloud.md b/custom_dc/mcp_server_cloud.md
index c343ceae8..38e278eb6 100644
--- a/custom_dc/mcp_server_cloud.md
+++ b/custom_dc/mcp_server_cloud.md
@@ -5,28 +5,31 @@ nav_order: 9
 parent: Build your own Data Commons
 ---
 
+{:.no_toc}
+# Run an MCP server in Google Cloud
+
 If you have built a custom agent or Gemini CLI extension which you want to make publicly available, this page describes how to run the [Data Commons MCP server](https://pypi.org/project/datacommons-mcp/) in the cloud, using Google Cloud Run.
 
 Since setting up an MCP server is a simple, one-time setup, there's no need to use Terraform to manage it. Data Commons provides a prebuilt Docker image in the Artifact Registry, so you only need to set up a new Cloud Run service to point to it.
 
 ## Prebuilt images
 
-There are several versions of the image available, viewable at . Most likely you will want to pick a specific version (identified by a version tag) to ensure that changes introduced by the Data Commons team don't break your application.
+There are several versions of the image available, viewable at . We recommend that you choose a production version with a specific version number, to ensure that changes introduced by the Data Commons team don't break your application.
 ## Before you start: decide on a hosting model
 
 There are several ways you can host the MCP server in Cloud Run, namely:
 
 - As a standalone service. In this case, any client simply connects to it over HTTP, including your own MCP agent running as a separate Cloud Run service or locally. You can choose whether to make the internal Cloud Run app URL publicly available, or whether to put a load balancer in front of the service and map a domain name.
-- As a ["sidecar"](https://docs.cloud.google.com/run/docs/deploying#sidecars) to an MCP client. If you are hosting your own MCP client in Cloud Run as well, this may be a useful option. In this case, the MCP server is not directly addressable; all external connections are managed by the client.
+- As a ["sidecar"](https://docs.cloud.google.com/run/docs/deploying#sidecars){: target="_blank"} to an MCP client. If you are hosting your own MCP client in Cloud Run as well, this may be a useful option. In this case, the MCP server is not directly addressable; all external connections are managed by the client.
 
-In this page, we provide a procedure for running the Data Commons MCP server as a standalone container. If you would go with the sidecar option, please see (Deploying multiple containers to a service (sidecars))[https://docs.cloud.google.com/run/docs/deploying#sidecars]{: target="_blank"} for additional requirements (e.g. health-checks) and steps.
+In this page, we provide steps for running the Data Commons MCP server as a standalone container. If you want to go with the sidecar option, please see [Deploying multiple containers to a service (sidecars)](https://docs.cloud.google.com/run/docs/deploying#sidecars){: target="_blank"} for additional requirements and setup procedures.
 
 ## Prerequisites
 
-The following procedures assume that you have set up the following Google Cloud Platform services:
-- Service accounts. These are created when you run the [`website/deploy/terraform-custom-datacommons/setup.sh`](deploy_cloud.md#accounts) script
-- A Google Cloud Secret Manager secret for storying your Data Commons API key. This is created the first time you [run Terraform](deploy_cloud#terraform).
+The following procedures assume that you have set up the following Google Cloud Platform services, using the [Terraform scripts](deploy_cloud.md#terraform):
+- A service account and roles.
+- A Google Cloud Secret Manager secret for storying your Data Commons API key.
 
 ## Create a Cloud Run Service for the MCP server
 
@@ -37,53 +40,57 @@ The following procedures assume that you have set up the following Google Cloud
 
-1. Go to the https://console.cloud.google.com/run/services page for your project.
-1. Click Deploy container.
-1. In the Container image URL field, click Select.
-1. In the Artifact Registry panel that appears in the right side of the window, that appears, click Change.
-1. In the project search bar enter datcom-ci and click on the link that appears.
-1. Expand gcr.io/datcom-ci and expand datacommons-mcp=server.
-1. From the list of images, select an image with a version number tag of a production image.
-1. Under Configure, select the desired region for the service, e.g. us-central1.
-1. Under Service scaling, enter 10 for the maximum number of instances.
-1. Under Requests, increase the request timeout to 600.
-1. Under Revision scaling, enter 10 for the maximum number of instances.
-1. Expand Containers, Networking, Security.
-1. Click the Variables & secrets tab.
-1. Under Environment variables, click Add variable and set the following variables:
-   - name: DC_TYPE, value: custom
-   - name: CUSTOM_DC_URL, value: YOUR_INSTANCE_URL
-1. Under Secrets exposed as environment variables, click Reference a secret, and DC_API_KEY to the Secret Manager [secret previously created by Terraform](deploy_cloud.md#terraform), in the form NAMESPACE-datacommons-dc-api-key-FINGERPRINT.
-1. Click Done.
-1. Click the Security tab. From the Service account field, select the service account for your namespace and project.
-1. Click Create. If correctly configured, the service will deploy automatically.
+1. Go to the https://console.cloud.google.com/run/services page for your project.
+1. Click Deploy container.
+1. In the Container image URL field, click Select.
+1. In the Artifact Registry panel that appears on the right side of the window, click Change.
+1. In the project search bar, enter datcom-ci and click on the link that appears.
+1. Expand gcr.io/datcom-ci and expand datacommons-mcp-server.
+1. From the list of images, select a production image, e.g. production-v1.1.4.
+1. Under Configure, select the desired region for the service, e.g. us-central1.
+1. Under Service scaling, enter 10 for the maximum number of instances.
+1. Under Requests, increase the request timeout to 600.
+1. Under Revision scaling, enter 10 for the maximum number of instances.
+1. Expand Containers, Networking, Security.
+1. Click the Variables & secrets tab.
+1. Under Environment variables, click Add variable and set the following variables:
+   - name: DC_TYPE, value: custom
+   - name: CUSTOM_DC_URL, value: YOUR_INSTANCE_URL
+1. Under Secrets exposed as environment variables, click Reference a secret.
+1. In the Name field, enter DC_API_KEY, and from the Secret field, select the secret previously created by the Terraform scripts. It is in the form NAMESPACE-datacommons-dc-api-key-FINGERPRINT.
+1. In the Version field, select the desired version, e.g. latest.
+1. Click Done.
+1. Click the Security tab. From the Service account field, select the service account for your namespace and project, previously created by the Terraform scripts. It is in the form NAMESPACE-datacommons-sa@PROJECT_ID.iam.gserviceaccount.com.
+1. Click Create. If correctly configured, the service will deploy automatically. It may take several minutes to start up.
 
 1. If you haven't recently refreshed your Google Cloud credentials, run `gcloud auth application-default login` and authenticate.
 1. From any local directory, run the following command:
 gcloud run deploy datacommons-mcp-server --image CONTAINER_IMAGE_URL \
-        --service-account=SERVICE_ACCOUNT --region REGION --platform managed --allow-unauthenticated --timeout=10m \
+        --service-account SERVICE_ACCOUNT --region REGION \
+        --allow-unauthenticated --timeout=10m \
         --set-secrets="DC_API_KEY=SECRET_NAME:latest" \
         --set-env-vars="DC_TYPE=custom" --set-env-vars="CUSTOM_DC_URL=INSTANCE_URL" \
         --min-instances=0
 
-   - The container image URL is gcr.io/datcom-ci/datacommons-mcp-server:TAG. The tag is the tag should be a version number of a production image, e.g. v1.3.3.
-   - The service account was created when you ran Terraform. It is in the form NAMESPACEdatacommons-sa@PROJECT_ID.iam.gserviceaccount.com.
+   - The container image URL is gcr.io/datcom-ci/datacommons-mcp-server:TAG. The tag should be a production image with a version number, e.g. production-v1.1.4.
+   - The service account was created when you ran Terraform. It is in the form NAMESPACE-datacommons-sa@PROJECT_ID.iam.gserviceaccount.com.
    - The region is the Cloud region where you want to run the service, e.g. us-central1.
-   - The secret name is the one created when you ran the Terraform script, in the form NAMESPACE-datacommons-dc-api-key-FINGERPRINT. If you're not sure about the name or fingerprint, go to to {: target="_blank"} for your project and look it up.
-1. To view the startup status, run the following command:
-   gcloud beta run jobs logs tail SERVICE_NAME
-
+
  • The secret name is the one created when you ran the Terraform scripts, in the form NAMESPACE-datacommons-dc-api-key-FINGERPRINT. If you're not sure about the name or fingerprint, go to https://console.cloud.google.com/security/secret-manager for your project and look it up.
  • + + To view the startup status, run the following command: +
    gcloud beta run jobs logs tail datacommons-mcp-server
    +
    + + ## Connect to the server from a remote client For details, see the following pages: From 44395c9806086d71b4eb730ebb5b169dd75db55c Mon Sep 17 00:00:00 2001 From: Kara Moscoe Date: Mon, 12 Jan 2026 13:41:39 -0800 Subject: [PATCH 068/121] add reference to custom DC doc --- mcp/run_tools.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/mcp/run_tools.md b/mcp/run_tools.md index 5cda4c158..32428786f 100644 --- a/mcp/run_tools.md +++ b/mcp/run_tools.md @@ -342,6 +342,8 @@ Here are some examples of such queries: ## Run a standalone server +The following procedure starts the MCP server in a local environment. To run the server in Google Cloud against a Custom Data Commons instance, see [Run an MCP server in Google Cloud](/custom_dc/mcp_server_cloud.html) + 1. Ensure you've set up the relevant server [environment variables](#configure-environment-variables). If you're using a `.env` file, go to the directory where the file is stored. 1. Run:
    
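The remote-connection steps referenced above need the server's HTTP URL. A sketch of how a client-side endpoint URL might be assembled — both the Cloud Run app URL and the `/mcp` path shown here are illustrative assumptions; use the URL that `gcloud run services describe` reports for your service, and the path your server version actually serves:

```shell
# Hypothetical Cloud Run app URL (or your load-balanced custom domain).
APP_URL="https://datacommons-mcp-server-abc123-uc.a.run.app"

# Assumed endpoint path; verify against your server version's docs.
MCP_ENDPOINT="${APP_URL}/mcp"

echo "${MCP_ENDPOINT}"
```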
    From c131f4287638bfba138f9d4d2db829df39cb08cb Mon Sep 17 00:00:00 2001
    From: Kara Moscoe 
    Date: Mon, 12 Jan 2026 13:52:18 -0800
    Subject: [PATCH 069/121] HTML fix
    
    ---
     custom_dc/deploy_cloud.md | 14 +++++---------
     1 file changed, 5 insertions(+), 9 deletions(-)
    
    diff --git a/custom_dc/deploy_cloud.md b/custom_dc/deploy_cloud.md
    index 744360c22..6bce8c5c4 100644
    --- a/custom_dc/deploy_cloud.md
    +++ b/custom_dc/deploy_cloud.md
    @@ -355,17 +355,13 @@ You need to restart the services container every time you make changes to the co
                
 1. Click Deploy. It will take several minutes for the service to start. You can click the Logs tab to view the progress.
 
-1. From any local directory, run the following command:
-
-   gcloud run deploy SERVICE_NAME --image CONTAINER_IMAGE_URL
-
-1. To view the startup status, run the following command:
+From any local directory, run the following command:
+
+gcloud run deploy SERVICE_NAME --image CONTAINER_IMAGE_URL
+
+To view the startup status, run the following command:
 
 gcloud beta run jobs logs tail SERVICE_NAME
-
-   - The service name is NAMESPACE-datacommons-web-service.
-   - The container image URL is the name of the package you created in the previous step.
-
+
### View your running application {#view-app}

From 56404480e9f7f2e33d4c08971f24667b02e713e6 Mon Sep 17 00:00:00 2001
From: kmoscoe <165203920+kmoscoe@users.noreply.github.com>
Date: Mon, 12 Jan 2026 13:53:18 -0800
Subject: [PATCH 070/121] Update custom_dc/mcp_server_cloud.md

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
---
 custom_dc/mcp_server_cloud.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/custom_dc/mcp_server_cloud.md b/custom_dc/mcp_server_cloud.md
index 38e278eb6..c7666b437 100644
--- a/custom_dc/mcp_server_cloud.md
+++ b/custom_dc/mcp_server_cloud.md
@@ -29,7 +29,7 @@ In this page, we provide steps for running the Data Commons MCP server as a stan
 
 The following procedures assume that you have set up the following Google Cloud Platform services, using the [Terraform scripts](deploy_cloud.md#terraform):
 - A service account and roles.
-- A Google Cloud Secret Manager secret for storying your Data Commons API key.
+- A Google Cloud Secret Manager secret for storing your Data Commons API key.
 
 ## Create a Cloud Run Service for the MCP server

From cfd71ad4f4dea959afd81d49a009718c1aaa9cc5 Mon Sep 17 00:00:00 2001
From: kmoscoe <165203920+kmoscoe@users.noreply.github.com>
Date: Mon, 12 Jan 2026 13:53:53 -0800
Subject: [PATCH 071/121] Update custom_dc/mcp_server_cloud.md

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
---
 custom_dc/mcp_server_cloud.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/custom_dc/mcp_server_cloud.md b/custom_dc/mcp_server_cloud.md
index c7666b437..fc8086280 100644
--- a/custom_dc/mcp_server_cloud.md
+++ b/custom_dc/mcp_server_cloud.md
@@ -84,7 +84,7 @@ The following procedures assume that you have set up the following Google Cloud
 - The secret name is the one created when you ran the Terraform scripts, in the form NAMESPACE-datacommons-dc-api-key-FINGERPRINT. If you're not sure about the name or fingerprint, go to https://console.cloud.google.com/security/secret-manager for your project and look it up.
 - To view the startup status, run the following command:
 
-   gcloud beta run jobs logs tail datacommons-mcp-server
+   gcloud run services logs tail datacommons-mcp-server --region REGION
 
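The replacement log-tailing command above takes the service name and a region. A minimal dry-run wrapper, assuming a POSIX shell; the region value `us-central1` is a placeholder assumption, not part of the patch:

```shell
#!/bin/sh
# Compose the log-tailing command from variables so the substitution is easy
# to review before running. SERVICE matches the patch; REGION is a placeholder.
SERVICE="datacommons-mcp-server"
REGION="us-central1"
CMD="gcloud run services logs tail ${SERVICE} --region ${REGION}"
# Print for review; execute with: eval "${CMD}"
echo "${CMD}"
```

Echoing the composed command first makes it easy to confirm the region before tailing logs against a live service.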
From 6eabd2030c5f34f4fccaeb991637a923fd1e54a6 Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Mon, 12 Jan 2026 13:55:37 -0800
Subject: [PATCH 072/121] Remove a redundant step

---
 custom_dc/mcp_server_cloud.md | 1 -
 1 file changed, 1 deletion(-)

diff --git a/custom_dc/mcp_server_cloud.md b/custom_dc/mcp_server_cloud.md
index 38e278eb6..ab3255d50 100644
--- a/custom_dc/mcp_server_cloud.md
+++ b/custom_dc/mcp_server_cloud.md
@@ -51,7 +51,6 @@ The following procedures assume that you have set up the following Google Cloud
 - Under Configure, select the desired region for the service, e.g. us-central1.
 - Under Service scaling, enter 10 for the maximum number of instances.
 - Under Requests, increase the request timeout to 600.
-- Under Revision scaling, enter 10 for the maximum number of instances.
 - Expand Containers, Networking, Security.
 - Click the Variables & secrets tab.
 - Under Environment variables, click Add variable and set the following variables:

From 97a8139118c2b3da2e2850ab2ed04cc9cdbbe89c Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Mon, 12 Jan 2026 13:59:15 -0800
Subject: [PATCH 073/121] Merge changes from Code Assist

---
 custom_dc/mcp_server_cloud.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/custom_dc/mcp_server_cloud.md b/custom_dc/mcp_server_cloud.md
index 92267b5aa..563592d9d 100644
--- a/custom_dc/mcp_server_cloud.md
+++ b/custom_dc/mcp_server_cloud.md
@@ -62,7 +62,7 @@ The following procedures assume that you have set up the following Google Cloud
 - In the Name field, enter DC_API_KEY, and from the Secret field, select the secret previously created by the Terraform scripts. It is in the form NAMESPACE-datacommons-dc-api-key-FINGERPRINT.
 - In the Version field, select the desired version, e.g. latest.
 - Click Done.
-- Click the Security tab. From the Service account field, select the service account for your namespace and project, previously created by the Terraform scripts. It is in the form
+- Click the Security tab. From the Service account field, select the service account for your namespace and project, previously created by the Terraform scripts.
 - Click Create. If correctly configured, the service will deploy automatically. It may take several minutes to start up.
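The console steps above select a secret and a service account through the UI. A hypothetical CLI equivalent of the underlying permission — granting the runtime service account read access to the API-key secret — can be sketched as follows; every name is a placeholder, not a value from this patch:

```shell
#!/bin/sh
# Sketch: grant the Cloud Run service account access to the API-key secret.
# SECRET and SA are placeholder names following the NAMESPACE-... convention.
SECRET="my-namespace-datacommons-dc-api-key-abcd"
SA="my-namespace-sa@my-project.iam.gserviceaccount.com"
CMD="gcloud secrets add-iam-policy-binding ${SECRET} \
  --member serviceAccount:${SA} \
  --role roles/secretmanager.secretAccessor"
# Print for review; execute with: eval "${CMD}"
echo "${CMD}"
```

`roles/secretmanager.secretAccessor` is the role a Cloud Run service account needs in order to read a secret mounted via Variables & secrets.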
 

From 1f3d4c3087ce21e18deb528c7e8e0393ae62d496 Mon Sep 17 00:00:00 2001
From: Kara Moscoe
Date: Mon, 12 Jan 2026 14:27:13 -0800
Subject: [PATCH 074/121] Remove specifics of scaling etc. and refer to Cloud Run docs instead

---
 custom_dc/deploy_cloud.md     | 3 +--
 custom_dc/mcp_server_cloud.md | 9 ++++-----
 2 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/custom_dc/deploy_cloud.md b/custom_dc/deploy_cloud.md
index 6bce8c5c4..9bdff2a13 100644
--- a/custom_dc/deploy_cloud.md
+++ b/custom_dc/deploy_cloud.md
@@ -51,8 +51,7 @@ The first time you run it, may be prompted to specify a quota project for billin
     gcloud auth application-default set-quota-project PROJECT_ID
 
-{: #accounts}
-## One-time setup: Create service accounts and enable all APIs
+## One-time setup: Enable APIs
 
 `website/deploy/terraform-custom-datacommons/setup.sh` is a convenience script to set up all necessary Cloud APIs. To run it:
diff --git a/custom_dc/mcp_server_cloud.md b/custom_dc/mcp_server_cloud.md
index 563592d9d..bc7a69ec0 100644
--- a/custom_dc/mcp_server_cloud.md
+++ b/custom_dc/mcp_server_cloud.md
@@ -33,6 +33,8 @@ In this page, we provide steps for running the Data Commons MCP server as a stan
 
 ## Create a Cloud Run Service for the MCP server
 
+The following procedure sets up a bare-bones container service. To set additional options, such as request timeouts, instance replication, etc., please see [Configure Cloud Run services](https://docs.cloud.google.com/run/docs/configuring){: target="_blank"} for details.
+
 - Cloud Console
@@ -49,8 +51,6 @@ The following procedures assume that you have set up the following Google Cloud
 - Expand gcr.io/datcom-ci and expand datacommons-mcp-server.
 - From the list of images, select a production image, e.g. production-v1.1.4.
 - Under Configure, select the desired region for the service, e.g. us-central1.
-- Under Service scaling, enter 10 for the maximum number of instances.
-- Under Requests, increase the request timeout to 600.
 - Expand Containers, Networking, Security.
 - Click the Variables & secrets tab.
 - Under Environment variables, click Add variable and set the following variables:
@@ -71,10 +71,9 @@ The following procedures assume that you have set up the following Google Cloud
 - From any local directory, run the following command:
 
    gcloud run deploy datacommons-mcp-server --image CONTAINER_IMAGE_URL \
         --service-account SERVICE_ACCOUNT --region REGION \
-        --allow-unauthenticated --timeout=10m \
+        --allow-unauthenticated \
         --set-secrets="DC_API_KEY=SECRET_NAME:latest" \
-        --set-env-vars="DC_TYPE=custom" --set-env-vars="CUSTOM_DC_URL=INSTANCE_URL" \
-        --min-instances=0
+        --set-env-vars="DC_TYPE=custom" --set-env-vars="CUSTOM_DC_URL=INSTANCE_URL"
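The final form of the deploy command above can be composed from variables and echoed for review before execution. A dry-run sketch, assuming a POSIX shell; every value below is a placeholder, and the comma-joined `--set-env-vars` expresses the same two variables as the patch's repeated flags:

```shell
#!/bin/sh
# Compose the patched deploy command (no --timeout, no --min-instances)
# from placeholder values so it can be inspected before running.
IMAGE="gcr.io/datcom-ci/datacommons-mcp-server:production-v1.1.4"
SERVICE_ACCOUNT="my-sa@my-project.iam.gserviceaccount.com"
REGION="us-central1"
SECRET_NAME="my-namespace-datacommons-dc-api-key-abcd"
INSTANCE_URL="https://datacommons.example.com"

CMD="gcloud run deploy datacommons-mcp-server --image ${IMAGE} \
  --service-account ${SERVICE_ACCOUNT} --region ${REGION} \
  --allow-unauthenticated \
  --set-secrets=DC_API_KEY=${SECRET_NAME}:latest \
  --set-env-vars=DC_TYPE=custom,CUSTOM_DC_URL=${INSTANCE_URL}"

# Print for review; execute with: eval "${CMD}"
echo "${CMD}"
```

Echoing first makes it easy to confirm that the removed flags (`--timeout=10m`, `--min-instances=0`) are gone and that both environment variables are present before touching the live service.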