Skip to content

Add configurable NFT page size and slice responses #126

Merged
98 changes: 74 additions & 24 deletions .cursor/rules/110-new-mcp-tool.mdc
@@ -353,15 +353,50 @@ return build_tool_response(

#### 5. Handling Pagination with Opaque Cursors (`return_type: ToolResponse[list[dict]]`)

For tools that return paginated data, do not expose individual pagination parameters (like `page`, `offset`, `items_count`) in the tool's signature. Instead, use a single, opaque `cursor` string. This improves robustness and saves LLM context.

**Context Conservation Strategy:**
Many blockchain APIs return large datasets (50+ items per page) that would overwhelm LLM context. To balance network efficiency with context conservation, tools should:

- Fetch larger pages from APIs (typically 50 items) for network efficiency
- Return smaller slices to the LLM (typically 10-20 items) to conserve context
- Generate pagination objects that allow the LLM to request additional pages when needed
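The strategy above can be sketched in a few lines (illustrative only — the real slicing lives in the `create_items_pagination` helper in `tools/common.py`; the names here are invented):

```python
def slice_for_llm(items: list, page_size: int = 10) -> tuple[list, bool]:
    """Return the slice shown to the LLM and whether more items remain."""
    return items[:page_size], len(items) > page_size

fetched = [{"id": i} for i in range(50)]  # pretend: one 50-item page from the API
shown, has_more = slice_for_llm(fetched, page_size=10)
# `shown` holds 10 items; `has_more` signals that a pagination object is needed
```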

**A. Handling the Incoming Cursor:**
Your tool should accept an optional `cursor` argument. If it's provided, use the `apply_cursor_to_params` helper from `tools/common.py`. This helper centralizes the logic for decoding the cursor and handling potential `InvalidCursorError` exceptions, raising a user-friendly `ValueError` automatically.
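The opaque-cursor idea can be illustrated with a minimal round trip (a sketch only — the actual `encode_cursor`/`decode_cursor` helpers and `InvalidCursorError` handling in `tools/common.py` may use a different encoding):

```python
import base64
import json

def encode_cursor(params: dict) -> str:
    # Pack pagination params into a single opaque, URL-safe string.
    return base64.urlsafe_b64encode(json.dumps(params).encode()).decode()

def decode_cursor(cursor: str) -> dict:
    # Recover the original params from the opaque string.
    return json.loads(base64.urlsafe_b64decode(cursor.encode()))

cursor = encode_cursor({"block_number": 123, "index": 5, "items_count": 50})
# The LLM passes this single string back; the tool decodes it into API params.
```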

**B. Generating Structured Pagination:**
**ALWAYS use the `create_items_pagination` helper** from `tools/common.py` instead of manually creating pagination objects. This function implements the response slicing strategy described above, while also ensuring consistency and handling edge cases properly.
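A rough sketch of the helper's behavior (heavily simplified — the real `create_items_pagination` builds full `PaginationInfo` objects and handles edge cases):

```python
def items_pagination_sketch(items: list, page_size: int, cursor_extractor):
    """Slice items and derive next-page cursor params from the last shown item."""
    sliced = items[:page_size]
    cursor_params = None
    if len(items) > page_size:
        cursor_params = cursor_extractor(sliced[-1])
    return sliced, cursor_params

items = [{"id": i} for i in range(25)]
sliced, cursor_params = items_pagination_sketch(items, 10, lambda it: {"some_id": it["id"]})
# 10 items are returned; cursor_params seeds the cursor for the next call
```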

**C. Page Size Configuration:**
For each new paginated tool, you must add a dedicated page size configuration variable:

1. **Add to `blockscout_mcp_server/config.py`**:

```python
class ServerConfig(BaseSettings):
    # Existing page sizes
    nft_page_size: int = 10
    logs_page_size: int = 10
    advanced_filters_page_size: int = 10

    # Add your new page size
    my_tool_page_size: int = 15  # Adjust based on typical item size
```

2. **Add to `.env.example`**:

```shell
BLOCKSCOUT_MY_TOOL_PAGE_SIZE=15
```

3. **Add to `Dockerfile`**:

```dockerfile
ENV BLOCKSCOUT_MY_TOOL_PAGE_SIZE="15"
```
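As a sanity check, the override chain can be sketched like this (assuming the settings class maps field names to `BLOCKSCOUT_`-prefixed environment variables; `resolve_page_size` is an invented stand-in, not a project function):

```python
import os

def resolve_page_size(field_name: str, default: int) -> int:
    """Mimic how a settings field picks up a BLOCKSCOUT_<NAME> env override."""
    raw = os.environ.get("BLOCKSCOUT_" + field_name.upper())
    return int(raw) if raw is not None else default

os.environ["BLOCKSCOUT_MY_TOOL_PAGE_SIZE"] = "25"
override = resolve_page_size("my_tool_page_size", 15)    # env value wins
fallback = resolve_page_size("other_tool_page_size", 15)  # no env var: default
```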

**D. Tool Description Guidelines:**
For paginated tools, **MUST** include this exact notice in the docstring: `**SUPPORTS PAGINATION**: If response includes 'pagination' field, use the provided next_call to get additional pages.`

**Complete Example Pattern:**
```python
from pydantic import Field
from blockscout_mcp_server.tools.common import (
    make_blockscout_request,
    get_blockscout_base_url,
    apply_cursor_to_params,
    build_tool_response,
    create_items_pagination,
)
from blockscout_mcp_server.models import ToolResponse
from blockscout_mcp_server.config import config

def extract_cursor_params(item: dict) -> dict:
    """Extract cursor parameters from an item for pagination continuation.

    This function determines which fields from the last item should be used
    as cursor parameters for the next page request. The returned dictionary
    will be encoded as an opaque cursor string.
    """
    return {
        "some_id": item.get("id"),  # Primary pagination key
        "timestamp": item.get("timestamp"),  # Secondary sort key if needed
        "items_count": 50,  # Page size for next request
    }

async def paginated_tool_name(
    chain_id: Annotated[str, Field(description="The ID of the blockchain")],
    # ... (remaining parameters, docstring, and cursor handling are collapsed in the diff view) ...

    # 1. Fetch a full page from the API
    base_url = await get_blockscout_base_url(chain_id)
    response_data = await make_blockscout_request(base_url=base_url, api_path=api_path, params=query_params)

    # 2. Process/transform items if needed
    items = response_data.get("items", [])
    processed_items = process_items(items)  # Your transformation logic here

    # 3. Use create_items_pagination helper to handle slicing and pagination
    sliced_items, pagination = create_items_pagination(
        items=processed_items,
        page_size=config.my_tool_page_size,  # Use the page size you configured above
        tool_name="paginated_tool_name",
        next_call_base_params={
            "chain_id": chain_id,
            "address": address,
            # Include other non-cursor parameters that should be preserved
        },
        cursor_extractor=extract_cursor_params,
        force_pagination=False,  # Set to True if you know there are more pages despite few items
    )

    return build_tool_response(data=sliced_items, pagination=pagination)
```

#### 6. Simplifying Address Objects to Save Context (`return_type: ToolResponse[dict]`)

**Rationale:** Many Blockscout API endpoints return addresses as complex JSON objects containing the hash, name, tags, etc. To conserve LLM context and encourage compositional tool use, we must simplify these objects into a single address string. If the AI needs more details about an address, it should be guided to use the dedicated `get_address_info` tool.
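A minimal sketch of the simplification (the field names below are illustrative, and the project's actual helper may differ):

```python
def simplify_address(value):
    """Collapse a Blockscout-style address object to its bare hash string."""
    if isinstance(value, dict) and "hash" in value:
        return value["hash"]
    return value  # already a plain string (or something else): pass through

raw = {
    "hash": "0x1234567890abcdef1234567890abcdef12345678",  # made-up address
    "name": "Some Contract",
    "is_contract": True,
}
simplified = simplify_address(raw)
```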
35 changes: 33 additions & 2 deletions .cursor/rules/210-unit-testing-guidelines.mdc
@@ -7,6 +7,37 @@ alwaysApply: false

This document provides detailed guidelines for writing effective unit tests for MCP tool functions and related components.

## **HIGH PRIORITY: Keep Unit Tests Simple and Focused**

**Each unit test must be narrow and specific.** A single test should verify one specific behavior or scenario. If a test attempts to cover multiple scenarios or different groups of input parameters, **split it into separate tests**.

**Simple tests are:**

- Easier to understand and maintain
- Faster to debug when they fail
- More reliable and less prone to false positives
- Better at pinpointing the exact cause of failures

**Example - Split complex tests:**

```python
# BAD: One test covering multiple scenarios
def test_lookup_token_complex():
    # Tests both success and error cases and
    # multiple input parameter combinations.
    # Hard to debug when it fails.
    ...

# GOOD: Separate focused tests
def test_lookup_token_success():
    ...  # Tests only the success scenario

def test_lookup_token_invalid_symbol():
    ...  # Tests only the invalid-symbol error case

def test_lookup_token_network_error():
    ...  # Tests only network error handling
```

## Key Testing Patterns & Guidelines

### A. Use the `mock_ctx` Fixture
@@ -16,6 +47,7 @@ A reusable `pytest` fixture named `mock_ctx` is defined in `tests/conftest.py`.
**DO NOT** create a manual `MagicMock` for the context within your test functions.

**Correct Usage:**

```python
import pytest

# ... (the rest of the example is collapsed in the diff view) ...
```

@@ -38,15 +70,14 @@ For tools that return a `ToolResponse` object containing structured data, **DO NOT** ...

However, the approach depends on the complexity of the tool.

### C. Handling Repetitive Data in Assertions (DAMP vs. DRY)

When testing tools that transform a list of items (e.g., `lookup_token_by_symbol`), explicitly writing out the entire `expected_result` can lead to large, repetitive, and hard-to-maintain test code.

In these cases, it is better to **programmatically generate the `expected_result`** from the `mock_api_response`. This keeps the test maintainable while still explicitly documenting the transformation logic itself.

**Correct Usage:**

```python
import copy
from blockscout_mcp_server.models import ToolResponse

# ... (the rest of the example is collapsed in the diff view) ...
```

10 changes: 10 additions & 0 deletions .env.example
@@ -10,5 +10,15 @@ BLOCKSCOUT_CHAINSCOUT_TIMEOUT=15.0
BLOCKSCOUT_CHAIN_CACHE_TTL_SECONDS=1800
BLOCKSCOUT_PROGRESS_INTERVAL_SECONDS="15.0"

# The number of items to return per page for the nft_tokens_by_address tool.
BLOCKSCOUT_NFT_PAGE_SIZE=10

# The number of log items to return per page for get_address_logs and get_transaction_logs.
BLOCKSCOUT_LOGS_PAGE_SIZE=10

# The number of items to return per page for tools using the advanced filters endpoint.
BLOCKSCOUT_ADVANCED_FILTERS_PAGE_SIZE=10

BLOCKSCOUT_METADATA_URL="https://metadata.services.blockscout.com"
BLOCKSCOUT_METADATA_TIMEOUT="30.0"

21 changes: 7 additions & 14 deletions AGENTS.md
@@ -13,7 +13,7 @@ mcp-server/
│ ├── models.py # Defines standardized Pydantic models for all tool responses
│ └── tools/ # Sub-package for tool implementations
│ ├── __init__.py # Initializes the tools sub-package
│ ├── common.py # Shared utilities and common functionality for all tools
│ ├── get_instructions.py # Implements the __get_instructions__ tool
│ ├── ens_tools.py # Implements ENS-related tools
│ ├── search_tools.py # Implements search-related tools (e.g., lookup_token_by_symbol)
@@ -108,6 +108,9 @@ mcp-server/
* `BLOCKSCOUT_CHAINSCOUT_TIMEOUT`: Timeout for Chainscout API requests.
* `BLOCKSCOUT_CHAIN_CACHE_TTL_SECONDS`: Time-to-live for chain resolution cache.
* `BLOCKSCOUT_PROGRESS_INTERVAL_SECONDS`: Interval for periodic progress updates in long-running operations.
* `BLOCKSCOUT_NFT_PAGE_SIZE`: Page size for NFT token queries (default: 10).
* `BLOCKSCOUT_LOGS_PAGE_SIZE`: Page size for address logs queries (default: 10).
* `BLOCKSCOUT_ADVANCED_FILTERS_PAGE_SIZE`: Page size for advanced filter queries (default: 10).

2. **`tests/` (Test Suite)**
* This directory contains the complete test suite for the project, divided into two categories:
@@ -164,19 +167,9 @@ mcp-server/
* **`tools/` (Sub-package for Tool Implementations)**
* **`__init__.py`**: Marks `tools` as a sub-package. May re-export tool functions for easier import into `server.py`.
* **`common.py`**:
* Provides shared utilities and common functionality for all MCP tools.
* Handles API communication, chain resolution, pagination, data processing, and error handling.
* Implements standardized patterns used across the tool ecosystem.
* **Individual Tool Modules** (e.g., `ens_tools.py`, `transaction_tools.py`):
* Each file will group logically related tools.
* Each tool will be implemented as an `async` Python function.
3 changes: 3 additions & 0 deletions Dockerfile
@@ -24,5 +24,8 @@ ENV BLOCKSCOUT_CHAINSCOUT_URL="https://chains.blockscout.com"
ENV BLOCKSCOUT_CHAINSCOUT_TIMEOUT="15.0"
ENV BLOCKSCOUT_CHAIN_CACHE_TTL_SECONDS="1800"
ENV BLOCKSCOUT_PROGRESS_INTERVAL_SECONDS="15.0"
ENV BLOCKSCOUT_NFT_PAGE_SIZE="10"
ENV BLOCKSCOUT_LOGS_PAGE_SIZE="10"
ENV BLOCKSCOUT_ADVANCED_FILTERS_PAGE_SIZE="10"

CMD ["python", "-m", "blockscout_mcp_server"]