
Commit 29fedf8

Add configurable NFT page size and slice responses (#126)

1 parent cb4f215 commit 29fedf8

20 files changed: +1776 −372 lines changed

.cursor/rules/110-new-mcp-tool.mdc

Lines changed: 74 additions & 24 deletions
@@ -353,15 +353,50 @@ return build_tool_response(

#### 5. Handling Pagination with Opaque Cursors (`return_type: ToolResponse[list[dict]]`)

-For tools that return paginated data, do not expose individual pagination parameters (like `page`, `offset`, `items_count`) in the tool's signature. Instead, use a single, opaque `cursor` string. This improves robustness and saves LLM context. The implementation involves both handling an incoming cursor and generating the next one.
+For tools that return paginated data, do not expose individual pagination parameters (like `page`, `offset`, `items_count`) in the tool's signature. Instead, use a single, opaque `cursor` string. This improves robustness and saves LLM context.
+
+**Context Conservation Strategy:**
+Many blockchain APIs return large datasets (50+ items per page) that would overwhelm LLM context. To balance network efficiency with context conservation, tools should:
+
+- Fetch larger pages from APIs (typically 50 items) for network efficiency
+- Return smaller slices to the LLM (typically 10-20 items) to conserve context
+- Generate pagination objects that allow the LLM to request additional pages when needed
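Not part of the commit — the fetch-large/return-small strategy above can be sketched in a few lines. The names (`slice_for_llm`, the two constants) are illustrative only, not the server's real helpers:

```python
# Illustrative sketch: fetch a large page from the upstream API, but hand
# the LLM only a small slice plus a flag saying whether more data exists.
API_PAGE_SIZE = 50   # items fetched per upstream request (network efficiency)
LLM_PAGE_SIZE = 10   # items actually returned to the LLM (context conservation)

def slice_for_llm(fetched_items: list, page_size: int = LLM_PAGE_SIZE) -> tuple[list, bool]:
    """Return the slice to show the LLM and whether more pages exist."""
    visible = fetched_items[:page_size]
    has_more = len(fetched_items) > page_size
    return visible, has_more
```

When `has_more` is true, a tool would attach a pagination object pointing at the next page, as described below.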

**A. Handling the Incoming Cursor:**
Your tool should accept an optional `cursor` argument. If it's provided, use the `apply_cursor_to_params` helper from `tools/common.py`. This helper centralizes the logic for decoding the cursor and handling potential `InvalidCursorError` exceptions, raising a user-friendly `ValueError` automatically.


**B. Generating Structured Pagination:**
-In your response, check for `next_page_params` from the API. If they exist, create `PaginationInfo` and `NextCallInfo` objects with the structured parameters for the next call.
+**ALWAYS use the `create_items_pagination` helper** from `tools/common.py` instead of manually creating pagination objects. This function implements the response slicing strategy described above, while also ensuring consistency and handling edge cases properly.
+
+**C. Page Size Configuration:**
+For each new paginated tool, you must add a dedicated page size configuration variable:
+
+1. **Add to `blockscout_mcp_server/config.py`**:
+
+   ```python
+   class ServerConfig(BaseSettings):
+       # Existing page sizes
+       nft_page_size: int = 10
+       logs_page_size: int = 10
+       advanced_filters_page_size: int = 10
+
+       # Add your new page size
+       my_tool_page_size: int = 15  # Adjust based on typical item size
+   ```
+
+2. **Add to `.env.example`**:
+
+   ```shell
+   BLOCKSCOUT_MY_TOOL_PAGE_SIZE=15
+   ```
+
+3. **Add to `Dockerfile`**:
+
+   ```dockerfile
+   ENV BLOCKSCOUT_MY_TOOL_PAGE_SIZE="15"
+   ```

-**C. Tool Description Guidelines:**
+**D. Tool Description Guidelines:**
For paginated tools, **MUST** include this exact notice in the docstring: `**SUPPORTS PAGINATION**: If response includes 'pagination' field, use the provided next_call to get additional pages.`

**Complete Example Pattern:**
@@ -372,11 +407,25 @@ from pydantic import Field
from blockscout_mcp_server.tools.common import (
    make_blockscout_request,
    get_blockscout_base_url,
-    encode_cursor,
    apply_cursor_to_params,
-    build_tool_response
+    build_tool_response,
+    create_items_pagination,
)
-from blockscout_mcp_server.models import ToolResponse, PaginationInfo, NextCallInfo
+from blockscout_mcp_server.models import ToolResponse
+from blockscout_mcp_server.config import config
+
+def extract_cursor_params(item: dict) -> dict:
+    """Extract cursor parameters from an item for pagination continuation.
+
+    This function determines which fields from the last item should be used
+    as cursor parameters for the next page request. The returned dictionary
+    will be encoded as an opaque cursor string.
+    """
+    return {
+        "some_id": item.get("id"),  # Primary pagination key
+        "timestamp": item.get("timestamp"),  # Secondary sort key if needed
+        "items_count": 50,  # Page size for next request
+    }

async def paginated_tool_name(
    chain_id: Annotated[str, Field(description="The ID of the blockchain")],
@@ -397,26 +446,27 @@ async def paginated_tool_name(
    base_url = await get_blockscout_base_url(chain_id)
    response_data = await make_blockscout_request(base_url=base_url, api_path=api_path, params=query_params)

-    processed_items = process_items(response_data.get("items", []))
-
-    # 2. Generate structured pagination
-    pagination = None
-    next_page_params = response_data.get("next_page_params")
-    if next_page_params:
-        next_cursor = encode_cursor(next_page_params)
-        pagination = PaginationInfo(
-            next_call=NextCallInfo(
-                tool_name="paginated_tool_name",
-                params={
-                    "chain_id": chain_id,
-                    "address": address,
-                    "cursor": next_cursor
-                }
-            )
-        )
+    # 2. Process/transform items if needed
+    items = response_data.get("items", [])
+    processed_items = process_items(items)  # Your transformation logic here
+
+    # 3. Use create_items_pagination helper to handle slicing and pagination
+    sliced_items, pagination = create_items_pagination(
+        items=processed_items,
+        page_size=config.my_tool_page_size,  # Use the page size you configured above
+        tool_name="paginated_tool_name",
+        next_call_base_params={
+            "chain_id": chain_id,
+            "address": address,
+            # Include other non-cursor parameters that should be preserved
+        },
+        cursor_extractor=extract_cursor_params,
+        force_pagination=False,  # Set to True if you know there are more pages despite few items
+    )

-    return build_tool_response(data=processed_items, pagination=pagination)
+    return build_tool_response(data=sliced_items, pagination=pagination)
```
+
#### 6. Simplifying Address Objects to Save Context (`return_type: ToolResponse[dict]`)

**Rationale:** Many Blockscout API endpoints return addresses as complex JSON objects containing the hash, name, tags, etc. To conserve LLM context and encourage compositional tool use, we must simplify these objects into a single address string. If the AI needs more details about an address, it should be guided to use the dedicated `get_address_info` tool.

.cursor/rules/210-unit-testing-guidelines.mdc

Lines changed: 33 additions & 2 deletions
@@ -7,6 +7,37 @@ alwaysApply: false

This document provides detailed guidelines for writing effective unit tests for MCP tool functions and related components.

+## **HIGH PRIORITY: Keep Unit Tests Simple and Focused**
+
+**Each unit test must be narrow and specific.** A single test should verify one specific behavior or scenario. If a test attempts to cover multiple scenarios or different groups of input parameters, **split it into separate tests**.
+
+**Simple tests are:**
+
+- Easier to understand and maintain
+- Faster to debug when they fail
+- More reliable and less prone to false positives
+- Better at pinpointing the exact cause of failures
+
+**Example - Split complex tests:**
+
+```python
+# BAD: One test covering multiple scenarios
+def test_lookup_token_complex():
+    # Tests both success and error cases
+    # Tests multiple input parameter combinations
+    # Hard to debug when it fails
+
+# GOOD: Separate focused tests
+def test_lookup_token_success():
+    # Tests only the success scenario
+
+def test_lookup_token_invalid_symbol():
+    # Tests only invalid symbol error case
+
+def test_lookup_token_network_error():
+    # Tests only network error handling
+```
+
## Key Testing Patterns & Guidelines

### A. Use the `mock_ctx` Fixture
@@ -16,6 +47,7 @@ A reusable `pytest` fixture named `mock_ctx` is defined in `tests/conftest.py`.
**DO NOT** create a manual `MagicMock` for the context within your test functions.

**Correct Usage:**
+
```python
import pytest

@@ -38,15 +70,14 @@ For tools that return a `ToolResponse` object containing structured data, **DO N

However, the approach depends on the complexity of the tool.

-
-
### C. Handling Repetitive Data in Assertions (DAMP vs. DRY)

When testing tools that transform a list of items (e.g., `lookup_token_by_symbol`), explicitly writing out the entire `expected_result` can lead to large, repetitive, and hard-to-maintain test code.

In these cases, it is better to **programmatically generate the `expected_result`** from the `mock_api_response`. This keeps the test maintainable while still explicitly documenting the transformation logic itself.

**Correct Usage:**
+
```python
import copy
from blockscout_mcp_server.models import ToolResponse
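Not part of the commit — the diff truncates the "Correct Usage" snippet above, but the programmatic-expected-result idea can be sketched separately. The field names and the transformation here are hypothetical, not the real `lookup_token_by_symbol` schema:

```python
import copy

# Hypothetical mock API payload and transformation: the tool keeps only the
# address and renames "name" -> "token_name" for each item.
mock_api_response = {
    "items": [
        {"address": "0xabc", "name": "TokenA", "icon_url": "https://example/a.png"},
        {"address": "0xdef", "name": "TokenB", "icon_url": "https://example/b.png"},
    ]
}

def transform(item: dict) -> dict:
    return {"address": item["address"], "token_name": item["name"]}

# Derive the expectation from the mock instead of hand-writing it, so the
# test stays in sync when the fixture grows; deepcopy guards against the
# transformation mutating the shared mock.
expected_result = [transform(copy.deepcopy(i)) for i in mock_api_response["items"]]
```

The transformation itself stays explicit in the test file, which is the DAMP part; only the repetitive item-by-item expectation is generated.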

.env.example

Lines changed: 10 additions & 0 deletions
@@ -10,5 +10,15 @@ BLOCKSCOUT_CHAINSCOUT_TIMEOUT=15.0
BLOCKSCOUT_CHAIN_CACHE_TTL_SECONDS=1800
BLOCKSCOUT_PROGRESS_INTERVAL_SECONDS="15.0"

+# The number of items to return per page for the nft_tokens_by_address tool.
+BLOCKSCOUT_NFT_PAGE_SIZE=10
+
+# The number of log items to return per page for get_address_logs and get_transaction_logs.
+BLOCKSCOUT_LOGS_PAGE_SIZE=10
+
+# The number of items to return per page for tools using the advanced filters endpoint.
+BLOCKSCOUT_ADVANCED_FILTERS_PAGE_SIZE=10
+
BLOCKSCOUT_METADATA_URL="https://metadata.services.blockscout.com"
BLOCKSCOUT_METADATA_TIMEOUT="30.0"
+
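Not part of the commit — a plain-stdlib sketch of how such page-size variables override their defaults. The real server parses them via its settings layer in `config.py`; `read_page_size` is a hypothetical stand-in for that behavior:

```python
import os

def read_page_size(var: str, default: int) -> int:
    """Read a BLOCKSCOUT_*_PAGE_SIZE variable, falling back to the default.

    Sketch only: non-numeric or missing values fall back to the default,
    mirroring the pattern of 'env var overrides built-in default'.
    """
    raw = os.environ.get(var)
    try:
        return int(raw) if raw is not None else default
    except ValueError:
        return default
```

This is why the same value appears in three places: `.env.example` documents it, the Dockerfile bakes in a default, and the config class reads whichever is set at runtime.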

AGENTS.md

Lines changed: 7 additions & 14 deletions
@@ -13,7 +13,7 @@ mcp-server/
│ ├── models.py # Defines standardized Pydantic models for all tool responses
│ └── tools/ # Sub-package for tool implementations
│ ├── __init__.py # Initializes the tools sub-package
-│ ├── common.py # Shared utilities for tools (e.g., HTTP client, chain resolution, progress reporting, data processing and truncation helpers)
+│ ├── common.py # Shared utilities and common functionality for all tools
│ ├── get_instructions.py # Implements the __get_instructions__ tool
│ ├── ens_tools.py # Implements ENS-related tools
│ ├── search_tools.py # Implements search-related tools (e.g., lookup_token_by_symbol)
@@ -108,6 +108,9 @@ mcp-server/
* `BLOCKSCOUT_CHAINSCOUT_TIMEOUT`: Timeout for Chainscout API requests.
* `BLOCKSCOUT_CHAIN_CACHE_TTL_SECONDS`: Time-to-live for chain resolution cache.
* `BLOCKSCOUT_PROGRESS_INTERVAL_SECONDS`: Interval for periodic progress updates in long-running operations.
+* `BLOCKSCOUT_NFT_PAGE_SIZE`: Page size for NFT token queries (default: 10).
+* `BLOCKSCOUT_LOGS_PAGE_SIZE`: Page size for address logs queries (default: 10).
+* `BLOCKSCOUT_ADVANCED_FILTERS_PAGE_SIZE`: Page size for advanced filter queries (default: 10).

2. **`tests/` (Test Suite)**
* This directory contains the complete test suite for the project, divided into two categories:
@@ -164,19 +167,9 @@ mcp-server/
* **`tools/` (Sub-package for Tool Implementations)**
* **`__init__.py`**: Marks `tools` as a sub-package. May re-export tool functions for easier import into `server.py`.
* **`common.py`**:
-    * Contains shared utility functions for all tool modules, including data processing and truncation helpers.
-    * Implements chain resolution and caching mechanism with `get_blockscout_base_url` function.
-    * Implements helper functions (`encode_cursor`, `decode_cursor`) and a custom exception (`InvalidCursorError`) for handling opaque pagination cursors.
-    * Contains asynchronous HTTP client functions for different API endpoints:
-        * `make_blockscout_request`: Takes base_url (resolved from chain_id), API path, and parameters for Blockscout API calls.
-        * `make_bens_request`: For BENS API calls.
-        * `make_chainscout_request`: For Chainscout API calls.
-        * `make_metadata_request`: For Blockscout Metadata API calls.
-    * These functions handle:
-        * API key inclusion
-        * Common HTTP error patterns
-        * URL construction
-        * Response parsing
+    * Provides shared utilities and common functionality for all MCP tools.
+    * Handles API communication, chain resolution, pagination, data processing, and error handling.
+    * Implements standardized patterns used across the tool ecosystem.
* **Individual Tool Modules** (e.g., `ens_tools.py`, `transaction_tools.py`):
* Each file will group logically related tools.
* Each tool will be implemented as an `async` Python function.

Dockerfile

Lines changed: 3 additions & 0 deletions
@@ -24,5 +24,8 @@ ENV BLOCKSCOUT_CHAINSCOUT_URL="https://chains.blockscout.com"
ENV BLOCKSCOUT_CHAINSCOUT_TIMEOUT="15.0"
ENV BLOCKSCOUT_CHAIN_CACHE_TTL_SECONDS="1800"
ENV BLOCKSCOUT_PROGRESS_INTERVAL_SECONDS="15.0"
+ENV BLOCKSCOUT_NFT_PAGE_SIZE="10"
+ENV BLOCKSCOUT_LOGS_PAGE_SIZE="10"
+ENV BLOCKSCOUT_ADVANCED_FILTERS_PAGE_SIZE="10"

CMD ["python", "-m", "blockscout_mcp_server"]
