fix(blocks): rename placeholder_values to options on AgentDropdownInputBlock #12595

Torantulino wants to merge 1 commit into dev
Conversation
Actionable comments posted: 1
Inline comments:
In `autogpt_platform/backend/backend/copilot/sdk/agent_generation_guide.md`:

- Around lines 77-78: The docs incorrectly state that the `input_default` field `options` is required. Update the guidance in agent_generation_guide.md so `options` is described as optional (it uses `default_factory=list` in the backend), remove the phrase "must have at least one", and add a note that an empty `options` list indicates free-text input. Keep terminology consistent with other blocks by referring to `input_default`, `name`, `options`, `title`, `description`, and `value` exactly as used in the code.
📒 Files selected for processing (5)

- autogpt_platform/backend/backend/blocks/io.py
- autogpt_platform/backend/backend/blocks/test/test_block.py
- autogpt_platform/backend/backend/copilot/sdk/agent_generation_guide.md
- autogpt_platform/backend/load-tests/tests/api/graph-execution-test.js
- docs/integrations/block-integrations/basic.md
🔇 Additional comments (4)

autogpt_platform/backend/backend/blocks/io.py (1)

481-503: Backward-compat rename handling looks correct. `options` is now the canonical field, and the legacy `placeholder_values` remap in `model_construct()` cleanly preserves enum generation behavior.

autogpt_platform/backend/load-tests/tests/api/graph-execution-test.js (1)

56-63: Runtime input payload fix is correct. Switching to `value` here matches the actual execution input contract and avoids sending deprecated/incorrect fields in load tests.

docs/integrations/block-integrations/basic.md (1)

172-187: Docs update is aligned with the new field contract. The `options` terminology and behavior description here now match the backend dropdown semantics.

autogpt_platform/backend/backend/blocks/test/test_block.py (1)

326-379: Add integration test for DB hydration path with legacy dropdown configs. The existing tests validate `_generate_schema()` and `model_construct()` in isolation, but they don't explicitly test the `NodeModel.from_db(...) → input_schema` property path when a persisted `AgentNode.constantInput` contains legacy `placeholder_values`. Add a test that:

- Creates an `AgentNode` with `constantInput` containing a legacy dropdown payload with `placeholder_values`
- Loads it via `NodeModel.from_db(node)` to create a `NodeModel`
- Verifies that accessing the graph's `input_schema` property correctly produces `enum` for the dropdown

This ensures the full DB hydration and schema generation pipeline works correctly with legacy persisted data, not just isolated function calls.
autogpt_platform/backend/backend/copilot/sdk/agent_generation_guide.md (comment outdated, resolved)
…utBlock [REQ-78]

The placeholder_values field on AgentDropdownInputBlock was misleadingly named - in every major UI framework placeholder means non-binding hint text, but this field actually restricts input to a dropdown selector.

Changes:
- Rename placeholder_values to options on AgentDropdownInputBlock.Input
- Add clear description explaining actual UI behavior
- Override model_construct() for backward compat with persisted agent JSON
- Update copilot SDK agent_generation_guide.md
- Update docs/integrations/block-integrations/basic.md
- Clean up load test data that incorrectly used placeholder_values
- Add new backward-compat tests verifying legacy field name still works

Force-pushed from 381eda5 to f38dc33
|
/review |
1 similar comment
Review: PR #12595

Good rename.

Verdict: LGTM -- ready to merge.
majdyz left a comment:
Clean rename with good backward compatibility. A few observations:

Looks good:

- `model_construct()` override correctly intercepts the only call path (graph.py:332) to remap `placeholder_values` -> `options`
- No frontend changes needed since the frontend reads `enum` from the generated JSON Schema
- Existing agent JSON fixtures in `agents/` will continue to work through the remap
- Tests cover both canonical and legacy paths
- Load test fix is a good cleanup (removing the bogus `placeholder_values: {}` on AgentInputBlock)
- Documentation updates are thorough

Minor note (non-blocking): The agent JSON fixtures in backend/agents/ still reference placeholder_values -- these will work at runtime via the model_construct remap, but could optionally be migrated to options in a follow-up to keep the canonical representation consistent. Not required for this PR.
```python
    default_factory=list,
    advanced=False,
    title="Dropdown Options",
)
```
🤖 🟠 Should Fix: The model_construct() override only covers the schema generation path (graph.py:332), not the __init__() path used during block execution (blocks/_base.py:711). While this works today because options is only read in generate_schema() (not in run()), it's fragile — any future code path that constructs this model via __init__() or model_validate() will silently lose the legacy placeholder_values.
A @model_validator(mode="before") covers all construction paths and is the Pydantic-idiomatic approach:
```python
from pydantic import model_validator

@model_validator(mode="before")
@classmethod
def _remap_legacy_placeholder_values(cls, data):
    if isinstance(data, dict) and "placeholder_values" in data:
        if "options" not in data:
            data["options"] = data.pop("placeholder_values")
        else:
            data.pop("placeholder_values")
    return data
```

This replaces the model_construct() override entirely and handles __init__, model_validate, AND model_construct uniformly.
Correction: I just verified that model_validator(mode="before") does NOT fire on model_construct() in Pydantic v2 (by design — model_construct bypasses all validation).
So the model_construct() override is actually necessary for the schema generation path (graph.py:332). My recommendation changes to:
Keep the model_construct() override (it's correct for the schema path), but also add a model_validator(mode="before") for defense-in-depth on the __init__/model_validate paths. This way all construction routes are covered.
That said, since options is currently only used in generate_schema() which goes through model_construct(), the PR as-is is functionally correct for the current codebase. Downgrading to 🟡 Nice to Have.
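The behavioral difference described in the correction is easy to demonstrate with a toy model (a sketch for illustration, not the repo's actual class):

```python
from pydantic import BaseModel, model_validator


class Demo(BaseModel):
    options: list[str] = []

    @model_validator(mode="before")
    @classmethod
    def _remap(cls, data):
        # Remap the legacy key on validated construction paths.
        if isinstance(data, dict) and "placeholder_values" in data:
            data = dict(data)
            data["options"] = data.pop("placeholder_values")
        return data


validated = Demo.model_validate({"placeholder_values": ["a"]})
print(validated.options)  # ['a'] -- the before-validator fired

constructed = Demo.model_construct(placeholder_values=["a"])
print(constructed.options)  # [] -- model_construct bypassed the validator
```

Since `model_construct` skips validation entirely, the field default applies and the legacy key is silently dropped, which is exactly why the override on `model_construct()` itself is still required for the schema path.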
majdyz left a comment:
Good rename — options is much clearer than placeholder_values. The backward compat via model_construct() override correctly covers the schema generation path (the only path where options is read), so existing agents with persisted placeholder_values will continue to render dropdowns in the UI.
The frontend reads enum from the generated JSON schema, not the field name directly, so no frontend changes are needed.
One minor suggestion (🟡): also add a model_validator(mode="before") for defense-in-depth alongside the model_construct() override (see inline comment + correction).
LGTM with the caveat above.
Summary
Resolves REQ-78: The `placeholder_values` field on `AgentDropdownInputBlock` is misleadingly named. In every major UI framework "placeholder" means non-binding hint text that disappears on focus, but this field actually creates a dropdown selector that restricts the user to only those values.

Changes
Core rename (autogpt_platform/backend/backend/blocks/io.py)

- `placeholder_values` → `options` on `AgentDropdownInputBlock.Input`
- Override `model_construct()` to remap legacy `placeholder_values` → `options` for backward compatibility with existing persisted agent JSON

Tests (autogpt_platform/backend/backend/blocks/test/test_block.py)

- Updated existing tests to use the `options` field name
- Added backward-compat tests verifying legacy `placeholder_values` still works through both `model_construct()` and `Graph._generate_schema()` paths

Documentation

- autogpt_platform/backend/backend/copilot/sdk/agent_generation_guide.md — changed field name in CoPilot SDK guide
- docs/integrations/block-integrations/basic.md — changed field name and description in public docs

Load tests (autogpt_platform/backend/load-tests/tests/api/graph-execution-test.js)

- Removed bogus `placeholder_values: {}` from the AgentInputBlock node (this field never existed on AgentInputBlock)
- Runtime inputs now use `value` instead of `placeholder_values`

Backward Compatibility
Existing agents with `placeholder_values` in their persisted `input_default` JSON will continue to work — the `model_construct()` override transparently remaps the old key to `options`. No database migration needed since the field is stored inside a JSON blob, not as a dedicated column.

Testing
- No frontend changes needed (the frontend reads `enum` from the generated JSON Schema, not the field name directly)
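The backward-compat remap described above can be sketched as follows. This is a simplified illustration; the actual override in io.py may differ in details:

```python
from pydantic import BaseModel


class DropdownInput(BaseModel):
    options: list[str] = []

    @classmethod
    def model_construct(cls, _fields_set=None, **values):
        # Transparently remap the legacy persisted key before construction,
        # so old agent JSON with `placeholder_values` still yields `options`.
        if "placeholder_values" in values:
            values.setdefault("options", values.pop("placeholder_values"))
        return super().model_construct(_fields_set, **values)


legacy = DropdownInput.model_construct(placeholder_values=["x", "y"])
print(legacy.options)  # ['x', 'y']
```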