
feat(sdk/agent): Parallel Tool Call Execution #2390

Open

VascoSch92 wants to merge 10 commits into main from parallel-tool-execution

Conversation


VascoSch92 (Contributor) commented Mar 11, 2026

Summary

(ref #2350)

Add ParallelToolExecutor to enable concurrent tool execution within agent steps, controlled by the TOOL_CONCURRENCY_LIMIT environment variable (default: 1, fully backward-compatible).

Motivation

When an LLM returns multiple tool calls in a single response (e.g., "read these 3 files" or "run these 4 independent searches"), the current agent executes them sequentially. For I/O-bound tools — file reads, HTTP requests, MCP server calls, database queries — this leaves significant performance on the table. Parallel execution turns N × latency into ~1 × latency for independent operations.

Concrete scenarios where this helps:

  • Multi-file reads: Agent asks to read 5 source files to understand a codebase → 5 sequential disk/network reads become 1 parallel batch
  • Web search + fetch: Agent issues parallel web searches or API calls → wall-clock time drops from sum to max of individual latencies
  • MCP tool calls: Multiple independent MCP server requests (e.g., querying different data sources) execute concurrently
  • Subagent orchestration: Parent agent dispatches multiple independent tool calls while subagents can safely use their own parallel executors without deadlocking

What this does NOT help: CPU-bound tools limited by the GIL, or tools with shared mutable state that aren't thread-safe.
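
To make the sum-to-max claim concrete, here is a generic sketch (plain ThreadPoolExecutor, not SDK code) of how N independent I/O waits collapse to roughly one:

import time
from concurrent.futures import ThreadPoolExecutor

def slow_io(i: int) -> str:
    time.sleep(1.0)  # stand-in for a file read, HTTP request, or MCP call
    return f"result {i}"

start = time.monotonic()
sequential = [slow_io(i) for i in range(4)]        # ~4.0 s: sum of latencies
mid = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(slow_io, range(4)))   # ~1.0 s: max of latencies
end = time.monotonic()
print(f"sequential: {mid - start:.1f}s, parallel: {end - mid:.1f}s")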

Design

  • Per-agent ThreadPoolExecutor: Each ParallelToolExecutor instance owns its own thread pool, so subagents never compete with their parent for pool slots (no deadlocks).
  • Opt-in via TOOL_CONCURRENCY_LIMIT: Default is 1 (sequential), preserving existing behavior. Set to N > 1 to enable parallelism.
  • Side-effect-free execution: _execute_action_event returns list[Event] instead of calling on_event directly, making it safe for concurrent threads. All state mutations (pop_blocked_action, execution_status = FINISHED, event emission) happen on the main thread after parallel work completes.
  • FinishTool truncation: If a batch contains finish, all tool calls after it are discarded and logged — they never execute.
  • Error isolation: All exceptions are caught and wrapped in AgentErrorEvent, so one failing tool never crashes the agent or prevents sibling tools from completing. ValueError (expected tool errors) is logged at INFO; unexpected exceptions (RuntimeError, AssertionError, etc.) are logged at ERROR with full traceback to aid debugging.
  • Result ordering: Regardless of completion order, events are emitted in the original tool call order (a minimal sketch of the executor follows this list).
  • The responsibility split is now:
    • _ActionBatch — owns everything about preparing a batch (truncation, blocked partitioning, execution)
    • Agent — owns what to do with the results (emitting events, handling finish/refinement)
    • ParallelToolExecutor — stays a pure concurrency primitive, passed in as a dependency
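
A minimal sketch of the concurrency primitive described above (illustrative only; the PR's actual class and signatures may differ):

from concurrent.futures import ThreadPoolExecutor
from typing import Callable, TypeVar

T = TypeVar("T")

class ParallelToolExecutor:
    """Run a batch of side-effect-free callables, preserving input order
    and isolating per-call failures."""

    def __init__(self, max_workers: int = 1) -> None:
        self._max_workers = max_workers

    def execute(self, calls: list[Callable[[], T]]) -> list[T | Exception]:
        # Each instance owns its own pool, so a subagent's executor never
        # competes with its parent's for worker slots.
        with ThreadPoolExecutor(max_workers=self._max_workers) as pool:
            futures = [pool.submit(call) for call in calls]
            results: list[T | Exception] = []
            for future in futures:  # submission order, not completion order
                try:
                    results.append(future.result())
                except Exception as exc:  # a failing tool never kills siblings
                    results.append(exc)
            return results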

Thread safety warning

When TOOL_CONCURRENCY_LIMIT > 1, tools run in parallel threads sharing the same conversation object. Tools are not thread-safe by default. Callers opting into parallelism must ensure their tools are safe for concurrent execution (no shared mutable filesystem state, no concurrent conversation mutations).
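
As an illustration of that caveat, here is a classic read-modify-write race on hypothetical shared tool state (not SDK code):

import threading

notes = {"content": ""}  # hypothetical scratch state shared by two tool calls

def unsafe_append(text: str) -> None:
    current = notes["content"]          # read
    notes["content"] = current + text   # write: may overwrite a sibling's append

threads = [threading.Thread(target=unsafe_append, args=(f"line {i}\n",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Under unlucky interleaving some lines are lost; a lock (or keeping the
# limit at 1) preserves every append.
print(notes["content"])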

Evaluation

I ran an evaluation with SWE-bench to ensure that the default behavior matches what we already have in the repo [ref].

Report from trace investigation of OpenHands CLI:

No parallel tool calls detected -- the feature is cleanly disabled. Here's the full breakdown:

  Trace Format

  - Events alternate between ActionEvent (tool call) and ObservationEvent (tool result)
  - Tools used: terminal (1150), file_editor (588), think (58), finish (25)
  - 1,821 action events matched exactly 1,821 observation events across all 25 traces

  Parallel Tool Call Check: CLEAN

  - Zero shared llm_response_id across events (each LLM turn produced exactly 1 tool call)
  - Perfect action-observation interleaving -- no consecutive actions or observations
  - No tool_calls arrays, no parallel batching of any kind
  - All 25 conversations completed normally with a finish action

Agent Server images for this PR

GHCR package: https://github.com/OpenHands/agent-sdk/pkgs/container/agent-server

Variants & Base Images

Variant   Architectures   Base Image                                   Docs / Tags
java      amd64, arm64    eclipse-temurin:17-jdk                       Link
python    amd64, arm64    nikolaik/python-nodejs:python3.13-nodejs22   Link
golang    amd64, arm64    golang:1.21-bookworm                         Link

Pull (multi-arch manifest)

# Each variant is a multi-arch manifest supporting both amd64 and arm64
docker pull ghcr.io/openhands/agent-server:e1d0081-python

Run

docker run -it --rm \
  -p 8000:8000 \
  --name agent-server-e1d0081-python \
  ghcr.io/openhands/agent-server:e1d0081-python

All tags pushed for this build

ghcr.io/openhands/agent-server:e1d0081-golang-amd64
ghcr.io/openhands/agent-server:e1d0081-golang_tag_1.21-bookworm-amd64
ghcr.io/openhands/agent-server:e1d0081-golang-arm64
ghcr.io/openhands/agent-server:e1d0081-golang_tag_1.21-bookworm-arm64
ghcr.io/openhands/agent-server:e1d0081-java-amd64
ghcr.io/openhands/agent-server:e1d0081-eclipse-temurin_tag_17-jdk-amd64
ghcr.io/openhands/agent-server:e1d0081-java-arm64
ghcr.io/openhands/agent-server:e1d0081-eclipse-temurin_tag_17-jdk-arm64
ghcr.io/openhands/agent-server:e1d0081-python-amd64
ghcr.io/openhands/agent-server:e1d0081-nikolaik_s_python-nodejs_tag_python3.13-nodejs22-amd64
ghcr.io/openhands/agent-server:e1d0081-python-arm64
ghcr.io/openhands/agent-server:e1d0081-nikolaik_s_python-nodejs_tag_python3.13-nodejs22-arm64
ghcr.io/openhands/agent-server:e1d0081-golang
ghcr.io/openhands/agent-server:e1d0081-java
ghcr.io/openhands/agent-server:e1d0081-python

About Multi-Architecture Support

  • Each variant tag (e.g., e1d0081-python) is a multi-arch manifest supporting both amd64 and arm64
  • Docker automatically pulls the correct architecture for your platform
  • Individual architecture tags (e.g., e1d0081-python-amd64) are also available if needed

… tool execution

Add infrastructure for executing multiple tool calls concurrently with a
configurable global concurrency limit.

Classes:
- ToolExecutorSemaphore: Process-global singleton that limits concurrent
  tool executions across all agents and sub-agents. Configured via
  OPENHANDS_TOOL_CONCURRENCY_LIMIT environment variable (default: 8).
- ParallelToolExecutor: Executes batches of tool calls concurrently using
  ThreadPoolExecutor, with concurrency controlled by the semaphore.

Key design decisions:
- Single layer of concurrency control via environment variable
- Singleton pattern using __new__ for ToolExecutorSemaphore
- ThreadPoolExecutor for I/O-bound tool execution
- Results returned in original order regardless of completion order

Related to #2350

Co-authored-by: openhands <openhands@all-hands.dev>

github-actions bot commented Mar 11, 2026

Python API breakage checks — ✅ PASSED

Result: PASSED

Action log


github-actions bot commented Mar 11, 2026

REST API breakage checks (OpenAPI) — ✅ PASSED

Result: PASSED

Action log


github-actions bot commented Mar 11, 2026

Coverage

Coverage Report

File                                 Stmts   Miss   Cover   Missing
openhands-sdk/openhands/sdk/agent
  agent.py                           264     37     85%     101, 105, 278, 297, 300, 307–308, 322, 328, 356–358, 360, 390–391, 398–399, 431, 484–485, 487, 527, 675–676, 681, 693–694, 699–700, 719–720, 722, 751, 759–760, 794, 801
TOTAL                                19988   5800   70%

VascoSch92 changed the title from "DRAFT: feat: add ParallelToolExecutor and ToolExecutorSemaphore for parallel tool execution" to "feat(sdk/agent): Parallel Tool Call Execution" Mar 12, 2026
VascoSch92 marked this pull request as ready for review March 13, 2026 14:08
all-hands-bot (Collaborator) left a comment

🟡 Taste Rating: Acceptable - Requires Eval Verification

Core architecture is excellent. Making _execute_action_event side-effect-free (returns events instead of emitting directly) is exactly the right design — this eliminates the need for locks and turns the special case (parallel execution) into a normal case. Per-agent thread pools elegantly prevent deadlocks without complex detection logic.

The code is clean, tests are comprehensive and test real behavior (not mocks), and default concurrency=1 preserves backward compatibility.

However, this PR changes core agent execution flow (tool calling, event emission, state management). Even with the backward-compatible default, the execution path has been refactored significantly. Per repository policy, PRs that change agent behavior require lightweight eval verification before merge.

KEY INSIGHT

The refactoring turns concurrency from a special case requiring complex coordination into a normal case with side-effect-free functions. This is "good taste" — the right abstraction eliminates the complexity rather than managing it with locks and conditionals.

VERDICT

Code quality is solid — approve from a technical perspective
⚠️ Eval verification required — per repo policy before merge

Move _emit_batch and _handle_finish logic from Agent into _ActionBatch
as emit() and finalize() methods. Agent-specific logic (iterative
refinement check, mark-finished callback) is injected via callables,
keeping _ActionBatch decoupled from the Agent class.

This simplifies Agent._execute_actions to a clean prepare → emit →
finalize pipeline and gives _ActionBatch full ownership of the batch
lifecycle.

Co-authored-by: openhands <openhands@all-hands.dev>

enyst commented Mar 13, 2026

@OpenHands Do a /codereview-roasted on this PR.


openhands-ai bot commented Mar 13, 2026

I'm on it! enyst can track my progress at all-hands.dev

enyst (Collaborator) left a comment

🔴 Needs improvement

[CRITICAL ISSUES]

  • [openhands-sdk/openhands/sdk/agent/parallel_executor.py, Lines 97-103] Breaking change disguised as the fallback path: TOOL_CONCURRENCY_LIMIT=1 still routes any multi-tool batch through ThreadPoolExecutor(max_workers=1). That is not the old behavior. It changes thread affinity, and because results are buffered until the batch finishes, it also changes when observations hit the conversation. I reproduced this locally with a tiny tool: both calls ran on ThreadPoolExecutor-*, not MainThread, and the second call saw zero prior ObservationEvents. So the PR description's “fully backward-compatible” claim is false. Fix: keep the old for action in action_events: execute + emit path when the limit is 1, and only use the batch executor when the limit is actually >1.
  • [openhands-sdk/openhands/sdk/agent/agent.py, Lines 389-396] Sequential semantics were silently changed: _ActionBatch.prepare() executes the whole batch before batch.emit(), so later tools in the same batch no longer see earlier observations in conversation.state.events. Even with concurrency effectively “off”, you've changed execution from run tool -> emit observation -> run next tool into run everything -> emit later. That's a real semantic regression for tools/hooks that inspect conversation state mid-batch. Fix: preserve incremental emission in the sequential path; don't reuse the buffered parallel path as the fallback (a sketch of this fix follows the list below).
  • [openhands-sdk/openhands/sdk/agent/agent.py, Lines 389-393] Unsafe by construction for the stock tool set: once TOOL_CONCURRENCY_LIMIT > 1, this code blindly parallelizes whatever tool calls the model produced. But the default tool set includes shared mutable executors like terminal sessions, browser sessions, and file-editor history. A warning in a docstring is not a concurrency model; it just means the env var can turn into garbled terminal I/O or corrupted edits. Fix: parallelize only tools that explicitly declare themselves safe, or serialize known shared-resource tools.
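
A minimal sketch of the fix proposed in the first two bullets, with hypothetical execute_action/emit_events helpers standing in for the real SDK internals:

def run_batch(action_events, execute_action, emit_events, executor, limit: int) -> None:
    # execute_action(action) returns the action's events; emit_events publishes them.
    if limit <= 1:
        # Old sequential semantics: run -> emit -> next, on the main thread,
        # so each later tool sees the earlier tools' observations.
        for action in action_events:
            emit_events(execute_action(action))
        return
    # Genuinely parallel path: buffer results, then emit in original order.
    results = executor.execute([lambda a=a: execute_action(a) for a in action_events])
    for events in results:
        emit_events(events)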

[TESTING GAPS]

  • [tests/sdk/agent/test_parallel_execution_integration.py, Lines 188-213] The default-limit regression isn't actually tested: this test only checks ordering. It doesn't assert same-thread execution or incremental state visibility, so it passes even though the fallback already changed behavior (a regression-test sketch follows this list).
  • [tests/sdk/agent/test_parallel_execution_integration.py] Only synthetic tools are exercised: the tests never hit the real risky executors (terminal, file_editor, browser), so they won't catch the races this PR introduces.
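
A sketch of such a regression test, asserting main-thread execution and incremental observation visibility at limit 1 (helper names are hypothetical):

import threading

def test_limit_one_preserves_sequential_semantics():
    seen_threads: list[str] = []
    observed_counts: list[int] = []

    def spy_tool(conversation) -> None:
        seen_threads.append(threading.current_thread().name)
        # How many sibling observations are already visible mid-batch?
        observed_counts.append(count_observations(conversation))  # hypothetical

    # hypothetical driver: one LLM turn producing two calls to spy_tool
    run_agent_step_with_two_calls(spy_tool, concurrency_limit=1)

    assert all(name == "MainThread" for name in seen_threads)
    # The second call must see the first call's observation already emitted.
    assert observed_counts == [0, 1]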

VERDICT:
Needs rework: the concurrency primitive is reasonable, but the fallback path is not backward-compatible, and the parallel path has no safety gate for the tools people actually run.

KEY INSIGHT:
The real bug here is that the code collapsed “sequential semantics” and “parallel scheduling” into one path. Those are not the same thing, and when limit=1 they must stay identical to the old behavior.

Because this PR changes agent tool-execution behavior, I'd still want a human maintainer to make the final merge call after lightweight evals even after the above is addressed.



enyst commented Mar 13, 2026

OpenHands-GPT-5.4 here. I re-read my earlier review with fresh eyes, section by section, and I think I overstated parts of it. Here's my updated take.

1) TOOL_CONCURRENCY_LIMIT=1 and backward compatibility

  • [openhands-sdk/openhands/sdk/agent/parallel_executor.py, Lines 97-103]
  • [openhands-sdk/openhands/sdk/agent/agent.py, Lines 389-396]

Is it actually the case? Yes.

Multi-tool batches still go through ThreadPoolExecutor(max_workers=1), and observations are emitted after the batch instead of between tool calls. So the fallback is not identical to the old for action: run -> emit -> next behavior.

How major is it? Medium, not catastrophic.

This is a real semantic change, but it is narrow: it only matters when the model emits multiple tool calls in one response. The PR's own trace evidence suggests that is rare in current real usage.

Is it real usage? Rare today, but real enough that the phrase "fully backward-compatible" feels too strong unless this is fixed or the claim is narrowed.

2) Later tools not seeing earlier observations from the same batch

  • [openhands-sdk/openhands/sdk/agent/agent.py, Lines 389-396]

Is it actually the case? Yes.

Because _ActionBatch.prepare() runs the executable subset before batch.emit(), tools that inspect conversation.state.events during execution will not see earlier observations from that same batch.

How major is it? Medium-to-low.

This is the strongest technical concern from my earlier review, but it's still niche rather than broad. Most built-in tools probably do not depend on seeing sibling observations mid-batch. Some custom or state-aware tools could.

Is it real usage? Mostly edge/custom-tool territory today.

So I still think this is worth calling out, but more as a compatibility nuance than as a major architectural failure.

3) Unsafe parallelism with terminal / file editor / browser

  • [openhands-sdk/openhands/sdk/agent/agent.py, Lines 389-393]

Is it actually the case? Yes.

If someone sets TOOL_CONCURRENCY_LIMIT > 1 and allows parallel batches that include shared-state tools, they can absolutely get races.

How major is it? Lower than I first said.

This is an opt-in feature, and the PR is explicit about the thread-safety caveat. That makes it much more of a deliberate first-cut tradeoff than a hidden regression.

Is it real usage? Yes, if somebody flips the env var globally and expects terminal/file-editor/browser calls to be safe. But because they had to opt into it, I would treat this as a scope/documentation clarity issue, not a hard blocker.

If anything, I'd want the docs/PR text to be very clear that the immediate sweet spot is read-only / RPC-like tools, not arbitrary shared-state tools.

4) Testing

  • [tests/sdk/agent/test_parallel_execution_integration.py]

I still think one regression test for the limit=1 fallback semantics would be valuable if backward compatibility is a goal.

That said, I want to explicitly soften one part of my earlier review: I do not think "the tests only use synthetic tools" is a serious problem by itself. Synthetic tools are perfectly reasonable for this kind of concurrency test. I would not ask for flaky terminal/browser integration tests as a merge gate.


Updated bottom line

After re-review, I would downgrade my earlier assessment.

I do not think this PR is fundamentally broken.

I think there is one real but narrow compatibility concern around the TOOL_CONCURRENCY_LIMIT=1 path versus the PR's "fully backward-compatible" framing. If the fallback keeps the old sequential emit semantics, or if the compatibility claim is softened, the rest looks like an acceptable first-cut tradeoff for an opt-in feature.

Because this still changes agent behavior, I'd keep this in COMMENT territory and let a human maintainer make the final call after lightweight evals.

If readers are skimming: my earlier review was too harsh on points (3) and part of (4); point (1) remains real, but narrower than I first framed it.


enyst commented Mar 13, 2026

OpenHands-GPT-5.4 follow-up: you're right, and I need to correct my re-review.

I made a reasoning mistake there.

I wrote that the TOOL_CONCURRENCY_LIMIT=1 regression "only matters when the model emits multiple tool calls in one response" and then treated that as narrow because some current traces don't show it often. But that framing is wrong for this PR.

This PR exists precisely to handle the multi-tool-call case better. If multi-tool responses are common in real usage, then the changed fallback semantics are not some corner case — they hit the exact workload this feature is supposed to serve.

So I want to sharpen the conclusion:

  • The limit=1 path is a real compatibility issue if the goal is to preserve existing behavior by default.
  • It matters specifically on the feature's main path: responses containing multiple tool calls.
  • In other words, saying "it's only when there are multiple tool calls" does not make it small here; that's the whole point of the feature.

I would restate the concern more cleanly as:

The PR changes default semantics for multi-tool responses even when concurrency is effectively disabled (TOOL_CONCURRENCY_LIMIT=1), because execution is still batched and observations are emitted after batch completion rather than after each tool call.

I still think my earlier re-review was right to soften one thing: synthetic tools are fine for testing the concurrency primitive itself.

But I over-corrected on severity. If multi-tool responses are common in production, then the fallback-semantics issue is materially more important than I gave it credit for.

I also think this strengthens — not weakens — the concern about parallelizing shared-state tools. If the intended real-world usage is "LLM emits several tool calls and we want them parallel," then correctness for the default tool ecosystem is part of the core feature story, not just a docs footnote.


enyst commented Mar 14, 2026

HUMAN: Just to note, I do think multiple tool calls are a normal occurrence. I don’t have numbers, but the agent here is confused; they’re often enough that, in practice, it’s like, see codex-cli fire up 12, think “woot was that a parallel tool call?”. Barely finish thought, see a bucket of 3, see another bucket of 5, conclude “yeah obviously”.

We have the eval trajectories; could we compute the number of batches with the same llm_response?


VascoSch92 commented Mar 14, 2026

HUMAN: Just to note, I do think multiple tool calls are a normal occurrence. I don’t have numbers, but the agent here is confused; they’re often enough that, in practice, it’s like, see codex-cli fire up 12, think “woot was that a parallel tool call?”. Barely finish thought, see a bucket of 3, see another bucket of 5, conclude “yeah obviously”.

We have the eval trajectories; could we compute the number of batches with the same llm_response?

@enyst Yes, this could be interesting.
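
A minimal sketch of that computation over serialized trajectories, assuming each event carries kind and llm_response_id fields (field names are assumptions):

import json
from collections import Counter
from pathlib import Path

def batch_size_distribution(trace_file: Path) -> Counter:
    """Map batch size -> how many LLM responses produced that many tool calls."""
    events = json.loads(trace_file.read_text())
    calls_per_response = Counter(
        e["llm_response_id"]
        for e in events
        if e.get("kind") == "ActionEvent" and e.get("llm_response_id")
    )
    return Counter(calls_per_response.values())

# Sizes > 1 indicate multi-tool responses, i.e. candidate parallel batches.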

VascoSch92 (Contributor, Author) commented

@enyst I answered your points.

  1. TOOL_CONCURRENCY_LIMIT=1 and backward compatibility

The previous implementation wasn't 100% backward compatible, but the fix was simple. I have updated the code and added a test case to ensure full compatibility.

  2. Later tools not seeing earlier observations from the same batch

I’m not sure I follow this point. If multiple tool calls are being executed in parallel, they shouldn't need to depend on each other's outputs.

  3. Unsafe parallelism with terminal / file editor / browser

This is a valid point. However, I avoided adding that specific logic here to prevent the PR from becoming even larger than it already is. My approach was to first implement parallel tool calling and then address the prevention of dependent tool calls in a subsequent update. This aligns with what we originally discussed in the issue, and is why the current default remains the standard behavior.

  4. Testing

I have added integration tests covering scenarios where num_workers = 1 and the request contains multiple tool calls. This should demonstrate backward compatibility.

enyst requested a review from xingyaoww March 17, 2026 11:31
f"{ENV_TOOL_CONCURRENCY_LIMIT}={env_value} is invalid, "
f"using default {DEFAULT_TOOL_CONCURRENCY_LIMIT}"
)
return DEFAULT_TOOL_CONCURRENCY_LIMIT
xingyaoww (Collaborator) left a comment

I'm not 100% sure if getting this from env var is the best approach here. Maybe we can consider passing it to the Agent's configurations, similar to how we pass tools?

xingyaoww (Collaborator) left a comment

@OpenHands what do you think? Do not modify code, just give me some suggestions here

openhands-ai bot left a comment

I'm on it! xingyaoww can track my progress at all-hands.dev

Collaborator left a comment

Great question — I agree that moving this to the Agent's configuration is the better approach. Here's my reasoning:

Why the env var approach is problematic:

  1. Violates the SDK's design principles. Agents are immutable Pydantic models whose "configuration is their serializable representation." An env var creates a second, hidden source of truth that isn't captured in model_dump(), can't be persisted/replayed, and won't show up in config introspection.

  2. No per-agent control. With an env var, every Agent and every subagent shares the same concurrency limit. But a parent orchestrator might want concurrency=4 while its subagents stay at 1. A field makes this trivial.

  3. Testing friction. Tests currently need monkeypatch.setenv + careful teardown, which is fragile. A field is just Agent(llm=..., tool_concurrency_limit=4).

Suggested approach:

Add a tool_concurrency_limit field to AgentBase:

tool_concurrency_limit: int = Field(
    default=1,
    ge=1,
    description=(
        "Maximum number of tool calls to execute concurrently within a single "
        "agent step. Default is 1 (sequential). Values > 1 require that the "
        "tools in use are thread-safe for concurrent execution."
    ),
)

Then wire it through to ParallelToolExecutor:

# In Agent.__init__ or a model_post_init:
self._parallel_executor = ParallelToolExecutor(max_workers=self.tool_concurrency_limit)

# ParallelToolExecutor becomes:
class ParallelToolExecutor:
    def __init__(self, max_workers: int = 1) -> None:
        self._max_workers = max_workers

Benefits:

  • Fully declarative and serializable
  • Per-agent control (parent vs subagent can differ)
  • Clean testing (Agent(llm=..., tool_concurrency_limit=4) — no env var patching)
  • Pydantic validation (ge=1) catches bad values at construction time
  • _get_max_concurrency() and the env var constant can be removed entirely

Optional: keep the env var as a deployment-level override?

If there's a use case for operators overriding concurrency without changing agent configs (e.g., in production), you could keep the env var as a fallback when the field is at its default. But I'd lean toward removing it for simplicity — the field is a much cleaner contract, and deployment-level overrides can be handled by the config layer that constructs the Agent.
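
For completeness, a sketch of what that fallback could look like (hypothetical helper, not part of the PR):

import os

def resolve_concurrency(field_value: int, default: int = 1) -> int:
    # An explicit Agent field wins; the env var only applies when the field
    # was left at its default.
    if field_value != default:
        return field_value
    env = os.environ.get("TOOL_CONCURRENCY_LIMIT")
    if env is not None and env.isdigit() and int(env) >= 1:
        return int(env)
    return default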

openhands-ai bot left a comment

No code changes were made — the reviewer explicitly asked me not to modify code, only to give suggestions. Here's a summary of what I did:

Summary

I replied to @xingyaoww's review comment on PR #2390 (thread on parallel_executor.py line 53) asking whether reading TOOL_CONCURRENCY_LIMIT from an environment variable is the best approach, or whether it should be passed as Agent configuration instead.

My recommendation: Move it to an Agent configuration field. The key arguments:

  1. SDK design principles — Agents are immutable Pydantic models whose configuration is their serializable representation. An env var creates a hidden, non-serializable source of truth that violates this contract (won't appear in model_dump(), can't be persisted/replayed).

  2. Per-agent control — With an env var, every Agent and subagent shares the same limit. A field like tool_concurrency_limit: int = 1 on AgentBase allows a parent orchestrator to use concurrency=4 while subagents stay at 1.

  3. Testing simplicity — Agent(llm=..., tool_concurrency_limit=4) is much cleaner than monkeypatch.setenv with careful teardown.

Suggested implementation:

  • Add tool_concurrency_limit: int = Field(default=1, ge=1, ...) to AgentBase
  • Make ParallelToolExecutor.__init__ accept max_workers: int directly
  • Remove _get_max_concurrency() and the env var constants entirely
  • Pydantic's ge=1 validator catches bad values at construction time

The reply has been posted to the review thread on the PR. No branch push needed since no code was changed.

for ae in self.action_events:
    reason = self.blocked_reasons.get(ae.id)
    if reason is not None:
        logger.info(f"Action '{ae.tool_name}' blocked by hook: {reason}")
xingyaoww (Collaborator) left a comment

Are we sure this is always going to be blocked by hook? @OpenHands can you double-check and provide some evidence? Do not modify code.

openhands-ai bot left a comment

I'm on it! xingyaoww can track my progress at all-hands.dev

Collaborator left a comment

Yes, in the current codebase blocked_actions is always populated by hooks. Here is the evidence:

Write path (who populates blocked_actions)

There is exactly one production caller of state.block_action():

  • openhands-sdk/openhands/sdk/hooks/conversation_hooks.py:167 inside _handle_pre_tool_use():
    # conversation_hooks.py:159-167
    if not should_continue:
        reason = self.hook_manager.get_blocking_reason(results)
        ...
        block_reason = reason or "Blocked by hook"
        self._conversation_state.block_action(event.id, block_reason)
    This is the PreToolUse hook processing path — no other code in the codebase calls block_action().

Verified with:

grep -rn "block_action\b" --include="*.py" | grep -v "pop_blocked_action\|blocked_action\|test_\|test\.py\|example"

Returns only the definition (state.py:426) and the single call site (conversation_hooks.py:167).

Read path (how blocked_reasons is consumed)

  • _ActionBatch.prepare() (agent.py:137) calls state.pop_blocked_action(ae.id) → reads from state.blocked_actions
  • The field definition (state.py:128-132) explicitly documents this:
    # Hook-blocked actions: action_id -> blocking reason
    blocked_actions: dict[str, str] = Field(
        default_factory=dict,
        description="Actions blocked by PreToolUse hooks, keyed by action ID",
    )
  • The method docstrings also confirm: block_action says "Persistently record a hook-blocked action" and pop_blocked_action says "Remove and return a hook-blocked action reason"

Other rejection paths do NOT use blocked_actions

The other UserRejectObservation creation site (local_conversation.py:735) handles user rejection during confirmation mode — it creates the observation directly via _on_event() and never touches blocked_actions. That path uses the default rejection_source="user".

Serialization

The agent-server model (models.py:223) mirrors the field with the same description "Actions blocked by PreToolUse hooks". Deserialized/resumed state would contain values that were originally set by hooks.

Minor caveat

Since block_action() is a public method on ConversationState, a future non-hook caller could theoretically use it, which would make the log message slightly misleading. But today, there is no such caller. If this is a concern, the method could be renamed to _block_action (private) or the log message could be made more generic (e.g., "Action blocked: {reason}").

openhands-ai bot left a comment

Summary

The reviewer (@xingyaoww) asked on PR #2390, line 158 of openhands-sdk/openhands/sdk/agent/agent.py:

"Are we sure this is always going to be blocked by hook?"

This was about the log message: logger.info(f"Action '{ae.tool_name}' blocked by hook: {reason}")

Answer: Yes, in the current codebase, blocked_actions is always populated by hooks. Here is the evidence I provided in my reply to the review thread:

  1. Single write path: There is exactly one production caller of state.block_action() — conversation_hooks.py:167 inside _handle_pre_tool_use(), which is the PreToolUse hook processing path. Verified via grep across the entire codebase.

  2. Field documentation confirms it: The blocked_actions field on ConversationState (state.py:128-132) is explicitly described as "Actions blocked by PreToolUse hooks, keyed by action ID". The method docstrings also say "Persistently record a hook-blocked action" and "Remove and return a hook-blocked action reason."

  3. Other rejection paths don't use blocked_actions: User rejection during confirmation mode (local_conversation.py:735) creates UserRejectObservation directly via _on_event() with rejection_source="user" (the default) — it never touches the blocked_actions dict.

  4. Serialization path is consistent: The agent-server model (models.py:223) mirrors the field with the same "PreToolUse hooks" description. Deserialized/resumed state would only contain values originally set by hooks.

  5. Minor caveat noted: Since block_action() is a public method on ConversationState, a hypothetical future non-hook caller could use it, which would make the log message slightly misleading. I suggested either making the method private or generalizing the log message if this is a concern.

No code changes were made — the reviewer explicitly requested investigation only ("Do not modify code").

logger = get_logger(__name__)

# Default concurrency limit for tool executions (per agent)
DEFAULT_TOOL_CONCURRENCY_LIMIT: Final[int] = 1
xingyaoww (Collaborator) left a comment

Let's configure the default tool concurrency to 4 or 8, and we should create an example file that prompts the agent to run multiple tools at the same time (even delegating to multiple agents) to stress test the parallel tool execution system. I'm leaning towards merging this PR once this test passes via the test-examples workflow.

VascoSch92 (Contributor, Author) commented

@xingyaoww

I'm not 100% sure if getting this from env var is the best approach here. Maybe we can consider passing it to the Agent's configurations, similar to how we pass tools?

Makes sense. I changed it as suggested by OH.

Let's configure the default tool concurrency to 4 or 8, and we should create an example file that prompts the agent to run multiple tools at the same time (even delegating to multiple agents) to stress test the parallel tool execution system. I'm leaning towards merging this PR once this test passes via the test-examples workflow.

I created an example where we spawn subagents that perform code searches. This happens in parallel, and at the end a parallelism report confirms the calls were actually executed in parallel.

If it is too verbose for an example I can also remove it. I just wanted to make sure we are correctly parallelizing the calls.

The report looks like:

================================================================================
Parallelism Report
================================================================================

  main agent batch (chatcmpl-272dfe5...):
    Parallel tools: ['task', 'task', 'task']

  sub-agent batch (chatcmpl-ae3ab59...):
    Parallel tools: ['terminal', 'file_editor']

  sub-agent batch (chatcmpl-9f48c72...):
    Parallel tools: ['file_editor', 'file_editor']

  sub-agent batch (chatcmpl-db48e30...):
    Parallel tools: ['file_editor', 'file_editor']

  sub-agent batch (chatcmpl-affbbd2...):
    Parallel tools: ['file_editor', 'file_editor']

  sub-agent batch (chatcmpl-3854bba...):
    Parallel tools: ['file_editor', 'file_editor']

  sub-agent batch (chatcmpl-3742934...):
    Parallel tools: ['file_editor', 'file_editor']

  sub-agent batch (chatcmpl-508834f...):
    Parallel tools: ['file_editor', 'file_editor', 'file_editor', 'file_editor']

  sub-agent batch (chatcmpl-4795c8d...):
    Parallel tools: ['terminal', 'file_editor']

  sub-agent batch (chatcmpl-922b7a5...):
    Parallel tools: ['file_editor', 'file_editor']


enyst commented Mar 17, 2026

The report looks like:

Ha! There’s no terminal, terminal 😅. But if Xingyao is happy, I’m happy 🙏

VascoSch92 (Contributor, Author) commented

The report looks like:

Ha! There’s no terminal, terminal 😅. But if Xingyao is happy, I’m happy 🙏

If you want I can force the example to have terminal, terminal :-)


enyst commented Mar 17, 2026

The report looks like:

Ha! There’s no terminal, terminal 😅. But if Xingyao is happy, I’m happy 🙏

If you want I can force the example to have terminal, terminal :-)

😇 Just out of curiosity
