feat(sdk/agent): Parallel Tool Call Execution #2390
Conversation
… tool execution

Add infrastructure for executing multiple tool calls concurrently with a configurable global concurrency limit.

Classes:
- ToolExecutorSemaphore: Process-global singleton that limits concurrent tool executions across all agents and sub-agents. Configured via the OPENHANDS_TOOL_CONCURRENCY_LIMIT environment variable (default: 8).
- ParallelToolExecutor: Executes batches of tool calls concurrently using ThreadPoolExecutor, with concurrency controlled by the semaphore.

Key design decisions:
- Single layer of concurrency control via environment variable
- Singleton pattern using __new__ for ToolExecutorSemaphore
- ThreadPoolExecutor for I/O-bound tool execution
- Results returned in original order regardless of completion order

Related to #2350

Co-authored-by: openhands <openhands@all-hands.dev>
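Based only on the description above, a rough sketch of how these two classes could fit together. The internals here are assumptions for illustration, not the PR's actual implementation:

```python
import os
import threading
from concurrent.futures import ThreadPoolExecutor

DEFAULT_LIMIT = 8  # matches the documented default for OPENHANDS_TOOL_CONCURRENCY_LIMIT


class ToolExecutorSemaphore:
    """Process-global singleton capping concurrent tool executions (sketch)."""

    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        with cls._lock:
            if cls._instance is None:
                limit = int(os.environ.get(
                    "OPENHANDS_TOOL_CONCURRENCY_LIMIT", DEFAULT_LIMIT))
                cls._instance = super().__new__(cls)
                cls._instance.semaphore = threading.Semaphore(limit)
        return cls._instance


class ParallelToolExecutor:
    """Runs a batch of zero-arg callables concurrently, preserving input order."""

    def __init__(self, max_workers: int = DEFAULT_LIMIT) -> None:
        self._max_workers = max_workers
        self._sem = ToolExecutorSemaphore()

    def execute_batch(self, calls):
        def run(call):
            with self._sem.semaphore:  # global cap across all agents/sub-agents
                return call()
        with ThreadPoolExecutor(max_workers=self._max_workers) as pool:
            # map() yields results in submission order, not completion order
            return list(pool.map(run, calls))
```

With this shape, executing a batch like `[lambda: 1, lambda: 2]` returns `[1, 2]` regardless of which call finishes first, which is the "results in original order" property the commit message claims.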
Python API breakage checks — ✅ PASSED
REST API breakage checks (OpenAPI) — ✅ PASSED
all-hands-bot
left a comment
🟡 Taste Rating: Acceptable - Requires Eval Verification
Core architecture is excellent. Making _execute_action_event side-effect-free (returns events instead of emitting directly) is exactly the right design — this eliminates the need for locks and makes the special case (parallel execution) become a normal case. Per-agent thread pools elegantly prevent deadlocks without complex detection logic.
The code is clean, tests are comprehensive and test real behavior (not mocks), and default concurrency=1 preserves backward compatibility.
However, this PR changes core agent execution flow (tool calling, event emission, state management). Even with the backward-compatible default, the execution path has been refactored significantly. Per repository policy, PRs that change agent behavior require lightweight eval verification before merge.
KEY INSIGHT
The refactoring turns concurrency from a special case requiring complex coordination into a normal case with side-effect-free functions. This is "good taste" — the right abstraction eliminates the complexity rather than managing it with locks and conditionals.
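To make this insight concrete, here is a minimal, hypothetical sketch of the side-effect-free pattern the review describes. The names (`run_batch`, the shape of `ObservationEvent`) are invented for illustration and are not the SDK's real API:

```python
from dataclasses import dataclass


@dataclass
class ObservationEvent:
    """Result of one tool call (illustrative shape)."""
    tool_name: str
    result: str


def execute_action_event(tool_name, tool_fn) -> ObservationEvent:
    # Side-effect-free: returns the event instead of mutating shared state,
    # so it is safe to call from any worker thread without locks.
    return ObservationEvent(tool_name=tool_name, result=tool_fn())


def run_batch(actions, emit):
    # Execution (parallelizable) is separated from emission (ordered, on the
    # caller's thread); parallel vs sequential becomes a scheduling detail.
    events = [execute_action_event(name, fn) for name, fn in actions]
    for ev in events:
        emit(ev)
    return events
```

Because execution only produces values, swapping the list comprehension for a thread pool changes nothing about how emission works — which is exactly why the parallel case stops being special.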
VERDICT
✅ Code quality is solid — approve from a technical perspective
Move _emit_batch and _handle_finish logic from Agent into _ActionBatch as emit() and finalize() methods. Agent-specific logic (iterative refinement check, mark-finished callback) is injected via callables, keeping _ActionBatch decoupled from the Agent class. This simplifies Agent._execute_actions to a clean prepare → emit → finalize pipeline and gives _ActionBatch full ownership of the batch lifecycle. Co-authored-by: openhands <openhands@all-hands.dev>
@OpenHands Do a /codereview-roasted on this PR.
I'm on it! enyst can track my progress at all-hands.dev |
enyst
left a comment
🔴 Needs improvement
[CRITICAL ISSUES]
- [openhands-sdk/openhands/sdk/agent/parallel_executor.py, Lines 97-103] Breaking change disguised as the fallback path:
`TOOL_CONCURRENCY_LIMIT=1` still routes any multi-tool batch through `ThreadPoolExecutor(max_workers=1)`. That is not the old behavior. It changes thread affinity, and because results are buffered until the batch finishes, it also changes when observations hit the conversation. I reproduced this locally with a tiny tool: both calls ran on `ThreadPoolExecutor-*`, not `MainThread`, and the second call saw zero prior `ObservationEvent`s. So the PR description's "fully backward-compatible" claim is false. Fix: keep the old `for action in action_events: execute + emit` path when the limit is `1`, and only use the batch executor when the limit is actually `> 1`.
- [openhands-sdk/openhands/sdk/agent/agent.py, Lines 389-396] Sequential semantics were silently changed: `_ActionBatch.prepare()` executes the whole batch before `batch.emit()`, so later tools in the same batch no longer see earlier observations in `conversation.state.events`. Even with concurrency effectively "off", you've changed execution from `run tool -> emit observation -> run next tool` into `run everything -> emit later`. That's a real semantic regression for tools/hooks that inspect conversation state mid-batch. Fix: preserve incremental emission in the sequential path; don't reuse the buffered parallel path as the fallback.
- [openhands-sdk/openhands/sdk/agent/agent.py, Lines 389-393] Unsafe by construction for the stock tool set: once `TOOL_CONCURRENCY_LIMIT > 1`, this code blindly parallelizes whatever tool calls the model produced. But the default tool set includes shared mutable executors like terminal sessions, browser sessions, and file-editor history. A warning in a docstring is not a concurrency model; it just means the env var can turn into garbled terminal I/O or corrupted edits. Fix: parallelize only tools that explicitly declare themselves safe, or serialize known shared-resource tools.
[TESTING GAPS]
- [tests/sdk/agent/test_parallel_execution_integration.py, Lines 188-213] The default-limit regression isn't actually tested: this test only checks ordering. It doesn't assert same-thread execution or incremental state visibility, so it passes even though the fallback already changed behavior.
- [tests/sdk/agent/test_parallel_execution_integration.py] Only synthetic tools are exercised: the tests never hit the real risky executors (`terminal`, `file_editor`, `browser`), so they won't catch the races this PR introduces.
VERDICT:
❌ Needs rework: the concurrency primitive is reasonable, but the fallback path is not backward-compatible, and the parallel path has no safety gate for the tools people actually run.
KEY INSIGHT:
The real bug here is that the code collapsed “sequential semantics” and “parallel scheduling” into one path. Those are not the same thing, and when limit=1 they must stay identical to the old behavior.
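A hedged sketch of what keeping the two paths separate could look like. Function names and signatures are illustrative, not taken from the PR:

```python
from concurrent.futures import ThreadPoolExecutor


def execute_actions(action_events, execute, emit, limit=1):
    """Sequential semantics when limit == 1; batched parallelism only above it."""
    if limit <= 1:
        # Legacy path: each observation is emitted before the next tool runs,
        # so later tools can still see earlier observations in conversation state.
        for ae in action_events:
            emit(execute(ae))
        return
    with ThreadPoolExecutor(max_workers=limit) as pool:
        # Parallel path: results are buffered and emitted in input order;
        # callers opting into limit > 1 accept the changed visibility semantics.
        for obs in pool.map(execute, action_events):
            emit(obs)
```

The point of the branch is that `limit=1` never touches a thread pool at all, so thread affinity and incremental emission are preserved byte-for-byte.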
Because this PR changes agent tool-execution behavior, I'd still want a human maintainer to make the final merge call after lightweight evals even after the above is addressed.
This comment was marked as duplicate.
OpenHands-GPT-5.4 here. I re-read my earlier review with fresh eyes, section by section, and I think I overstated parts of it. Here's my updated take. 1)
OpenHands-GPT-5.4 follow-up: you're right, and I need to correct my re-review. I made a reasoning mistake there. This PR exists precisely to handle the multi-tool-call case better: if multi-tool responses are common in real usage, then the changed fallback semantics are not some corner case — they hit the exact workload this feature is supposed to serve. So I want to sharpen the conclusion:
I would restate the concern more cleanly as:
I still think my earlier re-review was right to soften one thing: synthetic tools are fine for testing the concurrency primitive itself. But I over-corrected on severity. If multi-tool responses are common in production, then the fallback-semantics issue is materially more important than I gave it credit for. I also think this strengthens — not weakens — the concern about parallelizing shared-state tools. If the intended real-world usage is "LLM emits several tool calls and we want them parallel," then correctness for the default tool ecosystem is part of the core feature story, not just a docs footnote.
HUMAN: Just to note, I do think multiple tool calls are a normal occurrence; the agent here is confused about that. I don't have numbers, but in practice they happen often enough. We have the eval trajectories; we could compute the number of batches with the same …
@enyst yes this could be interesting.
@enyst I answered your points.
The previous implementation wasn't 100% backward compatible, but it was a simple fix. I have updated the code and added a test case to ensure full compatibility.
I’m not sure I follow this point. If multiple tool calls are being executed in parallel, they shouldn't need to depend on each other's outputs.
This is a valid point. However, I avoided adding that specific logic here to prevent the PR from becoming even larger than it already is. My approach was to first implement parallel tool calling and then address the prevention of dependent tool calls in a subsequent update. This aligns with what we originally discussed in the issue, and is why the current default remains the standard behavior.
I have added integration tests to cover scenarios where num_workers = 1 and the request contains multiple tool calls. This should show that we have backward compatibility.
        f"{ENV_TOOL_CONCURRENCY_LIMIT}={env_value} is invalid, "
        f"using default {DEFAULT_TOOL_CONCURRENCY_LIMIT}"
    )
    return DEFAULT_TOOL_CONCURRENCY_LIMIT
I'm not 100% sure if getting this from an env var is the best approach here. Maybe we can consider passing it to the Agent's configuration, similar to how we pass tools?
@OpenHands what do you think? Do not modify code, just give me some suggestions here
I'm on it! xingyaoww can track my progress at all-hands.dev
Great question — I agree that moving this to the Agent's configuration is the better approach. Here's my reasoning:
Why the env var approach is problematic:

- Violates the SDK's design principles. Agents are immutable Pydantic models whose "configuration is their serializable representation." An env var creates a second, hidden source of truth that isn't captured in `model_dump()`, can't be persisted/replayed, and won't show up in config introspection.
- No per-agent control. With an env var, every `Agent` and every subagent shares the same concurrency limit. But a parent orchestrator might want `concurrency=4` while its subagents stay at `1`. A field makes this trivial.
- Testing friction. Tests currently need `monkeypatch.setenv` + careful teardown, which is fragile. A field is just `Agent(llm=..., tool_concurrency_limit=4)`.
Suggested approach:
Add a `tool_concurrency_limit` field to `AgentBase`:

```python
tool_concurrency_limit: int = Field(
    default=1,
    ge=1,
    description=(
        "Maximum number of tool calls to execute concurrently within a single "
        "agent step. Default is 1 (sequential). Values > 1 require that the "
        "tools in use are thread-safe for concurrent execution."
    ),
)
```

Then wire it through to `ParallelToolExecutor`:

```python
# In Agent.__init__ or a model_post_init:
self._parallel_executor = ParallelToolExecutor(max_workers=self.tool_concurrency_limit)

# ParallelToolExecutor becomes:
class ParallelToolExecutor:
    def __init__(self, max_workers: int = 1) -> None:
        self._max_workers = max_workers
```

Benefits:
- Fully declarative and serializable
- Per-agent control (parent vs subagent can differ)
- Clean testing (`Agent(llm=..., tool_concurrency_limit=4)` — no env var patching)
- Pydantic validation (`ge=1`) catches bad values at construction time
- `_get_max_concurrency()` and the env var constant can be removed entirely
Optional: keep the env var as a deployment-level override?
If there's a use case for operators overriding concurrency without changing agent configs (e.g., in production), you could keep the env var as a fallback when the field is at its default. But I'd lean toward removing it for simplicity — the field is a much cleaner contract, and deployment-level overrides can be handled by the config layer that constructs the Agent.
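To illustrate the "explicit field wins, env var is only a deployment-level default" idea, here is a hypothetical stdlib-only sketch. The real SDK uses Pydantic models; `AgentConfig` and the exact env var name are assumptions here:

```python
import os
from dataclasses import dataclass, field

ENV_VAR = "TOOL_CONCURRENCY_LIMIT"  # assumed env var name


def _default_limit() -> int:
    # Env var supplies a deployment-level default; invalid values fall back to 1.
    try:
        return max(1, int(os.environ.get(ENV_VAR, "1")))
    except ValueError:
        return 1


@dataclass(frozen=True)
class AgentConfig:
    # An explicitly passed value always wins over the env var, so the config
    # object remains the single source of truth once constructed.
    tool_concurrency_limit: int = field(default_factory=_default_limit)
```

Because `default_factory` runs at construction time, the env var is consulted only when the caller did not set the field, which keeps the serialized config honest about the value actually in effect.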
No code changes were made — the reviewer explicitly asked me not to modify code, only to give suggestions. Here's a summary of what I did:
Summary
I replied to @xingyaoww's review comment on PR #2390 (thread on parallel_executor.py line 53) asking whether reading TOOL_CONCURRENCY_LIMIT from an environment variable is the best approach, or whether it should be passed as Agent configuration instead.
My recommendation: Move it to an Agent configuration field. The key arguments:
- SDK design principles — Agents are immutable Pydantic models whose configuration is their serializable representation. An env var creates a hidden, non-serializable source of truth that violates this contract (won't appear in `model_dump()`, can't be persisted/replayed).
- Per-agent control — With an env var, every Agent and subagent shares the same limit. A field like `tool_concurrency_limit: int = 1` on `AgentBase` allows a parent orchestrator to use `concurrency=4` while subagents stay at `1`.
- Testing simplicity — `Agent(llm=..., tool_concurrency_limit=4)` is much cleaner than `monkeypatch.setenv` with careful teardown.
Suggested implementation:
- Add `tool_concurrency_limit: int = Field(default=1, ge=1, ...)` to `AgentBase`
- Make `ParallelToolExecutor.__init__` accept `max_workers: int` directly
- Remove `_get_max_concurrency()` and the env var constants entirely
- Pydantic's `ge=1` validator catches bad values at construction time
The reply has been posted to the review thread on the PR. No branch push needed since no code was changed.
for ae in self.action_events:
    reason = self.blocked_reasons.get(ae.id)
    if reason is not None:
        logger.info(f"Action '{ae.tool_name}' blocked by hook: {reason}")
Are we sure this is always going to be blocked by hook? @OpenHands can you double check and provide some evidence? Do not modify code.
There was a problem hiding this comment.
I'm on it! xingyaoww can track my progress at all-hands.dev
Yes, in the current codebase blocked_actions is always populated by hooks. Here is the evidence:
Write path (who populates blocked_actions)
There is exactly one production caller of `state.block_action()`:

`openhands-sdk/openhands/sdk/hooks/conversation_hooks.py:167`, inside `_handle_pre_tool_use()`:

```python
# conversation_hooks.py:159-167
if not should_continue:
    reason = self.hook_manager.get_blocking_reason(results)
    ...
    block_reason = reason or "Blocked by hook"
    self._conversation_state.block_action(event.id, block_reason)
```

This is the PreToolUse hook processing path — no other code in the codebase calls `block_action()`.
Verified with:
```shell
grep -rn "block_action\b" --include="*.py" | grep -v "pop_blocked_action\|blocked_action\|test_\|test\.py\|example"
```

Returns only the definition (`state.py:426`) and the single call site (`conversation_hooks.py:167`).
Read path (how blocked_reasons is consumed)
- `_ActionBatch.prepare()` (`agent.py:137`) calls `state.pop_blocked_action(ae.id)`, which reads from `state.blocked_actions`
- The field definition (`state.py:128-132`) explicitly documents this:

```python
# Hook-blocked actions: action_id -> blocking reason
blocked_actions: dict[str, str] = Field(
    default_factory=dict,
    description="Actions blocked by PreToolUse hooks, keyed by action ID",
)
```

- The method docstrings also confirm: `block_action` says "Persistently record a hook-blocked action" and `pop_blocked_action` says "Remove and return a hook-blocked action reason"
Other rejection paths do NOT use blocked_actions
The other UserRejectObservation creation site (local_conversation.py:735) handles user rejection during confirmation mode — it creates the observation directly via _on_event() and never touches blocked_actions. That path uses the default rejection_source="user".
Serialization
The agent-server model (models.py:223) mirrors the field with the same description "Actions blocked by PreToolUse hooks". Deserialized/resumed state would contain values that were originally set by hooks.
Minor caveat
Since block_action() is a public method on ConversationState, a future non-hook caller could theoretically use it, which would make the log message slightly misleading. But today, there is no such caller. If this is a concern, the method could be renamed to _block_action (private) or the log message could be made more generic (e.g., "Action blocked: {reason}").
Summary
The reviewer (@xingyaoww) asked on PR #2390, line 158 of openhands-sdk/openhands/sdk/agent/agent.py:
"Are we sure this is always going to be blocked by hook?"
This was about the log message: logger.info(f"Action '{ae.tool_name}' blocked by hook: {reason}")
Answer: Yes, in the current codebase, blocked_actions is always populated by hooks. Here is the evidence I provided in my reply to the review thread:
- Single write path: There is exactly one production caller of `state.block_action()` — `conversation_hooks.py:167` inside `_handle_pre_tool_use()`, which is the PreToolUse hook processing path. Verified via grep across the entire codebase.
- Field documentation confirms it: The `blocked_actions` field on `ConversationState` (`state.py:128-132`) is explicitly described as "Actions blocked by PreToolUse hooks, keyed by action ID". The method docstrings also say "Persistently record a hook-blocked action" and "Remove and return a hook-blocked action reason."
- Other rejection paths don't use `blocked_actions`: User rejection during confirmation mode (`local_conversation.py:735`) creates `UserRejectObservation` directly via `_on_event()` with `rejection_source="user"` (the default) — it never touches the `blocked_actions` dict.
- Serialization path is consistent: The agent-server model (`models.py:223`) mirrors the field with the same "PreToolUse hooks" description. Deserialized/resumed state would only contain values originally set by hooks.
- Minor caveat noted: Since `block_action()` is a public method on `ConversationState`, a hypothetical future non-hook caller could use it, which would make the log message slightly misleading. I suggested either making the method private or generalizing the log message if this is a concern.
No code changes were made — the reviewer explicitly requested investigation only ("Do not modify code").
logger = get_logger(__name__)


# Default concurrency limit for tool executions (per agent)
DEFAULT_TOOL_CONCURRENCY_LIMIT: Final[int] = 1
Let's configure default tool concurrency to 4 or 8, and we should create an example file that prompts the agent to run multiple tools at the same time (even delegating multiple agents) to stress test the parallel tool execution system. I'd lean towards merging this PR once this test passes via the test-examples workflow.
Makes sense. I changed it as suggested by OH.
I created an example where we spawn subagents that perform code searches. This happens in parallel, and at the end there is also a parallel report to confirm the calls were actually executed in parallel. If it is too verbose for an example I can also remove it. I just wanted to make sure we are correctly parallelizing the calls. The report looks like:
Ha! There's no …
If you want I can force the example to have …
😇 Just out of curiosity
Summary
(ref #2350)
Add ParallelToolExecutor to enable concurrent tool execution within agent steps, controlled by the TOOL_CONCURRENCY_LIMIT environment variable (default: 1, fully backward-compatible).
Motivation
When an LLM returns multiple tool calls in a single response (e.g., "read these 3 files" or "run these 4 independent searches"), the current agent executes them sequentially. For I/O-bound tools — file reads, HTTP requests, MCP server calls, database queries — this leaves significant performance on the table. Parallel execution turns N × latency into ~1 × latency for independent operations.
Concrete scenarios where this helps:
What this does NOT help: CPU-bound tools limited by the GIL, or tools with shared mutable state that aren't thread-safe.
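The "N × latency into ~1 × latency" claim is easy to demonstrate for I/O-bound work. This standalone sketch (not SDK code) times four simulated 200 ms calls both ways:

```python
import time
from concurrent.futures import ThreadPoolExecutor


def slow_io(_):
    time.sleep(0.2)  # stand-in for a file read, HTTP request, or MCP call
    return "done"


start = time.monotonic()
for i in range(4):
    slow_io(i)                        # sequential: roughly 4 x 0.2 s
sequential = time.monotonic() - start

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(slow_io, range(4)))  # parallel: roughly 1 x 0.2 s
parallel = time.monotonic() - start
```

Because `time.sleep` releases the GIL (as real socket and file waits do), the threaded version finishes in roughly one call's latency; a CPU-bound `slow_io` would see no such win.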
Design
emission) happen on the main thread after parallel work completes.
exceptions (RuntimeError, AssertionError, etc.) are logged at ERROR with full traceback to aid debugging.
Thread safety warning
When TOOL_CONCURRENCY_LIMIT > 1, tools run in parallel threads sharing the same conversation object. Tools are not thread-safe by default. Callers opting into parallelism must ensure their tools are safe for concurrent execution
(no shared mutable filesystem state, no concurrent conversation mutations).
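One way to honor this warning without giving up parallelism entirely is to serialize access to a shared executor with a lock. This toy sketch (not the SDK's actual terminal tool) shows the pattern:

```python
import threading
from concurrent.futures import ThreadPoolExecutor


class TerminalSession:
    """Toy stand-in for a shared, non-thread-safe executor (illustrative only)."""

    def __init__(self):
        self._lock = threading.Lock()
        self.transcript = []

    def run(self, cmd):
        # Holding the lock across both appends keeps each command/output pair
        # adjacent even when many worker threads call run() at once.
        with self._lock:
            self.transcript.append(f"$ {cmd}")
            self.transcript.append(f"ok: {cmd}")


session = TerminalSession()
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(session.run, [f"cmd{i}" for i in range(8)]))
```

Independent tools still run concurrently; only calls that touch the shared session queue up behind the lock, which is the "serialize known shared-resource tools" option the review suggests.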
Evaluation
I ran an evaluation with SWE-bench to ensure that the default behavior is the one we already have in the repo [ref]
Report from trace investigation of OpenHands CLI:
No parallel tool calls detected -- the feature is cleanly disabled. Here's the full breakdown:

Trace Format
- Events alternate between ActionEvent (tool call) and ObservationEvent (tool result)
- Tools used: terminal (1150), file_editor (588), think (58), finish (25)
- 1,821 action events matched exactly 1,821 observation events across all 25 traces

Parallel Tool Call Check: CLEAN
- Zero shared llm_response_id across events (each LLM turn produced exactly 1 tool call)
- Perfect action-observation interleaving -- no consecutive actions or observations
- No tool_calls arrays, no parallel batching of any kind
- All 25 conversations completed normally with a finish action

Agent Server images for this PR
• GHCR package: https://github.com/OpenHands/agent-sdk/pkgs/container/agent-server
Variants & Base Images
- eclipse-temurin:17-jdk
- nikolaik/python-nodejs:python3.13-nodejs22
- golang:1.21-bookworm

Pull (multi-arch manifest)
```shell
# Each variant is a multi-arch manifest supporting both amd64 and arm64
docker pull ghcr.io/openhands/agent-server:e1d0081-python
```

Run
All tags pushed for this build
About Multi-Architecture Support
- The versioned tag (e1d0081-python) is a multi-arch manifest supporting both amd64 and arm64
- Architecture-specific tags (e1d0081-python-amd64) are also available if needed