63 commits
155f2d0
feat(ui): non-blocking floating input box prototype
Nate0-1999 Feb 27, 2026
c64e601
fix(ui): stable interject logic without dropping keys
Nate0-1999 Feb 27, 2026
2ae2fee
fix(ui): remove bad unindented global vars
Nate0-1999 Feb 27, 2026
9768458
fix(ui): enforce state lock so agent cannot run twice concurrently
Nate0-1999 Feb 27, 2026
1dc7671
fix(ui): clean up sed corruption around run_prompt_with_attachments
Nate0-1999 Feb 27, 2026
4b6f852
fix(ui): cleanly rebuild background block without losing original fea…
Nate0-1999 Feb 27, 2026
0c55310
feat(ui): smooth inline interjection and prompt queue handling
Nate0-1999 Feb 27, 2026
d6a8f19
fix(ui): revert to raw=True for patch_stdout to fix terminal ANSI wra…
Nate0-1999 Feb 27, 2026
1d0dfa6
feat(ui): improve prompt layout, clean cancellation handling, full wi…
Nate0-1999 Feb 27, 2026
9b1d969
feat: Add smooth interject functionality with single-key execution
Nate0-1999 Feb 27, 2026
dec161b
fix: Disable CPR to prevent terminal output corruption during interje…
Nate0-1999 Feb 27, 2026
59b6aab
feat: Add semantic framing prefix and suffix to interjected messages
Nate0-1999 Feb 27, 2026
06795e3
fix: Erase interject prompt when done and prevent redundant queued ex…
Nate0-1999 Feb 27, 2026
44923e6
fix: Polish interjection UI (erase intermediate input, match spacing,…
Nate0-1999 Feb 27, 2026
c17e191
fix: Resolve missing get_console import in CLI runner causing crash o…
Nate0-1999 Feb 27, 2026
1b27e9b
fix: Polish interject UI - remove ghost prompt, fix spacing, align bo…
Nate0-1999 Feb 27, 2026
527230f
fix: Disable redundant user prompt emission and compact prompt spacing
Nate0-1999 Feb 27, 2026
9488524
Revert "fix: Disable redundant user prompt emission and compact promp…
Nate0-1999 Feb 27, 2026
ee4f630
fix: Align interject prompt with main prompt and pause spinner during…
Nate0-1999 Feb 27, 2026
5bf5f5f
Merge pull request #1 from Nate0-1999/interject-works-laggy
Nate0-1999 Feb 27, 2026
c804474
revert latest interject visibility tweak
Mar 4, 2026
4c84036
Merge pull request #2 from Nate0-1999/revert-latest-interject-loop
Nate0-1999 Mar 4, 2026
faebf28
sometime the queue doesn't trigger and the interject printing has a b…
Mar 5, 2026
321233d
Merge pull request #3 from Nate0-1999/good-check-point
Nate0-1999 Mar 5, 2026
ff116b2
Restore shell streaming during foreground commands
Mar 6, 2026
7cdda6e
Merge pull request #4 from Nate0-1999/codex/restore-shell-streaming
Nate0-1999 Mar 6, 2026
46df828
Create v1 stable checkpoint
Mar 6, 2026
0bb958f
Merge pull request #5 from Nate0-1999/codex/v1-stable
Nate0-1999 Mar 6, 2026
4738615
Refine interject and queue runtime
Mar 10, 2026
132028f
Document current rewrite state
Mar 10, 2026
d918320
Merge pull request #6 from Nate0-1999/codex/rewrite-interject-queue-v2
Nate0-1999 Mar 10, 2026
94e5f34
Remove README transcript snapshot
Mar 10, 2026
b2a61de
Polish transcript visibility
Mar 10, 2026
d14816c
Merge pull request #7 from Nate0-1999/codex/next-iteration-cycle
Nate0-1999 Mar 10, 2026
0eb01de
Overhaul interactive command handling
Mar 10, 2026
186456f
Merge pull request #8 from Nate0-1999/codex/command-overhaul
Nate0-1999 Mar 10, 2026
671238a
Restore interactive autosave triggers
Mar 10, 2026
cbd9c95
Merge pull request #9 from Nate0-1999/codex/fix-autosave-triggers
Nate0-1999 Mar 10, 2026
9c2c92f
Serialize above-prompt rendering
Mar 11, 2026
dd849ac
Merge pull request #10 from Nate0-1999/codex/fix-above-prompt-artifac…
Nate0-1999 Mar 11, 2026
6c1b179
Harden parity for busy attachments
Mar 11, 2026
7206d56
Merge pull request #11 from Nate0-1999/codex/parity-token-hooks-at
Nate0-1999 Mar 11, 2026
2bd1c82
Add chooser edit and escape paths
Mar 11, 2026
b1a71bb
Merge pull request #12 from Nate0-1999/codex/chooser-back-to-edit
Nate0-1999 Mar 11, 2026
8beca2b
Sync fork main onto upstream main
Mar 12, 2026
e899ce5
Merge pull request #13 from Nate0-1999/codex/sync-upstream-main
Nate0-1999 Mar 13, 2026
9827fc3
Make chooser input modal
Mar 13, 2026
6ac7a2d
Merge pull request #14 from Nate0-1999/codex/more-improvements
Nate0-1999 Mar 13, 2026
be6ab3e
Refine shell, wiggum, and paused queue behavior
Mar 16, 2026
08991e1
Merge pull request #15 from Nate0-1999/codex/shell-and-wiggum-cancel-…
Nate0-1999 Mar 16, 2026
cad85fd
Merge remote-tracking branch 'origin/main' into codex/upstream-pr-prep
Mar 16, 2026
5246f89
Prep upstream PR branch
Mar 16, 2026
55ff763
Add configurable interactive queue limit
Mar 16, 2026
088af6d
Merge pull request #16 from Nate0-1999/codex/upstream-pr-prep
Nate0-1999 Mar 16, 2026
bffca5f
Improve prompt-surface streaming previews
Mar 23, 2026
2025fdf
Merge pull request #17 from Nate0-1999/codex/next-features
Nate0-1999 Mar 23, 2026
f86d1cb
Lean terminal compat and protect foreground ephemeral UI
Apr 7, 2026
9120c7d
Merge pull request #18 from Nate0-1999/codex/next-features
Nate0-1999 Apr 7, 2026
1b6b4aa
Merge upstream/main into fork main
Apr 7, 2026
77d4f8b
Gracefully reject malformed replace_in_file payloads
Apr 7, 2026
f0268c4
Merge pull request #19 from Nate0-1999/codex/upstream-main-sync-surgical
Nate0-1999 Apr 7, 2026
87d4af4
fix(runtime): address review correctness issues
Apr 7, 2026
f6db02d
Merge pull request #20 from Nate0-1999/codex/upstream-main-sync-surgical
Nate0-1999 Apr 7, 2026
10 changes: 10 additions & 0 deletions IMPLEMENTATION_GUARDRAILS.md
@@ -0,0 +1,10 @@
# Implementation Guardrails

- When the live prompt surface is active, `agent_share_your_reasoning` must render through the structured `AGENT REASONING` path, not as low-level `Calling ... token(s)` tool progress.
- When the live prompt surface is active, mutable tool progress that upstream prints and clears must render in the prompt-local ephemeral status strip, not as transcript output and not via above-prompt prints.
- When the live prompt surface is active, streamed `TextPart` content may appear only in the prompt-local ephemeral preview; the permanent transcript must still come only from the final `AGENT RESPONSE`.
- When the live prompt surface is active, shell output with carriage-return progress must use the prompt-local ephemeral status strip; ordinary shell lines remain on the durable shell output path.
- Durable structured outputs like `AGENT REASONING` and `DIRECTORY LISTING` should still render above the prompt.
- Prompt-surface stream fixes must not duplicate the final `AGENT RESPONSE`.
- The prompt-local ephemeral status/preview is foreground-only; session-tagged sub-agent messages must never write to it or clear it.
- Terminal/emulator-specific behavior must flow through the shared terminal-capability helper in `terminal_utils` rather than adding new scattered env checks.
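The routing rules above reduce to a single decision per stream event. The sketch below is purely illustrative: the event-kind strings and return labels are hypothetical names chosen for this example, not identifiers from code_puppy.

```python
def route_stream_event(kind: str, prompt_surface_active: bool) -> str:
    """Map a stream event kind to the render path the guardrails require.

    Hypothetical reduction of the bullet list above; the kind names are
    illustrative, not the project's actual event types.
    """
    if not prompt_surface_active:
        return "transcript"
    if kind == "reasoning":
        return "structured_agent_reasoning"  # structured AGENT REASONING path
    if kind in ("tool_progress", "cr_progress"):
        return "ephemeral_status"            # prompt-local status strip
    if kind == "text_delta":
        return "ephemeral_preview"           # live preview only, never transcript
    if kind == "final_response":
        return "transcript"                  # durable AGENT RESPONSE
    return "above_prompt"                    # other durable structured output
```

Note that `text_delta` and `final_response` land on different surfaces, which is exactly the "no duplicated AGENT RESPONSE" rule.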
2 changes: 1 addition & 1 deletion code_puppy/agents/base_agent.py
@@ -1960,7 +1960,7 @@ async def run_agent_task():
"Try disabling any malfunctioning MCP servers", group_id=group_id
)
except* asyncio.exceptions.CancelledError:
emit_info("Cancelled")
# We don't print "Cancelled" here anymore so smooth interjections stay clean.
if get_use_dbos():
await DBOS.cancel_workflow_async(group_id)
except* InterruptedError as ie:
227 changes: 187 additions & 40 deletions code_puppy/agents/event_stream_handler.py
@@ -2,6 +2,7 @@

import asyncio
import logging
import sys
from collections.abc import AsyncIterable
from typing import Any, Optional

@@ -85,6 +86,108 @@ def _should_suppress_output() -> bool:
return is_subagent() and not get_subagent_verbose()


def _get_active_prompt_runtime() -> Any | None:
"""Return the active interactive runtime, if available."""
try:
from code_puppy.command_line.interactive_runtime import (
get_active_interactive_runtime,
)

return get_active_interactive_runtime()
except Exception:
return None


def _has_active_prompt_surface() -> bool:
"""Return True when the always-on prompt surface is mounted."""
runtime = _get_active_prompt_runtime()
return runtime.has_prompt_surface() if runtime is not None else False


def _set_prompt_ephemeral_status(text: str | None) -> None:
"""Update transient prompt-local status for mutable stream output."""
runtime = _get_active_prompt_runtime()
if runtime is None:
return
try:
runtime.set_prompt_ephemeral_status(text)
except Exception:
pass


def _clear_prompt_ephemeral_status() -> None:
"""Clear transient prompt-local status."""
runtime = _get_active_prompt_runtime()
if runtime is None:
return
try:
runtime.clear_prompt_ephemeral_status()
except Exception:
pass


def _set_prompt_ephemeral_preview(text: str | None) -> None:
"""Update transient prompt-local preview for live response text."""
runtime = _get_active_prompt_runtime()
if runtime is None:
return
try:
runtime.set_prompt_ephemeral_preview(text)
except Exception:
pass


def _merge_tool_name(current_name: str, tool_name_delta: str) -> str:
"""Merge a streamed tool name delta without duplicating already-known names."""
if not tool_name_delta:
return current_name
if not current_name:
return tool_name_delta
if tool_name_delta.startswith(current_name):
return tool_name_delta
if tool_name_delta in current_name:
return current_name
for overlap in range(min(len(current_name), len(tool_name_delta)), 0, -1):
if current_name.endswith(tool_name_delta[:overlap]):
return current_name + tool_name_delta[overlap:]
return current_name + tool_name_delta


def _is_reasoning_tool_name(tool_name: str) -> bool:
"""Return True for the reasoning tool, including streamed prefixes."""
reasoning_tool = "agent_share_your_reasoning"
return bool(tool_name) and (
reasoning_tool.startswith(tool_name) or tool_name.startswith(reasoning_tool)
)


def _build_prompt_safe_console(source_console: Console) -> Console:
"""Create a console that writes to the real terminal above the prompt."""
return Console(
file=sys.__stdout__,
force_terminal=source_console.is_terminal,
width=source_console.width,
color_system=source_console.color_system,
soft_wrap=source_console.soft_wrap,
legacy_windows=source_console.legacy_windows,
)


async def _print_stream_output(
console: Console, *args: Any, **kwargs: Any
) -> None:
"""Render stream output above the prompt when the prompt surface is mounted."""
runtime = _get_active_prompt_runtime()
if runtime is not None and runtime.has_prompt_surface():
prompt_safe_console = _build_prompt_safe_console(console)
rendered = await runtime.run_above_prompt_async(
lambda: prompt_safe_console.print(*args, **kwargs)
)
if rendered:
return
console.print(*args, **kwargs)


async def event_stream_handler(
ctx: RunContext,
events: AsyncIterable[Any],
@@ -119,6 +222,8 @@ async def event_stream_handler(
token_count: dict[int, int] = {} # Track token count per text/tool part
tool_names: dict[int, str] = {} # Track tool name per tool part index
did_stream_anything = False # Track if we streamed any content
spinner_paused = False
prompt_surface_response_preview = ""

# Termflow streaming state for text parts
termflow_parsers: dict[int, TermflowParser] = {}
@@ -127,16 +232,22 @@

async def _print_thinking_banner() -> None:
"""Print the THINKING banner with spinner pause and line clear."""
nonlocal did_stream_anything

pause_all_spinners()
await asyncio.sleep(0.1) # Delay to let spinner fully clear
# Clear line and print newline before banner
console.print(" " * 50, end="\r")
console.print() # Newline before banner
nonlocal did_stream_anything, spinner_paused

prompt_surface_active = _has_active_prompt_surface()
if not spinner_paused:
pause_all_spinners()
spinner_paused = True
await asyncio.sleep(0.02)
if prompt_surface_active:
await _print_stream_output(console)
else:
await _print_stream_output(console, " " * 50, end="\r")
await _print_stream_output(console) # Newline before banner
# Bold banner with configurable color and lightning bolt
thinking_color = get_banner_color("thinking")
console.print(
await _print_stream_output(
console,
Text.from_markup(
f"[bold white on {thinking_color}] THINKING [/bold white on {thinking_color}] [dim]\u26a1 "
),
@@ -146,15 +257,21 @@ async def _print_thinking_banner() -> None:

async def _print_response_banner() -> None:
"""Print the AGENT RESPONSE banner with spinner pause and line clear."""
nonlocal did_stream_anything

pause_all_spinners()
await asyncio.sleep(0.1) # Delay to let spinner fully clear
# Clear line and print newline before banner
console.print(" " * 50, end="\r")
console.print() # Newline before banner
nonlocal did_stream_anything, spinner_paused

prompt_surface_active = _has_active_prompt_surface()
if not spinner_paused:
pause_all_spinners()
spinner_paused = True
await asyncio.sleep(0.02)
if prompt_surface_active:
await _print_stream_output(console)
else:
await _print_stream_output(console, " " * 50, end="\r")
await _print_stream_output(console) # Newline before banner
response_color = get_banner_color("agent_response")
console.print(
await _print_stream_output(
console,
Text.from_markup(
f"[bold white on {response_color}] AGENT RESPONSE [/bold white on {response_color}]"
)
@@ -182,32 +299,33 @@ async def _print_response_banner() -> None:
if part.content and part.content.strip():
await _print_thinking_banner()
escaped = escape(part.content)
console.print(f"[dim]{escaped}[/dim]", end="")
await _print_stream_output(console, f"[dim]{escaped}[/dim]", end="")
banner_printed.add(event.index)
elif isinstance(part, TextPart):
streaming_parts.add(event.index)
text_parts.add(event.index)
# Initialize termflow streaming for this text part
termflow_parsers[event.index] = TermflowParser()
termflow_renderers[event.index] = TermflowRenderer(
output=console.file, width=console.width
)
termflow_line_buffers[event.index] = ""
# Handle initial content if present
if part.content and part.content.strip():
await _print_response_banner()
banner_printed.add(event.index)
termflow_line_buffers[event.index] = part.content
if _has_active_prompt_surface():
if part.content:
prompt_surface_response_preview += part.content
_set_prompt_ephemeral_preview(prompt_surface_response_preview)
else:
# Initialize termflow streaming for this text part
termflow_parsers[event.index] = TermflowParser()
termflow_renderers[event.index] = TermflowRenderer(
output=console.file, width=console.width
)
termflow_line_buffers[event.index] = ""
# Handle initial content if present
if part.content and part.content.strip():
await _print_response_banner()
banner_printed.add(event.index)
termflow_line_buffers[event.index] = part.content
elif isinstance(part, ToolCallPart):
streaming_parts.add(event.index)
tool_parts.add(event.index)
token_count[event.index] = 0 # Initialize token counter
# Capture tool name from the start event
tool_names[event.index] = part.tool_name or ""
# Track tool name for display
banner_printed.add(
event.index
) # Use banner_printed to track if we've shown tool info

# PartDeltaEvent - stream the content as it arrives
elif isinstance(event, PartDeltaEvent):
@@ -227,6 +345,12 @@ async def _print_response_banner() -> None:
if delta.content_delta:
# For text parts, stream markdown with termflow
if event.index in text_parts:
if _has_active_prompt_surface():
prompt_surface_response_preview += delta.content_delta
_set_prompt_ephemeral_preview(
prompt_surface_response_preview
)
continue
# Print banner on first content
if event.index not in banner_printed:
await _print_response_banner()
@@ -252,8 +376,11 @@ async def _print_response_banner() -> None:
await _print_thinking_banner()
banner_printed.add(event.index)
escaped = escape(delta.content_delta)
console.print(f"[dim]{escaped}[/dim]", end="")
await _print_stream_output(
console, f"[dim]{escaped}[/dim]", end=""
)
elif isinstance(delta, ToolCallPartDelta):
prompt_surface_active = _has_active_prompt_surface()
# For tool calls, estimate tokens from args_delta content
# args_delta contains the streaming JSON arguments
args_delta = getattr(delta, "args_delta", "") or ""
@@ -268,21 +395,35 @@ async def _print_response_banner() -> None:
# Update tool name if delta provides more of it
tool_name_delta = getattr(delta, "tool_name_delta", "") or ""
if tool_name_delta:
tool_names[event.index] = (
tool_names.get(event.index, "") + tool_name_delta
tool_names[event.index] = _merge_tool_name(
tool_names.get(event.index, ""), tool_name_delta
)

# Use stored tool name for display
tool_name = tool_names.get(event.index, "")
if prompt_surface_active:
if not _is_reasoning_tool_name(tool_name):
count = token_count[event.index]
if tool_name:
_set_prompt_ephemeral_status(
f"\U0001f527 Calling {tool_name}... {count} token(s)"
)
else:
_set_prompt_ephemeral_status(
f"\U0001f527 Calling tool... {count} token(s)"
)
continue
count = token_count[event.index]
# Display with tool wrench icon and tool name
if tool_name:
console.print(
await _print_stream_output(
console,
f" \U0001f527 Calling {tool_name}... {count} token(s) ",
end="\r",
)
else:
console.print(
await _print_stream_output(
console,
f" \U0001f527 Calling tool... {count} token(s) ",
end="\r",
)
@@ -322,11 +463,14 @@ async def _print_response_banner() -> None:
del termflow_line_buffers[event.index]
# For tool parts, clear the chunk counter line
elif event.index in tool_parts:
# Clear the chunk counter line by printing spaces and returning
console.print(" " * 50, end="\r")
if _has_active_prompt_surface():
_clear_prompt_ephemeral_status()
else:
# Clear the chunk counter line by printing spaces and returning
await _print_stream_output(console, " " * 50, end="\r")
# For thinking parts, just print newline
elif event.index in banner_printed:
console.print() # Final newline after streaming
await _print_stream_output(console) # Final newline after streaming

# Clean up token count and tool names
token_count.pop(event.index, None)
@@ -344,5 +488,8 @@
next_kind = getattr(event, "next_part_kind", None)
if next_kind not in ("text", "thinking", "tool-call"):
resume_all_spinners()
spinner_paused = False

# Spinner is resumed in PartEndEvent when appropriate (based on next_part_kind)
if spinner_paused:
resume_all_spinners()