Commit 1070287

tweak codebuff-local-cli from runs by gpt-5.4

1 parent 9c65ed1 commit 1070287

File tree

6 files changed: +232 −2 lines changed

.agents/codebuff-local-cli.ts

Lines changed: 11 additions & 1 deletion

````diff
@@ -12,6 +12,16 @@ const baseDefinition = createCliAgent({
     'No permission flags needed for Codebuff local dev server.',
   model: 'anthropic/claude-opus-4.6',
   skipPrepPhase: true,
+  cliSpecificDocs: `## Codebuff CLI Specific Guidance
+
+- The ready state is the Codebuff banner, working directory, and bordered input box with the agent selector.
+- For smoke tests, \`/help\` is useful because it validates the overlay, shortcuts, features, and credits copy in one step.
+- For implementation-oriented tests, prefer asking the CLI to inspect or reason about a specific file rather than making edits unless the parent prompt explicitly asks for edits.
+- Long Codebuff responses live in a scrollable viewport. If the bottom of the answer already shows the core recommendation, do not spend many extra steps trying to reconstruct every hidden line.
+- Avoid key combinations like Shift+Arrow or repeated history/navigation probing unless you have a clear reason; they can open overlays or mutate the input state unexpectedly.
+- A good implementation-test flow is usually: initial ready capture → task sent/in-progress capture → response-complete capture → optional follow-up-ready or follow-up-complete capture.
+- If you need a follow-up, keep it narrow and specific rather than re-asking the whole task.
+- If the current session becomes clearly unusable, report that failure; do not silently start a replacement session and continue as though nothing happened.`,
   spawnerPromptExtras: `**Purpose:** E2E visual testing of the Codebuff CLI itself. This agent starts a local dev Codebuff CLI instance and interacts with it to verify UI behavior.

 **When to use:**
@@ -97,7 +107,7 @@ const definition: AgentDefinition = {
   input: {
     role: 'user',
     content: 'A ' + CLI_NAME + ' tmux session has been started: `' + sessionName + '`\n\n' +
-      'Use this session for all CLI interactions. The session name must be included in your final output.\n\n' +
+      'Use this session for all CLI interactions. Treat it as the canonical session for this run. If it fails, report that explicitly instead of silently starting another session. The session name must be included in your final output.\n\n' +
       'Proceed with the task using the helper scripts:\n' +
       '- Send commands: `./scripts/tmux/tmux-cli.sh send "' + sessionName + '" "..."`\n' +
       '- Capture output: `./scripts/tmux/tmux-cli.sh capture "' + sessionName + '" --label "..."`\n' +
````
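The four-phase capture flow recommended in `cliSpecificDocs` can be sketched as data plus a small command builder. This is an illustrative TypeScript sketch, not repo code: `CapturePlanStep` and `buildCaptureCommands` are hypothetical names, and only the helper script's `capture`, `--label`, and `--wait` interface is taken from the diff above.

```typescript
// Illustrative sketch of the four-phase implementation-test flow as data.
// CapturePlanStep and buildCaptureCommands are hypothetical names; the
// helper-script flags (--label, --wait) match the ones shown in the diff.
interface CapturePlanStep {
  label: string;       // descriptive capture label, e.g. "initial-ready"
  waitSeconds: number; // settle time before capturing (0 = capture immediately)
}

const implementationFlow: CapturePlanStep[] = [
  { label: 'initial-ready', waitSeconds: 0 },
  { label: 'task-in-progress', waitSeconds: 5 },
  { label: 'response-complete', waitSeconds: 30 },
  { label: 'follow-up-complete', waitSeconds: 30 }, // optional last phase
];

// Render each step as a helper-script invocation.
function buildCaptureCommands(session: string, plan: CapturePlanStep[]): string[] {
  return plan.map(({ label, waitSeconds }) => {
    const wait = waitSeconds > 0 ? ` --wait ${waitSeconds}` : '';
    return `./scripts/tmux/tmux-cli.sh capture "${session}" --label "${label}"${wait}`;
  });
}
```

One capture per meaningful UI change, four in total — which is exactly the "fewer, higher-value captures" heuristic expressed as a plan rather than ad hoc key presses.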

.agents/lib/cli-agent-prompts.ts

Lines changed: 13 additions & 1 deletion

````diff
@@ -111,6 +111,16 @@ export function getSystemPrompt(config: CliAgentConfig): string {

 **Important:** ${config.permissionNote}
 ${cliSpecificSection}
+## Operating Heuristics
+
+- Treat the provided tmux session as the single source of truth. Do not start a second session unless the current one has clearly failed and you are explicitly recovering from that failure.
+- Prefer fewer, higher-value captures over many overlapping captures.
+- A capture is worth taking when the UI meaningfully changes: startup ready state, help overlay open, task in progress, task complete, clean follow-up-ready state, or an error state.
+- Avoid exploratory key presses that can mutate the UI state unless they are necessary for the task.
+- If the CLI already shows enough evidence in the current viewport, do not keep scrolling or recapturing just to get a more perfect screenshot.
+- If a long response is partially off-screen, prefer summarizing from the visible evidence instead of repeatedly trying viewport-recovery tricks unless the missing content is essential.
+- Do not use \`read_files\` on tmux capture artifacts from inside the CLI tester run; rely on the terminal capture output you already obtained and let the parent agent inspect saved capture files later if needed.
+
 ## Helper Scripts

 Use these scripts in \`scripts/tmux/\` to interact with the CLI session:
@@ -238,6 +248,8 @@ Use ${config.cliName} to complete implementation tasks like building features, f
    ./scripts/tmux/tmux-cli.sh capture "$SESSION" --label "work-continued" --wait 30
    \`\`\`

+Prefer at most 1-2 progress captures before deciding whether you already have enough evidence.
+
 4. **Send follow-up prompts** if needed to refine or continue the work:
    \`\`\`bash
    ./scripts/tmux/tmux-cli.sh send "$SESSION" "<follow-up instructions>"
@@ -258,7 +270,7 @@ Use ${config.cliName} to complete implementation tasks like building features, f
 ### Tips

 - Break complex tasks into smaller prompts
-- Capture frequently to track progress
+- Prefer high-value captures tied to meaningful UI changes rather than frequent overlapping captures
 - Use descriptive labels for captures
 - Check intermediate results before moving on`
 }
````
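The mechanism behind this diff — a per-CLI docs field spliced into a shared system prompt — can be sketched in a few lines. This is a simplified assumption about the code's shape, not the real implementation: only the names `getSystemPrompt`, `CliAgentConfig`, `cliSpecificDocs`, `permissionNote`, and `cliSpecificSection` come from the diffs; the rest is illustrative.

```typescript
// Simplified sketch (not the real implementation) of how a per-CLI docs
// field could be injected into the shared system prompt. The CliAgentConfig
// shape here is an assumption for illustration.
interface CliAgentConfig {
  cliName: string;
  permissionNote: string;
  cliSpecificDocs?: string; // e.g. the Codebuff-specific guidance block
}

function getSystemPrompt(config: CliAgentConfig): string {
  // Agents without CLI-specific docs simply omit the section.
  const cliSpecificSection = config.cliSpecificDocs
    ? `\n${config.cliSpecificDocs}\n`
    : '';
  return [
    `You are testing the ${config.cliName} CLI inside a tmux session.`,
    `**Important:** ${config.permissionNote}`,
    cliSpecificSection,
    '## Operating Heuristics',
    '- Treat the provided tmux session as the single source of truth.',
  ].join('\n');
}
```

The design keeps shared heuristics in one place while letting each concrete agent (like `codebuff-local-cli`) contribute knowledge only it can know.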
Lines changed: 73 additions & 0 deletions

# Lessons: CLI tester efficiency and CLI knowledge improvements

## What went well

- The SDK-driven harness made it straightforward to collect full event streams, stream chunks, structured outputs, and tmux capture paths for repeated `codebuff-local-cli` runs.
- The baseline runs clearly exposed behavior patterns instead of relying on intuition.
- The Codebuff CLI itself was capable and informative during implementation-oriented runs; most inefficiency came from the tester agent's workflow rather than the CLI under test.

## What was tricky

- The `codebuff-local-cli` agent uses only `run_terminal_command`, `add_message`, and `set_output`, so all tester intelligence has to come from prompt/instruction quality rather than richer tooling.
- Long Codebuff CLI responses live in a scrollable viewport. The tester spent many extra steps trying to recover hidden content even when the visible portion already contained enough evidence.
- One smoke run silently started a second tmux session mid-run, showing that the current guidance was too weak about preserving session continuity and treating failure recovery explicitly.
- Reading tmux capture artifacts from inside the tester run is ineffective because the agent does not have `read_files`; attempts to recover more evidence should therefore be avoided unless the current viewport is truly insufficient.

## Quantified before/after findings

### Smoke scenario

- Baseline smoke runs: `27` and `38` total events, with one run silently starting a replacement tmux session mid-run.
- Post-change smoke run: `27` total events, `10` tool calls, `3` captures, no replacement session, and clearer capture labels (`initial-state`, `after-help`, `after-2plus2`).

### Implementation scenario

- Baseline implementation runs:
  - tool calls: `19` and `21`
  - captures: `8` and `7`
  - total cost: `30` and `40`
  - strong evidence of wasted viewport-recovery actions (page up/down, history keys, extra captures, direct tmux scrollback commands)
- Post-change implementation run:
  - tool calls: `10`
  - captures: `4`
  - total cost: `14`
  - no viewport-recovery thrashing; the tester captured the ready state, in-progress state, response, and follow-up response and then stopped.

## Baseline findings

- Smoke runs were mostly efficient, but their capture labels were generic and the agent did not explicitly reason about why each capture was worth taking.
- One smoke run restarted the session instead of treating the original session as canonical, inflating event/tool counts.
- Implementation runs showed the biggest inefficiency: excessive viewport-recovery actions (page up/down, arrow keys, extra captures, direct tmux scrollback commands) after the key recommendation was already visible.
- The tester lacked Codebuff-specific guidance about:
  - what the ready state looks like,
  - when `/help` is especially valuable,
  - how to structure a good implementation-oriented test,
  - and when to stop chasing perfect captures of long responses.

## What changed behavior most

- Adding a canonical-session instruction prevented silent session replacement behavior and made failure handling expectations explicit.
- Adding the shared "high-value capture" heuristic reduced redundant captures and discouraged overlapping progress snapshots.
- Adding explicit guidance to stop chasing hidden viewport text eliminated the biggest source of waste in implementation-oriented runs.
- Adding Codebuff-specific flow guidance improved follow-up quality and reduced exploratory key usage.

## Changes made from baseline evidence

- Added shared operating heuristics to bias CLI testers toward fewer, higher-value captures and away from unnecessary UI mutation.
- Added explicit guidance to avoid `read_files` on tmux artifacts from inside the tester run.
- Added Codebuff-specific testing guidance covering ready state, smoke-test flow, implementation-test flow, long-response behavior, and session continuity expectations.
- Added best-effort harness cleanup when a run throws after a tmux session has already been created.

## Cautionary note

- Different runs may disagree about whether adjacent edge cases are worth fixing. For example, one post-change implementation run argued that the original-case `isEnvFile` call path was acceptable because `.env` files are conventionally lowercase, while earlier baseline runs framed nearby case handling as security-sensitive. Future work should settle those questions with source-of-truth tests or project policy, not by trusting a single run's opinion.

## Known limitation

- The analysis harness now does best-effort tmux cleanup when a run throws after a session has already been created, but it still does not implement a hard per-run abort/timeout with guaranteed teardown if `client.run()` stalls indefinitely. Future iterations should add explicit run cancellation once the preferred timeout mechanism is settled.

## What we intentionally did not change

- We did not change the tmux helper scripts because the baseline problems were primarily agent-behavior issues, not script failures.
- We did not broaden the tester's tool access; this pass focuses on making the current workflow smarter rather than increasing power.
- We did not change the shared output schema because the existing `set_output` contract was sufficient for analysis once the agent behavior improved.
Lines changed: 57 additions & 0 deletions

# Plan: CLI tester efficiency and CLI knowledge improvements

## Implementation Steps

1. Build an SDK-driven analysis harness for the CLI tester runs.
   - Add a reproducible script or test helper that runs `codebuff-local-cli` through the SDK with `handleEvent` and `handleStreamChunk` collection.
   - Standardize artifact naming for comparison (for example `baseline-smoke-run1`, `baseline-implementation-run2`, `post-smoke-run1`).
   - Define and persist a consistent metrics schema per run, including event counts by type, tool-call counts, unique tool names, spawned-agent counts, capture counts, and notable wait/capture observations.
   - Build in explicit failure-path handling for missing API key, auth failure, tmux startup failure, and hung runs, including cleanup where possible.
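The per-run metrics schema in Step 1 could be persisted as a record like the following. This is a hypothetical sketch: the event shape (`HarnessEvent`) and all field names are assumptions for illustration, not the SDK's actual types; only the metric categories and the `tmux-cli.sh capture` command come from this plan and the diffs above.

```typescript
// Hypothetical per-run metrics record for the schema described in Step 1.
// The event shape is an assumption; real SDK events will differ.
interface HarnessEvent {
  type: string;       // e.g. 'tool_call', 'agent_spawn', 'text'
  toolName?: string;  // present on tool-call events
  command?: string;   // present on run_terminal_command calls
}

interface RunMetrics {
  runId: string;                              // e.g. 'baseline-smoke-run1'
  eventCountsByType: Record<string, number>;
  toolCallCount: number;
  uniqueToolNames: string[];
  spawnedAgentCount: number;
  captureCount: number;                       // tool calls that ran a capture
}

// Fold a collected event stream into the metrics record.
function computeMetrics(runId: string, events: HarnessEvent[]): RunMetrics {
  const eventCountsByType: Record<string, number> = {};
  const toolNames = new Set<string>();
  let toolCallCount = 0;
  let spawnedAgentCount = 0;
  let captureCount = 0;
  for (const e of events) {
    eventCountsByType[e.type] = (eventCountsByType[e.type] ?? 0) + 1;
    if (e.type === 'tool_call') {
      toolCallCount++;
      if (e.toolName) toolNames.add(e.toolName);
      // Heuristic: count terminal commands that invoke the capture helper.
      if (e.command?.includes('tmux-cli.sh capture')) captureCount++;
    }
    if (e.type === 'agent_spawn') spawnedAgentCount++;
  }
  return {
    runId,
    eventCountsByType,
    toolCallCount,
    uniqueToolNames: Array.from(toolNames),
    spawnedAgentCount,
    captureCount,
  };
}
```

Persisting one such record per run is what makes the baseline-versus-post-change comparisons in LESSONS.md reproducible rather than anecdotal.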
2. Execute baseline mixed-scenario runs and document findings.
   - Run the smoke scenario twice and the implementation scenario twice.
   - Keep the comparison controlled by using the same prompts, logging granularity, and timeout policy across baseline runs.
   - Inspect each run's SDK trace and tmux session logs.
   - Record concrete inefficiencies, wasted actions, and missing Codebuff-CLI knowledge to drive the prompt/template changes.

3. Improve the shared CLI tester prompt layer.
   - Update `.agents/lib/cli-agent-prompts.ts` so CLI testers have sharper workflow guidance.
   - Add targeted guidance on when to gather prep context, when to capture, how to detect progress/completion, and how to avoid low-value repeated actions.
   - Keep knowledge additions evidence-based and avoid prompt bloat.

4. Improve shared CLI tester orchestration and the concrete `codebuff-local-cli` agent.
   - Update `.agents/lib/create-cli-agent.ts` if shared orchestration behavior needs refinement.
   - Update `.agents/codebuff-local-cli.ts` with Codebuff-CLI-specific knowledge and workflow refinements informed by baseline evidence.
   - Ensure the agent remains focused on CLI UI testing and uses the tmux helper scripts efficiently.
   - Keep output contract compatibility intact.

5. Add or update validation coverage.
   - Add tests for shared CLI-agent prompt/template behavior and/or the analysis harness.
   - Include compatibility-oriented checks for the shared CLI-agent layer.
   - At minimum, verify the `.agents` layer still typechecks and that `claude-code-cli`, `codex-cli`, `gemini-cli`, and `codebuff-local-cli` still satisfy shared construction/schema expectations.

6. Re-run post-change verification scenarios.
   - Run at least one smoke and one implementation scenario after changes using the same prompts and comparison controls.
   - Compare outputs/artifacts against the baseline.
   - Treat the step as successful if the post-change runs show at least two improvement signals such as fewer duplicate captures, fewer redundant waits/follow-ups, clearer evidence in captures/output, or better scenario-specific verification behavior.
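Step 6's success criterion ("at least two improvement signals") can be made mechanical for the countable signals. A hedged sketch under an assumed minimal run-summary shape; qualitative signals such as "clearer evidence in captures" still need human review and are deliberately left out.

```typescript
// Sketch: count mechanical improvement signals between a baseline run and a
// post-change run. The RunSummary shape is an assumption for illustration.
interface RunSummary {
  toolCallCount: number;
  captureCount: number;
  waitCount: number;     // proxy for redundant waits
  followUpCount: number; // proxy for redundant follow-ups
}

function improvementSignals(baseline: RunSummary, postChange: RunSummary): string[] {
  const signals: string[] = [];
  if (postChange.captureCount < baseline.captureCount) signals.push('fewer captures');
  if (postChange.waitCount < baseline.waitCount) signals.push('fewer redundant waits');
  if (postChange.followUpCount < baseline.followUpCount) signals.push('fewer follow-ups');
  if (postChange.toolCallCount < baseline.toolCallCount) signals.push('fewer tool calls');
  return signals;
}

// Per Step 6, treat the verification run as successful with >= 2 signals.
function isSuccessful(baseline: RunSummary, postChange: RunSummary): boolean {
  return improvementSignals(baseline, postChange).length >= 2;
}
```

With the implementation-scenario numbers reported elsewhere in this commit (tool calls 19 → 10, captures 8 → 4), this check would already pass on those two signals alone.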
7. Write session documentation and capture durable lessons.
   - Record before/after findings in `LESSONS.md`.
   - Document what was intentionally not changed and why.
   - Update relevant skill files only with broadly reusable insights.

## Dependencies / Ordering

- Step 1 must happen before baseline analysis in Step 2.
- Step 2 should happen before Steps 3–4 so improvements are evidence-based.
- Step 3 should happen before or alongside Step 4 because shared prompt guidance informs the concrete agent behavior.
- Step 5 should follow implementation so tests validate the actual behavior.
- Step 6 depends on Steps 3–5 being complete.
- Step 7 should happen after validation so lessons reflect the final state.

## Risk Areas

- The requested `cli-ui-tester` name does not exist directly in the repo, so the harness must target the correct concrete agent (`codebuff-local-cli`) and shared template layer consistently.
- SDK-driven CLI runs may fail due to auth, tmux availability, or local CLI startup issues; the harness should make failures inspectable rather than opaque.
- Richer CLI knowledge can easily become prompt bloat, so additions must stay targeted to observed failures.
- Shared-layer changes can affect multiple CLI tester agents, so compatibility checks are important.
