feat: Interactive Planning Mode with Clarification Questions #564

Conversation
Allow AI agents to ask clarification questions during the planning phase when "Require approval" is enabled. This helps gather requirements before generating a spec, resulting in better-targeted implementations.

Key changes:
- Add AskUserQuestion tool to Auto Mode when requirePlanApproval is true
- Detect tool invocation in the message stream and pause for the user's response
- Add ClarificationQuestionsDialog UI component with radio/checkbox options
- Add clarificationInstructions prompt customization (Settings > Prompts)
- Create a notification when clarification is needed
- Add WebSocket events for the clarification request/response flow

Activation conditions:
- Feature uses a Claude model (haiku, sonnet, opus)
- "Require approval" is enabled on the feature
- Planning mode is lite, spec, or full

The clarification instructions prompt can be customized in Settings under Prompt Customization > Auto Mode > Interactive Planning.
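The activation conditions above reduce to a simple predicate. A sketch under assumed names (`isInteractivePlanningActive`, `FeatureConfig`, and `CLAUDE_MODELS` are illustrative, not the actual AutoMaker symbols):

```typescript
// Hypothetical sketch of the activation conditions described in the PR
// description; names are illustrative, not the real implementation.
type PlanningMode = 'lite' | 'spec' | 'full' | 'skip';

interface FeatureConfig {
  model: string; // e.g. 'claude-sonnet-4'
  requirePlanApproval: boolean;
  planningMode: PlanningMode;
}

const CLAUDE_MODELS = ['haiku', 'sonnet', 'opus'];

function isInteractivePlanningActive(f: FeatureConfig): boolean {
  // All three conditions from the PR description must hold.
  const usesClaude = CLAUDE_MODELS.some((m) => f.model.includes(m));
  const planningEligible = ['lite', 'spec', 'full'].includes(f.planningMode);
  return usesClaude && f.requirePlanApproval && planningEligible;
}
```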
- Add `enableInteractivePlanning` global setting (default: true)
- Make the clarification prompt more assertive (Required vs. Optional)
- Add a checkbox in Feature Defaults to enable/disable the feature
- Keep the setting always visible, with an explanation of when it activates
- Update prompt customization descriptions to reference Feature Defaults
- Properly sync the setting with the server and include it in import/export
- Replace the `as` type assertion with a proper runtime validation function for req.body in clarification-response.ts
- Consolidate repetitive validation into a loop-based approach
- Add a parseClarificationQuestions() type guard for validating tool output instead of an `as any[]` cast
- Remove unnecessary `as any` casts from event type strings
- Extract the '__other__' magic string into an OTHER_OPTION_VALUE constant

Addresses review feedback from gemini-code-assist on PR AutoMaker-Org#543.
Add data-testid with sanitized project name for E2E test assertions.
The testid format is 'project-switcher-project-{name}' where name is
the lowercased project name with spaces replaced by hyphens.
This fixes E2E tests that expect to find projects by name in the
project switcher.
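The sanitization rule described in this commit message (lowercase, spaces replaced with hyphens) can be sketched as a small helper; the function name is illustrative:

```typescript
// Sketch of the test-id format described in the commit message:
// 'project-switcher-project-{name}' with name lowercased and
// whitespace runs replaced by hyphens.
function projectTestId(name: string): string {
  const sanitized = name.toLowerCase().replace(/\s+/g, '-');
  return `project-switcher-project-${sanitized}`;
}
```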
📝 Walkthrough

This PR adds an interactive clarification workflow to AutoMaker, enabling users to answer clarification questions during planning mode when using Claude. It introduces SDK option toggling, a new server endpoint for handling responses, service-level clarification state management, UI dialogs for question collection, settings controls, type definitions, and prompt integration across client and server.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant BoardView as Board View (UI)
    participant Service as AutoMode Service
    participant Streaming as Streaming Loop
    participant Handler as Clarification Handler
    participant Dialog as Dialog Component
    User->>Streaming: Agent planning with AskUserQuestion
    Streaming->>Service: Detects AskUserQuestion block
    Service->>Service: parseClarificationQuestions()
    Service->>Service: Create clarification request
    Service->>BoardView: Emit clarification:questions-required
    BoardView->>Dialog: Render ClarificationQuestionsDialog
    User->>Dialog: Answer questions
    User->>Dialog: Submit answers
    Dialog->>Handler: POST /clarification-response
    Handler->>Service: resolveClarification(featureId, requestId, answers)
    Service->>Service: Resolve pending clarification
    Service->>Streaming: Continue execution with answers
    Streaming->>BoardView: Emit clarification:questions-answered
    BoardView->>Dialog: Close dialog, clear state
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
Summary of Changes (Gemini Code Assist)

This pull request reintroduces and refines the Interactive Planning Mode, a feature designed to improve the quality and relevance of AI-generated implementation plans. By enabling the AI to proactively ask clarification questions during the planning phase, the system can gather essential user requirements and preferences upfront. This keeps the resulting plan closely aligned with the user's intent, reducing rework and improving overall efficiency. The mode is configurable through a new global setting, supports custom prompts for tailoring the AI's questioning style, and is specifically integrated with Claude models via the Claude Code CLI.
Code Review
This pull request reintroduces the Interactive Planning Mode, allowing the AI to ask clarification questions before generating an implementation plan. This is a significant feature enhancement that improves the planning phase by gathering user requirements upfront. The changes are extensive, touching the server, UI, and shared libraries. Key additions include a new AskUserQuestion tool, a new API endpoint to handle user responses, a dialog for presenting questions to the user, and state management to handle the interactive flow. The implementation is robust, with timeouts for pending questions and validation for inputs. My feedback includes one suggestion to improve recovery after a server restart.
```ts
if (!pending) {
  logger.info(`No pending clarification in Map for feature ${featureId}`);
  return {
    success: false,
    error: `No pending clarification found for feature ${featureId}`,
  };
}
```
The resolveClarification method doesn't handle server restarts. If the server restarts while waiting for a user's response to clarification questions, the pendingClarifications map will be empty. When the user eventually submits their answers, this check will fail and return an error, leaving the feature in a broken state that requires manual intervention (stopping and restarting the feature).
Consider implementing a recovery mechanism similar to the one in resolvePlanApproval (lines 1969-2037) which checks the feature's persisted state to recover from a server restart. While this might be a larger effort, the current implementation could lead to a confusing user experience if the server restarts during this interactive step.
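A recovery path along the lines suggested might look like the following sketch. All names here (`resolveClarificationWithRecovery`, `loadPersistedState`, the `awaiting_clarification` status string) are hypothetical, not the actual service API:

```typescript
// Hypothetical sketch: fall back to persisted feature state when the
// in-memory pendingClarifications map was lost to a server restart.
interface PersistedFeatureState {
  status: string;
  clarificationRequestId?: string;
}

function resolveClarificationWithRecovery(
  pendingMap: Map<string, { requestId: string }>,
  loadPersistedState: (featureId: string) => PersistedFeatureState | null,
  featureId: string,
  requestId: string
): { success: boolean; recovered?: boolean; error?: string } {
  const pending = pendingMap.get(featureId);
  if (pending && pending.requestId === requestId) {
    // Normal path: the in-memory promise is still registered.
    return { success: true };
  }
  // Map miss (e.g. after a restart): consult persisted state instead.
  const persisted = loadPersistedState(featureId);
  if (
    persisted?.status === 'awaiting_clarification' &&
    persisted.clarificationRequestId === requestId
  ) {
    return { success: true, recovered: true };
  }
  return {
    success: false,
    error: `No pending clarification found for feature ${featureId}`,
  };
}
```

In the recovered case the service would still need to resume the feature's execution with the answers, which is the larger effort the reviewer alludes to.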
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In `@apps/server/src/routes/auto-mode/routes/clarification-response.ts`:
- Around line 102-107: The current handler in clarification-response.ts returns
500 when resolveClarification returns result.success === false; change this to
return a 4xx (e.g., 400 Bad Request) for expected client-side failures: in the
block that checks result.success, replace res.status(500).json(...) with
res.status(400).json(...) (or map certain result.error codes to 404/409 if you
have error semantics) so only unexpected exceptions remain 500; keep the same
payload structure and only alter the status code returned for
resolveClarification failures.
In `@apps/server/src/services/auto-mode-service.ts`:
- Around line 2092-2146: In waitForClarificationAnswers, when a clarification is
timed out or cancelled you currently only reject the promise; update this to
also emit a UI-facing event (e.g., via your existing event emitter or a new
ClarificationCancelled/ClarificationTimedOut event) so the frontend can clear
dialogs; specifically, add an emit call wherever you call reject in the timeout
handler and in the wrappedReject path, include featureId/requestId/toolUseId in
the event payload and still clear this.pendingClarifications (the changes should
reference waitForClarificationAnswers, this.pendingClarifications, wrappedReject
and the timeout handler).
- Around line 3376-3450: The code waits for clarification answers but never
feeds them back into the provider/agent loop; after receiving answers from
waitForClarificationAnswers (use requestId and toolUseId), construct a
tool_result message payload containing the answers (e.g., JSON mapping or
formatted text) and inject it into the provider message stream or call the
provider continuation API so the agent receives the tool_result before
continuing; do this right after the answers are appended to responseText (near
responseText, scheduleWrite, and the waitForClarificationAnswers call) and
ensure the tool_result uses the same toolUseId/requestId so the agent associates
it with the AskUserQuestion tool call. Also update imports that currently
reference relative provider paths to use the `@automaker/*` namespace (replace the
relative imports on the provider-related import lines with
`@automaker/providers`).
In
`@apps/ui/src/components/layout/project-switcher/components/project-switcher-item.tsx`:
- Around line 40-48: The test-id built from project.name is brittle and can
collide; update the logic in project-switcher-item.tsx (the sanitizedName
creation used in the data-testid) to more aggressively slugify the name
(lowercase, strip/replace all non-alphanumeric characters, collapse multiple
hyphens) and then append a unique identifier such as project.id (e.g.,
`${slugifiedName}-${project.id}`) so the data-testid on the button is both
stable and unique; modify the sanitizedName variable and the data-testid
construction used in the button element accordingly.
In
`@apps/ui/src/components/views/board-view/dialogs/clarification-questions-dialog.tsx`:
- Around line 233-285: The RadioGroup's value currently reads only from answers
so selecting the "Other" option doesn't stay selected; change the value
expression to prefer showOther for this question (e.g. value =
showOther[question.header] ? '__other__' : ((answers[question.header] || [])[0]
|| '')) and replace the literal "__other__" with a single shared constant (e.g.
OTHER_VALUE) declared nearby; ensure handleSingleSelect and the existing
onValueChange logic still set showOther[question.header] when the Other option
is chosen so the UI remains in sync.
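The value expression described in this prompt can be isolated into a pure helper, which also makes the `__other__` sentinel easy to share as a single constant. Names follow the prompt but are otherwise illustrative:

```typescript
// Shared sentinel for the "Other" option (the prompt suggests a single
// constant instead of repeating the '__other__' literal).
const OTHER_VALUE = '__other__';

// Compute the RadioGroup value for one question: prefer the "Other"
// sentinel when that option is toggled, otherwise the first recorded
// answer, otherwise empty string.
function radioValue(
  showOther: Record<string, boolean>,
  answers: Record<string, string[]>,
  header: string
): string {
  if (showOther[header]) return OTHER_VALUE;
  return (answers[header] || [])[0] || '';
}
```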
In `@apps/ui/src/lib/electron.ts`:
- Around line 545-550: Add a mock implementation for submitClarificationResponse
inside createMockAutoModeAPI: implement an async function matching the signature
submitClarificationResponse(projectPath, featureId, requestId, answers) that
logs the inputs (e.g., via console.log) and returns { success: true } so the
mock API surface is complete and type-safe under strict mode; ensure the
function is included in the returned object from createMockAutoModeAPI.
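A minimal sketch of the requested mock, assuming the four-argument signature given above (the surrounding `createMockAutoModeAPI` shape is abbreviated to just this method):

```typescript
// Sketch of the mock described in the prompt: log the inputs and
// resolve with { success: true } so the mock API surface stays
// complete and type-safe under strict mode.
function createMockAutoModeAPI() {
  return {
    submitClarificationResponse: async (
      projectPath: string,
      featureId: string,
      requestId: string,
      answers: Record<string, string>
    ): Promise<{ success: boolean }> => {
      console.log('[mock] submitClarificationResponse', {
        projectPath,
        featureId,
        requestId,
        answers,
      });
      return { success: true };
    },
  };
}
```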
🧹 Nitpick comments (4)
libs/types/src/prompts.ts (1)

309-310: Minor comment clarity improvement. The comment states "Optional clarification instructions" but the field is actually a required (non-optional) string. The semantics are that an empty string means disabled, which is correct behavior, but the word "Optional" in the comment could be misleading. Consider clarifying:

📝 Suggested comment clarification

```diff
- /** Optional clarification instructions - empty string means disabled */
+ /** Clarification instructions for interactive planning - empty string means disabled */
  clarificationInstructions: string;
```

apps/ui/src/components/views/board-view.tsx (1)

1388-1416: Surface clarification submit failures to users. Right now failures only log. A toast helps users recover/retry.

💡 Suggested update

```diff
-      if (result.success) {
-        logger.info('Clarification answers submitted successfully');
-        setPendingClarification(null);
-      } else {
-        logger.error('Failed to submit clarification answers:', result.error);
-      }
+      if (result.success) {
+        logger.info('Clarification answers submitted successfully');
+        setPendingClarification(null);
+      } else {
+        logger.error('Failed to submit clarification answers:', result.error);
+        toast.error('Failed to submit clarification answers', {
+          description: result.error || 'Please try again.',
+        });
+      }
     } catch (error) {
       logger.error('Error submitting clarification answers:', error);
+      toast.error('Failed to submit clarification answers', {
+        description: error instanceof Error ? error.message : 'Please try again.',
+      });
     } finally {
       setIsClarificationLoading(false);
     }
```

apps/ui/src/types/electron.d.ts (1)

315-331: Reuse ClarificationQuestion from `@automaker/types`. This inline shape can drift from the shared model (e.g., optional description). Using the shared type keeps client/server aligned.

♻️ Suggested refactor

```diff
-import type { ClaudeUsageResponse, CodexUsageResponse } from '@/store/app-store';
+import type { ClaudeUsageResponse, CodexUsageResponse } from '@/store/app-store';
+import type { ClarificationQuestion } from '@automaker/types';
@@
-      questions: Array<{
-        question: string;
-        header: string;
-        options: Array<{
-          label: string;
-          description: string;
-        }>;
-        multiSelect: boolean;
-      }>;
+      questions: ClarificationQuestion[];
```

apps/server/src/services/auto-mode-service.ts (1)

243-296: Strengthen clarification validation to match schema constraints. Current checks allow empty option lists or oversized headers, which can yield non-actionable dialogs. Consider enforcing the documented bounds (e.g., 2–4 options, header length).

♻️ Proposed tightening of validation

```diff
-      if (typeof q.question !== 'string' || typeof q.header !== 'string') {
+      if (typeof q.question !== 'string' || !q.question.trim()) {
+        return null;
+      }
+      if (
+        typeof q.header !== 'string' ||
+        !q.header.trim() ||
+        q.header.trim().length > 12
+      ) {
         return null;
       }
-      if (!Array.isArray(q.options)) {
+      if (!Array.isArray(q.options) || q.options.length < 2 || q.options.length > 4) {
         return null;
       }
       // Validate each option has label and description
       const options: Array<{ label: string; description: string }> = [];
       for (const opt of q.options) {
         if (!opt || typeof opt !== 'object') {
           return null;
         }
         const optObj = opt as Record<string, unknown>;
-        if (typeof optObj.label !== 'string') {
+        if (typeof optObj.label !== 'string' || !optObj.label.trim()) {
           return null;
         }
```
```ts
if (!result.success) {
  res.status(500).json({
    success: false,
    error: result.error,
  });
  return;
}
```
Use a 4xx for expected resolve failures.
resolveClarification can fail due to missing/mismatched pending requests (client-side issue). Returning 500 misclassifies the error and may trigger retry logic.
🔧 Suggested adjustment

```diff
-  if (!result.success) {
-    res.status(500).json({
-      success: false,
-      error: result.error,
-    });
-    return;
-  }
+  if (!result.success) {
+    res.status(409).json({
+      success: false,
+      error: result.error ?? 'No pending clarification for this request',
+    });
+    return;
+  }
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
if (!result.success) {
  res.status(409).json({
    success: false,
    error: result.error ?? 'No pending clarification for this request',
  });
  return;
}
```
```ts
/**
 * Wait for user to answer clarification questions.
 * Returns a promise that resolves with the user's answers.
 */
waitForClarificationAnswers(
  featureId: string,
  projectPath: string,
  requestId: string,
  toolUseId: string
): Promise<Record<string, string>> {
  const CLARIFICATION_TIMEOUT_MS = 10 * 60 * 1000; // 10 minutes

  logger.info(
    `Registering pending clarification for feature ${featureId}, requestId=${requestId}`
  );
  logger.info(
    `Current pending clarifications: ${Array.from(this.pendingClarifications.keys()).join(', ') || 'none'}`
  );

  return new Promise((resolve, reject) => {
    // Set up timeout to prevent indefinite waiting
    const timeoutId = setTimeout(() => {
      const pending = this.pendingClarifications.get(featureId);
      if (pending) {
        logger.warn(`Clarification for feature ${featureId} timed out after 10 minutes`);
        this.pendingClarifications.delete(featureId);
        reject(
          new Error('Clarification timed out after 10 minutes - feature execution cancelled')
        );
      }
    }, CLARIFICATION_TIMEOUT_MS);

    // Wrap resolve/reject to clear timeout
    const wrappedResolve = (answers: Record<string, string>) => {
      clearTimeout(timeoutId);
      resolve(answers);
    };

    const wrappedReject = (error: Error) => {
      clearTimeout(timeoutId);
      reject(error);
    };

    this.pendingClarifications.set(featureId, {
      resolve: wrappedResolve,
      reject: wrappedReject,
      featureId,
      projectPath,
      requestId,
      toolUseId,
    });

    logger.info(`Pending clarification registered for feature ${featureId}`);
  });
}
```
Emit a cancellation/timeout event so the UI can clear pending clarifications.
Timeout/cancel only rejects the promise; without a UI signal, dialogs can linger.
🧭 Suggested event on timeout/cancel
🧭 Suggested event on timeout/cancel

```diff
 const timeoutId = setTimeout(() => {
   const pending = this.pendingClarifications.get(featureId);
   if (pending) {
     logger.warn(`Clarification for feature ${featureId} timed out after 10 minutes`);
     this.pendingClarifications.delete(featureId);
+    this.emitAutoModeEvent('clarification:questions-cancelled', {
+      featureId,
+      projectPath: pending.projectPath,
+      requestId: pending.requestId,
+      reason: 'timeout',
+    });
     reject(
       new Error('Clarification timed out after 10 minutes - feature execution cancelled')
     );
   }
 }, CLARIFICATION_TIMEOUT_MS);
```

```diff
 if (pending) {
   logger.info(`Found and cancelling pending clarification for feature ${featureId}`);
   pending.reject(new Error('Clarification cancelled - feature was stopped'));
   this.pendingClarifications.delete(featureId);
+  this.emitAutoModeEvent('clarification:questions-cancelled', {
+    featureId,
+    projectPath: pending.projectPath,
+    requestId: pending.requestId,
+    reason: 'cancelled',
+  });
 } else {
```

Also applies to: 2198-2215
```ts
// Handle AskUserQuestion tool calls for interactive planning mode
// Cast input to get proper typing for the AskUserQuestion tool schema
const blockInput = block.input as Record<string, unknown> | undefined;
const blockId = (block as { id?: string }).id;
if (
  block.name === 'AskUserQuestion' &&
  requiresApproval &&
  blockInput?.questions &&
  Array.isArray(blockInput.questions)
) {
  logger.info(`AskUserQuestion tool detected for feature ${featureId}`);

  // Validate and parse questions from tool input
  const questions = parseClarificationQuestions(blockInput.questions);
  if (!questions || questions.length === 0) {
    logger.warn(
      `Invalid or empty questions from AskUserQuestion tool for feature ${featureId}`
    );
    continue;
  }

  const requestId = randomUUID();
  const toolUseId = blockId || randomUUID();

  // Emit WebSocket event for UI
  this.emitAutoModeEvent('clarification:questions-required', {
    featureId,
    projectPath: finalProjectPath,
    questions,
    requestId,
    toolUseId,
    timestamp: new Date().toISOString(),
  });

  // Create notification
  try {
    const notificationService = getNotificationService();
    await notificationService.createNotification({
      type: 'clarification_questions_required',
      title: 'Clarification needed',
      message: `AI has ${questions.length} question${questions.length === 1 ? '' : 's'} during planning`,
      featureId,
      projectPath: finalProjectPath,
    });
  } catch (notifError) {
    logger.error(`Failed to create clarification notification:`, notifError);
  }

  // Wait for user response
  logger.info(`Waiting for clarification answers for feature ${featureId}`);
  try {
    const answers = await this.waitForClarificationAnswers(
      featureId,
      finalProjectPath,
      requestId,
      toolUseId
    );
    logger.info(
      `Clarification answers received for feature ${featureId}: ${Object.keys(answers).length} answers`
    );

    // Add answers to response text for context
    responseText += `\n\n### User Clarification Answers\n`;
    for (const [header, answer] of Object.entries(answers)) {
      responseText += `- **${header}**: ${answer}\n`;
    }
    scheduleWrite();
  } catch (clarificationError) {
    // Clarification was cancelled or timed out
    logger.warn(
      `Clarification failed for feature ${featureId}: ${(clarificationError as Error).message}`
    );
    throw clarificationError;
  }
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Find AskUserQuestion handlers and any tool_result plumbing
echo "=== Searching for AskUserQuestion references ==="
rg -n "AskUserQuestion" apps --type ts --type tsx -C3 2>/dev/null | head -100
echo ""
echo "=== Searching for tool_result handling ==="
rg -n "tool_result|toolResult" apps --type ts --type tsx -C2 2>/dev/null | head -100
echo ""
echo "=== Searching for clarification answer handling ==="
rg -n "clarification:questions-answered|submitClarificationResponse|waitForClarificationAnswers" apps --type ts --type tsx -C3 2>/dev/null | head -150
```

Repository: AutoMaker-Org/automaker
Length of output: 212

🏁 Script executed:

```bash
#!/bin/bash
# Check imports in the reviewed file to verify coding guideline compliance
echo "=== Checking imports in auto-mode-service.ts ==="
head -100 apps/server/src/services/auto-mode-service.ts | grep -n "^import"
echo ""
echo "=== Check if there are relative imports that should use @automaker ==="
rg -n "from '[.][./]" apps/server/src/services/auto-mode-service.ts 2>/dev/null
```

Repository: AutoMaker-Org/automaker
Length of output: 1810

🏁 Script executed:

```bash
#!/bin/bash
echo "=== Check file size and location ==="
wc -l apps/server/src/services/auto-mode-service.ts
echo ""
echo "=== Check actual content around lines 3376-3450 ==="
sed -n '3376,3450p' apps/server/src/services/auto-mode-service.ts | head -80
echo ""
echo "=== Search for clarification-related functions in the entire file ==="
rg -n "waitForClarification|AskUserQuestion|clarification" apps/server/src/services/auto-mode-service.ts --max-count 20
```

Repository: AutoMaker-Org/automaker
Length of output: 5250

🏁 Script executed:

```bash
#!/bin/bash
# Broader search - check if AskUserQuestion exists anywhere in the repo
echo "=== Broad search for AskUserQuestion in entire apps directory ==="
find apps -type f \( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" \) -exec grep -l "AskUserQuestion" {} \; 2>/dev/null
echo ""
echo "=== Check shared package structure to understand @automaker namespace ==="
find . -maxdepth 3 -type d -name "@automaker" 2>/dev/null | head -20
```

Repository: AutoMaker-Org/automaker
Length of output: 354

🏁 Script executed:

```bash
#!/bin/bash
echo "=== Check what happens to responseText after answers are appended (lines 3450-3500) ==="
sed -n '3450,3550p' apps/server/src/services/auto-mode-service.ts
echo ""
echo "=== Search for where responseText is sent to provider or used in tool loop ==="
rg -n "responseText" apps/server/src/services/auto-mode-service.ts -B2 -A2 | tail -100
```

Repository: AutoMaker-Org/automaker
Length of output: 8956

🏁 Script executed:

```bash
#!/bin/bash
echo "=== Examine the clarification response route handler ==="
cat apps/server/src/routes/auto-mode/routes/clarification-response.ts
```

Repository: AutoMaker-Org/automaker
Length of output: 3677

🏁 Script executed:

```bash
#!/bin/bash
echo "=== Check for tool result mechanism or provider loop continuation ==="
rg -n "tool_result|toolResult|providerResponse|continue|restart" apps/server/src/services/auto-mode-service.ts -B1 -A1 | head -80
```

Repository: AutoMaker-Org/automaker
Length of output: 2990

🏁 Script executed:

```bash
#!/bin/bash
echo "=== Search for how tool blocks are normally processed in the main loop ==="
sed -n '3200,3350p' apps/server/src/services/auto-mode-service.ts | grep -A10 -B5 "block.type"
echo ""
echo "=== Look for any tool result message construction ==="
rg -n "type.*tool|result.*message|message.*tool" apps/server/src/services/auto-mode-service.ts -B2 -A2 | head -80
```

Repository: AutoMaker-Org/automaker
Length of output: 3306

🏁 Script executed:

```bash
#!/bin/bash
echo "=== Check how the provider's message streaming loop works and if it expects tool results ==="
rg -n "simpleQuery|ProviderFactory|provider.query|provider.stream" apps/server/src/services/auto-mode-service.ts -B1 -A3 | head -120
```

Repository: AutoMaker-Org/automaker
Length of output: 1883

🏁 Script executed:

```bash
#!/bin/bash
echo "=== Look for the broader context of the message loop at lines 3340-3380 ==="
sed -n '3340,3380p' apps/server/src/services/auto-mode-service.ts
```

Repository: AutoMaker-Org/automaker
Length of output: 1920

🏁 Script executed:

```bash
#!/bin/bash
echo "=== Check the full stream loop context to see if tool results are handled elsewhere ==="
sed -n '2850,2900p' apps/server/src/services/auto-mode-service.ts
echo ""
echo "=== Search for any mechanism that builds or injects tool responses ==="
rg -n "toolUse|tool_use.*result|tool.*result" apps/server/src/services/auto-mode-service.ts -B1 -A1 | head -60
```

Repository: AutoMaker-Org/automaker
Length of output: 2996

🏁 Script executed:

```bash
#!/bin/bash
echo "=== Check what simpleQuery does with tool use blocks ==="
cat apps/server/src/providers/simple-query-service.ts | head -150
```

Repository: AutoMaker-Org/automaker
Length of output: 5060
The clarification answers are not injected back into the provider/tool loop.
The code waits for user answers and appends them to responseText (for file output), but this is insufficient for agentic workflows. Claude and similar APIs expect a tool_result message sent back through the message stream after a tool_use block is received. Currently, the provider stream just continues without acknowledging the tool call with a tool_result, which means the agent won't properly consume the clarification answers in its next iteration.
The answers should be wrapped in a tool_result message and injected back into the provider's message stream, or the agent should be restarted with the answers as context. Consider:
- Constructing a tool_result block with the clarification answers and adding it to the message stream
- Or calling the provider again with a continuation that includes the answers as a user message
Also, update imports to use the @automaker/* namespace per coding guidelines (e.g., lines 12–13 should import from @automaker/providers, not relative paths).
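For reference, a sketch of building such a `tool_result` block from the collected answers. The helper name and the plain-text answer format are assumptions, though the `{ type, tool_use_id, content }` shape matches the Anthropic Messages API's tool-result content blocks:

```typescript
// Build a tool_result content block that echoes the user's clarification
// answers back to the agent, keyed by the original tool_use id so the
// agent can associate it with the AskUserQuestion call.
function buildClarificationToolResult(
  toolUseId: string,
  answers: Record<string, string>
): { type: 'tool_result'; tool_use_id: string; content: string } {
  const lines = Object.entries(answers).map(
    ([header, answer]) => `${header}: ${answer}`
  );
  return {
    type: 'tool_result',
    tool_use_id: toolUseId,
    content: lines.join('\n'),
  };
}
```

This block would then be appended as a `user`-role message in the continuation request to the provider.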
```tsx
// Create a sanitized project name for test ID (lowercase, hyphens instead of spaces)
const sanitizedName = project.name.toLowerCase().replace(/\s+/g, '-');

return (
  <button
    onClick={onClick}
    onContextMenu={onContextMenu}
    data-testid={`project-switcher-project-${sanitizedName}`}
    className={cn(
```
Guard test IDs against collisions and special characters.
Using only project.name can collide (duplicate names or case/space differences) and can include characters that make selectors brittle. Consider slugifying more aggressively and appending project.id (or another unique key) so tests remain stable.
✅ Suggested fix

```diff
- // Create a sanitized project name for test ID (lowercase, hyphens instead of spaces)
- const sanitizedName = project.name.toLowerCase().replace(/\s+/g, '-');
+ // Create a sanitized project name for test ID (slugify + unique suffix)
+ const sanitizedName = project.name
+   .trim()
+   .toLowerCase()
+   .replace(/[^a-z0-9]+/g, '-')
+   .replace(/^-+|-+$/g, '');
+ const testId = `project-switcher-project-${sanitizedName || 'untitled'}-${project.id}`;
  ...
- data-testid={`project-switcher-project-${sanitizedName}`}
+ data-testid={testId}
```

🤖 Prompt for AI Agents
In
`@apps/ui/src/components/layout/project-switcher/components/project-switcher-item.tsx`
around lines 40 - 48, The test-id built from project.name is brittle and can
collide; update the logic in project-switcher-item.tsx (the sanitizedName
creation used in the data-testid) to more aggressively slugify the name
(lowercase, strip/replace all non-alphanumeric characters, collapse multiple
hyphens) and then append a unique identifier such as project.id (e.g.,
`${slugifiedName}-${project.id}`) so the data-testid on the button is both
stable and unique; modify the sanitizedName variable and the data-testid
construction used in the button element accordingly.
```tsx
<RadioGroup
  value={(answers[question.header] || [])[0] || ''}
  onValueChange={(value) => handleSingleSelect(question.header, value)}
  className="space-y-2 pl-1"
  disabled={isLoading}
>
  {question.options.map((option, optIndex) => (
    <div key={optIndex} className="flex items-start space-x-3">
      <RadioGroupItem
        value={option.label}
        id={`${question.header}-${optIndex}`}
        disabled={isLoading}
      />
      <div className="flex flex-col">
        <Label
          htmlFor={`${question.header}-${optIndex}`}
          className="font-medium cursor-pointer"
        >
          {option.label}
        </Label>
        {option.description && (
          <span className="text-sm text-muted-foreground">
            {option.description}
          </span>
        )}
      </div>
    </div>
  ))}
  {/* Other option */}
  <div className="flex items-start space-x-3">
    <RadioGroupItem
      value="__other__"
      id={`${question.header}-other`}
      disabled={isLoading}
    />
    <div className="flex flex-col flex-1">
      <Label
        htmlFor={`${question.header}-other`}
        className="font-medium cursor-pointer"
      >
        Other
      </Label>
      {showOther[question.header] && (
        <Input
          className="mt-2"
          placeholder="Please specify..."
          value={customTexts[question.header] || ''}
          onChange={(e) => handleCustomTextChange(question.header, e.target.value)}
          disabled={isLoading}
        />
      )}
    </div>
  </div>
```
“Other” radio doesn’t stay selected.
Line 233 uses answers only for the RadioGroup value, so selecting “Other” clears the value and the radio appears unselected. Tie the value to showOther (and reuse the constant) so the UI reflects the choice.
🐛 Proposed fix

```diff
- <RadioGroup
-   value={(answers[question.header] || [])[0] || ''}
+ <RadioGroup
+   value={
+     showOther[question.header]
+       ? OTHER_OPTION_VALUE
+       : (answers[question.header] || [])[0] || ''
+   }
    onValueChange={(value) => handleSingleSelect(question.header, value)}
    className="space-y-2 pl-1"
    disabled={isLoading}
  >
@@
- <div className="flex items-start space-x-3">
-   <RadioGroupItem
-     value="__other__"
+ <div className="flex items-start space-x-3">
+   <RadioGroupItem
+     value={OTHER_OPTION_VALUE}
      id={`${question.header}-other`}
      disabled={isLoading}
    />
```

🤖 Prompt for AI Agents
In
`@apps/ui/src/components/views/board-view/dialogs/clarification-questions-dialog.tsx`
around lines 233 - 285, The RadioGroup's value currently reads only from answers
so selecting the "Other" option doesn't stay selected; change the value
expression to prefer showOther for this question (e.g. value =
showOther[question.header] ? '__other__' : ((answers[question.header] || [])[0]
|| '')) and replace the literal "__other__" with a single shared constant (e.g.
OTHER_VALUE) declared nearby; ensure handleSingleSelect and the existing
onValueChange logic still set showOther[question.header] when the Other option
is chosen so the UI remains in sync.
```typescript
submitClarificationResponse: (
  projectPath: string,
  featureId: string,
  requestId: string,
  answers: Record<string, string>
) => Promise<{ success: boolean; error?: string }>;
```
Add a mock implementation for submitClarificationResponse.
Without it, the web/mock API surface is incomplete and calls will fail (or the type check will fail in strict mode).
✅ Suggested mock implementation (add to createMockAutoModeAPI)

```typescript
submitClarificationResponse: async (
  projectPath: string,
  featureId: string,
  requestId: string,
  answers: Record<string, string>
) => {
  console.log('[Mock] Clarification response:', { projectPath, featureId, requestId, answers });
  return { success: true };
},
```

🤖 Prompt for AI Agents
In `@apps/ui/src/lib/electron.ts` around lines 545 - 550, Add a mock
implementation for submitClarificationResponse inside createMockAutoModeAPI:
implement an async function matching the signature
submitClarificationResponse(projectPath, featureId, requestId, answers) that
logs the inputs (e.g., via console.log) and returns { success: true } so the
mock API surface is complete and type-safe under strict mode; ensure the
function is included in the returned object from createMockAutoModeAPI.
TODO: We should see if this would work for other models as well.
This PR reintroduces the Interactive Planning Mode feature that was in the previous PR #543, now rebased against the v0.13.0rc branch.
Summary
This PR introduces Interactive Planning Mode, a feature that allows the AI to ask clarification questions before generating an implementation plan. This helps gather requirements and preferences upfront, leading to better implementation plans that match user expectations.
Key Features

- `enableInteractivePlanning` setting in Feature Defaults (default: `true`)

Changes

New Setting

- `enableInteractivePlanning: boolean` in GlobalSettings (default: `true`)

Test Plan

- With `enableInteractivePlanning: true` → the AI should ask clarification questions
- With `enableInteractivePlanning: false` → the AI should skip questions
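For reference, the new setting can be sketched as below. This is illustrative only: the real GlobalSettings interface has many more fields, and only `enableInteractivePlanning` and its default come from this PR.

```typescript
// Illustrative sketch of the GlobalSettings addition.
interface GlobalSettings {
  // When true and "Require approval" is enabled on a feature, the agent
  // may ask clarification questions before generating a plan.
  enableInteractivePlanning: boolean;
}

const defaultSettings: GlobalSettings = {
  enableInteractivePlanning: true, // default per the PR description
};
```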