231 changes: 231 additions & 0 deletions .agents/skills/create-a-plan/SKILL.md
---
name: create-a-plan
description: Conduct a focused technical planning interview to produce an implementable, parallelizable plan or spec with clear dependencies, risks, and open questions.
---

# Create a Plan Skill

This skill runs a structured technical interview to turn a rough idea or an existing spec into a detailed, implementable plan. The output is organized for parallel execution: foundations first, then independent workstreams, then merge and integration.

## Invocation

The user will provide one of:
- A path to a spec or plan file (for example: `SPEC.md`, `PLAN.md`, `RFC.md`)
- A rough description of what they want to build
- A feature request or problem statement

Output is always written to `PLAN.md` in the repo root.

## Process

### Phase 0: Preflight

1. If a file path is provided, read it first and note goals, non-goals, constraints, and gaps.
2. Confirm you will produce `PLAN.md` as the output in the repo root. If `PLAN.md` already exists, update it rather than creating a new file.

### Phase 1: Discovery

Summarize what is known, then identify missing details. Focus on:
- Goals and non-goals
- Constraints (time, budget, platform, dependencies)
- Success metrics and acceptance criteria

### Phase 2: Deep Interview

Use the platform's interactive question tool (`AskUserQuestion` on Claude, `request_user_input` on Codex) in rounds. Ask 1-3 questions per round; each round should go deeper and avoid repeating what is already known.

CRITICAL RULES:
1. Never ask obvious questions. If the codebase or spec already answers it, do not ask it again.
2. Ask about edge cases and failure modes.
3. Probe for hidden complexity (state transitions, migrations, concurrency).
4. Challenge assumptions when they create risk or ambiguity.
5. Identify parallelization boundaries and serial dependencies.
6. If the user is unsure, propose a default and ask for confirmation.

Question categories to cover as relevant:
- Technical architecture and data flow
- Data model and state management
- API contracts and versioning
- Caching and invalidation
- Background jobs, retries, and idempotency
- Error handling and recovery
- Observability and debugging
- Performance, scale, and SLAs
- Security, privacy, and compliance
- Integrations and external dependencies
- UX flows, accessibility, and responsiveness
- Rollout, migration, and rollback
- Testing strategy and validation

### Phase 3: Dependency Analysis

Identify:
1. Serial dependencies that must complete first
2. Parallel workstreams that can run independently
3. Merge points where work reconvenes

### Phase 4: Plan Generation

Write the final plan to `PLAN.md`. Ensure the plan includes concrete verification steps the agent can run end to end. If the user only wants a plan in chat, provide it inline and mention that it would be written to `PLAN.md`.

## Output Format

The generated plan MUST follow this structure:

```markdown
# [Feature Name] Implementation Plan

## Overview
[2-3 sentence summary of what this implements and why]

## Goals
- [Explicit goal 1]
- [Explicit goal 2]

## Non-Goals
- [What this explicitly does NOT do]

## Assumptions and Constraints
- [Known constraints or assumptions]

## Requirements

### Functional
- [Requirement]

### Non-Functional
- [Performance, reliability, security, compliance]

## Technical Design

### Data Model
[Schema changes, new entities, relationships]

### API Design
[New endpoints, request/response shapes, versioning]

### Architecture
[System diagram in text or mermaid, component interactions]

### UX Flow (if applicable)
[Key screens, loading states, error recovery]

---

## Implementation Plan

### Serial Dependencies (Must Complete First)

These tasks create foundations that other work depends on. Complete in order.

#### Phase 0: [Foundation Name]
**Prerequisite for:** All subsequent phases

| Task | Description | Output |
|------|-------------|--------|
| 0.1 | [Task description] | [Concrete deliverable] |
| 0.2 | [Task description] | [Concrete deliverable] |

---

### Parallel Workstreams

These workstreams can be executed independently after Phase 0.

#### Workstream A: [Name]
**Dependencies:** Phase 0
**Can parallelize with:** Workstreams B, C

| Task | Description | Output |
|------|-------------|--------|
| A.1 | [Task description] | [Concrete deliverable] |
| A.2 | [Task description] | [Concrete deliverable] |

#### Workstream B: [Name]
**Dependencies:** Phase 0
**Can parallelize with:** Workstreams A, C

| Task | Description | Output |
|------|-------------|--------|
| B.1 | [Task description] | [Concrete deliverable] |

---

### Merge Phase

After parallel workstreams complete, these tasks integrate the work.

#### Phase N: Integration
**Dependencies:** Workstreams A, B, C

| Task | Description | Output |
|------|-------------|--------|
| N.1 | [Integration task] | [Concrete deliverable] |

---

## Testing and Validation

- [Unit, integration, end-to-end coverage]
- [Manual test plan if needed]

## Rollout and Migration

- [Feature flags, staged rollout, migration steps]
- [Rollback plan]

## Verification Checklist

- [Exact commands or manual steps the agent can run to verify correctness]
- [Expected outputs or success criteria]

## Risk Assessment

| Risk | Likelihood | Impact | Mitigation |
|------|------------|--------|------------|
| [Risk description] | Low/Med/High | Low/Med/High | [Strategy] |

## Open Questions

- [ ] [Question that still needs resolution]

## Decision Log

| Decision | Rationale | Alternatives Considered |
|----------|-----------|------------------------|
| [Decision made] | [Why] | [What else was considered] |
```

## Interview Flow Example

Round 1: High-Level Architecture
- "The spec mentions a sync engine. Is this push-based (webhooks), pull-based (polling), or event-driven (queue)?"
- "What is the expected data volume and throughput?"

Round 2: Edge Cases
- "If a batch fails mid-run, do we retry the whole batch or resume from a checkpoint?"
- "What happens when source data is deleted but still referenced downstream?"

Round 3: Parallelization
- "Can we process different categories independently, or are there cross-category dependencies?"
- "Is there a natural partition key that allows sharding?"

Round 4: Operational
- "What is the acceptable latency for sync or processing?"
- "How will operators debug failures and what visibility do they need?"

## Key Behaviors

1. Persist until the plan is implementable and verifiable by the agent, but avoid user fatigue by batching questions.
2. Challenge vague answers when they affect design decisions.
3. Identify hidden work and operational overhead.
4. Think about the merge and integration steps early.
5. Summarize understanding and confirm before writing the final plan.

## Completing the Interview

After sufficient rounds of questions:
1. Summarize your understanding back to the user
2. Confirm the parallelization strategy
3. Write the complete plan to the target file
4. Ask if any sections need refinement
89 changes: 89 additions & 0 deletions .agents/skills/create-pr/SKILL.md
---
name: create-pr
description: Create or update a PR from current branch to main, watch CI, and address feedback
---
The user likes the state of the code.

There are $`git status --porcelain | wc -l | tr -d ' '` uncommitted changes.
The current branch is $`git branch --show-current`.
The target branch is origin/main.

$`git rev-parse --abbrev-ref @{upstream} 2>/dev/null && echo "Upstream branch exists." || echo "There is no upstream branch yet."`

**Existing PR:** $`gh pr view --json number,title,url --jq '"#\(.number): \(.title) - \(.url)"' 2>/dev/null || echo "None"`
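The `` $`…` `` expressions above are shell command substitutions evaluated when the skill prompt is rendered. As a sketch, the uncommitted-change count can be wrapped in a helper like this (the function name is illustrative, not part of the skill):

```shell
# Illustrative helper: count uncommitted changes, mirroring the
# `git status --porcelain | wc -l | tr -d ' '` substitution used above.
# Counts staged, unstaged, and untracked entries alike.
uncommitted_count() {
  git status --porcelain | wc -l | tr -d ' '
}
```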

The user requested a PR.

Follow these exact steps:

## Phase 1: Review the code

1. Review test coverage
2. Check for silent failures
3. Verify code comments are accurate
4. Review any new types
5. General code review

## Phase 2: Create/Update PR

6. Run `git diff` to review uncommitted changes
7. Commit them. Follow any instructions the user gave you about writing commit messages.
8. Push to origin.
9. Use `git diff origin/main...` to review the full PR diff
10. Check if a PR already exists for this branch:
- **If PR exists**:
- Draft/update the description in a temp file (e.g. `/tmp/pr-body.txt`).
- Update the PR body using the non-deprecated script:
- `./.agents/skills/create-pr/scripts/pr-body-update.sh --file /tmp/pr-body.txt`
- Re-fetch the body with `gh pr view --json body --jq .body` to confirm it changed.
- **If no PR exists**: Use `gh pr create --base main` to create a new PR. Keep the title under 80 characters and the description under five sentences.

The PR description should summarize ALL commits in the PR, not just the latest changes.
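The existence check in step 10 can be sketched as a small helper. This is an assumption about `gh pr view`'s behavior (it exits non-zero when the current branch has no associated PR), not a required implementation:

```shell
# Sketch: exit 0 if a PR already exists for the current branch.
# Relies on `gh pr view` failing when no PR is associated with the branch.
pr_exists() {
  gh pr view --json number --jq .number >/dev/null 2>&1
}
```

A caller would branch on it directly: `if pr_exists; then … update …; else gh pr create --base main …; fi`.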

## Phase 3: Monitor CI and Address Issues

Note: Keep commands CI-safe and avoid interactive `gh` prompts. Ensure `GH_TOKEN` or `GITHUB_TOKEN` is set in CI.

11. Watch CI status and feedback using the polling script (instead of running `gh` in a loop):
- Run `./.agents/skills/create-pr/scripts/poll-pr.sh --triage-on-change --exit-when-green` (polls every 30s for 10 mins).
- If checks fail, use `gh pr checks` or `gh run list` to find the failing run id, then:
- Fetch the failed check logs using `gh run view <run-id> --log-failed`
- Analyze the failure and fix the issue
- Commit and push the fix
- Continue polling until all checks pass
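A minimal sketch of the failed-log fetch in step 11, assuming `gh` is authenticated and that run IDs come from `gh run list` (the helper name is illustrative):

```shell
# Illustrative helper: print failed-step logs for this branch's failing runs.
show_failed_logs() {
  local branch run_id
  branch=$(git branch --show-current)
  # List runs for this branch, keep only failures, extract their IDs.
  for run_id in $(gh run list --branch "$branch" \
      --json databaseId,conclusion \
      --jq '.[] | select(.conclusion == "failure") | .databaseId'); do
    gh run view "$run_id" --log-failed
  done
}
```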

12. Check for merge conflicts:
- Run `git fetch origin main && git merge origin/main`
- If conflicts exist, resolve them sensibly
- Commit the merge resolution and push

13. Use the polling script output to notice new reviews and comments (avoid direct polling via `gh`):
- If you need a full snapshot, run `./.agents/skills/create-pr/scripts/triage-pr.sh` once.
- If you need full context after the script reports a new item, fetch details once with `gh pr view --comments` or `gh api ...`.
- **Address feedback**:
- For bot reviews, read the review body and any inline comments carefully
- Address comments that are clearly actionable (bug fixes, typos, simple improvements)
- Skip comments that require design decisions or user input
- For addressed feedback, commit fixes with a message referencing the review/comment

## Phase 4: Merge and Cleanup

14. Once CI passes and the PR is approved, ask the user if they want to merge the PR.

15. If the user confirms, merge the PR:
- Use `gh pr merge --squash --delete-branch` to squash-merge and delete the remote branch

16. After successful merge, check if we're in a git worktree:
- Run: `[ "$(git rev-parse --git-common-dir)" != "$(git rev-parse --git-dir)" ]`
- **If in a worktree**: Use the ask user question tool (`request_user_input`) to ask if they want to clean up the worktree. If yes, run `git worktree remove --force <worktree-path>` from the main worktree (or a custom helper such as `wt remove --yes --force`, if one is installed), delete the local branch with `git branch -D <branch>`, then switch back to the main worktree.
- **If not in a worktree**: Just switch back to main with `git checkout main && git pull`
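The worktree detection in step 16 can be wrapped as follows. This is a sketch of the same `git rev-parse` comparison, with `git worktree remove` as the standard cleanup command (helper names and paths are illustrative):

```shell
# Sketch: true when running inside a linked worktree (not the main one).
# In a linked worktree, the common git dir (shared .git) differs from the
# worktree's own git dir (.git/worktrees/<name>).
in_worktree() {
  [ "$(git rev-parse --git-common-dir)" != "$(git rev-parse --git-dir)" ]
}

# Illustrative cleanup, run from the main worktree:
#   remove_worktree ../feature-worktree
remove_worktree() {
  git worktree remove --force "$1"
}
```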

## Completion

Report the final PR status to the user, including:
- PR URL
- CI status (passed/merged)
- Any unresolved review comments that need user attention
- Cleanup status (worktree removed or branch switched)

If any step fails in a way you cannot resolve, ask the user for help.