79 changes: 43 additions & 36 deletions .cursor/rules/code-review-standards.mdc
@@ -1,62 +1,69 @@
---
description: When doing code reviews
alwaysApply: false
version: 1.0.0
version: 1.1.0
---

# Code Review Standards

This rule defines what feedback is actionable based on project priorities. Use this to
focus reviews on issues that actually get addressed.
This rule defines when bot feedback is incorrect because the bot lacks context you
have. Use it to identify suggestions that don't apply, not to skip valid feedback
based on priority.

## Core Philosophy

We ship pragmatically. Fix critical bugs, ensure security, validate core behavior.
Ignore theoretical edge cases, minor polish, and over-engineering. Trust runtime
validation over compile-time perfection.
Address all suggestions where the bot's analysis is correct given full context. Decline
when you can articulate why the bot's reasoning doesn't hold - valid declines explain
why the analysis is incorrect, not why addressing it is inconvenient.

## Skip These Issues
## When Bot Suggestions Don't Apply

Do not include them in reviews.
These patterns describe situations where bot analysis is typically incorrect. Decline
with explanation when you can demonstrate the bot's reasoning doesn't hold.

### Transactions for Rare Race Conditions
### Single-Use Values

Only flag race conditions you can demonstrate happen in practice. Theoretical edge cases
requiring contrived timing are accepted as-is. Added complexity for extremely rare
scenarios is not justified.
Bots flag inline values as "magic strings" needing extraction. This suggestion is wrong
when the value appears exactly once and context makes the meaning clear. Extracting
`METHOD_INITIALIZE = "initialize"` for a single use adds indirection without DRY
benefit. Constants exist to stay DRY across multiple uses, not to avoid inline values.
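
A minimal TypeScript sketch of the distinction - the message shape and names are
illustrative, not taken from any real codebase:

```typescript
// Single use: inline is clearer. A bot may ask for a constant here, but
// "initialize" appears exactly once and the branch explains itself.
function handleMessage(msg: { method: string; id: number }) {
  if (msg.method === "initialize") {
    return { id: msg.id, result: {} };
  }
  return { id: msg.id, error: "unknown method" };
}

// Multiple uses: extraction earns its keep, because changing the limit
// would otherwise mean hunting down every copy of the literal.
const MAX_RETRIES = 3;

function shouldRetry(attempt: number): boolean {
  return attempt < MAX_RETRIES;
}

function retriesLeft(attempt: number): number {
  return MAX_RETRIES - attempt;
}
```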

### Magic Numbers and String Constants
### Theoretical Race Conditions

Constants exist to stay DRY, not to avoid "magic strings." This isn't Java. If a value
appears once and the meaning is clear from context, inline is better than indirection.
Only flag repeated values that would require multiple updates to change. Extracting
`METHOD_INITIALIZE = "initialize"` for a single use is cargo cult programming.
Bots flag potential race conditions based on static analysis. This suggestion is wrong
when operations are already serialized by a queue, mutex, or transaction the bot can't
see. Add synchronization when profiling or testing reveals actual race conditions.
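
A sketch of the pattern, assuming the `async-mutex` npm package - the point is that
the serialization is real but lives outside the lines the bot analyzes:

```typescript
import { Mutex } from "async-mutex"; // assumed dependency

const mutex = new Mutex();
let balance = 0;

// A bot reading credit() in isolation may flag the read-modify-write on
// `balance` as a race. Every caller goes through the same mutex, so the
// operations are serialized and the flagged interleaving cannot occur.
async function credit(amount: number): Promise<void> {
  await mutex.runExclusive(async () => {
    balance += amount;
  });
}
```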

### Accessibility Improvements
### Redundant Type Safety

Bots suggest stricter types or null checks. This suggestion is wrong when runtime
validation already handles the case correctly, or when the type system guarantees the
condition can't occur. TypeScript serves the code - working code with runtime safety
takes priority over compile-time type perfection.
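
A TypeScript sketch, assuming `strictNullChecks` is enabled - the suggested guard
protects against a state the compiler already rules out:

```typescript
interface User {
  name: string;
}

// A bot may suggest `if (!user) throw ...` at the top. With strictNullChecks,
// the parameter type is User, not User | null, so the null branch is
// unreachable and the guard would be dead code.
function greet(user: User): string {
  return `Hello, ${user.name}`;
}
```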

ARIA labels, keyboard navigation, and screen reader support are not current project
priorities.
### Premature Optimization

### Minor Type Safety Improvements
Bots flag performance concerns without data. This suggestion is wrong when no profiling
shows actual performance problems. Optimize based on measurements - complexity should
yield measurable performance gains.
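
A sketch of measuring first, using `performance.now()` (available in Node.js and
browsers); the workload is illustrative:

```typescript
// Hypothetical hot path a bot wants to optimize preemptively.
function processBatch(items: number[]): number {
  return items.reduce((sum, n) => sum + n, 0);
}

const items = Array.from({ length: 1_000_000 }, (_, i) => i);
const t0 = performance.now();
processBatch(items);
console.log(`processBatch took ${(performance.now() - t0).toFixed(1)} ms`);
// Accept added complexity (caching, batching, indexes) only when numbers
// like this one are demonstrably a problem for users.
```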

When runtime checks handle edge cases correctly, compile-time type refinement is
optional. TypeScript serves the code, not vice versa. Working code takes priority over
perfect types.
## Items Requiring Case-by-Case Judgment

### Test Edge Cases
These require evaluation in context - sometimes the bot is right, sometimes wrong.

Core behavior receives test coverage. Edge cases including malformed API responses,
empty arrays, and boundary conditions are deferred unless they cause production issues.
Target is 90% coverage, not 100%. Focus on user-facing functionality.
### Test Coverage Gaps

### Performance Micro-Optimizations
Bot requests for edge case tests: Address if the edge case could reasonably occur and
cause user-facing issues. Decline if you can demonstrate the scenario is already handled
by other validation or genuinely can't occur given system constraints.

Optimize when metrics show actual problems. Skipping unnecessary queries, adding
theoretical indexes, or batching operations for minor latency gains without profiling
data is premature optimization.
### Documentation Requests

### Documentation Enhancements
Bot requests for additional docs: Address if the code is genuinely unclear. Decline if
the documentation would merely restate what the code already says clearly.

### Accessibility Improvements

Core documentation exists. Troubleshooting sections, rate limit documentation, and
explanatory comments are iterative improvements based on user feedback. Perfect
documentation delays shipping.
Accessibility (ARIA labels, keyboard navigation, screen reader support) is a product
priority decision that varies by project. Check project or user configuration for the
team's stance. If no stance is declared, present the suggestion to the user with context
about the scope and ask whether to address or decline.
68 changes: 33 additions & 35 deletions plugins/core/commands/address-pr-comments.md
@@ -2,24 +2,23 @@
description: Triage and address PR comments from code review bots intelligently
argument-hint: [pr-number]
model: sonnet
version: 1.4.0
version: 1.5.0
---

# Address PR Comments

<objective>
Get a PR to "ready to merge" by intelligently triaging bot feedback. You have context
bots lack - use judgment, not compliance. Declining feedback is as valid as accepting
it. The goal is "ready to merge," not "zero comments."
Get a PR to "ready to merge" by addressing valid feedback and declining incorrect
suggestions. You have context bots lack - use it to identify when a bot's analysis is
wrong, not to prioritize which valid issues to skip.

Read @rules/code-review-standards.mdc for triage principles:
If a suggestion would genuinely improve the code, fix it. Only decline when you can
articulate why the bot is mistaken given context it lacks.

- Fix critical bugs, ensure security, validate core behavior
- Skip theoretical edge cases, minor polish, over-engineering suggestions
- Trust runtime validation over compile-time perfection
- Constants for DRY, not to avoid "magic strings"
- Target 90% coverage, not 100%
- Optimize when metrics show problems, not preemptively </objective>
Read @rules/code-review-standards.mdc for patterns where bot suggestions typically don't
apply. Use these to identify incorrect suggestions - explain why the bot is wrong in
this specific case, not just that the issue is "minor" or "not blocking."
</objective>

<usage>
/address-pr-comments - Auto-detect PR from current branch
@@ -49,14 +48,14 @@ flag for user attention rather than auto-resolving.
<hotfix-mode>
If the branch name starts with `hotfix/`, switch to expedited review mode:

- Only address security vulnerabilities and actual bugs
- Decline all style, refactoring, and "improvement" suggestions
- Skip theoretical concerns - focus on "will this break production?"
- One pass only - don't wait for bot re-reviews after fixes
- Speed over polish - this is an emergency
- Focus on security vulnerabilities and bugs that could break production
- Speed and correctness take priority over polish
- One pass through comments, then push fixes immediately
- Style and refactoring suggestions get declined with "hotfix - addressing critical issues only"

Announce hotfix mode at start, explaining that you're running expedited review and will
only address security issues and bugs while declining all other feedback. </hotfix-mode>
Announce hotfix mode at start, explaining that this is an expedited review focusing on
security and correctness.
</hotfix-mode>

<comment-sources>
Code review bots comment at different API levels. Fetch from both endpoints:
@@ -95,8 +94,7 @@ After pushing fixes, re-check for merge conflicts (the base branch may have advanced
while you were working) and return to polling since bots will re-analyze. Exit when all
review bots have completed and no new actionable feedback remains.

Avoid `gh pr checks --watch` - it's designed for human terminals and causes
unpredictable LLM behavior.
Use polling with adaptive sleep intervals to check bot status rather than watch mode.
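
A sketch of the polling loop, assuming the GitHub CLI is installed, a POSIX shell,
and that `gh pr checks` prints "pending" for in-progress checks - verify the output
format against your gh version:

```typescript
import { execSync } from "node:child_process";

// Poll check status with exponential backoff instead of --watch.
// `|| true` absorbs gh's non-zero exit while checks are pending or failing.
async function waitForChecks(pr: number): Promise<void> {
  let delayMs = 15_000; // start at 15s, cap at 2 minutes
  for (;;) {
    const out = execSync(`gh pr checks ${pr} || true`, { encoding: "utf8" });
    if (!out.includes("pending")) return; // all checks have settled
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    delayMs = Math.min(delayMs * 2, 120_000);
  }
}
```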
</execution-model>

<narration>
@@ -111,26 +109,26 @@ Keep narration brief and informative.
</narration>

<triage-process>
For each bot comment, evaluate against code-review-standards.mdc:
For each bot comment, ask: "Is this suggestion correct given context I have?"

Address immediately:
Address the suggestion when the bot's analysis is correct given full context. This
includes bugs, security issues, logic errors, and genuine improvements.

- Security vulnerabilities
- Actual bugs that could cause runtime failures
- Core functionality issues
- Clear improvements to maintainability
When a bot correctly identifies an issue but suggests a suboptimal fix, address the
underlying issue with the appropriate solution. Credit the bot for the correct diagnosis.

Decline with explanation:
Decline with explanation when you can articulate why the bot is wrong:

- Theoretical race conditions without demonstrated impact
- Magic number/string extraction for single-use values
- Accessibility improvements (not current priority)
- Minor type safety refinements when runtime handles it
- Edge case tests for unlikely scenarios
- Performance micro-optimizations without profiling data
- Documentation enhancements beyond core docs
- Bot wants constant extraction, but this value appears once and context is clear
- Bot flags race condition, but operations are already serialized by queue/mutex
- Bot suggests null check, but type system guarantees non-null here
- Bot requests stricter types, but runtime validation already handles the case

Show triage summary, then proceed autonomously. </triage-process>
Valid declines explain why the bot's analysis is incorrect, not why addressing the issue
is inconvenient. If the feedback would improve the code, address it.

Show triage summary with your reasoning, then proceed autonomously.
</triage-process>

<feedback-as-training>
Responding to bot comments serves two purposes: record-keeping and training. Bots learn