This skill inherits all available tools. When active, it can use any tool Claude has access to.
name: dev-review
description: |
  Development cycle review gate (Gate 4) - executes parallel code review with 3 specialized reviewers, aggregates findings, and determines VERDICT for gate passage.
trigger: |
skip_when: |
NOT_skip_when: |
sequence:
  after: [dev-testing]
  before: [dev-validation]
related:
  complementary: [requesting-code-review, receiving-code-review]
verification:
  automated:
    - command: "cat docs/refactor/current-cycle.json | jq '.gates[4].code_reviewer.verdict'"
      description: "Code reviewer verdict captured"
      success_pattern: "PASS|FAIL|NEEDS_DISCUSSION"
    - command: "cat docs/refactor/current-cycle.json | jq '.gates[4].aggregate_verdict'"
      description: "Aggregate verdict determined"
      success_pattern: "PASS|FAIL"
  manual:
    - "All 3 reviewers dispatched in parallel (single message with 3 Task calls)"
    - "Critical/High issues addressed before PASS verdict"
    - "Review findings documented with file:line references"
examples:
See CLAUDE.md and requesting-code-review for canonical review requirements. This skill orchestrates Gate 4 execution.
Execute comprehensive code review using 3 specialized reviewers IN PARALLEL. Aggregate findings and determine VERDICT for gate passage.
Core principle: Three perspectives catch more bugs than one. Parallel execution = 3x faster feedback.
See shared-patterns/shared-pressure-resistance.md for universal pressure scenarios (including Combined Pressure Scenarios and Emergency Response).
Gate 4-specific note: ALL 3 reviewers MUST run in a SINGLE message with 3 Task tool calls. Sequential = violation. Parallel review = 10 min total.
FORBIDDEN Pattern: Dispatching reviewers in stages (e.g., code reviewer first, then others based on result)
Why This Is Wrong: Staged dispatch serializes feedback (losing the 3x parallel speedup) and makes later reviewers conditional on earlier verdicts, so issues in their domains surface late or not at all.
Examples of FORBIDDEN Staged Patterns:
❌ WRONG: Stage 1 - Code Reviewer First
1. Dispatch: code-reviewer
2. Wait for result
3. If PASS → Dispatch: business-logic-reviewer + security-reviewer
4. If FAIL → Return to Gate 0
❌ WRONG: Stage 1 - Code + Business, Stage 2 - Security
1. Dispatch: code-reviewer + business-logic-reviewer (2 agents)
2. Wait for results
3. If both PASS → Dispatch: security-reviewer
4. Aggregate
✅ CORRECT: All 3 in ONE Message
1. Dispatch ALL 3 in single message:
- Task(code-reviewer)
- Task(business-logic-reviewer)
- Task(security-reviewer)
2. Wait for all results
3. Aggregate findings
Detection:
Required Pattern: ALL 3 Task calls in ONE message = TRUE parallel execution
When Re-Review Is REQUIRED:
When Re-Review May Be Optional (Context-Dependent):
NEVER Skip Re-Review For:
Default Rule: When in doubt, re-run ALL 3 reviewers. Better safe than sorry.
NEEDS_DISCUSSION Verdict: Reviewer is uncertain, requires human decision
When Reviewers Return NEEDS_DISCUSSION:
Examples:
Aggregate Verdict When NEEDS_DISCUSSION Exists:
Do NOT:
See shared-patterns/shared-anti-rationalization.md for universal anti-rationalizations (including Review section).
Gate 4-specific rationalizations:
| Excuse | Reality |
|---|---|
| "Run code reviewer first, then others" | Staged execution is FORBIDDEN. All 3 in ONE message (parallel). |
| "Use general-purpose instead of code-reviewer" | Generic agents lack specialized review expertise. Only ring-default reviewers allowed. |
| "NEEDS_DISCUSSION is basically PASS" | NO. NEEDS_DISCUSSION = blocked until user decides. Cannot proceed to Gate 5. |
| "Deploy while reviews run" | CANNOT deploy until ALL 3 reviewers complete. Gate 4 BEFORE deployment, not during. |
| "Architect reviewed, counts as code review" | Informal reviews ≠ formal reviewers. The 3 specified agents MUST run. No substitutions. |
See shared-patterns/shared-red-flags.md for universal red flags (including Review section).
If you catch yourself thinking ANY of those patterns, STOP immediately. Run ALL 3 reviewers in parallel.
Gate 4 requires ALL CRITICAL/HIGH/MEDIUM issues to be FIXED before a PASS verdict.
Severity mapping is absolute (matches dev-cycle Gate 4 policy):
MEDIUM Issue Response Protocol:
Anti-Rationalization: "Only 3 MEDIUM findings, can track with FIXME" → WRONG. MEDIUM = Fix NOW. No deferral, no tracking, no exceptions.
MEDIUM issues handling:
| MEDIUM Issue Response | Allowed? | Action Required |
|---|---|---|
| Fix immediately | ✅ REQUIRED | Fix, re-run all 3 reviewers |
| Add FIXME with issue link | ❌ NO | MEDIUM must be FIXED |
| Ignore silently | ❌ NO | This is a violation |
| "Risk accept" without user | ❌ NO | User must explicitly approve |
| Document in commit msg only | ❌ NO | MEDIUM must be FIXED in code, not just documented |
BEFORE dispatching reviewers, verify you are NOT about to:
The ONLY valid action is: invoke the Task tool 3 times in THIS message, one call per reviewer.
If you catch yourself planning sequential execution → STOP → Re-plan as parallel.
Before dispatching reviewers, VERIFY agent names:
REQUIRED Format: ring-default:{reviewer-name}
Valid Reviewers:
- ring-default:code-reviewer
- ring-default:business-logic-reviewer
- ring-default:security-reviewer

FORBIDDEN (Wrong Prefix/Name):

- code-reviewer (missing prefix)
- ring:code-reviewer (wrong prefix)
- code-reviewer under any other plugin's prefix (wrong plugin)
- general-purpose (generic agent, not specialized reviewer)
- Explore (not a reviewer)

Validation Rule:

IF agent_name does NOT start with "ring-default:"
OR agent_name is NOT in [ring-default:code-reviewer, ring-default:business-logic-reviewer, ring-default:security-reviewer]
THEN STOP - Invalid agent name or prefix
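A minimal shell sketch of this check, assuming the ring-default: prefix above and a hypothetical validate_reviewer helper:

```bash
# Hypothetical helper: accept only the three ring-default reviewer agents;
# reject missing/wrong prefixes and generic agents.
validate_reviewer() {
  case "$1" in
    ring-default:code-reviewer|ring-default:business-logic-reviewer|ring-default:security-reviewer)
      return 0 ;;   # valid specialized reviewer
    *)
      echo "STOP - invalid reviewer agent name: $1" >&2
      return 1 ;;   # wrong prefix, wrong plugin, or generic agent
  esac
}

validate_reviewer "ring-default:security-reviewer"             # ok
validate_reviewer "general-purpose" || echo "dispatch blocked"
```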
If validation fails:
Why This Is Critical:
You MUST dispatch all 3 reviewers in a SINGLE message:
Task tool #1: code-reviewer
Task tool #2: business-logic-reviewer
Task tool #3: security-reviewer
VIOLATIONS:
The ONLY acceptable pattern is 3 Task tools in 1 message.
If you already dispatched reviewers sequentially (separate messages):
If you violate Gate 4 protocol (skip reviewers, sequential dispatch, silent MEDIUM acceptance):
You CANNOT "fix" a violation by re-running reviews. The entire cycle is compromised.
Before starting this gate:
| Reviewer | Focus Area | Catches |
|---|---|---|
| code-reviewer | Architecture, patterns, maintainability | Design flaws, code smells, DRY violations |
| business-logic-reviewer | Correctness, requirements, edge cases | Logic errors, missing cases, requirement gaps |
| security-reviewer | OWASP, auth, input validation | Vulnerabilities, injection risks, auth bypasses |
Gather: BASE_SHA=$(git merge-base HEAD main), HEAD_SHA=$(git rev-parse HEAD), FILES_CHANGED=$(git diff --name-only $BASE_SHA $HEAD_SHA)
Review context: WHAT_WAS_IMPLEMENTED, PLAN_OR_REQUIREMENTS, ACCEPTANCE_CRITERIA (AC-1, AC-2...), BASE_SHA, HEAD_SHA, FILES_CHANGED
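A minimal shell sketch of this context gathering (only the git-derived fields are scripted; the remaining context fields come from the plan and acceptance criteria):

```bash
# Gather the git-derived review context; the same values go to all 3 reviewers.
BASE_SHA=$(git merge-base HEAD main)                          # common ancestor with main
HEAD_SHA=$(git rev-parse HEAD)                                # commit under review
FILES_CHANGED=$(git diff --name-only "$BASE_SHA" "$HEAD_SHA") # changed files between the two

printf 'BASE_SHA=%s\nHEAD_SHA=%s\nFILES_CHANGED:\n%s\n' \
  "$BASE_SHA" "$HEAD_SHA" "$FILES_CHANGED"
```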
CRITICAL: Single message with 3 Task tool calls:
| Task | Agent | Prompt |
|---|---|---|
| #1 | code-reviewer | Review context (WHAT_WAS_IMPLEMENTED, PLAN, ACs, SHAs) |
| #2 | business-logic-reviewer | Same context |
| #3 | security-reviewer | Same context |
Wait for ALL three to complete before proceeding.
Each reviewer returns: VERDICT (PASS/FAIL/NEEDS_DISCUSSION), Summary, Issues (by severity), What Was Done Well, Next Steps.
Extract per reviewer: VERDICT + Critical/High/Medium/Low issue lists.
Combine all findings by severity with source attribution: Critical (MUST fix), High (MUST fix), Medium (MUST fix), Low (track as TODO).
| Condition | VERDICT | Action |
|---|---|---|
| All 3 reviewers PASS, no Critical/High/Medium | PASS | Proceed to Gate 5 |
| Any Critical finding | FAIL | Return to Gate 0 |
| Any High finding | FAIL | Return to Gate 0 |
| Any Medium finding | FAIL | Return to Gate 0 |
| Only Low findings | PASS | Track issues, proceed |
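A sketch of this aggregation against docs/refactor/current-cycle.json. The code_reviewer verdict path mirrors the verification command above; the .findings array and the business_logic_reviewer / security_reviewer field names are assumptions about the file layout:

```bash
CYCLE=docs/refactor/current-cycle.json

# Count blocking findings: any Critical/High/Medium forces FAIL.
BLOCKING=$(jq '[.gates[4].findings[]?
                | select(.severity == "CRITICAL" or .severity == "HIGH" or .severity == "MEDIUM")]
               | length' "$CYCLE")

# All three individual verdicts must be PASS.
ALL_PASS=$(jq '[.gates[4].code_reviewer.verdict,
                .gates[4].business_logic_reviewer.verdict,
                .gates[4].security_reviewer.verdict]
               | all(. == "PASS")' "$CYCLE")

if [ "$ALL_PASS" = "true" ] && [ "$BLOCKING" -eq 0 ]; then
  echo "aggregate_verdict: PASS  -> proceed to Gate 5"
else
  echo "aggregate_verdict: FAIL  -> return to Gate 0"
fi
```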
| VERDICT | Actions |
|---|---|
| PASS | Document → Add // TODO(review): for Low → Proceed to Gate 5 |
| FAIL | Document Critical/High/Medium → Create fix tasks → Return to Gate 0 → Re-run ALL 3 reviewers after fixes |
Review Summary contents: Task ID, Date, Reviewers (all 3), VERDICT, Individual Verdicts, Findings Summary (Critical/High/Medium/Low counts), Actions Taken, Next Steps.
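A small sketch for the Findings Summary counts, again assuming a hypothetical .gates[4].findings array with a .severity field per entry:

```bash
# Print one "SEVERITY: count" line per severity present in the findings.
jq -r '.gates[4].findings // []
       | group_by(.severity)
       | map("\(.[0].severity): \(length)")
       | .[]' docs/refactor/current-cycle.json
```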
After fixing Critical/High: Re-run ALL 3 reviewers (fixes may introduce new issues in other domains). Do NOT cherry-pick reviewers.
| Iteration | Duration | Limits |
|---|---|---|
| 1st-3rd review | 3-5 min each | Max 3 iterations; ~15-20 min total including fixes |
If FAIL after 3 iterations: STOP → Request human intervention → Document recurring pattern → Consider architectural changes → Escalate.
Never:
Always:
Base metrics per shared-patterns/output-execution-report.md.
| Metric | Value |
|---|---|
| Duration | Xm Ys |
| Review Iterations | N |
| Reviewers | 3 (parallel) |
| Findings | X Critical, Y High, Z Medium, W Low |
| Fixes Applied | N |
| VERDICT | PASS/FAIL/NEEDS_DISCUSSION |
| Result | Gate passed / Returned to Gate 0 / Awaiting decision |
When feedback is incorrect:
1. Provide technical evidence (code, tests, docs)
2. Explain why the implementation is correct
3. Reference requirements/architecture
4. Request clarification for ambiguous feedback
5. Give security concerns extra scrutiny before dismissing them
Example: Reviewer says "SQL injection" → Response: "Using parameterized query on line 45: db.Query(..., $1, userID). Test in auth_test.go:78 verifies."