---
name: dev-cycle
description: |
  Main orchestrator for the 6-gate development cycle system. Loads tasks/subtasks
  from PM team output and executes through implementation, devops, SRE, testing,
  review, and validation gates with state persistence and metrics collection.
trigger: |
skip_when: |
NOT_skip_when: |
sequence:
  before: [dev-feedback-loop]
related:
  complementary: [dev-implementation, dev-devops, dev-sre, dev-testing, dev-review, dev-validation, dev-feedback-loop]
verification:
  automated:
    - command: "test -f docs/refactor/current-cycle.json"
      description: "State file exists"
      success_pattern: "exit 0"
    - command: "cat docs/refactor/current-cycle.json | jq '.current_gate'"
      description: "Current gate is valid"
      success_pattern: "[0-5]"
  manual:
    - "All gates for current task show PASS in state file"
    - "No tasks have status 'blocked' for more than 3 iterations"
examples:
---
Before ANY gate execution, you MUST load Ring standards:
See CLAUDE.md as the canonical source. This table summarizes the loading process.
| Parameter | Value |
|---|---|
| url | `https://raw.githubusercontent.com/LerianStudio/ring/main/CLAUDE.md` |
| prompt | "Extract Agent Modification Verification requirements, Anti-Rationalization Tables requirements, and Critical Rules" |
Execute WebFetch before proceeding. Do NOT continue until standards are loaded.
If WebFetch fails → STOP and report blocker. Cannot proceed without Ring standards.
The development cycle orchestrator loads tasks/subtasks from PM team output (or manual task files) and executes through 6 quality gates. Tasks are loaded at initialization - no separate import gate.
Announce at start: "I'm using the dev-cycle skill to orchestrate task execution through 6 gates."
See shared-patterns/shared-orchestrator-principle.md for full ORCHESTRATOR principle, role separation, forbidden/required actions, gate-to-agent mapping, and anti-rationalization table.
Summary: You orchestrate. Agents execute. If using Read/Write/Edit/Bash on source code → STOP. Dispatch agent.
| Decision Type | Examples | Action |
|---|---|---|
| Gate Failure | Tests not passing, review failed | STOP. Cannot proceed to next gate. |
| Missing Standards | No PROJECT_RULES.md | STOP. Report blocker and wait. |
| Agent Failure | Specialist agent returned errors | STOP. Diagnose and report. |
| User Decision Required | Architecture choice, framework selection | STOP. Present options with trade-offs. |
You CANNOT proceed when blocked. Report and wait for resolution.
| Requirement | Rationale | Consequence If Skipped |
|---|---|---|
| All 6 gates must execute | Each gate catches different issues | Missing critical defects, security vulnerabilities |
| Gates execute in order (0→5) | Dependencies exist between gates | Testing untested code, reviewing unobservable systems |
| Gate 4 requires ALL 3 reviewers | Different review perspectives are complementary | Missing security issues, business logic flaws |
| Coverage threshold ≥ 85% | Industry standard for quality code | Untested edge cases, regression risks |
| PROJECT_RULES.md must exist | Cannot verify standards without target | Arbitrary decisions, inconsistent implementations |
Report findings using this severity classification:

| Severity | Criteria | Examples |
|---|---|---|
| CRITICAL | Blocks deployment, security risk, data loss | Gate violation, skipped mandatory step |
| HIGH | Major functionality broken, standards violation | Missing tests, wrong agent dispatched |
| MEDIUM | Code quality, maintainability issues | Incomplete documentation, minor gaps |
| LOW | Best practices, optimization | Style improvements, minor refactoring |
Report ALL severities. Let user prioritize.
MEDIUM issues found in Gate 4 MUST be fixed. No exceptions.
| Request | Why It's WRONG | Required Action |
|---|---|---|
| "Can reviewer clarify if MEDIUM can defer?" | Reviewer already decided. MEDIUM means FIX. | Fix the issue, re-run reviewers |
| "Ask if this specific case is different" | Reviewer verdict accounts for context already. | Fix the issue, re-run reviewers |
| "Request exception for business reasons" | Reviewers know business context. Verdict is final. | Fix the issue, re-run reviewers |
Severity mapping is absolute: no negotiation, no exceptions, no "special cases".
See shared-patterns/shared-pressure-resistance.md for universal pressure scenarios.
Gate-specific note: Execution mode selection affects CHECKPOINTS (user approval pauses), not GATES (quality checks). ALL gates execute regardless of mode.
See shared-patterns/shared-anti-rationalization.md for universal anti-rationalizations.
Gate-specific rationalizations:
| Excuse | Reality |
|---|---|
| "Automatic mode means faster" | Automatic mode skips CHECKPOINTS, not GATES. Same quality, less interruption. |
| "Automatic mode will skip review" | Automatic mode affects user approval pauses, NOT quality gates. ALL gates execute regardless. |
| "Defense in depth exists (frontend validates)" | Frontend can be bypassed. Backend is the last line. Fix at source. |
| "Backlog the Medium issue, it's documented" | Documented risk ≠ mitigated risk. Medium in Gate 4 = fix NOW, not later. |
| "Risk-based prioritization allows deferral" | Gates ARE the risk-based system. Reviewers define severity, not you. |
See shared-patterns/shared-red-flags.md for universal red flags.
If you catch yourself thinking ANY of those patterns, STOP immediately and return to gate execution.
The "just this once" pattern leads to complete gate erosion:
```
Day 1: "Skip review just this once" → Approved (precedent set)
Day 2: "Skip testing, we did it last time" → Approved (precedent extended)
Day 3: "Skip implementation checks, pattern established" → Approved (gates meaningless)
Day 4: Production incident from Day 1 code
```
Prevention rules:
A gate is COMPLETE only when ALL components finish successfully:
| Gate | Components Required | Partial = FAIL |
|---|---|---|
| 0.1 | TDD-RED: Failing test written + failure output captured | Test exists but no failure output = FAIL |
| 0.2 | TDD-GREEN: Implementation passes test | Code exists but test fails = FAIL |
| 0 | Both 0.1 and 0.2 complete | 0.1 done without 0.2 = FAIL |
| 1 | Dockerfile + docker-compose + .env.example | Missing any = FAIL |
| 2 | Structured JSON logs with trace correlation | Partial structured logs = FAIL |
| 3 | Coverage ≥ 85% + all AC tested | 84% = FAIL |
| 4 | ALL 3 reviewers PASS | 2/3 reviewers = FAIL |
| 5 | Explicit "APPROVED" from user | "Looks good" = NOT approved |
CRITICAL for Gate 4: Running 2 of 3 reviewers is NOT a partial pass - it's a FAIL. Re-run ALL 3 reviewers.
Anti-Rationalization for Partial Gates:
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "2 of 3 reviewers passed" | Gate 4 requires ALL 3. 2/3 = 0/3. | Re-run ALL 3 reviewers |
| "Gate mostly complete" | Mostly ≠ complete. Binary: done or not done. | Complete ALL components |
| "Can finish remaining in next cycle" | Gates don't carry over. Complete NOW. | Finish current gate |
| "Core components done, optional can wait" | No component is optional within a gate. | Complete ALL components |
Gates MUST execute in order: 0 → 1 → 2 → 3 → 4 → 5. No exceptions.
| Violation | Why It's WRONG | Consequence |
|---|---|---|
| Skip Gate 1 (DevOps) | "No infra changes" | Code without container = works on my machine only |
| Skip Gate 2 (SRE) | "Observability later" | Blind production = debugging nightmare |
| Reorder Gates | "Review before test" | Reviewing untested code wastes reviewer time |
| Parallel Gates | "Run 2 and 3 together" | Dependencies exist. Order is intentional. |
Gates are NOT parallelizable across different gates. Sequential execution is MANDATORY.
| Gate | Skill | Purpose | Agent |
|---|---|---|---|
| 0 | dev-implementation | Write code following TDD | Based on task language/domain |
| 1 | dev-devops | Infrastructure and deployment | devops-engineer |
| 2 | dev-sre | Observability (health, logging, tracing) | sre |
| 3 | dev-testing | Unit tests for acceptance criteria | qa-analyst |
| 4 | dev-review | Parallel code review | code-reviewer, business-logic-reviewer, security-reviewer (3x parallel) |
| 5 | dev-validation | Final acceptance validation | N/A (verification) |
PM Team Output → Dev Team Execution (/dev-cycle)
| Input Type | Path | Structure |
|---|---|---|
| Tasks only | docs/pre-dev/{feature}/tasks.md | T-001, T-002, T-003 with requirements + acceptance criteria |
| Tasks + Subtasks | docs/pre-dev/{feature}/ | tasks.md + subtasks/{task-id}/ST-XXX-01.md, ST-XXX-02.md... |
Core Principle: Each execution unit (task OR subtask) passes through all 6 gates before the next unit.
Flow: Unit → Gate 0-5 → 🔒 Unit Checkpoint (Step 7.1) → 🔒 Task Checkpoint (Step 7.2) → Next Unit
| Scenario | Execution Unit | Gates Per Unit |
|---|---|---|
| Task without subtasks | Task itself | 6 gates |
| Task with subtasks | Each subtask | 6 gates per subtask |
State is persisted to docs/refactor/current-cycle.json:
```json
{
"version": "1.0.0",
"cycle_id": "uuid",
"started_at": "ISO timestamp",
"updated_at": "ISO timestamp",
"source_file": "path/to/tasks.md",
"execution_mode": "manual_per_subtask|manual_per_task|automatic",
"status": "in_progress|completed|failed|paused|paused_for_approval|paused_for_testing|paused_for_task_approval|paused_for_integration_testing",
"feedback_loop_completed": false,
"current_task_index": 0,
"current_gate": 0,
"current_subtask_index": 0,
"tasks": [
{
"id": "T-001",
"title": "Task title",
"status": "pending|in_progress|completed|failed|blocked",
"feedback_loop_completed": false,
"subtasks": [
{
"id": "ST-001-01",
"file": "subtasks/T-001/ST-001-01.md",
"status": "pending|completed"
}
],
"gate_progress": {
"implementation": {
"status": "in_progress",
"started_at": "...",
"tdd_red": {
"status": "pending|in_progress|completed",
"test_file": "path/to/test_file.go",
"failure_output": "FAIL: TestFoo - expected X got nil",
"completed_at": "ISO timestamp"
},
"tdd_green": {
"status": "pending|in_progress|completed",
"implementation_file": "path/to/impl.go",
"test_pass_output": "PASS: TestFoo (0.003s)",
"completed_at": "ISO timestamp"
}
},
"devops": {"status": "pending"},
"sre": {"status": "pending"},
"testing": {"status": "pending"},
"review": {"status": "pending"},
"validation": {"status": "pending"}
},
"artifacts": {},
"agent_outputs": {
"implementation": {
"agent": "backend-engineer-golang",
"output": "## Summary\n...",
"timestamp": "ISO timestamp",
"duration_ms": 0
},
"devops": null,
"sre": {
"agent": "sre",
"output": "## Summary\n...",
"timestamp": "ISO timestamp",
"duration_ms": 0
},
"testing": {
"agent": "qa-analyst",
"output": "## Summary\n...",
"verdict": "PASS",
"coverage_actual": 87.5,
"coverage_threshold": 85,
"iteration": 1,
"timestamp": "ISO timestamp",
"duration_ms": 0
},
"review": {
"code_reviewer": {"agent": "code-reviewer", "output": "...", "timestamp": "..."},
"business_logic_reviewer": {"agent": "business-logic-reviewer", "output": "...", "timestamp": "..."},
"security_reviewer": {"agent": "security-reviewer", "output": "...", "timestamp": "..."}
},
"validation": {
"result": "approved|rejected",
"timestamp": "ISO timestamp"
}
}
}
],
"metrics": {
"total_duration_ms": 0,
"gate_durations": {},
"review_iterations": 0,
"testing_iterations": 0
}
}
```
"Update state" means BOTH update the object AND write to file. Not just in-memory.
You MUST execute these steps after completing ANY gate (0, 1, 2, 3, 4, or 5):
```
# Step 1: Update state object with gate results
state.tasks[current_task_index].gate_progress.[gate_name].status = "completed"
state.tasks[current_task_index].gate_progress.[gate_name].completed_at = "[ISO timestamp]"
state.current_gate = [next_gate_number]
state.updated_at = "[ISO timestamp]"
# Step 2: Write to file (MANDATORY - use Write tool)
Write tool:
file_path: "docs/refactor/current-cycle.json"
content: [full JSON state]
# Step 3: Verify persistence (MANDATORY - use Read tool)
Read tool:
file_path: "docs/refactor/current-cycle.json"
# Confirm current_gate and gate_progress match expected values
```
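A minimal sketch of this update → write → verify cycle, assuming a shell with `jq` available (the gate name `devops` and gate number `2` are illustrative placeholders, not the only values):

```bash
STATE="docs/refactor/current-cycle.json"
NOW="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
IDX="$(jq '.current_task_index' "$STATE")"

# Step 1 + 2: update the state object and write it back to disk atomically
jq --arg now "$NOW" --argjson idx "$IDX" \
   '.tasks[$idx].gate_progress.devops.status = "completed"
    | .tasks[$idx].gate_progress.devops.completed_at = $now
    | .current_gate = 2
    | .updated_at = $now' \
   "$STATE" > "${STATE}.tmp" && mv "${STATE}.tmp" "$STATE"

# Step 3: read back and confirm the transition actually persisted
[ "$(jq '.current_gate' "$STATE")" = "2" ] || echo "Persistence FAILED - re-run Write" >&2
```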
| After | MUST Update | MUST Write File |
|---|---|---|
| Gate 0.1 (TDD-RED) | tdd_red.status, tdd_red.failure_output | ✅ YES |
| Gate 0.2 (TDD-GREEN) | tdd_green.status, implementation.status | ✅ YES |
| Gate 1 (DevOps) | devops.status, agent_outputs.devops | ✅ YES |
| Gate 2 (SRE) | sre.status, agent_outputs.sre | ✅ YES |
| Gate 3 (Testing) | testing.status, agent_outputs.testing | ✅ YES |
| Gate 4 (Review) | review.status, agent_outputs.review | ✅ YES |
| Gate 5 (Validation) | validation.status, task status | ✅ YES |
| Step 7.1 (Unit Approval) | status = "paused_for_approval" | ✅ YES |
| Step 7.2 (Task Approval) | status = "paused_for_task_approval" | ✅ YES |
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "I'll save state at the end" | Crash/timeout loses ALL progress | Save after EACH gate |
| "State is in memory, that's updated" | Memory is volatile. File is persistent. | Write to JSON file |
| "Only save on checkpoints" | Gates without saves = unrecoverable on resume | Save after EVERY gate |
| "Write tool is slow" | Write takes <100ms. Lost progress takes hours. | Write after EVERY gate |
| "I updated the state variable" | Variable ≠ file. Without Write tool, nothing persists. | Use Write tool explicitly |
After each gate, the state file MUST reflect:
- `current_gate` = next gate number
- `updated_at` = recent timestamp
- gate's `status` = "completed"

If verification fails → state was not persisted. Re-execute Write tool.
NON-NEGOTIABLE. Cycle CANNOT proceed without project standards.
Check, in order:

1. `docs/PROJECT_RULES.md`
2. `docs/STANDARDS.md` (legacy)
3. `--resume` state file

Found → proceed. NOT found → STOP with blocker `missing_prerequisite`.
Pressure Resistance: "Standards slow us down" → No target = guessing = rework | "Use defaults" → Defaults are generic, YOUR project has specific conventions | "Create later" → Later = refactoring
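A sketch of this prerequisite check in shell, using the paths listed above (the echoed messages are illustrative):

```bash
# Prerequisite check: locate project standards before any cycle work starts
if [ -f docs/PROJECT_RULES.md ]; then
  echo "Standards found: docs/PROJECT_RULES.md"
elif [ -f docs/STANDARDS.md ]; then
  echo "Standards found (legacy): docs/STANDARDS.md"
elif [ -f docs/refactor/current-cycle.json ]; then
  echo "Resuming from existing state file"
else
  echo "BLOCKER missing_prerequisite: no project standards found - STOP" >&2
  exit 1
fi
```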
Input: `path/to/tasks.md` OR `path/to/pre-dev/{feature}/`

- New cycle: create `docs/refactor/current-cycle.json`, set indices to 0
- Resume (`--resume`): read `docs/refactor/current-cycle.json`, validate

| Status | Action |
|---|---|
| `paused_for_approval` | Re-present Step 7.1 checkpoint |
| `paused_for_testing` | Ask if testing complete → continue or keep paused |
| `paused_for_task_approval` | Re-present Step 7.2 checkpoint |
| `paused_for_integration_testing` | Ask if integration testing complete |
| `paused` (generic) | Ask user to confirm resume |
| `in_progress` | Resume from current gate |
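For illustration, the resume dispatch can be driven directly off the persisted `status` field (a `jq` sketch; the echoed actions paraphrase the table above):

```bash
STATE="docs/refactor/current-cycle.json"
STATUS="$(jq -r '.status' "$STATE")"
case "$STATUS" in
  paused_for_approval)            echo "Re-present Step 7.1 checkpoint" ;;
  paused_for_testing)             echo "Ask if testing complete -> continue or keep paused" ;;
  paused_for_task_approval)       echo "Re-present Step 7.2 checkpoint" ;;
  paused_for_integration_testing) echo "Ask if integration testing complete" ;;
  paused)                         echo "Ask user to confirm resume" ;;
  in_progress)                    echo "Resume from gate $(jq '.current_gate' "$STATE")" ;;
  *)                              echo "Invalid state file - cannot resume, must restart" >&2; exit 1 ;;
esac
```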
Task files are generated by /pre-dev-* or /dev-refactor, which handle content validation. The dev-cycle performs basic format checks:
| Check | Validation | Action |
|---|---|---|
| File exists | Task file path is readable | Error: abort |
| Task headers | At least one ## Task: found | Error: abort |
| Task ID format | ## Task: {ID} - {Title} | Warning: use line number as ID |
| Acceptance criteria | At least one - [ ] per task | Warning: task may fail validation gate |
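A sketch of these format checks in shell (the tasks-file path is illustrative):

```bash
TASKS_FILE="docs/pre-dev/my-feature/tasks.md"  # illustrative path

[ -r "$TASKS_FILE" ] || { echo "Error: task file not readable - abort" >&2; exit 1; }
grep -q '^## Task:' "$TASKS_FILE" \
  || { echo "Error: no '## Task:' headers found - abort" >&2; exit 1; }
grep -Eq '^## Task: [A-Z]+-[0-9]+ - ' "$TASKS_FILE" \
  || echo "Warning: task ID format not matched - using line numbers as IDs"
grep -qe '- \[ \]' "$TASKS_FILE" \
  || echo "Warning: no acceptance criteria - task may fail validation gate"
```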
REQUIRED SUB-SKILL: Use dev-implementation
Execution Unit: Task (if no subtasks) OR Subtask (if task has subtasks)
See shared-patterns/shared-orchestrator-principle.md for full details.
Gate 0 has TWO explicit sub-phases with a HARD GATE between them:
```
┌─────────────────────────────────────────────────────────────────┐
│ GATE 0.1: TDD-RED │
│ ───────────────── │
│ Write failing test → Run test → Capture FAILURE output │
│ │
│ ═══════════════════ HARD GATE ═══════════════════════════════ │
│ CANNOT proceed to 0.2 until failure output is captured │
│ ════════════════════════════════════════════════════════════ │
│ │
│ GATE 0.2: TDD-GREEN │
│ ────────────────── │
│ Implement minimal code → Run test → Verify PASS │
└─────────────────────────────────────────────────────────────────┘
```
Record gate start timestamp
Set gate_progress.implementation.tdd_red.status = "in_progress"
Determine appropriate agent based on content:
Dispatch to selected agent for TDD-RED ONLY:
See shared-patterns/template-tdd-prompts.md for the TDD-RED prompt template.
Include: unit_id, title, requirements, acceptance_criteria in the prompt.
Receive TDD-RED report from agent
VERIFY FAILURE OUTPUT EXISTS (HARD GATE): See shared-patterns/template-tdd-prompts.md for verification rules.
Update state:
```
gate_progress.implementation.tdd_red = {
"status": "completed",
"test_file": "[path from agent]",
"failure_output": "[actual failure output from agent]",
"completed_at": "[ISO timestamp]"
}
```
Display to user:
```
┌─────────────────────────────────────────────────┐
│ ✓ TDD-RED COMPLETE │
├─────────────────────────────────────────────────┤
│ Test: [test_file]:[test_function] │
│ Failure: [first line of failure output] │
│ │
│ Proceeding to TDD-GREEN... │
└─────────────────────────────────────────────────┘
```
Proceed to Gate 0.2
PREREQUISITE: gate_progress.implementation.tdd_red.status == "completed"
Set gate_progress.implementation.tdd_green.status = "in_progress"
Dispatch to same agent for TDD-GREEN:
See shared-patterns/template-tdd-prompts.md for the TDD-GREEN prompt template (includes observability requirements).
Include: unit_id, title, tdd_red.test_file, tdd_red.failure_output in the prompt.
Receive TDD-GREEN report from agent
VERIFY PASS OUTPUT EXISTS (HARD GATE): See shared-patterns/template-tdd-prompts.md for verification rules.
Update state:
```
gate_progress.implementation.tdd_green = {
"status": "completed",
"implementation_file": "[path from agent]",
"test_pass_output": "[actual pass output from agent]",
"completed_at": "[ISO timestamp]"
}
gate_progress.implementation.status = "completed"
artifacts.implementation = {files_changed, commit_sha}
agent_outputs.implementation = {
agent: "[selected_agent]",
output: "[full agent output for feedback analysis]",
timestamp: "[ISO timestamp]",
duration_ms: [execution time]
}
```
Display to user:
```
┌─────────────────────────────────────────────────┐
│ ✓ GATE 0 COMPLETE (TDD-RED → TDD-GREEN) │
├─────────────────────────────────────────────────┤
│ RED: [test_file] - FAIL captured ✓ │
│ GREEN: [impl_file] - PASS verified ✓ │
│ │
│ Proceeding to Gate 1 (DevOps)... │
└─────────────────────────────────────────────────┘
```
⛔ SAVE STATE TO FILE (MANDATORY):
```yaml
Write tool:
  file_path: "docs/refactor/current-cycle.json"
  content: [full updated state JSON]
```
See "State Persistence Rule" section. State MUST be written to file after Gate 0.
Proceed to Gate 1
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "Test passes on first run, skip RED" | Passing test ≠ TDD. Test MUST fail first. | Delete test, rewrite to fail first |
| "I'll capture failure output later" | Later = never. Failure output is the gate. | Capture NOW or cannot proceed |
| "Failure output is in the logs somewhere" | "Somewhere" ≠ captured. Must be in state. | Extract and store in tdd_red.failure_output |
| "GREEN passed, RED doesn't matter now" | RED proves test validity. Skip = invalid test. | Re-run RED phase, capture failure |
| "Agent already did TDD internally" | Internal ≠ verified. State must show evidence. | Agent must output failure explicitly |
REQUIRED SUB-SKILL: Use dev-devops
For current execution unit:
1. Record gate start timestamp
2. Check if unit requires DevOps work:
- Infrastructure changes?
- New service deployment?
3. If DevOps needed:
Task tool:
subagent_type: "devops-engineer"
prompt: |
Review and update infrastructure for: [unit_id]
Implementation changes:
[implementation_artifacts]
Check:
- Dockerfile updates needed?
- docker-compose updates needed?
- Helm chart updates needed?
- Environment variables?
Report: changes made, deployment notes.
4. If DevOps not needed:
- Mark as "skipped" with reason
- agent_outputs.devops = null
5. If DevOps executed:
- agent_outputs.devops = {
agent: "devops-engineer",
output: "[full agent output for feedback analysis]",
timestamp: "[ISO timestamp]",
duration_ms: [execution time]
}
6. Update state
7. **⛔ SAVE STATE TO FILE (MANDATORY):**
Write tool → "docs/refactor/current-cycle.json"
See "State Persistence Rule" section.
8. Proceed to Gate 2
REQUIRED SUB-SKILL: Use dev-sre
For current execution unit:
1. Record gate start timestamp
2. Check if unit requires SRE observability:
- New service or API?
- External dependencies added?
- Performance-critical changes?
3. If SRE needed:
Task tool:
subagent_type: "sre"
prompt: |
Validate observability for: [unit_id]
Service Information:
- Language: [Go/TypeScript/Python]
- Service type: [API/Worker/Batch]
- External dependencies: [list]
- Implementation from Gate 0: [summary of what was implemented]
Validation Requirements:
- Verify structured JSON logging with trace correlation
- Verify OpenTelemetry tracing (if external calls)
Report: validation results with PASS/FAIL for each component (logging, tracing), issues found by severity, verification commands executed.
4. If SRE not needed:
- Mark as "skipped" with reason
- agent_outputs.sre = null
5. If SRE executed:
- agent_outputs.sre = {
agent: "sre",
output: "[full agent output for feedback analysis]",
timestamp: "[ISO timestamp]",
duration_ms: [execution time]
}
6. Update state
7. **⛔ SAVE STATE TO FILE (MANDATORY):**
Write tool → "docs/refactor/current-cycle.json"
See "State Persistence Rule" section.
8. Proceed to Gate 3
REQUIRED SUB-SKILL: Use dev-testing
```
┌─────────────────────────────────────────────────────────┐
│ QA runs tests → checks coverage against threshold │
│ │
│ PASS (coverage ≥ threshold) → Proceed to Gate 4 │
│ FAIL (coverage < threshold) → Return to Gate 0 │
│ │
│ Max 3 attempts, then escalate to user │
└─────────────────────────────────────────────────────────┘
```
Coverage threshold comes from `docs/PROJECT_RULES.md` (≥ 85% per the non-negotiable requirements).

1. Dispatch QA analyst with acceptance criteria and threshold
2. QA writes tests, runs them, checks coverage
3. QA returns VERDICT: PASS or FAIL
PASS → Proceed to Gate 4 (Review)
FAIL → Return to Gate 0 (Implementation) to add tests
QA provides: what lines/branches need coverage
4. Track iteration count (state.testing.iteration)
- Max 3 iterations allowed
- After 3rd failure: STOP and escalate to user
- Do NOT attempt 4th iteration automatically
5. **⛔ SAVE STATE TO FILE (MANDATORY):**
Write tool → "docs/refactor/current-cycle.json"
See "State Persistence Rule" section.
```json
{
"testing": {
"verdict": "PASS|FAIL",
"coverage_actual": 87.5,
"coverage_threshold": 85,
"iteration": 1
}
}
```
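A sketch of the coverage verdict for a Go project, assuming the standard `go test` coverage tooling (the 85% threshold is the spec default above):

```bash
THRESHOLD=85
go test ./... -coverprofile=coverage.out > /dev/null 2>&1 || true  # test failures handled separately
ACTUAL="$(go tool cover -func=coverage.out | awk '/^total:/ {gsub("%",""); print $3}')"
if awk -v a="$ACTUAL" -v t="$THRESHOLD" 'BEGIN {exit !(a+0 >= t+0)}'; then
  echo "VERDICT: PASS (coverage ${ACTUAL}% >= ${THRESHOLD}%) - proceed to Gate 4"
else
  echo "VERDICT: FAIL (coverage ${ACTUAL}% < ${THRESHOLD}%) - return to Gate 0"
fi
```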
REQUIRED SUB-SKILL: Use requesting-code-review
For current execution unit:
1. Record gate start timestamp
2. Dispatch all 3 reviewers in parallel (single message, 3 Task calls):
Task tool #1:
subagent_type: "code-reviewer"
model: "opus"
prompt: |
Review implementation for: [unit_id]
BASE_SHA: [pre-implementation commit]
HEAD_SHA: [current commit]
REQUIREMENTS: [unit requirements]
Task tool #2:
subagent_type: "business-logic-reviewer"
model: "opus"
prompt: [same structure]
Task tool #3:
subagent_type: "security-reviewer"
model: "opus"
prompt: [same structure]
3. Wait for all reviewers to complete
4. Store all reviewer outputs:
- agent_outputs.review = {
code_reviewer: {
agent: "code-reviewer",
output: "[full output for feedback analysis]",
timestamp: "[ISO timestamp]"
},
business_logic_reviewer: {
agent: "business-logic-reviewer",
output: "[full output for feedback analysis]",
timestamp: "[ISO timestamp]"
},
security_reviewer: {
agent: "security-reviewer",
output: "[full output for feedback analysis]",
timestamp: "[ISO timestamp]"
}
}
5. Aggregate findings by severity:
- Critical/High/Medium: Must fix
- Low: Add TODO(review): comment
- Cosmetic: Add FIXME(nitpick): comment
6. If Critical/High/Medium issues found:
- Dispatch fix to implementation agent
- Re-run all 3 reviewers in parallel
- Increment metrics.review_iterations
- Repeat until clean (max 3 iterations)
7. When all issues resolved:
- Update state
- Proceed to Gate 5
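Before leaving Gate 4, the all-three-reviewers rule can be spot-checked against the state file. This `jq` sketch treats a recorded output as a proxy for a completed review (it does not replace reading the verdicts):

```bash
STATE="docs/refactor/current-cycle.json"
IDX="$(jq '.current_task_index' "$STATE")"
MISSING="$(jq -r --argjson idx "$IDX" \
  '.tasks[$idx].agent_outputs.review as $r
   | ["code_reviewer","business_logic_reviewer","security_reviewer"][]
   | select($r[.] == null)' "$STATE")"
if [ -n "$MISSING" ]; then
  echo "Gate 4 FAIL - missing reviewer output(s): $MISSING" >&2
  echo "Re-run ALL 3 reviewers (2/3 = 0/3)" >&2
  exit 1
fi
```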
For current execution unit:
1. Record gate start timestamp
2. Verify acceptance criteria:
For each criterion in acceptance_criteria:
- Check if implemented
- Check if tested
- Mark as PASS/FAIL
3. Run final verification:
- All tests pass?
- No Critical/High/Medium review issues?
- All acceptance criteria met?
4. If validation fails:
- Log failure reasons
- Determine which gate to revisit
- Loop back to appropriate gate
5. If validation passes:
- Set unit status = "completed"
- Record gate end timestamp
- agent_outputs.validation = {
result: "approved",
timestamp: "[ISO timestamp]",
criteria_results: [{criterion, status}]
}
- Proceed to Step 7.1 (Execution Unit Approval)
Checkpoint depends on execution_mode: manual_per_subtask → Execute | manual_per_task / automatic → Skip
Set `status = "paused_for_approval"`, save state, then ask the user:

| Response | Action |
|---|---|
| Continue | Set in_progress, move to next unit (or Step 7.2 if last) |
| Test First | Set paused_for_testing, STOP, output resume command |
| Stop Here | Set paused, STOP, output resume command |
Checkpoint depends on execution_mode: manual_per_subtask / manual_per_task → Execute | automatic → Skip
Set task `status = "completed"`, cycle `status = "paused_for_task_approval"`, save state.

After completing all subtasks of a task:
0. Check execution_mode from state:
- If "automatic": Still run feedback, then skip to next task
- If "manual_per_subtask" OR "manual_per_task": Continue with checkpoint below
1. Set task status = "completed"
2. **⛔ MANDATORY: Run dev-feedback-loop skill**
```yaml
Skill tool:
  skill: "dev-feedback-loop"
```
Note: dev-feedback-loop manages its own TodoWrite tracking internally.
The skill will analyze agent outputs, score assertiveness and prompt quality per agent, and write feedback reports to `docs/feedbacks/`.
After feedback-loop completes, update state:
Set `tasks[current].feedback_loop_completed = true` in state file.

Anti-Rationalization for Feedback Loop:
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "Task was simple, skip feedback" | Simple tasks still contribute to patterns | Execute Skill tool |
| "Already at 100% score" | High scores need tracking for replication | Execute Skill tool |
| "User approved, feedback unnecessary" | Approval ≠ process quality metrics | Execute Skill tool |
| "No issues found, nothing to report" | Absence of issues IS data | Execute Skill tool |
| "Time pressure, skip metrics" | Metrics take <2 min, prevent future issues | Execute Skill tool |
⛔ HARD GATE: You CANNOT proceed to step 3 without executing the Skill tool above.
Hook Enforcement: A UserPromptSubmit hook (feedback-loop-enforcer.sh) monitors state and will inject reminders if feedback-loop is not executed.
Set cycle status = "paused_for_task_approval"
Save state
Present task completion summary (with feedback metrics):

```
┌─────────────────────────────────────────────────┐
│ ✓ TASK COMPLETED                                │
├─────────────────────────────────────────────────┤
│ Task: [task_id] - [task_title]                  │
│                                                 │
│ Subtasks Completed: X/X                         │
│   ✓ ST-001-01: [title]                          │
│   ✓ ST-001-02: [title]                          │
│   ✓ ST-001-03: [title]                          │
│                                                 │
│ Total Duration: Xh Xm                           │
│ Total Review Iterations: N                      │
│                                                 │
│ ═══════════════════════════════════════════════ │
│ FEEDBACK METRICS                                │
│ ═══════════════════════════════════════════════ │
│                                                 │
│ Assertiveness Score: XX% (Rating)               │
│                                                 │
│ Prompt Quality by Agent:                        │
│   backend-engineer-golang: 90% (Excellent)     │
│   qa-analyst: 75% (Acceptable)                  │
│   code-reviewer: 88% (Good)                     │
│                                                 │
│ Improvements Suggested: N                       │
│ Feedback Location:                              │
│   docs/feedbacks/cycle-YYYY-MM-DD/              │
│                                                 │
│ ═══════════════════════════════════════════════ │
│                                                 │
│ All Files Changed This Task:                    │
│   - file1.go                                    │
│   - file2.go                                    │
│   - ...                                         │
│                                                 │
│ Next Task: [next_task_id] - [next_task_title]   │
│   Subtasks: N (or "TDD autonomous")             │
│   OR "No more tasks - cycle complete"           │
└─────────────────────────────────────────────────┘
```
ASK FOR EXPLICIT APPROVAL using AskUserQuestion tool:
Question: "Task [task_id] complete. Ready to start the next task?" Options: a) "Continue" - Proceed to next task b) "Integration Test" - User wants to test the full task integration c) "Stop Here" - Pause cycle
Handle user response:
If "Continue":
If "Integration Test":
If "Stop Here":
**Note:** Tasks without subtasks execute both 7.1 and 7.2 in sequence.
## Step 8: Cycle Completion
1. **Calculate metrics:** total_duration_ms, average gate durations, review iterations, pass/fail ratio
2. **Update state:** `status = "completed"`, `completed_at = timestamp`
3. **Generate report:** Task | Subtasks | Duration | Review Iterations | Status
4. **⛔ MANDATORY: Run dev-feedback-loop skill for cycle metrics**
```yaml
Skill tool:
  skill: "dev-feedback-loop"
```
Note: dev-feedback-loop manages its own TodoWrite tracking internally.
After feedback-loop completes, update state:
Set `feedback_loop_completed = true` at cycle level in state file.

⛔ HARD GATE: Cycle is NOT complete until feedback-loop executes.
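A minimal in-place update of the cycle-level flags, assuming `jq` (write to a temp file, then verify, per the State Persistence Rule):

```bash
STATE="docs/refactor/current-cycle.json"
jq '.feedback_loop_completed = true | .status = "completed"' "$STATE" \
  > "${STATE}.tmp" && mv "${STATE}.tmp" "$STATE"
jq '.feedback_loop_completed' "$STATE"   # verify: must print true
```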
Hook Enforcement: A UserPromptSubmit hook (feedback-loop-enforcer.sh) monitors state and will inject reminders if feedback-loop is not executed.
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "Cycle done, feedback is extra" | Feedback IS part of cycle completion | Execute Skill tool |
| "Will run feedback next session" | Next session = never. Run NOW. | Execute Skill tool |
| "All tasks passed, no insights" | Pass patterns need documentation too | Execute Skill tool |
```bash
# Full PM workflow then dev execution
/pre-dev-full my-feature
/dev-cycle docs/pre-dev/my-feature/

# Simple PM workflow then dev execution
/pre-dev-feature my-feature
/dev-cycle docs/pre-dev/my-feature/tasks.md

# Manual task file
/dev-cycle docs/tasks/sprint-001.md

# Resume interrupted cycle
/dev-cycle --resume
```
| Type | Condition | Action |
|---|---|---|
| Recoverable | Network timeout | Retry with exponential backoff |
| Recoverable | Agent failure | Retry once, then pause for user |
| Recoverable | Test flakiness | Re-run tests up to 2 times |
| Non-Recoverable | Missing required files | Stop and report |
| Non-Recoverable | Invalid state file | Must restart (cannot resume) |
| Non-Recoverable | Max review iterations | Pause for user |
On any error: Update state → Set status (failed/paused) → Save immediately → Report (what failed, why, how to recover, resume command)
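For the recoverable cases, "retry with exponential backoff" can be as simple as the following sketch (`run_gate_step` is a hypothetical placeholder for the failing operation; the 3-attempt cap mirrors the iteration limits above):

```bash
ATTEMPT=1
DELAY=2
until run_gate_step; do                  # hypothetical command
  if [ "$ATTEMPT" -ge 3 ]; then
    echo "Max retries reached - pausing for user" >&2
    exit 1
  fi
  echo "Attempt $ATTEMPT failed; retrying in ${DELAY}s..."
  sleep "$DELAY"
  ATTEMPT=$((ATTEMPT + 1))
  DELAY=$((DELAY * 2))                   # exponential backoff: 2s, 4s, ...
done
```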
| Metric | Value |
|---|---|
| Duration | Xh Xm Ys |
| Tasks Processed | N/M |
| Current Gate | Gate X - [name] |
| Review Iterations | N |
| Result | PASS/FAIL/IN_PROGRESS |
| Gate | Duration | Status |
|---|---|---|
| Implementation | Xm Ys | in_progress |
| DevOps | - | pending |
| SRE | - | pending |
| Testing | - | pending |
| Review | - | pending |
| Validation | - | pending |
State file: `docs/refactor/current-cycle.json`