---
name: subagent-driven-development
description: |
  Autonomous plan execution - fresh subagent per task with automated code review between tasks. No human-in-loop, high throughput with quality gates.
trigger: |
skip_when: |
sequence:
  after: [writing-plans, pre-dev-task-breakdown]
---
Execute a plan by dispatching a fresh subagent per task, with code review after each.
Core principle: Fresh subagent per task + review between tasks = high quality, fast iteration
vs. Executing Plans (parallel session):
When to use:
When NOT to use:
Read plan file, create TodoWrite with all tasks.
Dispatch: Task tool (general-purpose) with: the task reference (Task N from [plan-file]), instructions (implement, test with TDD, verify, commit, report back), and the working directory. The subagent reports a summary when done.
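A dispatch prompt for the implementer subagent might look like this sketch (the plan path, task number, and directory are placeholders, not part of this skill's definition):

```
You are implementing Task 3 from docs/plans/auth-plan.md.
Working directory: /repo

1. Read the task description in the plan file.
2. Implement with TDD: write a failing test, make it pass, refactor.
3. Verify: run the full test suite.
4. Commit with a descriptive message.
5. Report back: what was implemented, files changed, test results.
```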
CRITICAL: Send a single message containing 3 Task tool calls so all reviewers execute simultaneously.
| Reviewer | Model | Context |
|---|---|---|
| code-reviewer | opus | WHAT_WAS_IMPLEMENTED, PLAN, BASE_SHA, HEAD_SHA |
| business-logic-reviewer | opus | Same context |
| security-reviewer | opus | Same context |
Each returns: Strengths, Issues (Critical/High/Medium/Low/Cosmetic), Assessment (PASS/FAIL)
Aggregate all issues by severity across all 3 reviewers.
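The aggregation step can be sketched in Python (the issue records and reviewer output shape here are illustrative — real reviewers return free-form reports that must be parsed first):

```python
from collections import defaultdict

def aggregate_issues(reviews):
    """Group issues from all reviewers by severity, keeping reviewer attribution.

    `reviews` maps reviewer name -> list of (severity, description) pairs.
    Returns (issues_by_severity, needs_fix) where needs_fix is True while
    any Critical/High/Medium issue remains.
    """
    by_severity = defaultdict(list)
    for reviewer, issues in reviews.items():
        for severity, description in issues:
            by_severity[severity].append((reviewer, description))
    needs_fix = any(by_severity[s] for s in ("Critical", "High", "Medium"))
    return dict(by_severity), needs_fix

# Example: one Critical issue forces a fix round.
reviews = {
    "code-reviewer": [("Low", "extract token logic")],
    "security-reviewer": [("Critical", "hardcoded secret")],
    "business-logic-reviewer": [],
}
issues, needs_fix = aggregate_issues(reviews)
print(needs_fix)  # → True
```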
| Severity | Action |
|---|---|
| Critical/High/Medium | Dispatch fix subagent → Re-run all 3 reviewers → Repeat until clear |
| Low | Add # TODO(review): [issue] - reviewer, date, Severity: Low |
| Cosmetic | Add # FIXME(nitpick): [issue] - reviewer, date, Severity: Cosmetic |
Commit TODO/FIXME comments with fixes.
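As a concrete illustration, deferred Low/Cosmetic issues might be annotated in code like this (the function, reviewer attribution, and date are hypothetical):

```python
# TODO(review): Extract token generation into a helper - code-reviewer, 2025-06-01, Severity: Low
# FIXME(nitpick): Rename `tok` to `token` - code-reviewer, 2025-06-01, Severity: Cosmetic
def issue_token(user_id: str) -> str:
    tok = f"session-{user_id}"
    return tok

print(issue_token("42"))  # → session-42
```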
After all Critical/High/Medium issues are resolved for the current task:
Same pattern as Step 3 but reviewing entire implementation (all tasks, full BASE_SHA→HEAD_SHA range). Aggregate, fix, re-run until all 3 PASS.
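The review-fix loop described above (run reviewers, fix, re-run until all PASS) can be sketched as a driver in Python — a hypothetical control-flow illustration, not this skill's actual implementation, which dispatches real subagents:

```python
def review_loop(run_reviewers, dispatch_fix, max_rounds=5):
    """Repeat review → fix until every reviewer returns PASS.

    `run_reviewers` returns a dict of reviewer name -> "PASS" or "FAIL";
    `dispatch_fix` sends failing reports to a fix subagent.
    """
    for _ in range(max_rounds):
        results = run_reviewers()
        if all(verdict == "PASS" for verdict in results.values()):
            return True
        dispatch_fix(results)  # fix subagent addresses the failures
    return False  # gave up after max_rounds

# Stub: reviewers fail the first round, pass the second.
calls = {"n": 0}
def fake_reviewers():
    calls["n"] += 1
    return {"code-reviewer": "PASS" if calls["n"] > 1 else "FAIL",
            "security-reviewer": "PASS",
            "business-logic-reviewer": "PASS"}

print(review_loop(fake_reviewers, lambda results: None))  # → True
```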
After final review passes:
Task 1: Implement → All 3 reviewers PASS → Mark complete.
Task 2: Implement → Review finds: Critical (hardcoded secret), High (missing password reset, no rate limiting), Low (extract token logic) → Dispatch fix subagent → Re-run reviewers → All PASS → Add TODO for Low → Mark complete.
Final: All 3 reviewers PASS entire implementation → Done.
Why parallel: 3x faster, all feedback at once, TODO/FIXME tracks tech debt.
| vs. | Benefits |
|---|---|
| Manual execution | Fresh context per task, TDD enforced, parallel-safe |
| Executing Plans | Same session (no handoff), continuous progress, automatic review |
Cost: More invocations, but catches issues early (cheaper than debugging later).
Never:
Always:
model: "opus" for each reviewer

If subagent fails task:
Required workflow skills:
Subagents must use:
Alternative workflow:
See reviewer agent definitions: code-reviewer (agents/code-reviewer.md), security-reviewer (agents/security-reviewer.md), business-logic-reviewer (agents/business-logic-reviewer.md)