name: dev-implementation
description: |
  Gate 0 of the development cycle. Executes code implementation using the
  appropriate specialized agent based on task content and project language.
  Handles both tasks with subtasks (step-by-step) and tasks without (TDD
  autonomous). Follows project standards defined in docs/PROJECT_RULES.md.
trigger: |
skip_when: |
NOT_skip_when: |
  - "Code already exists" → DELETE it. TDD is test-first.
  - "Simple feature" → Simple ≠ exempt. TDD for all.
  - "Time pressure" → TDD saves time. No shortcuts.
  - "PROJECT_RULES.md doesn't require" → Ring ALWAYS requires TDD.
sequence:
  before: [dev-devops]
related:
  complementary: [dev-cycle, test-driven-development, requesting-code-review]
  similar: [subagent-driven-development, executing-plans]
agent_selection:
  criteria:
    - pattern: "*.go"
      keywords: ["go.mod", "golang", "Go"]
      agent: "backend-engineer-golang"
    - pattern: "*.ts"
      keywords: ["express", "fastify", "nestjs", "backend", "api", "server"]
      agent: "backend-engineer-typescript"
    - pattern: "*.tsx"
      keywords: ["react", "next", "frontend", "component", "page"]
      agent: "frontend-bff-engineer-typescript"
    - pattern: "*.css|*.scss"
      keywords: ["design", "visual", "aesthetic", "styling", "ui"]
      agent: "frontend-designer"
  fallback: "ASK_USER"  # Do NOT assume language - ask user
  detection_order:
    - "Check task.type field in tasks.md"
    - "Look for file patterns in task description"
    - "Detect project language from go.mod/package.json"
    - "Match keywords in task title/description"
    - "If no match → ASK USER (do not assume)"
  on_detection_failure: |
    If language cannot be detected, use AskUserQuestion:
    Question: "Could not detect project language. Which agent should implement this task?"
    Options:
    - "Go Backend" → backend-engineer-golang
    - "TypeScript Backend" → backend-engineer-typescript
    - "TypeScript Frontend" → frontend-bff-engineer-typescript
    - "Frontend Design" → frontend-designer
    NEVER assume Go. Wrong agent = wrong patterns = rework.
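The agent-selection rules above can be sketched as a small matcher. This is an illustrative sketch, not part of the skill's API: the `Criterion` type, `selectAgent` function, and the trimmed keyword lists are hypothetical, and file patterns are approximated as regular expressions.

```typescript
// Hypothetical sketch of agent_selection: first criterion whose file
// pattern or keywords match wins; otherwise fall back to ASK_USER.
type Criterion = { pattern: RegExp; keywords: string[]; agent: string };

const criteria: Criterion[] = [
  { pattern: /\.go$/, keywords: ["go.mod", "golang"], agent: "backend-engineer-golang" },
  { pattern: /\.tsx$/, keywords: ["react", "next", "frontend", "component", "page"], agent: "frontend-bff-engineer-typescript" },
  { pattern: /\.ts$/, keywords: ["express", "fastify", "nestjs", "backend", "api", "server"], agent: "backend-engineer-typescript" },
  { pattern: /\.(css|scss)$/, keywords: ["design", "visual", "styling", "ui"], agent: "frontend-designer" },
];

// Returns the matched agent, or "ASK_USER" when nothing matches (never assume Go).
function selectAgent(files: string[], description: string): string {
  const text = description.toLowerCase();
  for (const c of criteria) {
    if (files.some((f) => c.pattern.test(f)) || c.keywords.some((k) => text.includes(k))) {
      return c.agent;
    }
  }
  return "ASK_USER";
}
```

Note that `.tsx` is checked before `.ts` so React files are not misrouted to the backend agent, and the fallback mirrors the `fallback: "ASK_USER"` rule rather than guessing a language.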
verification:
  automated:
    - command: "go build ./... 2>&1 | grep -c 'error'"
      description: "Go code compiles"
      success_pattern: "^0$"
    - command: "npm run build 2>&1 | grep -c 'error'"
      description: "TypeScript compiles"
      success_pattern: "^0$"
  manual:
    - "TDD RED phase failure output captured before implementation"
    - "Implementation follows project standards from PROJECT_RULES.md"
    - "No TODO comments without issue references"
examples:
See CLAUDE.md for canonical gate requirements.
This skill executes the implementation phase of the development cycle. It:
See shared-patterns/shared-pressure-resistance.md for universal pressure scenarios.
TDD-specific note: If code exists before test, DELETE IT. No exceptions. No "adapting". No "reference". ALL code gets TDD, not just most of it.
Definition: Time-boxed throwaway experiment to learn unfamiliar APIs/libraries. NOT for production code.
When Spike Is Legitimate:
Rules (NON-NEGOTIABLE):
When Spike Is NOT Acceptable:
After Spike Completes:
Spike vs Implementation:
| Spike (Learning) | TDD Implementation |
|---|---|
| Max 90 minutes total | No time limit |
| DELETE after | COMMIT to repo |
| No tests required | Test-first MANDATORY |
| Throwaway exploration | Production code |
| "How does X work?" | "Implement feature Y" |
Red Flag: If you want to keep spike code, you're not spiking - you're bypassing TDD. DELETE IT.
Question: When refactoring existing tests, does TDD apply?
Answer: Depends on what "refactoring" means:
Scenario A: Changing Test Implementation (TDD Does NOT Apply)
Example:
// Before: flaky test with timing issues
it('should process async task', async () => {
  const result = await processTask();
  await new Promise(resolve => setTimeout(resolve, 100)); // Bad: arbitrary wait
  expect(result).toBe('done');
});

// Refactor: fix flakiness (no TDD needed for test code itself)
it('should process async task', async () => {
  const result = await processTask();
  expect(result).toBe('done'); // Better: no timing dependency
});
Scenario B: Changing Implementation Code Covered By Tests (TDD APPLIES)
Example:
// Refactoring: Extract method from large function
// 1. Tests already exist and pass ✓
// 2. Extract method (refactor implementation)
// 3. Run tests - verify still pass ✓
// No new test needed if behavior unchanged
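A minimal TypeScript sketch of Scenario B, with hypothetical names (`totalWithTax` and `applyTax` are illustrative, not from this project): the existing passing tests are what verify the refactor.

```typescript
// Before refactor (already covered by passing tests):
//   function totalWithTax(prices: number[]): number {
//     const subtotal = prices.reduce((sum, p) => sum + p, 0);
//     return subtotal * 1.2;
//   }

// After refactor: extract a helper. Behavior is unchanged, so no new
// test is needed - the existing tests must still pass.
function applyTax(subtotal: number): number {
  return subtotal * 1.2;
}

function totalWithTax(prices: number[]): number {
  const subtotal = prices.reduce((sum, p) => sum + p, 0);
  return applyTax(subtotal);
}
```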
Scenario C: Adding Test Coverage for Untested Code (TDD APPLIES)
Summary:
See shared-patterns/shared-anti-rationalization.md for universal anti-rationalizations (including TDD section).
Implementation-specific rationalizations:
| Excuse | Reality |
|---|---|
| "Keep code as reference" | Reference = adapting = testing-after. Delete means DELETE. No "reference", no "backup", no "just in case". |
| "Save to branch, delete locally" | Saving anywhere = keeping. Delete from everywhere. |
| "Look at old code for guidance" | Looking leads to adapting. Delete means don't look either. |
See shared-patterns/shared-red-flags.md for universal red flags (including TDD section).
If you catch yourself thinking ANY of those patterns, STOP immediately. DELETE any existing code. Start with failing test.
DELETE means:
- ✓ `git checkout -- file.go` (discard changes)
- ✓ `rm file.go` (remove file)
- ✗ `git stash` (that's keeping)
- ✗ `mv file.go file.go.bak` (that's keeping)

DELETE verification:
# After deletion, this should show nothing:
git diff HEAD -- <file>
ls <file> # Should error: "No such file or directory"
If you can retrieve the code, you didn't delete it.
Mental reference is a subtle form of "keeping" that violates TDD:
| Type | Example | Why It's Wrong | Required Action |
|---|---|---|---|
| Memory | "I remember the approach I used" | You'll unconsciously reproduce patterns | Start fresh with new design |
| Similar code | "Let me check how auth works elsewhere" | Looking at YOUR prior work = adapting | Read external examples only |
| Mental model | "I know the structure already" | Structure should emerge from tests | Let tests drive the design |
| Clipboard | "I copied the method signature" | Clipboard content = keeping | Type from scratch |
Anti-Rationalization for Mental Reference:
See shared-patterns/shared-anti-rationalization.md for universal anti-rationalizations.
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "I deleted the code but remember it" | Memory = reference. You'll reproduce flaws. | Design fresh from requirements |
| "Looking at similar code for patterns" | If it's YOUR code, that's adapting. | Only external examples allowed |
| "I already know the approach" | Knowing = bias. Let tests discover approach. | Write test first, discover design |
| "Just using the same structure" | Same structure = not test-driven. | Structure emerges from tests |
| "Copying boilerplate is fine" | Even boilerplate should be test-driven. | Generate boilerplate via tests |
Valid external references:
Generated code (protobuf, OpenAPI, ORM) has special rules:
| Type | TDD Required? | Rationale |
|---|---|---|
| protobuf `*.pb.go` | NO | Generated from .proto - test the .proto |
| swagger/openapi client | NO | Generated from spec - test the spec |
| ORM models | NO | Generated from schema - test business logic using them |
| Your code using generated code | YES | Your logic needs TDD |
Rule: Test what you write. Don't test what's generated. But test your usage of generated code.
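The rule above can be shown with a hedged TypeScript sketch. `GeneratedUser` stands in for a model your codegen would emit, and `isAdult` is a hypothetical piece of business logic; neither name comes from this project.

```typescript
// Stand-in for a generated model (e.g. from OpenAPI/ORM codegen).
// The generated type itself gets NO tests - you didn't write it.
interface GeneratedUser {
  id: string;
  birthYear: number;
}

// YOUR code using the generated type - this is what gets TDD.
function isAdult(user: GeneratedUser, currentYear: number): boolean {
  return currentYear - user.birthYear >= 18;
}
```

The test targets `isAdult`, not `GeneratedUser`: you test your usage of generated code, never the generator's output.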
See shared-patterns/shared-orchestrator-principle.md for full ORCHESTRATOR principle, role separation, forbidden/required actions, agent responsibilities (observability), library requirements, and anti-rationalization table.
Summary: You orchestrate. Agents execute. Agents implement observability (logs, traces). If using Read/Write/Edit/Bash on source code → STOP. Dispatch agent.
See shared-patterns/template-tdd-prompts.md for observability requirements to include in dispatch prompts.
HARD GATE: docs/PROJECT_RULES.md must exist (Read tool, NOT WebFetch). Not found → STOP with blocker.
Required: Tasks imported from dev-cycle, Agent selected (backend-engineer-golang/-typescript, frontend-bff-engineer-typescript, frontend-designer)
Note: PROJECT_RULES.md validated by dev-cycle Step 0, but Gate 0 re-checks (defense-in-depth).
Gate 0 is split into two explicit sub-phases with a HARD GATE between them:
┌─────────────────────────────────────────────────────────────────┐
│ GATE 0.1: TDD-RED │
│ Write failing test → Run test → Capture FAILURE output │
│ │
│ ═══════════════════ HARD GATE ═══════════════════════════════ │
│ CANNOT proceed to 0.2 until failure_output is captured │
│ ════════════════════════════════════════════════════════════ │
│ │
│ GATE 0.2: TDD-GREEN │
│ Implement minimal code → Run test → Verify PASS │
└─────────────────────────────────────────────────────────────────┘
State tracking:
{
  "tdd_red": {
    "status": "completed",
    "test_file": "path/to/test.go",
    "failure_output": "FAIL: TestFoo - expected X got nil"
  },
  "tdd_green": {
    "status": "pending"
  }
}
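The hard gate between 0.1 and 0.2 reduces to a simple predicate over that state. A sketch in TypeScript (the `GateState` shape follows the JSON above; the function name `canStartGreen` is illustrative):

```typescript
interface GateState {
  tdd_red: { status: string; test_file?: string; failure_output?: string };
  tdd_green: { status: string };
}

// HARD GATE: TDD-GREEN may only start once TDD-RED is completed AND real
// failure output was captured. Empty failure_output keeps the gate closed.
function canStartGreen(state: GateState): boolean {
  return (
    state.tdd_red.status === "completed" &&
    typeof state.tdd_red.failure_output === "string" &&
    state.tdd_red.failure_output.trim().length > 0
  );
}
```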
Required: Technical design (docs/plans/YYYY-MM-DD-{feature}.md), selected agent, PROJECT_RULES.md | Optional: PRD/TRD, existing patterns
Verify: Plan complete ✓ | Agent matches stack ✓ | Environment ready ✓ | Git branch clean ✓
Purpose: Write a test that captures expected behavior and FAILS.
Dispatch: Task(subagent_type: "{agent}", model: "opus") <!-- {agent} MUST be fully qualified: ring-{plugin}:{component} -->
See shared-patterns/template-tdd-prompts.md for the TDD-RED prompt template.
Agent returns: Test file + Failure output
On success: Store tdd_red.failure_output → Proceed to Gate 0.2
PREREQUISITE: tdd_red.status == "completed" with valid failure_output
Purpose: Write minimal code to make the failing test pass.
Dispatch: Task(subagent_type: "{agent}", model: "opus") <!-- {agent} MUST be fully qualified: ring-{plugin}:{component} -->
See shared-patterns/template-tdd-prompts.md for the TDD-GREEN prompt template (includes observability requirements).
Agent returns: Implementation + Pass output + Commit SHA
On success: Store tdd_green.test_pass_output → Gate 0 complete
| Approach | When to Use | Process |
|---|---|---|
| Subagent-Driven (recommended) | Real-time feedback needed, human intervention | Dispatch agent → Review → Code review at checkpoints → Repeat |
| Parallel Session | Well-defined plans, batch execution | New terminal in worktree → executing-plans with plan path |
Every 3-5 tasks: Use requesting-code-review → dispatch 3 reviewers in parallel (code, business-logic, security)
Severity handling: Critical/High/Medium → Fix immediately, re-run all | Low → TODO(review): | Cosmetic → FIXME(nitpick):
Proceed only when: Zero Critical/High/Medium + all Low/Cosmetic have comments
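The proceed rule can be sketched as a predicate over review findings. This is an assumption-laden illustration: `Finding`, `hasComment` (meaning a `TODO(review)` or `FIXME(nitpick)` annotation exists), and `canProceed` are hypothetical names, not part of the review tooling.

```typescript
type Severity = "critical" | "high" | "medium" | "low" | "cosmetic";

interface Finding {
  severity: Severity;
  hasComment: boolean; // TODO(review) / FIXME(nitpick) annotation present
}

// Proceed only with zero Critical/High/Medium findings and every
// Low/Cosmetic finding annotated with a tracking comment.
function canProceed(findings: Finding[]): boolean {
  const blocking = findings.some((f) =>
    ["critical", "high", "medium"].includes(f.severity)
  );
  const allAnnotated = findings
    .filter((f) => f.severity === "low" || f.severity === "cosmetic")
    .every((f) => f.hasComment);
  return !blocking && allAnnotated;
}
```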
Track: Tasks (completed/in-progress), Files (created/modified), Code Review Status (checkpoint N: PASS/PENDING), Decisions, Issues+Resolutions.
Record for each significant decision: Task, Context (why it came up), Chosen Approach, Alternatives, Rationale, Impact. Focus on: deviations from design, performance optimizations, error handling, API changes, test coverage.
Use the agent selected in Gate 1 based on technology:
| Stack | Agent |
|---|---|
| Go backend | backend-engineer-golang |
| TypeScript backend | backend-engineer-typescript |
| React/Next.js frontend | frontend-bff-engineer-typescript |
| BFF layer (Next.js API Routes) | frontend-bff-engineer-typescript |
Cycle: RED (failing test) → GREEN (minimal code to pass) → REFACTOR (clean up) → COMMIT (atomic per task)
TDD details: PROJECT_RULES.md (project-specific config) + agent knowledge. Agent enforces when configured.
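A compressed TypeScript illustration of the cycle, assuming a hypothetical `slugify` feature (the name and behavior are invented for illustration only):

```typescript
// RED - write the failing test first. It fails because slugify
// does not exist yet:
//   expect(slugify("Hello World")).toBe("hello-world");

// GREEN - minimal implementation to make that single test pass:
function slugify(title: string): string {
  return title.toLowerCase().trim().replace(/\s+/g, "-");
}

// REFACTOR - clean up with tests still green, then COMMIT atomically.
```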
| Issue | Resolution Steps |
|---|---|
| Test Won't Pass | Verify test logic → Check implementation matches expectations → Check imports/deps → Check environment → If stuck: document + escalate |
| Design Change Needed | Document issue → Propose alternative → Update plan if approved → Implement new approach → Note in decisions |
| Performance Concerns | Document concern → Add benchmark test → Implement correctness first → Optimize with benchmarks → Document optimization |
Follow PROJECT_RULES.md for: Naming (vars, functions, files), Code structure (dirs, modules), Error handling (types, logging), Testing (location, naming, coverage), Documentation (comments, API docs), Git (commits, branches)
No PROJECT_RULES.md? STOP with blocker: "Cannot implement without project standards. REQUIRED: docs/PROJECT_RULES.md"
Gate 0 Handoff contents:
| Section | Content |
|---|---|
| Status | COMPLETE/PARTIAL |
| Files | Created: {list}, Modified: {list} |
| Environment Needs | Dependencies, env vars, services |
| Ready for DevOps | Code compiles ✓, Tests pass ✓, Review passed ✓, No Critical/High/Medium ✓ |
| DevOps Tasks | Dockerfile update (Y/N), docker-compose update (Y/N), new env vars, new services |
Base metrics per shared-patterns/output-execution-report.md.
| Metric | Value |
|---|---|
| Duration | Xm Ys |
| Iterations | N |
| Result | PASS/FAIL/PARTIAL |