This skill should be used when the user asks about "SPAWN REQUEST format", "agent reports", "agent coordination", "parallel agents", "report format", "agent communication", or needs to understand how agents coordinate within the sprint system.
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Sprint coordinates multiple specialized agents through structured communication patterns. This skill covers the SPAWN REQUEST format, report structure, parallel execution, and inter-agent coordination.
┌─────────────────────────────────────┐
│ Sprint Orchestrator │
│ (parses requests, spawns agents) │
└─────────────────┬───────────────────┘
│
▼
┌─────────────────────────────────────┐
│ Project Architect │
│ (plans, creates specs, coordinates)│
└─────────────────┬───────────────────┘
│ SPAWN REQUEST
▼
┌─────────────────────────────────────┐
│ Implementation & Test Agents │
│ (execute, report, never spawn) │
└─────────────────────────────────────┘
Key rule: Only the orchestrator spawns agents. Agents communicate by returning structured messages.
The architect requests agent spawning using a specific format:
## SPAWN REQUEST
- python-dev
- nextjs-dev
- cicd-agent
The orchestrator parses SPAWN REQUEST blocks by looking for `## SPAWN REQUEST` on its own line, followed by a bulleted list of agent names.

Request during Phase 2:
## SPAWN REQUEST
- python-dev
- nextjs-dev
Never include test agents in implementation requests.
Request during Phase 3:
## SPAWN REQUEST
- qa-test-agent
- ui-test-agent
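As an illustration, the block parsing could be sketched like this. This is a hedged sketch only: `parse_spawn_request` is a hypothetical helper, not the plugin's actual code.

```python
def parse_spawn_request(message: str) -> list[str]:
    """Extract agent names from the first SPAWN REQUEST block in a message.

    Assumes the format shown above: a '## SPAWN REQUEST' heading on its own
    line, followed by '- ' bullet items naming agents.
    """
    agents: list[str] = []
    in_block = False
    for line in message.splitlines():
        stripped = line.strip()
        if stripped == "## SPAWN REQUEST":
            in_block = True
        elif in_block and stripped.startswith("- "):
            agents.append(stripped[2:].strip())
        elif in_block and stripped:
            break  # any other non-empty line ends the block
    return agents

agents = parse_spawn_request("## SPAWN REQUEST\n- qa-test-agent\n- ui-test-agent")
```

A message with no block simply yields an empty list, so the orchestrator can treat "no spawn request" and "empty spawn request" the same way.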
The ui-test-agent runs in AUTOMATED or MANUAL mode based on specs.md:
- `UI Testing Mode: automated` (default) → runs test scenarios automatically
- `UI Testing Mode: manual` → opens browser for user to test, waits for tab close

All agents return structured reports. The orchestrator saves these as files.
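The UI Testing Mode lookup in specs.md might be sketched as follows. This is an assumption-laden sketch: the exact key spelling `UI Testing Mode:` is taken from the bullets above, and the real agent may parse specs.md differently.

```python
def ui_testing_mode(specs_text: str) -> str:
    """Return 'manual' or 'automated' (the default) from specs.md content."""
    for line in specs_text.splitlines():
        if line.strip().lower().startswith("ui testing mode:"):
            value = line.split(":", 1)[1].strip().lower()
            if value in ("automated", "manual"):
                return value
    return "automated"
```

Falling back to `automated` matches the documented default when specs.md omits the key.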
Every report includes:
| Section | Purpose |
|---|---|
| CONFORMITY STATUS | YES/NO - did agent follow specs? |
| SUMMARY | Brief outcome description |
| DEVIATIONS | Justified departures from specs |
| FILES CHANGED | List of modified files |
| ISSUES | Problems encountered |
| NOTES FOR ARCHITECT | Suggestions, observations |
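A structural check for these sections might look like the sketch below. Note that the QA and UI test report examples further down use variant section names, so a real validator would need per-agent section lists; this sketch covers only the common layout in the table above.

```python
# Section headers every implementation report is expected to carry.
REQUIRED_SECTIONS = (
    "CONFORMITY STATUS",
    "SUMMARY",
    "DEVIATIONS",
    "FILES CHANGED",
    "ISSUES",
    "NOTES FOR ARCHITECT",
)

def missing_sections(report: str) -> list[str]:
    """List required '### ...' headers absent from a report."""
    return [s for s in REQUIRED_SECTIONS if f"### {s}" not in report]
```

An empty return value means the report is structurally complete.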
## BACKEND REPORT
### CONFORMITY STATUS: YES
### SUMMARY
Implemented user authentication endpoints per api-contract.md.
### FILES CHANGED
- backend/app/routers/auth.py (new)
- backend/app/models/user.py (modified)
- backend/app/schemas/auth.py (new)
- backend/alembic/versions/001_add_users.py (new)
### DEVIATIONS
None.
### ISSUES
None.
### NOTES FOR ARCHITECT
- Consider adding rate limiting middleware in next sprint
- Password hashing uses bcrypt (industry standard)
## QA REPORT
### SUITE STATUS
- New test files: 2
- Updated test files: 1
- Test framework(s): pytest
- Test command(s): pytest tests/api/
### API CONFORMITY STATUS: YES
### SUMMARY
- Total endpoints in contract: 5
- Endpoints covered by automated tests: 5
- Endpoints with failing tests: 0
### FAILURES AND DEVIATIONS
None.
### TEST EXECUTION
- Tests run: YES
- Result: ALL PASS
- Notes: 23 tests passed in 1.4s
### NOTES FOR ARCHITECT
- Added edge case tests for password validation
## UI TEST REPORT
### MODE
AUTOMATED
### SUMMARY
- Total tests run: 8
- Passed: 7
- Failed: 1
- Session duration: 45s
### COVERAGE
- Scenarios covered:
- Login with valid credentials
- Login with invalid password
- Registration flow
- Password reset request
- Not covered (yet):
- Email verification flow (requires email testing setup)
### FAILURES
- Scenario: Registration validation
- Path/URL: /register
- Symptom: Error message not displayed
- Expected: "Email already exists" message
- Actual: Form submits without feedback
### CONSOLE ERRORS
None.
### NOTES FOR ARCHITECT
- Registration error handling needs frontend fix
Spawn all implementation agents simultaneously:
Task: python-dev ─────────▶ Backend Report
Task: nextjs-dev ─────────▶ Frontend Report
Task: cicd-agent ─────────▶ CICD Report
Use a single message with multiple Task tool calls.
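Conceptually, the fan-out behaves like concurrent coroutines. The sketch below uses asyncio with a hypothetical `spawn_agent` stand-in for a Task tool call; the real mechanism is multiple tool calls in one message, not Python code.

```python
import asyncio

async def spawn_agent(name: str) -> str:
    # Hypothetical stand-in for a Task tool call returning the agent's report.
    await asyncio.sleep(0)  # the agent's work happens here
    return f"{name} report"

async def phase_2() -> list[str]:
    # All implementation agents start together; results arrive as a batch.
    return await asyncio.gather(
        spawn_agent("python-dev"),
        spawn_agent("nextjs-dev"),
        spawn_agent("cicd-agent"),
    )

reports = asyncio.run(phase_2())
```

`asyncio.gather` preserves argument order, so reports map back to agents deterministically.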
QA runs first, then UI tests:
Task: qa-test-agent ─────────▶ QA Report
│
▼
Task: ui-test-agent ─────────▶ UI Test Report
Task: diagnostics ─────────▶ Diagnostics Report
(parallel with ui-test)
UI test and diagnostics agents run in parallel with each other.
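The test-phase ordering can be sketched with asyncio, using a hypothetical `spawn_agent` coroutine in place of a Task tool call; this is an illustration of the sequencing, not the plugin's implementation.

```python
import asyncio

async def spawn_agent(name: str) -> str:
    # Hypothetical stand-in for a Task tool call returning the agent's report.
    await asyncio.sleep(0)
    return f"{name} report"

async def phase_3() -> list[str]:
    # QA must finish before any other test agent starts.
    qa_report = await spawn_agent("qa-test-agent")
    # UI tests and diagnostics then run in parallel with each other.
    ui_report, diag_report = await asyncio.gather(
        spawn_agent("ui-test-agent"),
        spawn_agent("diagnostics"),
    )
    return [qa_report, ui_report, diag_report]

test_reports = asyncio.run(phase_3())
```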
Agents also share state through files:

- status.md (architect only)
- project-map.md (architect only)

When agents need to coordinate beyond reports, they use these shared files.
Orchestrator saves reports as:
.claude/sprint/[N]/[slug]-report-[iteration].md
Slugs:
- python-dev → backend
- nextjs-dev → frontend
- qa-test-agent → qa
- ui-test-agent → ui-test

The decision-maker:
- project-architect: plans, creates specs, coordinates agents
Technology-specific builders:
- python-dev: Python/FastAPI backend
- nextjs-dev: Next.js frontend
- cicd-agent: CI/CD pipelines
- allpurpose-agent: Any other technology

Quality validators:
- qa-test-agent: API and unit tests
- ui-test-agent: Browser-based E2E tests

The architect signals sprint completion:
FINALIZE
Phase 5 complete. Sprint [N] is finalized.
The orchestrator detects this and exits the iteration loop.
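Detection can be as simple as scanning the architect's returned message for the keyword on its own line. This is a sketch; the actual orchestrator's matching rule may be stricter.

```python
def sprint_finalized(architect_message: str) -> bool:
    """True if the architect's message contains a standalone FINALIZE line."""
    return any(line.strip() == "FINALIZE"
               for line in architect_message.splitlines())
```

Matching a whole line (rather than a substring) avoids false positives when the word merely appears inside a sentence.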
For detailed agent definitions and report formats, see the agent files in the plugin's agents/ directory.