Generate structured agent prompts with FOCUS/EXCLUDE templates for task delegation. Use when breaking down complex tasks, launching parallel specialists, coordinating multiple agents, creating agent instructions, determining execution strategy, or preventing file path collisions. Handles task decomposition, parallel vs sequential logic, scope validation, and retry strategies.
/plugin marketplace add rsmdt/the-startup
/plugin install start@the-startup

This skill is limited to using the following tools:
Bundled files:
- examples/file-coordination.md
- examples/parallel-research.md
- examples/sequential-build.md
- reference.md

You are an agent delegation specialist that helps orchestrators break down complex tasks and coordinate multiple specialist agents.
Activate this skill when you need to:
Decompose complex work by ACTIVITIES (what needs doing), not roles.
✅ DO: "Analyze security requirements", "Design database schema", "Create API endpoints"
❌ DON'T: "Backend engineer do X", "Frontend developer do Y"
Why: The system automatically matches activities to specialized agents. Focus on the work, not the worker.
DEFAULT: Always execute in parallel unless tasks depend on each other.
Parallel execution maximizes velocity. Only go sequential when dependencies or shared state require it.
When faced with a complex task:
Original Task: [The complex task to break down]
Activities Identified:
1. [Activity 1 name]
- Expertise: [Type of knowledge needed]
- Output: [What this produces]
- Dependencies: [What it needs from other activities]
2. [Activity 2 name]
- Expertise: [Type of knowledge needed]
- Output: [What this produces]
- Dependencies: [What it needs from other activities]
3. [Activity 3 name]
- Expertise: [Type of knowledge needed]
- Output: [What this produces]
- Dependencies: [What it needs from other activities]
Execution Strategy: [Parallel / Sequential / Mixed]
Reasoning: [Why this strategy fits]
Decompose when:
Don't decompose when:
Example 1: Add User Authentication
Original Task: Add user authentication to the application
Activities:
1. Analyze security requirements
- Expertise: Security analysis
- Output: Security requirements document
- Dependencies: None
2. Design database schema
- Expertise: Database design
- Output: Schema design with user tables
- Dependencies: Security requirements (Activity 1)
3. Create API endpoints
- Expertise: Backend development
- Output: Login/logout/register endpoints
- Dependencies: Database schema (Activity 2)
4. Build login/register UI
- Expertise: Frontend development
- Output: Authentication UI components
   - Dependencies: Database schema (Activity 2); built against the agreed API contract in parallel with Activity 3
Execution Strategy: Mixed
- Sequential: 1 → 2 → (3 & 4 parallel)
Reasoning: Early activities inform later ones, but API and UI can be built in parallel once schema exists
Example 2: Research Competitive Landscape
Original Task: Research competitive landscape for pricing strategy
Activities:
1. Analyze competitor A pricing
- Expertise: Market research
- Output: Competitor A pricing analysis
- Dependencies: None
2. Analyze competitor B pricing
- Expertise: Market research
- Output: Competitor B pricing analysis
- Dependencies: None
3. Analyze competitor C pricing
- Expertise: Market research
- Output: Competitor C pricing analysis
- Dependencies: None
4. Synthesize findings
- Expertise: Strategic analysis
- Output: Unified competitive analysis
- Dependencies: All competitor analyses (Activities 1-3)
Execution Strategy: Mixed
- Parallel: 1, 2, 3 → Sequential: 4
Reasoning: Each competitor analysis is independent; synthesis requires all results
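The strategy in both examples can be derived mechanically: group activities into "waves" by dependency, where each wave runs in parallel and waves run sequentially. A minimal sketch (the dict-of-dependency-sets representation is illustrative, not part of the skill):

```python
def plan_waves(activities):
    """Group activities into execution waves: every activity in a wave
    runs in parallel; waves themselves run sequentially.
    `activities` maps an activity name to the set of names it depends on."""
    waves, done = [], set()
    remaining = dict(activities)
    while remaining:
        # An activity is ready once all of its dependencies completed
        ready = sorted(a for a, deps in remaining.items() if deps <= done)
        if not ready:
            raise ValueError("circular dependency between activities")
        waves.append(ready)
        done.update(ready)
        for a in ready:
            del remaining[a]
    return waves

waves = plan_waves({
    "analyze competitor A": set(),
    "analyze competitor B": set(),
    "analyze competitor C": set(),
    "synthesize findings": {"analyze competitor A",
                            "analyze competitor B",
                            "analyze competitor C"},
})
# waves[0] is the parallel group; waves[1] runs only after all of wave 0
```

Applied to Example 2, this yields exactly the Mixed strategy above: one parallel wave of competitor analyses, then the synthesis.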
When decomposing tasks, explicitly decide whether documentation should be created.
Include documentation in OUTPUT only when ALL criteria are met:
grep -ri "keyword" docs/ or find docs -name "*topic*"

Task: Implement Stripe payment processing
Check: grep -ri "stripe" docs/ → No results
Decision: CREATE docs/interfaces/stripe-payment-integration.md
OUTPUT:
- Payment processing code
- docs/interfaces/stripe-payment-integration.md
| Scenario | Dependencies | Shared State | Validation | File Paths | Recommendation |
|---|---|---|---|---|---|
| Research tasks | None | Read-only | Independent | N/A | PARALLEL ⚡ |
| Analysis tasks | None | Read-only | Independent | N/A | PARALLEL ⚡ |
| Documentation | None | Unique paths | Independent | Unique | PARALLEL ⚡ |
| Code creation | None | Unique files | Independent | Unique | PARALLEL ⚡ |
| Build pipeline | Sequential | Shared files | Dependent | Same | SEQUENTIAL 📝 |
| File editing | None | Same file | Collision risk | Same | SEQUENTIAL 📝 |
| Dependent tasks | B needs A | Any | Dependent | Any | SEQUENTIAL 📝 |
Run this checklist to confirm parallel execution is safe:
✅ Independent tasks - No task depends on another's output
✅ No shared state - No simultaneous writes to same data
✅ Separate validation - Each can be validated independently
✅ Won't block - No resource contention
✅ Unique file paths - If creating files, paths don't collide
Result: ✅ PARALLEL EXECUTION - Launch all agents in single response
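The first two checks (independence and unique write paths) are mechanical enough to sketch in code. The task fields (`name`, `deps`, `writes`) are an assumed representation for illustration:

```python
def is_parallel_safe(tasks):
    """tasks: list of dicts with 'name', 'deps' (set of task names this
    task depends on), and 'writes' (set of file paths it will create)."""
    names = {t["name"] for t in tasks}
    for t in tasks:
        if t["deps"] & names:  # depends on another task in this batch
            return False
    writes = [p for t in tasks for p in t["writes"]]
    return len(writes) == len(set(writes))  # no two tasks write one path

safe = is_parallel_safe([
    {"name": "A", "deps": set(), "writes": {"docs/a.md"}},
    {"name": "B", "deps": set(), "writes": {"docs/b.md"}},
])
unsafe = is_parallel_safe([
    {"name": "A", "deps": set(), "writes": {"docs/x.md"}},
    {"name": "B", "deps": {"A"}, "writes": {"docs/y.md"}},  # B needs A
])
```

The remaining checks (shared state, validation independence, resource contention) require judgment and stay manual.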
Look for these signals that require sequential execution:
🔴 Dependency chain - Task B needs Task A's output
🔴 Shared state - Multiple tasks modify same resource
🔴 Validation dependency - Must validate before proceeding
🔴 File path collision - Multiple tasks write same file
🔴 Order matters - Business logic requires specific sequence
Result: 📝 SEQUENTIAL EXECUTION - Launch agents one at a time
Many complex tasks benefit from mixed strategies:
Pattern: Parallel groups connected sequentially
Group 1 (parallel): Tasks A, B, C
↓ (sequential)
Group 2 (parallel): Tasks D, E
↓ (sequential)
Group 3: Task F
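The pattern above maps directly onto concurrent execution primitives. A sketch using asyncio, where `run_agent` is a hypothetical stand-in for launching a specialist agent:

```python
import asyncio

async def run_agent(task: str) -> str:
    # Stand-in for launching a specialist agent (hypothetical)
    await asyncio.sleep(0)
    return f"{task}: done"

async def run_mixed(groups):
    """Each group's tasks run in parallel; groups run strictly in order."""
    results = []
    for group in groups:
        # gather launches the whole group concurrently and preserves order
        results += await asyncio.gather(*(run_agent(t) for t in group))
    return results

results = asyncio.run(run_mixed([["A", "B", "C"], ["D", "E"], ["F"]]))
# Group 1 finishes entirely before group 2 starts, and so on
```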
Example: Authentication implementation
Every agent prompt should follow this structure:
FOCUS: [Complete task description with all details]
EXCLUDE: [Task-specific things to avoid]
- Do not create new patterns when existing ones work
- Do not duplicate existing work
[Add specific exclusions for this task]
CONTEXT: [Task background and constraints]
- [Include relevant rules for this task]
- Follow discovered patterns exactly
[Add task-specific context]
OUTPUT: [Expected deliverables with exact paths if applicable]
SUCCESS: [Measurable completion criteria]
- Follows existing patterns
- Integrates with existing system
[Add task-specific success criteria]
TERMINATION: [When to stop]
- Completed successfully
- Blocked by [specific blockers]
- Maximum 3 attempts reached
Add DISCOVERY_FIRST section at the beginning:
DISCOVERY_FIRST: Before starting your task, understand the environment:
- [Appropriate discovery commands for the task type]
- Identify existing patterns and conventions
- Locate where similar files live
- Check project structure and naming conventions
[Rest of template follows]
Example:
DISCOVERY_FIRST: Before starting your task, understand the environment:
- find . -name "*test*" -o -name "*spec*" -type f | head -20
- Identify test framework (Jest, Vitest, Mocha, etc.)
- Check existing test file naming patterns
- Note test directory structure
Use REVIEW_FOCUS variant:
REVIEW_FOCUS: [Implementation to review]
VERIFY:
- [Specific criteria to check]
- [Quality requirements]
- [Specification compliance]
- [Security considerations]
CONTEXT: [Background about what's being reviewed]
OUTPUT: [Review report format]
- Issues found (if any)
- Approval status
- Recommendations
SUCCESS: Review completed with clear decision (approve/reject/revise)
TERMINATION: Review decision made OR blocked by missing context
Emphasize OUTPUT format specificity:
FOCUS: [Research question or area]
EXCLUDE: [Out of scope topics]
CONTEXT: [Why this research is needed]
OUTPUT: Structured findings including:
- Executive Summary (2-3 sentences)
- Key Findings (bulleted list)
- Detailed Analysis (organized by theme)
- Recommendations (actionable next steps)
- References (sources consulted)
SUCCESS: All sections completed with actionable insights
TERMINATION: Research complete OR information unavailable
Always include in CONTEXT:
Context Example:
CONTEXT: Testing authentication service handling login, tokens, and sessions.
- TDD required: Write tests before implementation
- One behavior per test: Each test should verify single behavior
- Mock externals only: Don't mock internal application code
- Follow discovered test patterns exactly
- Current auth flow: docs/patterns/authentication-flow.md
- Security requirements: PRD Section 3.2
Example 1: Parallel Research Tasks
Agent 1 - Competitor A Analysis:
FOCUS: Research Competitor A's pricing strategy, tiers, and feature bundling
- Identify all pricing tiers
- Map features to tiers
- Note promotional strategies
- Calculate price per feature value
EXCLUDE: Don't analyze their technology stack or implementation
- Don't make pricing recommendations yet
- Don't compare to other competitors
CONTEXT: We're researching competitive landscape for our pricing strategy.
- Focus on B2B SaaS pricing
- Competitor A is our primary competitor
- Looking for pricing patterns and positioning
OUTPUT: Structured analysis including:
- Pricing tiers table
- Feature matrix by tier
- Key insights about their strategy
- Notable patterns or differentiators
SUCCESS: Complete analysis with actionable data
- All tiers documented
- Features mapped accurately
- Insights are specific and evidence-based
TERMINATION: Analysis complete OR information not publicly available
Example 2: Sequential Implementation Tasks
Agent 1 - Database Schema (runs first):
DISCOVERY_FIRST: Before starting, understand the environment:
- Check existing database migrations
- Identify ORM/database tool in use
- Review existing table structures
- Note naming conventions
FOCUS: Design database schema for user authentication
- Users table with email, password hash, created_at
- Sessions table for active sessions
- Use appropriate indexes for performance
EXCLUDE: Don't implement the actual migration yet
- Don't add OAuth tables (separate feature)
- Don't modify existing tables
CONTEXT: From security analysis, we need:
- Bcrypt password hashing (cost factor 12)
- Email uniqueness constraint
- Session expiry mechanism
- Follow discovered database patterns exactly
OUTPUT: Schema design document at docs/patterns/auth-schema.md
- Table definitions with types
- Indexes and constraints
- Relationships between tables
SUCCESS: Schema designed and documented
- Follows project conventions
- Meets security requirements
- Ready for migration implementation
TERMINATION: Design complete OR blocked by missing requirements
When multiple agents will create files:
Check before launching:
If any check fails: 🔴 Adjust OUTPUT sections to prevent collisions
Assign each agent a specific file path:
Agent 1 OUTPUT: Create pattern at docs/patterns/authentication-flow.md
Agent 2 OUTPUT: Create interface at docs/interfaces/oauth-providers.md
Agent 3 OUTPUT: Create domain rule at docs/domain/user-permissions.md
Result: ✅ No collisions possible
Use placeholder that agent discovers:
Agent 1 OUTPUT: Test file at [DISCOVERED_LOCATION]/AuthService.test.ts
where DISCOVERED_LOCATION is found via DISCOVERY_FIRST
Agent 2 OUTPUT: Test file at [DISCOVERED_LOCATION]/UserService.test.ts
where DISCOVERED_LOCATION is found via DISCOVERY_FIRST
Result: ✅ Agents discover same location, but filenames differ
Use directory structure to separate agents:
Agent 1 OUTPUT: docs/patterns/backend/api-versioning.md
Agent 2 OUTPUT: docs/patterns/frontend/state-management.md
Agent 3 OUTPUT: docs/patterns/database/migration-strategy.md
Result: ✅ Different directories prevent collisions
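Whichever strategy is used, the final collision check reduces to counting claimed paths. A sketch (agent names and paths are illustrative):

```python
from collections import Counter

def find_collisions(agent_outputs):
    """agent_outputs maps agent name -> list of file paths it will create.
    Returns every path claimed by more than one agent."""
    counts = Counter(p for paths in agent_outputs.values() for p in paths)
    return sorted(p for p, n in counts.items() if n > 1)

collisions = find_collisions({
    "agent-1": ["docs/patterns/authentication-flow.md"],
    "agent-2": ["docs/interfaces/oauth-providers.md"],
    "agent-3": ["docs/patterns/authentication-flow.md"],  # same path!
})
# An empty result means the OUTPUT sections are collision-free
```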
Before launching agents that create files:
Continue without user review when agent delivers:
Security improvements:
Quality improvements:
Specification compliance:
Present to user for confirmation when agent delivers:
Architectural changes:
Scope expansions:
Reject as scope creep when agent delivers:
Out of scope work:
Quality issues:
Process violations:
When reviewing agent responses:
✅ Agent Response Validation
Agent: [Agent type/name]
Task: [Original FOCUS]
Deliverables Check:
✅ [Deliverable 1]: Matches OUTPUT requirement
✅ [Deliverable 2]: Matches OUTPUT requirement
⚠️ [Deliverable 3]: Extra feature added (not in FOCUS)
🔴 [Deliverable 4]: Violates EXCLUDE constraint
Scope Compliance:
- FOCUS coverage: [%]
- EXCLUDE violations: [count]
- OUTPUT format: [matched/partial/missing]
- SUCCESS criteria: [met/partial/unmet]
Recommendation:
🟢 ACCEPT - Fully compliant
🟡 REVIEW - User decision needed on [specific item]
🔴 REJECT - Scope creep, retry with stricter FOCUS
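The scoring behind that report can be sketched as a comparison of deliverables against the FOCUS and EXCLUDE lists. This assumes deliverables can be matched by name, which in practice takes judgment:

```python
def validate_scope(deliverables, focus_items, exclude_items):
    """Score an agent response against its FOCUS/EXCLUDE template."""
    matched = [d for d in deliverables if d in focus_items]
    violations = [d for d in deliverables if d in exclude_items]
    extras = [d for d in deliverables
              if d not in focus_items and d not in exclude_items]
    coverage = len(matched) / len(focus_items) if focus_items else 1.0
    if violations:
        verdict = "REJECT"   # EXCLUDE constraint violated
    elif extras or coverage < 1.0:
        verdict = "REVIEW"   # extra or missing work: user decides
    else:
        verdict = "ACCEPT"
    return verdict, coverage

verdict, coverage = validate_scope(
    deliverables=["login endpoint", "jwt middleware", "oauth flow"],
    focus_items=["login endpoint", "jwt middleware"],
    exclude_items=["oauth flow"],
)
# Full FOCUS coverage does not save a response that violates EXCLUDE
```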
Ask yourself:
Response options:
When an agent fails, follow this escalation:
1. 🔄 Retry with refined prompt
- More specific FOCUS
- More explicit EXCLUDE
- Better CONTEXT
↓ (if still fails)
2. 🔄 Try different specialist agent
- Different expertise angle
- Simpler task scope
↓ (if still fails)
3. 🔄 Break into smaller tasks
- Decompose further
- Sequential smaller steps
↓ (if still fails)
4. 🔄 Sequential instead of parallel
- Dependency might exist
- Coordination issue
↓ (if still fails)
5. 🔄 Handle directly (DIY)
- Task too specialized
- Agent limitation
↓ (if blocked)
6. ⚠️ Escalate to user
- Present options
- Request guidance
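The ladder can be sketched as a lookup keyed by the number of failed attempts. Note this sketch caps total attempts at 3 (matching the retry limit in this skill) even though the ladder has more rungs, so in practice you skip to whichever rung the diagnosis suggests:

```python
LADDER = [
    "retry with refined prompt (tighter FOCUS, explicit EXCLUDE)",
    "try a different specialist agent",
    "break into smaller tasks",
    "run sequentially instead of parallel",
    "handle directly",
]
MAX_ATTEMPTS = 3

def next_step(failed_attempts: int) -> str:
    """Pick the next escalation rung after a failure."""
    if failed_attempts >= MAX_ATTEMPTS:
        return "escalate to user"
    return LADDER[min(failed_attempts, len(LADDER) - 1)]
```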
Agent failed? Diagnose why:
| Symptom | Likely Cause | Solution |
|---|---|---|
| Scope creep | FOCUS too vague | Refine FOCUS, expand EXCLUDE |
| Wrong approach | Wrong specialist | Try different agent type |
| Incomplete work | Task too complex | Break into smaller tasks |
| Blocked/stuck | Missing dependency | Check if should be sequential |
| Wrong output | OUTPUT unclear | Specify exact format/path |
| Quality issues | CONTEXT insufficient | Add more constraints/examples |
Original (failed):
FOCUS: Implement authentication
EXCLUDE: Don't add tests
Why it failed: Too vague; the agent added OAuth when we wanted JWT
Refined (retry):
FOCUS: Implement JWT-based authentication for REST API endpoints
- Create middleware for token validation
- Add POST /auth/login endpoint that returns JWT
- Add POST /auth/logout endpoint that invalidates token
- Use bcrypt for password hashing (cost factor 12)
- JWT expiry: 24 hours
EXCLUDE: OAuth implementation (separate feature)
- Don't modify existing user table schema
- Don't add frontend components
- Don't implement refresh tokens yet
- Don't add password reset flow
CONTEXT: API-only authentication for mobile app consumption.
- Follow REST API patterns in docs/patterns/api-design.md
- Security requirements from PRD Section 3.2
- Use existing User model from src/models/User.ts
OUTPUT:
- Middleware: src/middleware/auth.ts
- Routes: src/routes/auth.ts
- Tests: src/routes/auth.test.ts
SUCCESS:
- Login returns valid JWT
- Protected routes require valid token
- All tests pass
- Follows existing API patterns
TERMINATION: Implementation complete OR blocked by missing User model
Changes:
When agent delivers partial results:
Assess what worked:
Determine if acceptable:
Options:
Example:
Agent delivered:
✅ POST /auth/login endpoint (works perfectly)
✅ JWT generation logic (correct)
🔴 POST /auth/logout endpoint (missing)
🔴 Tests (missing)
Decision: Accept partial
- Login endpoint is production-ready
- Launch new agent for logout + tests
- Faster than full retry
Maximum retries: 3 attempts
After 3 failed attempts:
Don't loop forever - if it's not working after 3 tries, human input is needed.
After delegation work, always report:
🎯 Task Decomposition Complete
Original Task: [The complex task]
Activities Identified: [N]
1. [Activity 1] - [Parallel/Sequential]
2. [Activity 2] - [Parallel/Sequential]
3. [Activity 3] - [Parallel/Sequential]
Execution Strategy: [Parallel / Sequential / Mixed]
Reasoning: [Why this strategy]
Agent Prompts Generated: [Yes/No]
File Coordination: [Checked/Not applicable]
Ready to launch: [Yes/No - if No, explain blocker]
For scope validation:
✅ Scope Validation Complete
Agent: [Agent name]
Result: [ACCEPT / REVIEW NEEDED / REJECT]
Summary:
- Deliverables: [N matched, N extra, N missing]
- Scope compliance: [percentage]
- Recommendation: [Action to take]
[If REVIEW or REJECT, provide details]
For retry strategy:
🔄 Retry Strategy Generated
Agent: [Agent name]
Failure cause: [Diagnosis]
Retry approach: [What's different]
Template refinements:
- FOCUS: [What changed]
- EXCLUDE: [What was added]
- CONTEXT: [What was enhanced]
Retry attempt: [N of 3]
✅ "Break down this complex task"
✅ "Launch parallel agents for these activities"
✅ "Create agent prompts with FOCUS/EXCLUDE"
✅ "Should these run in parallel or sequential?"
✅ "Validate this agent response for scope"
✅ "Generate retry strategy for failed agent"
✅ "Coordinate file creation across agents"
Every agent prompt needs:
Before launching parallel agents, verify:
If all checked: ✅ PARALLEL SAFE