Systematic agent creation using evidence-based prompting principles and 4-phase SOP methodology. Use when creating new specialist agents, refining existing agent prompts, or designing multi-agent systems. Applies Chain-of-Thought, few-shot learning, and role-based prompting. Includes validation scripts, templates, comprehensive tests, and production-ready examples.
This skill inherits all available tools. When active, it can use any tool Claude has access to. Additional bundled assets are listed below.
Bundled files:

- CHANGELOG-v3.0.md
- agent-identity-generation-guide.md
- examples/example-1-python-specialist.md
- examples/example-1-specialist.md
- examples/example-2-coordinator.md
- examples/example-3-prompt-engineering.md
- graphviz/agent-creation-process.dot
- graphviz/workflow.dot
- readme.md
- references/agent-creator.md
- references/agent-patterns.md
- references/evidence-based-prompting.md
- references/micro-skill-creator.md
- references/prompting-principles.md
- references/skill-creator-agent.md
- references/skill-forge.md
- resources/scripts/generate_agent.sh
- resources/scripts/validate_agent.py
- resources/templates/agent-spec.yaml
- resources/templates/capabilities.json

Skill metadata:

name: agent-creation
description: Systematic agent creation using evidence-based prompting principles and 4-phase SOP methodology. Use when creating new specialist agents, refining existing agent prompts, or designing multi-agent systems. Applies Chain-of-Thought, few-shot learning, and role-based prompting. Includes validation scripts, templates, comprehensive tests, and production-ready examples.
version: 1.0.0
category: foundry
tags:
self_consistency: "After agent creation, test with the same task multiple times to verify consistent outputs and reasoning quality"
program_of_thought: "Decompose agent creation into: 1) Domain analysis, 2) Capability mapping, 3) Prompt architecture, 4) Test design, 5) Validation, 6) Integration"
plan_and_solve: "Plan: Research domain + identify capabilities -> Execute: Build prompts + test cases -> Verify: Multi-run consistency + edge case handling"
<!-- END SKILL SOP IMPROVEMENT -->

Evidence-based agent creation following best practices for prompt engineering and agent specialization.
Use when creating new specialist agents for specific domains, refining existing agent capabilities, designing multi-agent coordination systems, or implementing role-based agent hierarchies.
Tools: Use resources/scripts/generate_agent.sh for automated generation
Reference: See references/prompting-principles.md for detailed techniques
Validation: Use resources/scripts/validate_agent.py
Tests: Run tests from the tests/ directory (basic, specialist, integration)
Examples: See examples/example-1-python-specialist.md for complete walkthrough
cd resources/scripts
./generate_agent.sh <agent-name> <category> --interactive
Categories: specialist, coordinator, hybrid, research, development, testing, documentation, security
python3 validate_agent.py <path-to-agent-spec.yaml>
Expected Output: All validation checks pass (metadata, role, capabilities, prompting, quality, integration)
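In outline, the validation pass amounts to confirming that every required section of the spec is present before deeper checks run. A minimal sketch, assuming the six section names match the validation checks listed above (the `missing_sections` helper is illustrative, not the bundled script's actual API):

```python
# Required top-level sections, matching the validation checks listed above.
REQUIRED_SECTIONS = ("metadata", "role", "capabilities", "prompting", "quality", "integration")

def missing_sections(spec: dict) -> list:
    """Return the names of required sections absent from a parsed agent spec."""
    return [name for name in REQUIRED_SECTIONS if name not in spec]

# A complete spec yields no missing sections:
complete = {name: {} for name in REQUIRED_SECTIONS}
print(missing_sections(complete))          # []
print(missing_sections({"metadata": {}}))  # all sections except "metadata"
```

In the real script each section would get further per-field checks; this shows only the top-level gate.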
# Copy to Claude-Flow agents directory
cp agent-spec.yaml ~/.claude-flow/agents/<agent-name>.yaml
# Test with Claude Code
Task("<Agent Name>", "<Task Description>", "<category>")
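For instance, once a spec is installed, a hypothetical Python backend agent could be invoked as (names and task text are illustrative):

```
Task("Python Backend Specialist", "Add a paginated /users endpoint with tests", "specialist")
```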
A clear agent identity and defined expertise improve performance by 15-30%
role:
  identity: "You are a [Specific Role] with expertise in [Domain]"
  expertise: ["skill-1", "skill-2", "skill-3"]
  responsibilities: ["task-1", "task-2"]
Explicit reasoning steps improve accuracy by 20-40% on complex tasks
reasoning_steps:
  - "Step 1: Analyze requirements"
  - "Step 2: Identify solutions"
  - "Step 3: Evaluate trade-offs"
  - "Step 4: Select optimal approach"
2-5 examples improve performance by 30-50% compared to zero-shot
examples:
  - input: "Concrete example input"
    reasoning: "Step-by-step thinking"
    output: "Expected output with explanation"
Planning before execution reduces errors by 25-35% on complex workflows
workflow:
  - name: "Planning Phase"
    steps: ["Understand requirements", "Outline approach"]
  - name: "Execution Phase"
    steps: ["Implement solution", "Handle edge cases"]
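Taken together, these four principles compose into a single system prompt. A minimal sketch of how a generator might assemble one from the spec fields above (`build_system_prompt` is an illustrative helper, not part of the bundled scripts):

```python
def build_system_prompt(spec: dict) -> str:
    """Assemble a system prompt from role, reasoning, and example fields of an agent spec."""
    role = spec["role"]
    lines = [
        role["identity"],                              # role-based prompting
        "Expertise: " + ", ".join(role["expertise"]),
        "Reason through tasks step by step:",          # Chain-of-Thought scaffold
    ]
    lines += ["  " + step for step in spec.get("reasoning_steps", [])]
    for ex in spec.get("examples", []):                # few-shot examples
        lines += ["Example input: " + ex["input"],
                  "Example reasoning: " + ex["reasoning"],
                  "Example output: " + ex["output"]]
    return "\n".join(lines)

spec = {
    "role": {"identity": "You are a Python Backend Specialist with expertise in FastAPI",
             "expertise": ["FastAPI", "SQLAlchemy", "pytest"]},
    "reasoning_steps": ["Step 1: Analyze requirements", "Step 2: Identify solutions"],
    "examples": [{"input": "Add caching", "reasoning": "Check access patterns",
                  "output": "Use an LRU cache"}],
}
print(build_system_prompt(spec))
```

The workflow phases would typically be appended the same way; they are omitted here to keep the sketch short.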
Reference: Complete guide in references/prompting-principles.md
Example: Python Backend Specialist with FastAPI, SQLAlchemy, pytest
Example: Backend Coordinator managing API, database, and testing agents
Example: Full-Stack Developer switching between backend and frontend
Reference: Complete patterns in references/agent-patterns.md
// Spawn agent for parallel execution
Task("Agent Name", "Task description", "category")
integration:
  memory_mcp:
    enabled: true
    tagging_protocol:
      WHO: "agent-name"
      WHEN: "timestamp"
      PROJECT: "project-name"
      WHY: "intent"
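An agent writing to memory would build that tag set per entry. A sketch, assuming an ISO-8601 UTC timestamp for WHEN (the `memory_tags` helper name is illustrative; only the four keys come from the protocol above):

```python
from datetime import datetime, timezone

def memory_tags(agent_name: str, project: str, intent: str) -> dict:
    """Build the WHO/WHEN/PROJECT/WHY tag set attached to a memory entry."""
    return {
        "WHO": agent_name,
        "WHEN": datetime.now(timezone.utc).isoformat(),
        "PROJECT": project,
        "WHY": intent,
    }

tags = memory_tags("python-specialist", "billing-api", "record schema decision")
print(sorted(tags))  # ['PROJECT', 'WHEN', 'WHO', 'WHY']
```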
integration:
  hooks:
    pre_task: ["npx claude-flow@alpha hooks pre-task"]
    post_task: ["npx claude-flow@alpha hooks post-task"]
See graphviz/agent-creation-process.dot for visual workflow:
Decision Points: Quality checks at each phase with iteration loops
Key resources:

- references/prompting-principles.md
- examples/example-1-python-specialist.md
- resources/scripts/generate_agent.sh
- resources/scripts/validate_agent.py
- tests/ directory

Version: 2.0.0 (Gold Tier)
Status: Production Ready
Files: 14+ (scripts, templates, tests, examples, documentation)