---
name: sparc-methodology
description: SPARC (Specification, Pseudocode, Architecture, Refinement, Completion) comprehensive development methodology with multi-agent orchestration
version: 2.7.0
category: development
tags:
---

This skill inherits all available tools. When active, it can use any tool Claude has access to.
SPARC (Specification, Pseudocode, Architecture, Refinement, Completion) is a systematic development methodology integrated with Claude Flow's multi-agent orchestration capabilities. It provides 17 specialized modes for comprehensive software development, from initial research through deployment and monitoring.
SPARC methodology emphasizes:
1. Goal: Define requirements, constraints, and success criteria
   Key Modes: researcher, analyzer, memory-manager
2. Goal: Design system structure and component interfaces
   Key Modes: architect, designer, orchestrator
3. Goal: Implement features with a test-first approach
   Key Modes: tdd, coder, tester
4. Goal: Ensure code quality, security, and performance
   Key Modes: reviewer, optimizer, debugger
5. Goal: Integration, deployment, and monitoring
   Key Modes: workflow-manager, documenter, memory-manager
**orchestrator**: Multi-agent task orchestration with TodoWrite/Task/Memory coordination.
Capabilities:
Usage:
mcp__claude-flow__sparc_mode {
mode: "orchestrator",
task_description: "coordinate feature development",
options: { parallel: true, monitor: true }
}
**swarm-coordinator**: Specialized swarm management for complex multi-agent workflows.
Capabilities:
**workflow-manager**: Process automation and workflow orchestration.
Capabilities:
**batch-executor**: Parallel task execution for high-throughput operations.
Capabilities:
**coder**: Autonomous code generation with batch file operations.
Capabilities:
Quality Standards:
Usage:
mcp__claude-flow__sparc_mode {
mode: "coder",
task_description: "implement user authentication with JWT",
options: {
test_driven: true,
parallel_edits: true,
typescript: true
}
}
**architect**: System design with Memory-based coordination.
Capabilities:
Memory Integration:
Design Patterns:
Usage:
mcp__claude-flow__sparc_mode {
mode: "architect",
task_description: "design scalable e-commerce platform",
options: {
detailed: true,
memory_enabled: true,
patterns: ["microservices", "event-driven"]
}
}
**tdd**: Test-driven development with comprehensive testing.
Capabilities:
TDD Workflow:
Testing Strategies:
Usage:
mcp__claude-flow__sparc_mode {
mode: "tdd",
task_description: "shopping cart feature with payment integration",
options: {
coverage_target: 90,
test_framework: "jest",
e2e_framework: "playwright"
}
}
**reviewer**: Code review using batch file analysis.
Capabilities:
Review Criteria:
Batch Analysis:
Usage:
mcp__claude-flow__sparc_mode {
mode: "reviewer",
task_description: "review authentication module PR #123",
options: {
security_check: true,
performance_check: true,
test_coverage_check: true
}
}
**researcher**: Deep research with parallel WebSearch/WebFetch and Memory coordination.
Capabilities:
Research Methods:
Memory Integration:
Usage:
mcp__claude-flow__sparc_mode {
mode: "researcher",
task_description: "research microservices best practices 2024",
options: {
depth: "comprehensive",
sources: ["academic", "industry", "news"],
citations: true
}
}
**analyzer**: Code and data analysis with pattern recognition.
Capabilities:
**optimizer**: Performance optimization and bottleneck resolution.
Capabilities:
**designer**: UI/UX design with accessibility focus.
Capabilities:
**innovator**: Creative problem-solving and novel solutions.
Capabilities:
**documenter**: Comprehensive documentation generation.
Capabilities:
**debugger**: Systematic debugging and issue resolution.
Capabilities:
**tester**: Comprehensive testing beyond TDD.
Capabilities:
**memory-manager**: Knowledge management and context preservation.
Capabilities:
Best for: Integrated Claude Code workflows with full orchestration capabilities
// Basic mode execution
mcp__claude-flow__sparc_mode {
mode: "<mode-name>",
task_description: "<task description>",
options: {
// mode-specific options
}
}
// Initialize swarm for complex tasks
mcp__claude-flow__swarm_init {
topology: "hierarchical", // or "mesh", "ring", "star"
strategy: "auto", // or "balanced", "specialized", "adaptive"
maxAgents: 8
}
// Spawn specialized agents
mcp__claude-flow__agent_spawn {
type: "<agent-type>",
capabilities: ["<capability1>", "<capability2>"]
}
// Monitor execution
mcp__claude-flow__swarm_monitor {
swarmId: "current",
interval: 5000
}
Best for: Terminal usage, or when MCP tools are unavailable
# Execute specific mode
npx claude-flow sparc run <mode> "task description"
# Use alpha features
npx claude-flow@alpha sparc run <mode> "task description"
# List all available modes
npx claude-flow sparc modes
# Get help for specific mode
npx claude-flow sparc help <mode>
# Run with options
npx claude-flow sparc run <mode> "task" --parallel --monitor
# Execute TDD workflow
npx claude-flow sparc tdd "feature description"
# Batch execution
npx claude-flow sparc batch <mode1,mode2,mode3> "task"
# Pipeline execution
npx claude-flow sparc pipeline "task description"
Best for: Projects with local claude-flow installation
# If claude-flow is installed locally
./claude-flow sparc run <mode> "task description"
Best for: Complex projects with clear delegation hierarchy
// Initialize hierarchical swarm
mcp__claude-flow__swarm_init {
topology: "hierarchical",
maxAgents: 12
}
// Spawn coordinator
mcp__claude-flow__agent_spawn {
type: "coordinator",
capabilities: ["planning", "delegation", "monitoring"]
}
// Spawn specialized workers
mcp__claude-flow__agent_spawn { type: "architect" }
mcp__claude-flow__agent_spawn { type: "coder" }
mcp__claude-flow__agent_spawn { type: "tester" }
mcp__claude-flow__agent_spawn { type: "reviewer" }
Best for: Collaborative tasks requiring peer-to-peer communication
mcp__claude-flow__swarm_init {
topology: "mesh",
strategy: "balanced",
maxAgents: 6
}
Best for: Ordered workflow execution (spec → design → code → test → review)
mcp__claude-flow__workflow_create {
name: "development-pipeline",
steps: [
{ mode: "researcher", task: "gather requirements" },
{ mode: "architect", task: "design system" },
{ mode: "coder", task: "implement features" },
{ mode: "tdd", task: "create tests" },
{ mode: "reviewer", task: "review code" }
],
triggers: ["on_step_complete"]
}
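The sequential pipeline above can be sketched as a plain step runner. This is an illustrative sketch of the behavior, not the claude-flow implementation; the `Step` type and stubbed `execute` callback are assumptions made for the example:

```typescript
// Minimal sequential pipeline runner: each step runs only after the
// previous one completes, mirroring spec → design → code → test → review.
type Step = { mode: string; task: string };

function runPipeline(
  steps: Step[],
  execute: (step: Step) => void,
  onStepComplete?: (step: Step) => void,
): string[] {
  const completed: string[] = [];
  for (const step of steps) {
    execute(step);               // run the mode against its task
    completed.push(step.mode);   // record ordered completion
    onStepComplete?.(step);      // fire the "on_step_complete" trigger
  }
  return completed;
}

// Example: the five-step development pipeline from above.
const order = runPipeline(
  [
    { mode: "researcher", task: "gather requirements" },
    { mode: "architect", task: "design system" },
    { mode: "coder", task: "implement features" },
    { mode: "tdd", task: "create tests" },
    { mode: "reviewer", task: "review code" },
  ],
  () => {}, // execution stubbed out for the sketch
);
// order → ["researcher", "architect", "coder", "tdd", "reviewer"]
```

The point of the sequential topology is that each step can read artifacts the previous step produced, at the cost of no parallelism.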
Best for: Independent tasks that can run concurrently
mcp__claude-flow__task_orchestrate {
task: "build full-stack application",
strategy: "parallel",
dependencies: {
backend: [],
frontend: [],
database: [],
tests: ["backend", "frontend"]
}
}
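Conceptually, a dependency map like the one above resolves into execution "waves": every task whose dependencies are already satisfied can run concurrently within one wave. A minimal sketch of that resolution (an illustration of the scheduling idea, not claude-flow's scheduler):

```typescript
// Resolve a dependency map into execution waves: a task joins the next
// wave once every one of its dependencies has completed.
function executionWaves(deps: Record<string, string[]>): string[][] {
  const done = new Set<string>();
  const waves: string[][] = [];
  while (done.size < Object.keys(deps).length) {
    const wave = Object.keys(deps).filter(
      (t) => !done.has(t) && deps[t].every((d) => done.has(d)),
    );
    if (wave.length === 0) throw new Error("cyclic dependency");
    wave.forEach((t) => done.add(t));
    waves.push(wave);
  }
  return waves;
}

// The dependency graph from the example above: backend, frontend, and
// database have no prerequisites, so they run concurrently; tests wait.
const waves = executionWaves({
  backend: [],
  frontend: [],
  database: [],
  tests: ["backend", "frontend"],
});
// waves → [["backend", "frontend", "database"], ["tests"]]
```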
Best for: Dynamic workloads with changing requirements
mcp__claude-flow__swarm_init {
topology: "hierarchical",
strategy: "adaptive", // Auto-adjusts based on workload
maxAgents: 20
}
// Step 1: Initialize TDD swarm
mcp__claude-flow__swarm_init {
topology: "hierarchical",
maxAgents: 8
}
// Step 2: Research and planning
mcp__claude-flow__sparc_mode {
mode: "researcher",
task_description: "research testing best practices for feature X"
}
// Step 3: Architecture design
mcp__claude-flow__sparc_mode {
mode: "architect",
task_description: "design testable architecture for feature X"
}
// Step 4: TDD implementation
mcp__claude-flow__sparc_mode {
mode: "tdd",
task_description: "implement feature X with 90% coverage",
options: {
coverage_target: 90,
test_framework: "jest",
parallel_tests: true
}
}
// Step 5: Code review
mcp__claude-flow__sparc_mode {
mode: "reviewer",
task_description: "review feature X implementation",
options: {
test_coverage_check: true,
security_check: true
}
}
// Step 6: Optimization
mcp__claude-flow__sparc_mode {
mode: "optimizer",
task_description: "optimize feature X performance"
}
// RED: Write failing test
mcp__claude-flow__sparc_mode {
mode: "tester",
task_description: "create failing test for shopping cart add item",
options: { expect_failure: true }
}
// GREEN: Minimal implementation
mcp__claude-flow__sparc_mode {
mode: "coder",
task_description: "implement minimal code to pass test",
options: { minimal: true }
}
// REFACTOR: Improve code quality
mcp__claude-flow__sparc_mode {
mode: "coder",
task_description: "refactor shopping cart implementation",
options: { maintain_tests: true }
}
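The red → green → refactor loop can be made concrete with a tiny, hypothetical shopping-cart example. Plain `console.assert` calls stand in for a test framework here; the `ShoppingCart` class and its members are invented for illustration:

```typescript
// GREEN-phase minimal implementation: just enough code to make the
// "add item" test pass, written only after that test existed and failed.
type Item = { sku: string; price: number };

class ShoppingCart {
  private items: Item[] = [];

  addItem(item: Item): void {
    this.items.push(item);
  }

  get count(): number {
    return this.items.length;
  }

  get total(): number {
    // REFACTOR phase might extract pricing rules; tests must keep passing.
    return this.items.reduce((sum, i) => sum + i.price, 0);
  }
}

// The RED-phase test, now passing: add one item, expect count and price.
const cart = new ShoppingCart();
cart.addItem({ sku: "book-1", price: 12.5 });
console.assert(cart.count === 1, "cart should hold one item");
console.assert(cart.total === 12.5, "total should equal the item price");
```

The discipline the options encode (`expect_failure`, `minimal`, `maintain_tests`) maps directly onto these three phases: the test must fail first, the implementation stays minimal, and refactoring never breaks the suite.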
Always use Memory for cross-agent coordination:
// Store architectural decisions
mcp__claude-flow__memory_usage {
action: "store",
namespace: "architecture",
key: "api-design-v1",
value: JSON.stringify(apiDesign),
ttl: 86400000 // 24 hours
}
// Retrieve in subsequent agents
mcp__claude-flow__memory_usage {
action: "retrieve",
namespace: "architecture",
key: "api-design-v1"
}
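The semantics the `memory_usage` calls rely on, namespaced keys with TTL expiry, can be sketched as a small store. This is an assumption-laden illustration of the behavior, not the real claude-flow memory backend; the injectable clock exists only so expiry can be demonstrated without waiting:

```typescript
// Namespaced key-value store with TTL expiry. Entries past their
// expiration time are treated as misses and dropped on read.
type Entry = { value: string; expiresAt: number };

class MemoryStore {
  private data = new Map<string, Entry>();

  constructor(private now: () => number = Date.now) {}

  store(namespace: string, key: string, value: string, ttlMs: number): void {
    this.data.set(`${namespace}:${key}`, {
      value,
      expiresAt: this.now() + ttlMs,
    });
  }

  retrieve(namespace: string, key: string): string | undefined {
    const entry = this.data.get(`${namespace}:${key}`);
    if (!entry) return undefined;
    if (this.now() > entry.expiresAt) {
      this.data.delete(`${namespace}:${key}`); // expired: drop and miss
      return undefined;
    }
    return entry.value;
  }
}

// One agent stores an architectural decision; a later agent retrieves it.
let clock = 0;
const memory = new MemoryStore(() => clock);
memory.store("architecture", "api-design-v1", '{"style":"REST"}', 86_400_000);
console.assert(memory.retrieve("architecture", "api-design-v1") === '{"style":"REST"}');
clock = 86_400_001; // past the 24-hour TTL: the entry now reads as a miss
console.assert(memory.retrieve("architecture", "api-design-v1") === undefined);
```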
Batch all related operations in a single message:
// ✅ CORRECT: All operations together
[Single Message]:
mcp__claude-flow__agent_spawn { type: "researcher" }
mcp__claude-flow__agent_spawn { type: "coder" }
mcp__claude-flow__agent_spawn { type: "tester" }
TodoWrite { todos: [8-10 todos] }
// ❌ WRONG: Multiple messages
Message 1: mcp__claude-flow__agent_spawn { type: "researcher" }
Message 2: mcp__claude-flow__agent_spawn { type: "coder" }
Message 3: TodoWrite { todos: [...] }
Every SPARC mode should use hooks:
# Before work
npx claude-flow@alpha hooks pre-task --description "implement auth"
# During work
npx claude-flow@alpha hooks post-edit --file "auth.js"
# After work
npx claude-flow@alpha hooks post-task --task-id "task-123"
Maintain minimum 90% coverage:
Document as you build:
Never save to root folder:
project/
├── src/ # Source code
├── tests/ # Test files
├── docs/ # Documentation
├── config/ # Configuration
├── scripts/ # Utility scripts
└── examples/ # Example code
[Single Message - Parallel Agent Execution]:
// Initialize swarm
mcp__claude-flow__swarm_init {
topology: "hierarchical",
maxAgents: 10
}
// Architecture phase
mcp__claude-flow__sparc_mode {
mode: "architect",
task_description: "design REST API with authentication",
options: { memory_enabled: true }
}
// Research phase
mcp__claude-flow__sparc_mode {
mode: "researcher",
task_description: "research authentication best practices"
}
// Implementation phase
mcp__claude-flow__sparc_mode {
mode: "coder",
task_description: "implement Express API with JWT auth",
options: { test_driven: true }
}
// Testing phase
mcp__claude-flow__sparc_mode {
mode: "tdd",
task_description: "comprehensive API tests",
options: { coverage_target: 90 }
}
// Review phase
mcp__claude-flow__sparc_mode {
mode: "reviewer",
task_description: "security and performance review",
options: { security_check: true }
}
// Batch todos
TodoWrite {
todos: [
{content: "Design API schema", status: "completed"},
{content: "Research JWT implementation", status: "completed"},
{content: "Implement authentication", status: "in_progress"},
{content: "Write API tests", status: "pending"},
{content: "Security review", status: "pending"},
{content: "Performance optimization", status: "pending"},
{content: "API documentation", status: "pending"},
{content: "Deployment setup", status: "pending"}
]
}
// Research phase
mcp__claude-flow__sparc_mode {
mode: "researcher",
task_description: "research AI-powered search implementations",
options: {
depth: "comprehensive",
sources: ["academic", "industry"]
}
}
// Innovation phase
mcp__claude-flow__sparc_mode {
mode: "innovator",
task_description: "propose novel search algorithm",
options: { memory_enabled: true }
}
// Architecture phase
mcp__claude-flow__sparc_mode {
mode: "architect",
task_description: "design scalable search system"
}
// Implementation phase
mcp__claude-flow__sparc_mode {
mode: "coder",
task_description: "implement search algorithm",
options: { test_driven: true }
}
// Documentation phase
mcp__claude-flow__sparc_mode {
mode: "documenter",
task_description: "document search system architecture and API"
}
// Analysis phase
mcp__claude-flow__sparc_mode {
mode: "analyzer",
task_description: "analyze legacy codebase dependencies"
}
// Planning phase
mcp__claude-flow__sparc_mode {
mode: "orchestrator",
task_description: "plan incremental refactoring strategy"
}
// Testing phase (create safety net)
mcp__claude-flow__sparc_mode {
mode: "tester",
task_description: "create comprehensive test suite for legacy code",
options: { coverage_target: 80 }
}
// Refactoring phase
mcp__claude-flow__sparc_mode {
mode: "coder",
task_description: "refactor module X with modern patterns",
options: { maintain_tests: true }
}
// Review phase
mcp__claude-flow__sparc_mode {
mode: "reviewer",
task_description: "validate refactoring maintains functionality"
}
# Step 1: Research and planning
npx claude-flow sparc run researcher "authentication patterns"
# Step 2: Architecture design
npx claude-flow sparc run architect "design auth system"
# Step 3: TDD implementation
npx claude-flow sparc tdd "user authentication feature"
# Step 4: Code review
npx claude-flow sparc run reviewer "review auth implementation"
# Step 5: Documentation
npx claude-flow sparc run documenter "document auth API"
# Step 1: Analyze issue
npx claude-flow sparc run analyzer "investigate bug #456"
# Step 2: Debug systematically
npx claude-flow sparc run debugger "fix memory leak in service X"
# Step 3: Create tests
npx claude-flow sparc run tester "regression tests for bug #456"
# Step 4: Review fix
npx claude-flow sparc run reviewer "validate bug fix"
# Step 1: Profile performance
npx claude-flow sparc run analyzer "profile API response times"
# Step 2: Identify bottlenecks
npx claude-flow sparc run optimizer "optimize database queries"
# Step 3: Implement improvements
npx claude-flow sparc run coder "implement caching layer"
# Step 4: Benchmark results
npx claude-flow sparc run tester "performance benchmarks"
# Execute full development pipeline
npx claude-flow sparc pipeline "e-commerce checkout feature"
# This automatically runs:
# 1. researcher - Gather requirements
# 2. architect - Design system
# 3. coder - Implement features
# 4. tdd - Create comprehensive tests
# 5. reviewer - Code quality review
# 6. optimizer - Performance tuning
# 7. documenter - Documentation
// Train patterns from successful workflows
mcp__claude-flow__neural_train {
pattern_type: "coordination",
training_data: "successful_tdd_workflow.json",
epochs: 50
}
// Save session state
mcp__claude-flow__memory_persist {
sessionId: "feature-auth-v1"
}
// Restore in new session
mcp__claude-flow__context_restore {
snapshotId: "feature-auth-v1"
}
// Analyze repository
mcp__claude-flow__github_repo_analyze {
repo: "owner/repo",
analysis_type: "code_quality"
}
// Manage pull requests
mcp__claude-flow__github_pr_manage {
repo: "owner/repo",
pr_number: 123,
action: "review"
}
// Real-time swarm monitoring
mcp__claude-flow__swarm_monitor {
swarmId: "current",
interval: 5000
}
// Bottleneck analysis
mcp__claude-flow__bottleneck_analyze {
component: "api-layer",
metrics: ["latency", "throughput", "errors"]
}
// Token usage tracking
mcp__claude-flow__token_usage {
operation: "feature-development",
timeframe: "24h"
}
Proven Results:
# List modes
npx claude-flow sparc modes
# Run specific mode
npx claude-flow sparc run <mode> "task"
# TDD workflow
npx claude-flow sparc tdd "feature"
# Full pipeline
npx claude-flow sparc pipeline "task"
# Batch execution
npx claude-flow sparc batch <modes> "task"
// Initialize swarm
mcp__claude-flow__swarm_init { topology: "hierarchical" }
// Execute mode
mcp__claude-flow__sparc_mode { mode: "coder", task_description: "..." }
// Monitor progress
mcp__claude-flow__swarm_monitor { interval: 5000 }
// Store in memory
mcp__claude-flow__memory_usage { action: "store", key: "...", value: "..." }
Remember: SPARC = Systematic, Parallel, Agile, Refined, Complete