Run multi-LLM council for adversarial debate and cross-validation. Orchestrates Claude, GPT-4, and Gemini for production-grade implementation, code review, architecture design, research, and security analysis.
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Subagent definitions: `subagents/architect.md`, `subagents/assessor.md`, `subagents/implementer.md`, `subagents/planner.md`, `subagents/red-team.md`, `subagents/researcher.md`, `subagents/reviewer.md`, `subagents/router.md`, `subagents/shipper.md`, `subagents/test-designer.md`

Multi-model council: parallel drafts → adversarial critique → validated synthesis.
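The three-stage flow above (parallel drafts, adversarial critique, synthesis) can be sketched with asyncio fan-out. This is an illustrative sketch of the pattern only; the `draft`/`critique` functions are placeholders, not the package's API:

```python
import asyncio

async def draft(model: str, task: str) -> str:
    # Placeholder: each council member drafts independently.
    return f"{model} draft for: {task}"

async def critique(model: str, drafts: list[str]) -> str:
    # Placeholder: each member critiques the other members' drafts.
    others = [d for d in drafts if not d.startswith(model)]
    return f"{model} critique of {len(others)} drafts"

async def council_round(models: list[str], task: str) -> str:
    # Stage 1: parallel drafts from every model.
    drafts = await asyncio.gather(*(draft(m, task) for m in models))
    # Stage 2: adversarial critique of peers' drafts, also in parallel.
    critiques = await asyncio.gather(*(critique(m, list(drafts)) for m in models))
    # Stage 3: synthesis (trivially joined here; a real synthesizer would merge).
    return " | ".join(critiques)

result = asyncio.run(council_round(["claude", "gpt-4o", "gemini"], "design cache"))
print(result)
```

The real council adds validation and artifact storage on top of this fan-out/fan-in shape.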
Prerequisite: This skill requires the `the-llm-council` Python package to be installed. The skill provides IDE integration, but the actual council runs via the installed CLI. If you see `command not found: council`, run `pip install the-llm-council` first.
# Quote the spec so the shell doesn't treat > or [ ] specially
pip install "the-llm-council>=0.2.0"
# With specific provider SDKs
pip install "the-llm-council[anthropic,openai,google]"
| Provider | Environment Variable | Notes |
|---|---|---|
| OpenRouter | OPENROUTER_API_KEY | Recommended - single key for all models |
| OpenAI | OPENAI_API_KEY | Direct GPT access |
| Anthropic | ANTHROPIC_API_KEY | Direct Claude access |
| Google | GOOGLE_API_KEY or GEMINI_API_KEY | Direct Gemini access |
# Minimum setup (OpenRouter)
export OPENROUTER_API_KEY="your-key"
council doctor
council run <subagent> "<task>" [options]
| Option | Description |
|---|---|
| --json | Output structured JSON |
| --health-check | Run preflight provider check |
| --verbose, -v | Verbose output |
| --models, -m | Comma-separated model IDs |
| --providers, -p | Comma-separated provider list |
| --timeout, -t | Timeout in seconds (default: 120) |
| --max-retries | Max retry attempts (default: 3) |
| --no-artifacts | Disable artifact storage (faster) |
| --no-degradation | Disable graceful degradation (strict mode) |
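The `--timeout` and `--max-retries` flags correspond to a standard retry-with-timeout loop. The sketch below illustrates that pattern with the same defaults; it is not the CLI's actual internals:

```python
import time

def run_with_retries(call, timeout: float = 120.0, max_retries: int = 3):
    """Retry a provider call up to max_retries times; defaults mirror the CLI flags."""
    last_err = None
    for attempt in range(1, max_retries + 1):
        try:
            return call(timeout=timeout)
        except TimeoutError as err:
            last_err = err
            time.sleep(0.01 * attempt)  # backoff, kept short for illustration
    raise last_err

# Simulated flaky provider: times out twice, then succeeds on the third try.
attempts = []
def flaky(timeout):
    attempts.append(timeout)
    if len(attempts) < 3:
        raise TimeoutError("provider timed out")
    return "ok"

result = run_with_retries(flaky)
print(result)  # ok, after two retries
```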
council doctor # Check provider health
council config # Show current configuration
| Subagent | Use For | Details |
|---|---|---|
| implementer | Feature code, bug fixes | See subagents/implementer.md |
| reviewer | Code review, security audit | See subagents/reviewer.md |
| architect | System design, APIs | See subagents/architect.md |
| researcher | Technical research | See subagents/researcher.md |
| planner | Roadmaps, execution plans | See subagents/planner.md |
| assessor | Build vs buy decisions | See subagents/assessor.md |
| red-team | Security threat analysis | See subagents/red-team.md |
| test-designer | Test suite design | See subagents/test-designer.md |
| shipper | Release notes | See subagents/shipper.md |
| router | Task classification | See subagents/router.md |
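A minimal keyword router gives the flavor of the classification the `router` subagent performs. The keyword map below is hypothetical for illustration; the real routing logic lives in `subagents/router.md`:

```python
# Hypothetical keyword-to-subagent map, not taken from subagents/router.md.
ROUTES = {
    "review": "reviewer",
    "audit": "reviewer",
    "design": "architect",
    "research": "researcher",
    "plan": "planner",
    "threat": "red-team",
    "test": "test-designer",
    "release": "shipper",
}

def route(task: str) -> str:
    """Pick a subagent by first keyword hit; fall back to implementer."""
    lowered = task.lower()
    for keyword, subagent in ROUTES.items():
        if keyword in lowered:
            return subagent
    return "implementer"

print(route("Design caching layer"))         # architect
print(route("Add pagination to users API"))  # implementer
```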
Run multiple models in parallel for adversarial debate:
# Via CLI flag
council run architect "Design caching layer" \
--models "anthropic/claude-3.5-sonnet,openai/gpt-4o,google/gemini-pro"
# Via environment variable
export COUNCIL_MODELS="anthropic/claude-3.5-sonnet,openai/gpt-4o,google/gemini-pro"
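`COUNCIL_MODELS` is a plain comma-separated list, so it can be read back with a simple split. This is assumed behavior for illustration; the package may normalize the value differently:

```python
import os

# Example value, matching the export above; stray whitespace is tolerated here.
os.environ["COUNCIL_MODELS"] = "anthropic/claude-3.5-sonnet, openai/gpt-4o, google/gemini-pro"

models = [m.strip() for m in os.environ.get("COUNCIL_MODELS", "").split(",") if m.strip()]
print(models)
```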
Fine-tune which models handle specific task types:
export COUNCIL_MODEL_FAST="anthropic/claude-3-haiku" # Quick tasks
export COUNCIL_MODEL_REASONING="anthropic/claude-3-opus" # Deep analysis
export COUNCIL_MODEL_CODE="openai/gpt-4o" # Code generation
export COUNCIL_MODEL_CRITIC="anthropic/claude-3.5-sonnet" # Adversarial critique
Optional YAML configuration:
# ~/.config/llm-council/config.yaml
providers:
  - name: openrouter
    api_key: ${OPENROUTER_API_KEY}
    default_model: anthropic/claude-3-opus

defaults:
  timeout: 120
  max_retries: 3
  summary_tier: actions
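The `${OPENROUTER_API_KEY}` placeholder implies shell-style interpolation from the environment. A minimal sketch of expanding such values after loading the YAML, assuming plain `${VAR}` syntax:

```python
import os

def expand(value: str) -> str:
    """Expand $VAR and ${VAR} references from the environment, shell-style."""
    return os.path.expandvars(value)

os.environ["OPENROUTER_API_KEY"] = "sk-test-123"
print(expand("${OPENROUTER_API_KEY}"))  # sk-test-123
```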
import asyncio

from llm_council import Council

async def main() -> None:
    council = Council(providers=["openrouter"])
    result = await council.run(
        task="Build a login page with OAuth",
        subagent="implementer",
    )
    print(result.output)

asyncio.run(main())
Use council for: high-stakes work such as production-grade implementation, code review, architecture design, technical research, and security analysis.
Skip council for:
# Feature implementation
council run implementer "Add pagination to users API" --json
# Code review
council run reviewer "Review the authentication changes" --json
# Multi-model architecture design
council run architect "Design caching layer" \
--models "anthropic/claude-3.5-sonnet,openai/gpt-4o" --json
# Security threat model
council run red-team "Analyze auth system vulnerabilities" --json
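The `--json` flag makes these runs machine-readable. The exact output schema isn't documented in this file, so the payload below is a hypothetical example of the shape such output might take:

```python
import json

# Hypothetical example of what `council run ... --json` might emit;
# the real field names may differ.
raw = '{"subagent": "reviewer", "status": "ok", "output": "LGTM with two nits"}'

result = json.loads(raw)
if result.get("status") == "ok":
    print(result["output"])
```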
Do not send sensitive context (.env files, credentials) to the council; context is sent to external LLM providers. Treat `SKILL.md` and `subagents/*.md` as configuration code and keep them under version control.

# Check all providers
council doctor
# Verbose output for debugging
council run implementer "task" --verbose
# Faster runs (skip artifact storage)
council run implementer "task" --no-artifacts