This skill should be used for fast AI consultation via cursor-agent. Triggers include context synthesis, progress tracking, pre-flight checks, or when the loop needs quick reasoning before escalating to consult-deep.
This skill inherits all available tools; when active, it can use any tool Claude has access to.
Additional assets for this skill:
- references/model-selection.md
- references/packet-building.md
- references/prompt-templates.md
fast consultation tier using cursor-agent: context synthesis, progress tracking, pre-flight checks. escalates to consult-deep when confidence < 7.
"cheap, fast, frequent - the first line of autonomous reasoning"
| principle | application |
|---|---|
| token efficiency | structured packets, not raw history |
| confidence scoring | 0-10 scale, threshold-based escalation |
| single model | gemini-3-pro for everything (fast, cheap) |
| fail fast | confidence < 7 → escalate to consult-deep |
| stdin for long prompts | pipe via heredoc, not inline args |
| use consult-light for | skip it for |
|---|---|
| context synthesis | simple file reads |
| progress tracking | one-off commands |
| pre-flight validation | human-in-the-loop moments |
| quick sanity checks | architectural decisions (→ consult-deep) |
| hook-level reasoning | final review (→ consult-deep) |
gemini-3-pro via cursor-agent. fast, cheap, good reasoning.
# short prompts: inline
cursor-agent -p --model gemini-3-pro "prompt here"
# long prompts: stdin (required for multi-line)
cat <<'EOF' | cursor-agent -p --model gemini-3-pro
long prompt
with multiple lines
and structured content
EOF
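Replies are easiest to work with when captured in a shell variable. A minimal sketch (the prompt content is illustrative; it assumes the prompt asks for JSON output, as the templates below do):

```bash
# capture the model's reply for downstream parsing (a sketch)
response=$(cat <<'EOF' | cursor-agent -p --model gemini-3-pro
Summarize: 2 failing tests in login.test.ts after auth changes.
Output JSON: {action, confidence, reason, escalate}
EOF
)
echo "$response"
```

A context packet passed to the model typically looks like: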
{
"session": {
"id": "uuid",
"started": "ISO timestamp",
"issue": "ARB-123"
},
"progress": {
"phase": "execute",
"step": 3,
"total": 7
},
"recent_actions": [
"wrote tests for auth module",
"ran verify --format=summary",
"2 test failures in login.test.ts"
],
"current_state": "fixing test failures",
"question": "should I refactor the auth helper or patch inline?"
}
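Packets like this do not need to be pasted by hand; a sketch of building one from shell variables (assumes `jq` is installed; the values are illustrative):

```bash
# build a structured packet instead of sending raw history (illustrative values)
packet=$(jq -n \
  --arg issue "ARB-123" \
  --arg phase "execute" \
  --arg state "fixing test failures" \
  --arg question "should I refactor the auth helper or patch inline?" \
  '{session: {issue: $issue},
    progress: {phase: $phase},
    current_state: $state,
    question: $question}')
```

A typical consult-light response to a packet like this: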
{
"action": "patch inline - minimal change for this fix",
"confidence": 8,
"reason": "isolated bug, refactor would expand scope",
"escalate": false
}
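If the model honors the requested shape, the decision fields can be extracted with `jq`; a sketch (assumes the reply in `$response` is clean JSON with no surrounding prose):

```bash
# pull decision fields out of the reply (assumes jq is installed)
action=$(echo "$response" | jq -r '.action')
confidence=$(echo "$response" | jq -r '.confidence')
escalate=$(echo "$response" | jq -r '.escalate')
```

Templates for the common consultations follow.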
cat <<'EOF' | cursor-agent -p --model gemini-3-pro
Given this session state:
{packet}
Summarize progress and recommend next action.
Output JSON: {action, confidence, reason, escalate}
EOF
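The `{packet}` placeholder stays literal inside the quoted heredoc above. To inject a real packet at runtime, one option is an unquoted delimiter so a shell variable expands; a sketch, assuming the packet was built into `$packet` as shown earlier. The same substitution works for the other templates below.

```bash
# unquoted EOF so $packet expands; the JSON shape hint stays literal
cat <<EOF | cursor-agent -p --model gemini-3-pro
Given this session state:
$packet
Summarize progress and recommend next action.
Output JSON: {action, confidence, reason, escalate}
EOF
```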
cat <<'EOF' | cursor-agent -p --model gemini-3-pro
Session: {packet}
Assess: what % complete? any blockers?
Output JSON: {percent_complete, blockers[], next_step, confidence}
EOF
cat <<'EOF' | cursor-agent -p --model gemini-3-pro
About to: {proposed_action}
Context: {packet}
Validate: safe to proceed? preconditions met?
Output JSON: {proceed: bool, issues[], confidence}
EOF
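The `proceed` flag can gate the action directly; a sketch (assumes the reply is captured in `$response` as clean JSON and `jq` is installed):

```bash
# only run the proposed action when the pre-flight check approves it
if [ "$(echo "$response" | jq -r '.proceed')" = "true" ]; then
  echo "preconditions met, proceeding"
else
  echo "blocked by pre-flight:" >&2
  echo "$response" | jq -r '.issues[]' >&2
fi
```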
cat <<'EOF' | cursor-agent -p --model gemini-3-pro
Domain skill depth review.
Task: {task_description}
Sources consulted: {sources_list}
Artifacts created: {artifact_list}
Content includes:
{content_summary}
Validate:
1. Primary sources used? (not just summaries)
2. Decision trees present? (not just philosophy)
3. Concrete values? (numbers, timings)
4. Integration with user's tools?
Output JSON: {depth_score: 1-10, pass: bool, issues: [], confidence: 1-10}
EOF
cat <<'EOF' | cursor-agent -p --model gemini-3-pro
Code work review.
Task: {task_description}
Files changed: {file_list}
Tests: {test_summary}
Validate:
1. Tests cover new behavior?
2. Follows codebase patterns?
3. Edge cases handled?
Output JSON: {pass: bool, issues: [], confidence: 1-10}
EOF
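The `{file_list}` and `{test_summary}` placeholders can be filled from the working tree; a sketch (assumes a git checkout and that a prior test run wrote `test-summary.txt`, an illustrative path):

```bash
# gather review inputs (illustrative; adjust to the project's test runner)
file_list=$(git diff --name-only HEAD)
test_summary=$(cat test-summary.txt)
```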
when confidence < 7 or explicit escalate flag:
# escalate to consult-deep (Codex)
codex exec --model gpt-5.2-max --approval-mode xhigh \
"Review needed: {context}
consult-light confidence: {score}
Reason for escalation: {reason}
Provide thorough analysis."
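Wired together, the threshold check becomes a small conditional; a sketch (assumes `$confidence`, `$escalate`, and `$response` were extracted from the consult-light reply as above, and `$packet` holds the original context):

```bash
# escalate when confidence drops below 7 or the reply requests it
if [ "$confidence" -lt 7 ] || [ "$escalate" = "true" ]; then
  codex exec --model gpt-5.2-max --approval-mode xhigh \
    "Review needed: $packet
consult-light confidence: $confidence
Reason for escalation: $(echo "$response" | jq -r '.reason')
Provide thorough analysis."
fi
```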
# check status
cursor-agent status
# short prompt (inline)
cursor-agent -p --model gemini-3-pro "quick question"
# long prompt (stdin) - REQUIRED for multi-line
cat <<'EOF' | cursor-agent -p --model gemini-3-pro
multi-line prompt
with context
EOF
# with workspace context
cat prompt.txt | cursor-agent -p --model gemini-3-pro --workspace /path/to/project
# JSON output format
cursor-agent -p --model gemini-3-pro --output-format json "prompt"
# list available models (passing an invalid model name makes the error output list them)
cursor-agent -p --model help "" 2>&1 | grep "Available models"
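If cursor-agent may be missing in some environments, a guard avoids silent failures; a sketch using standard shell built-ins:

```bash
# confirm the binary exists before attempting a consultation
if ! command -v cursor-agent >/dev/null 2>&1; then
  echo "cursor-agent not found; fall back to consult-deep or handle directly" >&2
  exit 1
fi
```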