This skill should be used when orchestrating rich human-in-the-loop question flows, building layered AskUserQuestion calls that respond to context (codebase, CLI tools, prior answers). Triggers include "ask me questions", "help me think through", "what should we consider", or complex tasks benefiting from interactive clarification.
This skill inherits all available tools. When active, it can use any tool Claude has access to.

Additional assets for this skill: references/cli-context-gathering.md, references/conversation-patterns.md, references/question-archetypes.md

turn ambiguous tasks into clear specs through layered questions. context-first, progressive, feels like conversation, not interrogation.
| use | skip |
|---|---|
| multiple valid approaches | single obvious path |
| user says "ask me questions" | user gave detailed specs |
| need to understand preferences | quick fix needed |
| architectural trade-offs | user says "just do it" |
before asking anything, understand the landscape:
layer .
outline --callers=relevantFunction src/
git log --oneline -10
outline --diff=HEAD~3
never ask about things you could discover.
| situation | opening |
|---|---|
| new feature | "what problem are we solving?" |
| refactor | "what's driving this change?" |
| bug fix | "how should this behave?" |
| architecture | "what trade-offs matter most?" |
one question, 2-4 options, descriptions that reveal implications. example progression:
answer: "context-aware ack"
→ "what context signals should drive personalization?"
answer: "all of the above"
→ "how dynamic should the system be?"
→ meta: "is there anything about this approach that feels off?"
each question builds on prior responses. see references/question-archetypes.md.
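a minimal sketch of one round, assuming AskUserQuestion accepts a questions array with header, options (label + description), and multiSelect - field names and the concrete options here are assumptions, not a confirmed schema:

```typescript
// sketch of a follow-up round from the progression above.
// field names (questions, question, header, options, label, description,
// multiSelect) are assumed, not confirmed against the tool's actual schema.
const followUp = {
  questions: [
    {
      question: "how dynamic should the system be?",
      header: "Dynamics", // short, shows as a chip
      multiSelect: false,
      options: [
        { label: "static config", description: "fixed rules, predictable, cheapest to build" },
        { label: "rule-based", description: "tunable without redeploys, rules need upkeep" },
        { label: "adaptive", description: "learns from behavior, hardest to debug" },
      ],
    },
  ],
};
```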
when user mentions something, verify:
# user mentions "auth module"
outline --callers=authenticate src/auth/
# user says "like the reaction agent"
fd -e ts . convex/agents | outline -c --search=reaction
report findings, ask refined questions based on what you learned.
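for instance, a refined follow-up that folds the discovered callers into the question text (same assumed call shape as above; the callers and options are illustrative, not real command output):

```typescript
// hypothetical refined question after the checks above -- the callers named
// here are illustrative, not actual outline output.
const refined = {
  questions: [
    {
      question: "authenticate() is called from the api routes, the cli, and the background worker. which paths should the new flow cover?",
      header: "Scope",
      multiSelect: true,
      options: [
        { label: "api routes", description: "smallest change, other callers keep current behavior" },
        { label: "cli", description: "touches local tooling, easy to test" },
        { label: "background worker", description: "long-lived sessions, token refresh needs thought" },
      ],
    },
  ],
};
```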
step back periodically with a meta question ("is there anything about this approach that feels off?"). meta questions reveal unstated constraints.
after 3-5 rounds:
based on your answers:
1. [key decision A]
2. [key decision B]
3. [approach C]
ready to proceed, or dig deeper on any point?
keep option labels tight - they show as chips. put the substance in descriptions:

bad: "JWT - JSON Web Tokens"
good: "JWT - stateless, client stores token, easy to scale but harder to revoke"
multiSelect: true when choices aren't mutually exclusive:
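a short sketch with the same assumed shape as the earlier examples, reusing the signals question from the progression:

```typescript
// signals can be combined, so let the user pick several in one round.
// same assumed field names as the sketches above.
const signals = {
  questions: [
    {
      question: "what context signals should drive personalization?",
      header: "Signals",
      multiSelect: true,
      options: [
        { label: "time of day", description: "cheap to compute, coarse" },
        { label: "recent activity", description: "needs event history, feels responsive" },
        { label: "user profile", description: "richest signal, requires stored preferences" },
      ],
    },
  ],
};
```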
| depth | rounds | use for |
|---|---|---|
| shallow | 1-2 | simple clarification |
| moderate | 3-4 | feature planning |
| deep | 5-8 | architecture decisions |
stop signs: user says "that covers it", answers get redundant, clear path emerges, short impatient answers
continue signs: user says "dig deeper", answers reveal new dimensions, contradictions surface, user is vibing
| question | tool support |
|---|---|
| "where should this live?" | layer . + outline src/ |
| "what's the current pattern?" | outline --search=pattern |
| "who uses this?" | outline --callers=X |
| "what changed recently?" | outline --diff=HEAD~5 |
| "related issues?" | linear issue list --team X |
| "prior discussions?" | slack search "topic" |
see references/cli-context-gathering.md.