This skill should be used when the user asks to "create a prompt", "optimize a prompt", "improve this prompt", "engineer a prompt", "prompt engineering best practices", "make this prompt better", "recommend a model", "which model should I use", "best model for", "GPT vs Claude", "Opus vs Sonnet", "Haiku vs Sonnet", "analyze prompt quality", "fix my prompt", "prompt for Claude", "prompt for GPT", or needs help with prompt engineering techniques, model selection, or prompt optimization for any LLM (Claude Opus/Sonnet/Haiku 4.5, GPT 5.1/Codex, Gemini Pro 3.0).
This skill inherits all available tools. When active, it can use any tool Claude has access to.

Additional assets for this skill:
- examples/analysis-prompts.md
- examples/classification-prompts.md
- examples/code-generation-prompts.md
- examples/complex-workflow-prompts.md
- examples/creative-prompts.md
- reference/comparisons/model-comparison-matrix.md
- reference/comparisons/use-case-recommendations.md
- reference/models/claude-haiku-4.5.md
- reference/models/claude-opus-4.5.md
- reference/models/claude-sonnet-4.5.md
- reference/models/gemini-pro-3.0.md
- reference/models/gpt-5.1-codex.md
- reference/models/gpt-5.1.md
- reference/optimization/model-migration.md
- reference/optimization/optimization-checklist.md
- reference/optimization/troubleshooting-guide.md
- reference/techniques/01-xml-tags.md
- reference/techniques/02-role-prompting.md
- reference/techniques/03-be-clear-and-direct.md
- reference/techniques/04-multishot-prompting.md

This skill enables comprehensive prompt engineering across multiple LLM models. Engineer, optimize, and refine prompts using established best practices. Create new prompts from scratch or improve existing ones for maximum effectiveness. Recommend optimal models based on specific requirements through interactive analysis.
Supported Models:
Invoke this skill when the user requests:
Six universal techniques apply across all models:
| Technique | When to Use | Impact |
|---|---|---|
| XML Tags | 3+ components, structured output | High - clear separation |
| Role Prompting | Domain expertise needed | Medium - contextual knowledge |
| Clear & Direct | Always (baseline) | Critical - foundation |
| Multishot Prompting | Format/style consistency | High - 40-60% improvement |
| Chain of Thought | Complex reasoning | High - accuracy boost |
| Prompt Chaining | Multi-stage workflows | High - manages complexity |
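To make the table concrete, XML Tags and Multishot Prompting are often combined in a single template. The following is an illustrative sketch only (the function name, tag names, and example content are arbitrary, not part of this skill's assets):

```python
def build_prompt(instructions: str, document: str, examples: list[tuple[str, str]]) -> str:
    """Assemble a prompt that separates components with XML tags
    and includes multishot examples for format consistency."""
    example_blocks = "\n".join(
        f"<example>\n<input>{inp}</input>\n<output>{out}</output>\n</example>"
        for inp, out in examples
    )
    return (
        f"<instructions>\n{instructions}\n</instructions>\n\n"
        f"<examples>\n{example_blocks}\n</examples>\n\n"
        f"<document>\n{document}\n</document>"
    )

prompt = build_prompt(
    "Classify the sentiment of the document as positive or negative.",
    "The release fixed every bug I reported. Fantastic work.",
    [("Service was slow and rude.", "negative")],
)
```

The XML tags keep the instructions, examples, and input visually and semantically separate, which is what the "clear separation" impact in the table refers to.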
Technique Selection Matrix:
| Task Type | Recommended Techniques |
|---|---|
| Simple question/task | Clear & Direct |
| Classification/extraction | Clear & Direct + Examples |
| Analysis/reasoning | Clear & Direct + Chain of Thought |
| Domain-specific task | Clear & Direct + Role Prompting |
| Complex structured output | Clear & Direct + XML Tags + Examples |
| Multi-step process | Clear & Direct + Prompt Chaining |
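For the "Analysis/reasoning" row, Clear & Direct plus Chain of Thought can be as simple as appending a step-by-step directive with tagged output sections. A minimal sketch (wording and tag names are illustrative):

```python
def add_chain_of_thought(task: str) -> str:
    """Append a chain-of-thought directive with tagged output sections."""
    return (
        f"{task}\n\n"
        "Work through the problem step by step inside <thinking> tags, "
        "then give your final answer inside <answer> tags."
    )

prompt = add_chain_of_thought(
    "Assess whether the attached quarterly report indicates improving margins."
)
```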
Claude Models:
| Model | Best For | Speed | Quality | Cost |
|---|---|---|---|---|
| Opus 4.5 | Research, creative, complex analysis | Slow | Highest | $$$$$ |
| Sonnet 4.5 | Agentic coding, balanced production | Fast | High | $$ |
| Haiku 4.5 | Classification, high-volume, latency-critical | Very Fast | Good | $ |
OpenAI Models:
| Model | Best For | Speed | Quality | Cost |
|---|---|---|---|---|
| GPT 5.1 | General-purpose, function calling | Fast | High | $$ |
| GPT 5.1 Codex | Code generation, review, debugging | Fast | High (code) | $$ |
Google Models:
| Model | Best For | Speed | Quality | Cost |
|---|---|---|---|---|
| Gemini Pro 3.0 | Multimodal, context caching, Google integration | Fast | High | $$ |
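The "Best For" columns above amount to a simple routing table. A hypothetical sketch of that mapping (the keys and model labels are placeholders for illustration, not API model identifiers):

```python
# Hypothetical routing table derived from the "Best For" columns above.
# Model labels are human-readable placeholders, not API identifiers.
ROUTING = {
    "classification": "Haiku 4.5",      # high-volume, latency-critical
    "agentic-coding": "Sonnet 4.5",
    "code-review": "GPT 5.1 Codex",
    "deep-research": "Opus 4.5",
    "multimodal": "Gemini Pro 3.0",
}

def recommend(task_type: str, default: str = "Sonnet 4.5") -> str:
    """Fall back to a balanced production model for unlisted task types."""
    return ROUTING.get(task_type, default)
```

For anything beyond a first guess, the interactive requirements dialog described below is the intended path.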
Gather essential information:
If model specified: Load the corresponding model guide from reference/models/.
If model not specified: Gather requirements via interactive dialog:
Then consult reference/comparisons/model-comparison-matrix.md and reference/comparisons/use-case-recommendations.md.
Always start with Clear & Direct (foundation technique).
Add based on needs:
Load technique documentation from reference/techniques/:
- 03-be-clear-and-direct.md

Load model guide from reference/models/:
For new prompts:
For optimization:
Use reference/optimization/optimization-checklist.md to verify:
Provide:
Detailed documentation for each technique:
- reference/techniques/01-xml-tags.md - Structuring prompts
- reference/techniques/02-role-prompting.md - System prompts and roles
- reference/techniques/03-be-clear-and-direct.md - Foundation technique
- reference/techniques/04-multishot-prompting.md - Using examples
- reference/techniques/05-chain-of-thought.md - Step-by-step reasoning
- reference/techniques/06-prompt-chaining.md - Multi-stage workflows

Model-specific optimization guides:
- reference/models/claude-opus-4.5.md - Opus capabilities and optimizations
- reference/models/claude-sonnet-4.5.md - Sonnet capabilities and optimizations
- reference/models/claude-haiku-4.5.md - Haiku capabilities and optimizations
- reference/models/gpt-5.1.md - GPT 5.1 capabilities and optimizations
- reference/models/gpt-5.1-codex.md - Codex capabilities and optimizations
- reference/models/gemini-pro-3.0.md - Gemini capabilities and optimizations

Cross-model analysis:
- reference/comparisons/model-comparison-matrix.md - Capability comparison
- reference/comparisons/use-case-recommendations.md - Task-based recommendations

Quality assurance and troubleshooting:
- reference/optimization/optimization-checklist.md - Validation checklist
- reference/optimization/troubleshooting-guide.md - Common issues and fixes
- reference/optimization/model-migration.md - Adapting prompts between models

Working examples by category:
- examples/classification-prompts.md - Classification tasks
- examples/code-generation-prompts.md - Code generation tasks
- examples/analysis-prompts.md - Analysis and research tasks
- examples/creative-prompts.md - Creative writing tasks
- examples/complex-workflow-prompts.md - Multi-step workflows

Every prompt must have clear context, explicit instructions, defined success criteria, and a specified output format. Without clarity, other techniques cannot compensate.
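The four required parts can be checked mechanically before any other technique is layered on. A minimal sketch (the part labels and sample prompt are illustrative):

```python
# The four parts every prompt needs, per the clarity principle above.
REQUIRED_PARTS = ("context", "instructions", "success criteria", "output format")

prompt = """\
Context: You are reviewing customer support tickets for a SaaS product.
Instructions: Summarize each ticket in one sentence.
Success criteria: The summary names the product area and the customer's goal.
Output format: JSON with keys "area" and "goal".
"""

def missing_parts(text: str) -> list[str]:
    """Return the required parts that do not appear in the prompt text."""
    lowered = text.lower()
    return [part for part in REQUIRED_PARTS if part not in lowered]
```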
Simple tasks need simple prompts. Complex tasks benefit from multiple techniques. Match complexity to actual need.
Each model has unique characteristics. Apply model-specific optimizations after general techniques for best results.
First drafts are rarely perfect. Test with real inputs, identify failure modes, and refine systematically.
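Systematic refinement usually means a small evaluation loop over labeled test cases. A sketch of that loop, where `call` stands in for whichever model client you use (the harness itself is illustrative):

```python
def evaluate(prompt_template: str, cases: list[tuple[str, str]], call) -> list[tuple[str, str, str]]:
    """Run each test case through the model and return (input, expected,
    actual) for every failing case, so failure modes can be inspected."""
    failures = []
    for text, expected in cases:
        actual = call(prompt_template.format(input=text))
        if actual.strip() != expected:
            failures.append((text, expected, actual))
    return failures

# Example with a stub in place of a real model call:
fake_call = lambda p: "positive"
failures = evaluate(
    "Classify the sentiment: {input}",
    [("Great release!", "positive"), ("Broken again.", "negative")],
    fake_call,
)
```

Each failing tuple points at a concrete input to diagnose, which is what makes the refinement systematic rather than guesswork.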
Load detailed references only when needed. Start with core workflow, dive into specifics as required.
Which model for coding?
Which model for analysis?
Which model for creative work?
Which model for high volume?
Which model for multimodal?
Provide complete prompt, techniques applied, usage instructions, and testing recommendations.
Provide analysis of issues, improved prompt, changes made with explanations, and testing recommendations.
Provide top 3 recommendations with match scores, comparison table, trade-off analysis, and prompt creation offer.
Provide strengths, weaknesses, techniques assessment, and prioritized improvement recommendations.