---
name: when-optimizing-prompts-use-prompt-optimization-analyzer
version: 1.0.0
description: Active diagnostic tool for analyzing prompt quality, detecting anti-patterns, identifying token waste, and providing optimization recommendations
tags:
---

This skill inherits all available tools. When active, it can use any tool Claude has access to.

Additional assets for this skill: process-diagram.gv, process.md, readme.md
- self_consistency: "After completing this skill, verify output quality by [SKILL-SPECIFIC validation approach]"
- program_of_thought: "Decompose this skill execution into: [SKILL-SPECIFIC sequential steps]"
- plan_and_solve: "Plan: [SKILL-SPECIFIC planning phase] -> Execute: [SKILL-SPECIFIC execution phase] -> Verify: [SKILL-SPECIFIC verification phase]"
<!-- END SKILL SOP IMPROVEMENT -->

Purpose: Analyze prompt quality and provide actionable optimization recommendations to reduce token waste, improve clarity, and enhance effectiveness.
```bash
# Analyze prompt for redundancy
npx claude-flow@alpha hooks pre-task --description "Analyzing prompt for token waste"

# Store original metrics
npx claude-flow@alpha memory store --key "optimization/original-tokens" --value "{
  \"total_tokens\": <count>,
  \"redundancy_score\": <0-100>,
  \"verbosity_score\": <0-100>
}"
```
Analysis Script:

```javascript
// Embedded token analysis
function analyzeTokenWaste(promptText) {
  const metrics = {
    totalWords: promptText.split(/\s+/).length,
    totalChars: promptText.length,
    redundancyScore: 0,
    verbosityScore: 0,
    issues: []
  };

  // Detect phrase repetition
  const phrases = extractPhrases(promptText, 3); // 3-word phrases
  const phraseCounts = countOccurrences(phrases);
  const repeated = Object.entries(phraseCounts).filter(([_, count]) => count > 2);
  if (repeated.length > 0) {
    metrics.redundancyScore += repeated.length * 10;
    metrics.issues.push({
      type: "redundancy",
      severity: "medium",
      count: repeated.length,
      examples: repeated.slice(0, 3).map(([phrase]) => phrase)
    });
  }

  // Measure verbosity
  const avgWordLength = promptText.split(/\s+/)
    .reduce((sum, word) => sum + word.length, 0) / metrics.totalWords;
  if (avgWordLength > 6) {
    metrics.verbosityScore += 20;
    metrics.issues.push({
      type: "verbosity",
      severity: "low",
      avgWordLength: avgWordLength.toFixed(2),
      suggestion: "Consider shorter, clearer words"
    });
  }

  // Detect filler words
  const fillerWords = ["very", "really", "just", "actually", "basically", "simply"];
  const fillerCount = fillerWords.reduce((count, filler) => {
    const regex = new RegExp(`\\b${filler}\\b`, 'gi');
    return count + (promptText.match(regex) || []).length;
  }, 0);
  if (fillerCount > 5) {
    metrics.redundancyScore += fillerCount * 2;
    metrics.issues.push({
      type: "filler-words",
      severity: "low",
      count: fillerCount,
      suggestion: "Remove unnecessary filler words"
    });
  }

  return metrics;
}

function extractPhrases(text, wordCount) {
  const words = text.toLowerCase().split(/\s+/);
  const phrases = [];
  for (let i = 0; i <= words.length - wordCount; i++) {
    phrases.push(words.slice(i, i + wordCount).join(' '));
  }
  return phrases;
}

function countOccurrences(items) {
  return items.reduce((counts, item) => {
    counts[item] = (counts[item] || 0) + 1;
    return counts;
  }, {});
}
```
Common Anti-Patterns:
- Vague Instructions
- Ambiguous Terminology
- Conflicting Requirements
- Missing Context
- Over-Specification
Detection Script:

```javascript
function detectAntiPatterns(promptText) {
  const patterns = [];

  // Vague instruction markers
  const vagueMarkers = ["better", "good", "appropriate", "proper", "suitable"];
  vagueMarkers.forEach(marker => {
    if (new RegExp(`\\b${marker}\\b`, 'i').test(promptText)) {
      patterns.push({
        type: "vague-instruction",
        marker: marker,
        severity: "high",
        suggestion: "Replace with specific, measurable criteria"
      });
    }
  });

  // Missing definitions
  const technicalTerms = promptText.match(/\b[A-Z][A-Za-z]*(?:[A-Z][a-z]*)+\b/g) || [];
  const definedTerms = (promptText.match(/\*\*[^*]+\*\*:/g) || []).length;
  if (technicalTerms.length > 5 && definedTerms < technicalTerms.length * 0.3) {
    patterns.push({
      type: "undefined-jargon",
      severity: "medium",
      technicalTermCount: technicalTerms.length,
      definedCount: definedTerms,
      suggestion: "Add definitions for technical terms"
    });
  }

  // Conflicting modal verbs
  const mustStatements = (promptText.match(/\b(must|required|mandatory)\b/gi) || []).length;
  const shouldStatements = (promptText.match(/\b(should|recommended|optional)\b/gi) || []).length;
  if (mustStatements > 10 && shouldStatements > 10) {
    patterns.push({
      type: "requirement-confusion",
      severity: "medium",
      mustCount: mustStatements,
      shouldCount: shouldStatements,
      suggestion: "Separate MUST vs SHOULD requirements clearly"
    });
  }

  return patterns;
}

function analyzeTriggers(triggerText) {
  const issues = [];

  // Check clarity
  if (!triggerText.includes("when") && !triggerText.includes("if")) {
    issues.push({
      type: "unclear-condition",
      severity: "high",
      suggestion: "Use explicit 'when' or 'if' conditions"
    });
  }

  // Check specificity
  const vagueTerms = ["thing", "stuff", "something", "anything"];
  vagueTerms.forEach(term => {
    if (new RegExp(`\\b${term}\\b`, 'i').test(triggerText)) {
      issues.push({
        type: "vague-trigger",
        term: term,
        severity: "high",
        suggestion: "Replace with specific entity or action"
      });
    }
  });

  // Check scope
  if (triggerText.split(/\s+/).length < 5) {
    issues.push({
      type: "too-narrow",
      severity: "medium",
      wordCount: triggerText.split(/\s+/).length,
      suggestion: "Consider broader applicability"
    });
  }

  return issues;
}
```
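A condensed, standalone check of the trigger rules above (same vague-term list; the condition check here uses word boundaries, slightly stricter than the substring test in `analyzeTriggers`):

```javascript
// Condensed standalone version of the trigger checks above.
function triggerIssues(trigger) {
  const issues = [];
  // Clarity: an explicit "when" or "if" condition should be present
  if (!/\b(when|if)\b/i.test(trigger)) issues.push("unclear-condition");
  // Specificity: flag vague placeholder nouns
  for (const term of ["thing", "stuff", "something", "anything"]) {
    if (new RegExp(`\\b${term}\\b`, 'i').test(trigger)) issues.push(`vague-trigger: ${term}`);
  }
  // Scope: very short triggers are usually too narrow
  if (trigger.trim().split(/\s+/).length < 5) issues.push("too-narrow");
  return issues;
}

console.log(triggerIssues("do something"));
```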
Code Analyzer Agent Task:

```bash
# Spawn analyzer agent
# Agent instructions:
#   1. Analyze prompt structure and flow
#   2. Identify optimization opportunities
#   3. Generate before/after comparisons
#   4. Calculate token savings
#   5. Store recommendations in memory
npx claude-flow@alpha memory store --key "optimization/recommendations" --value "{
  \"structural\": [...],
  \"content\": [...],
  \"examples\": [...],
  \"estimated_savings\": \"X tokens (Y%)\"
}"
```
Optimization Report Format:
## Prompt Optimization Report
### Original Metrics
- Total tokens: <count>
- Redundancy score: <0-100>
- Verbosity score: <0-100>
- Anti-patterns found: <count>
### Issues Detected
#### High Severity
1. [Type] <description>
- Location: <section>
- Impact: <token/clarity impact>
- Fix: <recommendation>
#### Medium Severity
...
#### Low Severity
...
### Recommended Changes
#### Structural
- [ ] Reorganize sections for logical flow
- [ ] Consolidate redundant examples
- [ ] Extract repetitive content to references
#### Content
- [ ] Replace vague terms with specific criteria
- [ ] Add missing definitions
- [ ] Remove filler words (identified: <count>)
#### Examples
- [ ] Reduce example count from <old> to <new>
- [ ] Consolidate similar examples
- [ ] Add missing edge cases
### Estimated Improvements
- Token reduction: <count> tokens (<percentage>%)
- Clarity score: +<points>
- Maintainability: +<points>
### Before/After Comparison
**Before (excerpt):**
<original problematic section>

**After (optimized):**
<optimized version>

**Savings:** <tokens> tokens, <improvement description>
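The "Token reduction" and "Savings" lines can be produced mechanically; a small hypothetical helper (the function name and format are illustrative, not part of the skill's API):

```javascript
// Hypothetical formatter for the report's savings line.
function savings(before, after) {
  const saved = before - after;
  return `${saved} tokens (${((saved / before) * 100).toFixed(1)}%)`;
}

console.log(savings(124, 62));
```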
## Concrete Example: Real Analysis
### Input Prompt (Fragment)
```markdown
You are a code reviewer. Your job is to review code and make sure it's good.
You should look at the code and find problems. When you find problems, you
should tell the user about them. Make sure to check for bugs and also check
for style issues. You should be thorough and careful. Don't miss anything
important. Always be professional and constructive in your feedback. Try to
help the developer improve. Make suggestions that are actually helpful and
not just critical. Be nice but also be honest. Make sure your reviews are
really good and comprehensive.
```

Token Waste Analysis:
```json
{
  "totalWords": 98,
  "totalTokens": 124,
  "redundancyScore": 45,
  "verbosityScore": 30,
  "issues": [
    {
      "type": "redundancy",
      "severity": "high",
      "examples": [
        "make sure (3 occurrences)",
        "you should (4 occurrences)",
        "be [adjective] (3 occurrences)"
      ]
    },
    {
      "type": "vague-instructions",
      "severity": "high",
      "examples": [
        "good", "thorough", "careful", "helpful",
        "important", "professional", "constructive"
      ]
    },
    {
      "type": "filler-words",
      "severity": "medium",
      "count": 8,
      "examples": ["really", "actually", "just"]
    }
  ]
}
```
Optimization Recommendations:

```markdown
You are a code reviewer analyzing pull requests for quality and correctness.

Review Process:
1. Scan for logic errors, null checks, edge cases
2. Verify style compliance (linting, formatting)
3. Assess test coverage (>80% target)
4. Check documentation completeness

Feedback Format:
- Issue: [category] - [specific finding]
- Impact: [low/medium/high]
- Fix: [concrete suggestion with code example]

Tone: Professional, constructive, solution-focused
```
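A minimal check that a rewrite actually shrank the prompt (the strings below are short stand-ins, not the full before/after examples above):

```javascript
// Compare word counts before and after optimization (stand-in strings).
function wordCount(text) {
  return text.trim().split(/\s+/).length;
}

const original = "You should make sure your reviews are really good and comprehensive.";
const optimized = "Write comprehensive reviews.";
console.log(`Reduced from ${wordCount(original)} to ${wordCount(optimized)} words`);
```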
Results:

```bash
# 1. Analyze new skill
npx claude-flow@alpha hooks pre-task --description "Optimizing new skill prompt"

# 2. Run analysis (spawn analyzer agent)
# Agent performs full analysis as documented above

# 3. Review recommendations
npx claude-flow@alpha memory retrieve --key "optimization/recommendations"

# 4. Apply fixes
# Make recommended changes to skill

# 5. Re-analyze and verify improvements
# Re-run analysis, compare metrics

# 6. Store final metrics
npx claude-flow@alpha hooks post-task --task-id "skill-optimization"
```
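Step 5 ("re-analyze and verify improvements") amounts to comparing the stored metric objects; a sketch, assuming the object shape stored in step 1 of the workflow:

```javascript
// Compare before/after metric objects (shape assumed from the
// optimization/original-tokens payload stored earlier).
function improved(before, after) {
  return after.redundancy_score < before.redundancy_score &&
         after.verbosity_score < before.verbosity_score;
}

console.log(improved(
  { total_tokens: 124, redundancy_score: 45, verbosity_score: 30 },
  { total_tokens: 62, redundancy_score: 10, verbosity_score: 8 }
));
```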
Related Skills:
- when-analyzing-skill-gaps-use-skill-gap-analyzer - Analyze overall library
- when-managing-token-budget-use-token-budget-advisor - Budget planning
- prompt-architect - Advanced prompt engineering