Invoke this skill first when authoring any AI agent instructions. Foundational principles for writing LLM instructions (skills, CLAUDE.md, rules, commands). Covers token economics, imperative language, formatting for LLM parsing, emphasis modifiers, terminology consistency, and common anti-patterns.
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Core principles for writing instructions that LLMs execute efficiently and deterministically. Apply these principles across all instruction types: skills, CLAUDE.md files, rules, and slash commands.
The context window is a shared resource. Challenge each piece of information: "Does Claude already know this?" Only add context Claude lacks. Assume Claude is already very smart.
Remove decorative language. No "please", "remember", "make sure", "it's important".
Use examples over explanations. One concrete before/after example teaches more than three paragraphs of description.
Prefer tables for structured data. Compress related information into scannable format.
Match specificity to task requirements.
High freedom (text instructions): use when multiple approaches are valid.
Medium freedom (pseudocode, parameterized scripts): use when a preferred pattern exists.
Low freedom (exact scripts, no parameters): use when operations are fragile or consistency is critical.
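The same task expressed at each freedom level (the tool names here are illustrative, not prescribed):

```markdown
High freedom:   Format modified files before committing.
Medium freedom: Format staged files with the project formatter, passing the file list explicitly.
Low freedom:    Run exactly: npx prettier --write $(git diff --cached --name-only)
```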
Every instruction must have exactly one interpretation.
Use explicit constraints, not suggestions. "Run pytest tests/ --strict-markers" not "Run tests with strict markers when appropriate."
Specify conditions completely. "Validate input at API boundaries" not "Validate input."
Eliminate hedge words: "consider", "try to", "when possible", "generally", "often".
Include all flags and arguments. `dotnet test --logger "console;verbosity=detailed"`, not `dotnet test`.
Use absolute paths or precisely scoped paths. `/src/api/` or `src/**/*.ts`, not "the API code."
Specify tool versions when behavior differs. "Node.js 20+: use native fetch" not "Use fetch."
Use imperative form only. "Validate at boundaries" not "You should validate at boundaries."
Write direct commands in imperative mood.
Good: "Validate input at API boundaries" Avoid: "You should consider validating input"
State what to do rather than what not to do whenever a positive phrasing exists.
Good: "Let exceptions propagate" Avoid: "Do not catch exceptions unnecessarily"
Be specific in prohibitions and requirements.
Good: "Do not implement retry logic in background jobs" Avoid: "Avoid defensive patterns"
Use markdown structure that aids LLM understanding.
Headings: Establish context and scope. Content under a heading applies to that domain only.
Lists: Use only for discrete, parallel, independent items. Use prose when relationships between ideas matter.
Code blocks: For exact values, commands, identifiers, and patterns only.
Tables: For structured comparisons, reference data, or multi-dimensional information.
Bold/Italic: Use sparingly. If more than 10% of text is emphasized, nothing is emphasized.
White space: Use blank lines between paragraphs and sections for clarity. Aids parsing.
Tables: Structured data with categories - types, priorities, mappings, decision matrices.
Lists: Discrete, parallel items - required packages, file paths, command flags.
Prose: Relationships and context - when to use one approach vs another, why a constraint exists.
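For example, deployment mappings (hypothetical branches and commands) compress naturally into a table, where prose or a flat list would force the reader to reassemble the pairs:

```markdown
| Environment | Branch  | Command                  |
|-------------|---------|--------------------------|
| staging     | develop | `./deploy.sh staging`    |
| production  | main    | `./deploy.sh production` |
```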
Use MUST, MUST NOT, REQUIRED only for hard constraints where violation causes failure.
Do not use modifiers for preferences or defaults. If every instruction uses MUST, none stand out.
Bold only for hard constraints where violation causes failure. Avoid over-emphasis.
Choose one term per concept and use it throughout.
Good: Always "API endpoint" Bad: Mix "API endpoint", "URL", "route", "path"
Place critical constraints first. Most important information at top of file.
Use progressive specificity. Global rules first, then domain-specific, then file-specific.
Separate concerns cleanly. One section per topic. Do not mix testing rules with deployment procedures.
End sections decisively. No trailing "etc." or "and more."
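A skeleton illustrating this ordering, from critical constraint to global, domain-specific, and file-specific rules (section names are examples only):

```markdown
# CLAUDE.md

Never commit directly to main.

## Global rules
...

## Testing
...

## src/api/ conventions
...
```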
Suggestion language: "consider", "you might want to", "it may be helpful to". State the required action directly.
Vague quantifiers: "some tests", "several retries", "a few examples". Give exact counts or scopes.
Ambiguous conditionals: "if needed", "when appropriate", "where applicable". Specify the triggering condition.
Multiple options without a default: "Use npm or yarn." Name one default; list alternatives only with selection criteria.
Burying critical constraints: hard requirements placed mid-paragraph or at the end of a long section. Put them first.
Over-emphasis: bold or MUST on routine guidance. Reserve emphasis for constraints whose violation causes failure.
Lists as default: bullets for content whose items depend on each other. Use prose when relationships matter.
Repeating framework documentation: restating what official docs already cover. Claude already knows standard APIs.
Generic best practices: "write clean code", "handle errors properly". Adds tokens, changes nothing.
Time-sensitive information: "the new React version", "as of 2024". Goes stale; state version numbers explicitly.
Decorative content: Welcome messages, motivational statements, background history.
Hypothetical scenarios: "If we ever migrate to Postgres..." Address when actual, not hypothetical.
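A before/after applying several of these fixes at once (the command and coverage threshold are hypothetical):

```markdown
Before: Please remember to consider running the tests when appropriate,
        and try to keep coverage reasonably high.

After:  Run `pytest tests/ --cov=src` before every commit.
        Coverage below 80% fails CI.
```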
Remove filler: "Make sure you always remember to run pytest before committing" becomes "Run pytest before committing".
Combine related instructions: merge overlapping rules into a single precise statement.
Full specification:
## Commands
- `pytest tests/ --strict-markers --cov=src --cov-report=html`: Run tests with coverage
- `dotnet build --configuration Release --no-restore`: Production build
- `npm run lint -- --fix`: Auto-fix linting issues
Not:
## Commands
Run pytest to test. Use dotnet build for building. Lint with npm.