Review implemented features and suggest new features using configurable prioritization heuristics. Supports GitHub issue creation for accepted suggestions.
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Modules:

- modules/classification-system.md
- modules/configuration.md
- modules/scoring-framework.md
- modules/tradeoff-dimensions.md

Review currently implemented features and suggest new features using evidence-based prioritization. Features can be uploaded to GitHub as issues after user acceptance.
Core Belief: Feature decisions should be data-driven, not gut-driven. Every feature has tradeoffs that deserve explicit evaluation.
Three Pillars: classification along explicit axes, evidence-based scoring, and per-dimension tradeoff analysis.
- Discover and categorize existing features: `/feature-review --inventory`
- Evaluate features against the prioritization framework: `/feature-review`
- Review gaps and suggest new features: `/feature-review --suggest`
- Create issues for accepted suggestions: `/feature-review --suggest --create-issues`
**Inventory** (todo: `feature-review:inventory-complete`)

Identify implemented features by analyzing:

- Code artifacts
- Documentation
- Git history

Output: Feature inventory table with metadata
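As a concrete illustration, an inventory entry might be represented as below; the exact metadata fields are an assumption for this sketch, not something the skill prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class FeatureRecord:
    """One row of the feature inventory (fields are illustrative assumptions)."""
    name: str                                         # e.g. "Auth middleware"
    sources: list[str] = field(default_factory=list)  # code artifacts implementing it
    docs: list[str] = field(default_factory=list)     # documentation describing it
    last_commit: str = ""                             # latest git commit touching it
    status: str = "Stable"                            # Stable | Needs improvement | ...

# Hypothetical entry; paths and commit hash are made up for illustration
inventory = [
    FeatureRecord(
        name="Auth middleware",
        sources=["src/middleware/auth.py"],
        docs=["docs/auth.md"],
        last_commit="a1b2c3d",
    ),
]
```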
**Classification** (todo: `feature-review:classified`)

Classify each feature along two axes:
Axis 1: Proactive vs Reactive
| Type | Definition | Latency Tolerance | Examples |
|---|---|---|---|
| Proactive | Anticipates user needs | Higher (background OK) | Suggestions, prefetching, auto-saves |
| Reactive | Responds to explicit input | Low (must feel instant) | Form handling, click actions, validation |
Axis 2: Static vs Dynamic
| Type | Update Pattern | Storage Model | Lookup Cost |
|---|---|---|---|
| Static | Incremental, versioned | File-based, cached | O(1), deterministic |
| Dynamic | Continuous, streaming | Database, real-time | O(log n), variable |
See classification-system.md for details.
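A minimal sketch of how the two axes could be represented in code (the enum names are assumptions; classification-system.md defines the actual criteria):

```python
from enum import Enum

class FeatureType(Enum):
    PROACTIVE = "proactive"  # anticipates user needs; background latency is acceptable
    REACTIVE = "reactive"    # responds to explicit input; must feel instant

class DataModel(Enum):
    STATIC = "static"    # incremental, versioned, file-based, O(1) lookup
    DYNAMIC = "dynamic"  # continuous, streaming, database-backed, O(log n) lookup

# Classifications matching the example inventory table later in this document
classification = {
    "Auth middleware": (FeatureType.REACTIVE, DataModel.DYNAMIC),
    "Skill loader": (FeatureType.REACTIVE, DataModel.STATIC),
    "Auto-suggestions": (FeatureType.PROACTIVE, DataModel.DYNAMIC),
}
```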
**Scoring** (todo: `feature-review:scored`)

Apply hybrid RICE+WSJF scoring:
Feature Score = Value Score / Cost Score
Value Score = (Reach + Impact + Business Value + Time Criticality) / 4
Cost Score = (Effort + Risk + Complexity) / 3
Adjusted Score = Feature Score * Confidence
Scoring Scale: Fibonacci (1, 2, 3, 5, 8, 13)
Thresholds: above 2.5 is high priority (implement soon), 1.5-2.5 is medium priority (roadmap), and below 1.5 goes to the backlog.
See scoring-framework.md for full framework.
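The formulas above map directly to code. Here is a minimal sketch using the unweighted averages as written (the configurable weights appear later in the configuration section; treating confidence as a 0-1 fraction is an assumption):

```python
FIBONACCI = {1, 2, 3, 5, 8, 13}

def feature_score(reach, impact, business_value, time_criticality,
                  effort, risk, complexity, confidence=1.0):
    """Hybrid RICE+WSJF score: (Value / Cost) scaled by Confidence."""
    inputs = (reach, impact, business_value, time_criticality, effort, risk, complexity)
    if any(x not in FIBONACCI for x in inputs):
        raise ValueError("inputs must use the Fibonacci scale: 1, 2, 3, 5, 8, 13")
    value = (reach + impact + business_value + time_criticality) / 4
    cost = (effort + risk + complexity) / 3
    return (value / cost) * confidence

# Example: broad reach and impact, moderate effort, low risk
score = feature_score(reach=8, impact=8, business_value=5, time_criticality=3,
                      effort=3, risk=2, complexity=3, confidence=0.9)
# value = 6.0, cost ≈ 2.67, adjusted score ≈ 2.03 -> medium priority (1.5-2.5)
```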
**Tradeoff Analysis** (todo: `feature-review:tradeoffs-analyzed`)

Evaluate each feature across quality dimensions:
| Dimension | Question | Scale |
|---|---|---|
| Quality | Does it deliver correct results? | 1-5 |
| Latency | Does it meet timing requirements? | 1-5 |
| Token Usage | Is it context-efficient? | 1-5 |
| Resource Usage | Is CPU/memory reasonable? | 1-5 |
| Redundancy | Does it handle failures gracefully? | 1-5 |
| Readability | Can others understand it? | 1-5 |
| Scalability | Will it handle 10x load? | 1-5 |
| Integration | Does it play well with others? | 1-5 |
| API Surface | Is it backward compatible? | 1-5 |
See tradeoff-dimensions.md for evaluation criteria.
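How the nine ratings roll up into a single number is left to tradeoff-dimensions.md; one plausible sketch is a weighted average driven by the per-dimension weights from `.feature-review.yaml` (the aggregation method is an assumption):

```python
def tradeoff_score(ratings: dict[str, int], weights: dict[str, float]) -> float:
    """Weighted average of 1-5 dimension ratings; a weight of 0.0 drops a dimension."""
    total = sum(weights.get(dim, 1.0) * rating for dim, rating in ratings.items())
    weight_sum = sum(weights.get(dim, 1.0) for dim in ratings)
    return total / weight_sum if weight_sum else 0.0

ratings = {"quality": 4, "latency": 5, "token_usage": 3, "redundancy": 2}
weights = {"quality": 1.0, "latency": 1.0, "token_usage": 1.0, "redundancy": 0.5}
print(round(tradeoff_score(ratings, weights), 2))  # (4 + 5 + 3 + 1) / 3.5 = 3.71
```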
**Suggestions** (todo: `feature-review:suggestions-generated`)

Generate new feature suggestions based on the feature inventory, identified gaps, and scores.
**Issue Creation** (todo: `feature-review:issues-created`)

Create a GitHub issue for each accepted suggestion (when run with --create-issues).
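One way this step could be wired up is through the GitHub CLI; the sketch below is an assumption about the mechanism, but `gh issue create` with `--title`, `--body`, and `--label` are standard gh flags, and the label values mirror the defaults in the configuration section below.

```python
import subprocess

def create_issue(title: str, body: str, priority: str,
                 label_prefix: str = "priority/",
                 default_labels: tuple[str, ...] = ("enhancement",)) -> None:
    """Open a GitHub issue for an accepted suggestion via the gh CLI."""
    labels = [*default_labels, f"{label_prefix}{priority}"]  # e.g. priority/high
    cmd = ["gh", "issue", "create", "--title", title, "--body", body]
    for label in labels:
        cmd += ["--label", label]
    subprocess.run(cmd, check=True)

# Hypothetical accepted suggestion drawn from the example output below
create_issue(
    title="Auto-suggestions",
    body="## Feature Request\n**Classification:** Proactive/Dynamic\n**Priority Score:** 1.8",
    priority="medium",
)
```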
Feature-review uses opinionated defaults but allows project customization.
Create `.feature-review.yaml` in the project root to customize:

```yaml
# .feature-review.yaml
version: 1

# Scoring weights (must sum to 1.0 within category)
weights:
  value:
    reach: 0.25
    impact: 0.30
    business_value: 0.25
    time_criticality: 0.20
  cost:
    effort: 0.40
    risk: 0.30
    complexity: 0.30

# Score thresholds
thresholds:
  high_priority: 2.5    # > this = implement soon
  medium_priority: 1.5  # > this = roadmap
                        # below medium = backlog

# Classification defaults
classification:
  default_type: reactive  # proactive | reactive
  default_data: static    # static | dynamic

# Tradeoff dimension weights (0.0 to disable, 1.0 = normal)
tradeoffs:
  quality: 1.0
  latency: 1.0
  token_usage: 1.0
  resource_usage: 0.8
  redundancy: 0.5
  readability: 1.0
  scalability: 0.8
  integration: 1.0
  api_surface: 1.0

# GitHub integration
github:
  auto_label: true
  label_prefix: "priority/"
  default_labels:
    - enhancement
  issue_template: |
    ## Feature Request

    **Classification:** {{ classification }}
    **Priority Score:** {{ score }}

    ### Description
    {{ description }}

    ### Value Proposition
    {{ value_proposition }}

    ### Tradeoff Considerations
    {{ tradeoffs }}
```
See configuration.md for all options.
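A minimal sketch of loading and sanity-checking this file, assuming PyYAML; the weight check mirrors the "must sum to 1.0 within category" comment above.

```python
import math
from pathlib import Path

import yaml  # PyYAML

def load_config(root: Path = Path(".")) -> dict:
    """Load .feature-review.yaml, falling back to defaults when it is absent."""
    path = root / ".feature-review.yaml"
    if not path.exists():
        return {}  # the skill's opinionated defaults apply
    config = yaml.safe_load(path.read_text())
    for category, weights in config.get("weights", {}).items():
        if not math.isclose(sum(weights.values()), 1.0, abs_tol=1e-6):
            raise ValueError(f"{category} weights must sum to 1.0")
    return config
```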
These rules apply regardless of configuration:
When running feature-review, create these todos:
- `feature-review:inventory-complete`
- `feature-review:classified`
- `feature-review:scored`
- `feature-review:tradeoffs-analyzed`
- `feature-review:suggestions-generated`
- `feature-review:issues-created` (if --create-issues)

Feature-review feeds scope-guard decisions:
When fixing issues, check feature-review scores:
During brainstorming, invoke feature-review to:
## Feature Inventory

| Feature | Type | Data | Score | Priority | Status |
|---------|------|------|-------|----------|--------|
| Auth middleware | Reactive | Dynamic | 2.8 | High | Stable |
| Skill loader | Reactive | Static | 2.3 | Medium | Needs improvement |
| Auto-suggestions | Proactive | Dynamic | 1.8 | Medium | New opportunity |
## Feature Suggestions
### High Priority (Score > 2.5)
1. **[Feature Name]** (Score: 2.7)
- Classification: Proactive/Dynamic
- Value: High reach, addresses user pain point
- Cost: Moderate effort, low risk
- Recommendation: Build in next sprint
### Medium Priority (Score 1.5-2.5)
...
### Backlog (Score < 1.5)
...
Related skills:

- `imbue:scope-guard` - Prevents overengineering during implementation
- `imbue:review-core` - Structured review methodology
- `sanctum:pr-review` - Code-level feature review