Use HtmlGraph analytics to make smart work prioritization decisions. Activate when recommending work, finding bottlenecks, assessing risks, or analyzing project impact.
/plugin marketplace add Shakes-tzd/htmlgraph
/plugin install htmlgraph@htmlgraph
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Trigger keywords:
Trigger situations:
HtmlGraph provides analytics that consider:
from htmlgraph import SDK

sdk = SDK(agent="claude")

# 1. What's blocking progress?
bottlenecks = sdk.find_bottlenecks(top_n=3)
if bottlenecks:
    print("🚧 BOTTLENECKS (fix these first):")
    for bn in bottlenecks:
        print(f"  {bn['id']}: {bn['title']}")
        print(f"    Blocks {bn['blocks_count']} downstream tasks")

# 2. What should I work on?
recs = sdk.recommend_next_work(agent_count=1)
if recs:
    best = recs[0]
    print(f"\n💡 RECOMMENDED: {best['title']}")
    print(f"   Score: {best['score']:.1f}")
    print(f"   Reasons: {', '.join(best['reasons'])}")

# 3. Can we parallelize?
parallel = sdk.get_parallel_work(max_agents=3)
print(f"\n⚡ Parallel capacity: {parallel['max_parallelism']} agents")
print(f"   Ready now: {len(parallel['ready_now'])} tasks")

# 4. Any risks to watch?
risks = sdk.assess_risks()
if risks['high_risk_count'] > 0:
    print(f"\n⚠️ {risks['high_risk_count']} high-risk items")
sdk.find_bottlenecks(top_n=5)
Find tasks that block the most downstream work.
bottlenecks = sdk.find_bottlenecks(top_n=3)
# Returns list of:
{
    "id": "feat-001",
    "title": "Database Schema",
    "status": "todo",
    "priority": "high",
    "blocks_count": 5,  # How many tasks it blocks
    "blocks": ["feat-002", "feat-003", ...]  # Which tasks
}
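For example, a minimal sketch (assuming the return shape above) that pairs each bottleneck with its transitive reach via analyze_impact, described further below:

from htmlgraph import SDK

sdk = SDK(agent="claude")

# Sketch only: rank bottlenecks by how much downstream work they hold up.
# Assumes the find_bottlenecks and analyze_impact return shapes shown in this skill.
for bn in sdk.find_bottlenecks(top_n=5):
    impact = sdk.analyze_impact(bn['id'])
    print(f"{bn['id']} ({bn['status']}): blocks {bn['blocks_count']} directly, "
          f"{impact['transitive_impact']} transitively")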
Use when:
sdk.recommend_next_work(agent_count=1)
Get scored recommendations considering all factors.
recs = sdk.recommend_next_work(agent_count=3)
# Returns list of:
{
    "id": "feat-001",
    "title": "Authentication",
    "score": 85.5,
    "reasons": [
        "high_priority",
        "unblocks_many",
        "no_dependencies"
    ],
    "priority": "high",
    "status": "todo",
    "blocks_count": 3
}
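A minimal sketch (assuming the return shape above) that turns the scored list into one assignment per agent:

from htmlgraph import SDK

sdk = SDK(agent="claude")

# Sketch only: one recommendation per agent, with the scoring reasons surfaced.
recs = sdk.recommend_next_work(agent_count=3)
for i, rec in enumerate(recs, start=1):
    print(f"Agent {i}: {rec['title']} (score {rec['score']:.1f})")
    print(f"  Reasons: {', '.join(rec['reasons'])}")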
Scoring factors:
sdk.get_parallel_work(max_agents=5)
Find tasks that can run concurrently.
parallel = sdk.get_parallel_work(max_agents=5)
# Returns:
{
    "max_parallelism": 4,            # How many can run at once
    "ready_now": ["f1", "f2", ...],  # Level 0 (no deps)
    "blocked": ["f3", "f4", ...],    # Waiting on deps
    "dependency_levels": [           # Topological layers
        ["f1", "f2"],                # Level 0: no deps
        ["f3"],                      # Level 1: depends on Level 0
        ["f4", "f5"]                 # Level 2: depends on Level 1
    ]
}
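A minimal sketch (assuming the dependency_levels structure above) that walks the topological layers in order, so no task starts before its dependencies:

from htmlgraph import SDK

sdk = SDK(agent="claude")

# Sketch only: schedule work level by level; tasks within a level can run concurrently.
parallel = sdk.get_parallel_work(max_agents=5)
for level, tasks in enumerate(parallel['dependency_levels']):
    print(f"Level {level}: {len(tasks)} task(s) can run in parallel")
    for task_id in tasks:
        print(f"  - {task_id}")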
Use when:
sdk.assess_risks()
Check for project health issues.
risks = sdk.assess_risks()
# Returns:
{
    "high_risk_count": 2,
    "circular_dependencies": [],   # Cycles in dep graph
    "single_points_of_failure": [  # Tasks blocking many
        {"id": "feat-001", "blocks": 5}
    ],
    "stale_in_progress": [         # Stuck tasks
        {"id": "feat-002", "days_stale": 7}
    ]
}
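A minimal sketch (assuming the return shape above) that reports each risk category separately:

from htmlgraph import SDK

sdk = SDK(agent="claude")

# Sketch only: surface each risk category with enough detail to act on it.
risks = sdk.assess_risks()
if risks['circular_dependencies']:
    print(f"Circular dependencies: {risks['circular_dependencies']}")
for spof in risks['single_points_of_failure']:
    print(f"Single point of failure: {spof['id']} blocks {spof['blocks']} tasks")
for stale in risks['stale_in_progress']:
    print(f"Stale for {stale['days_stale']} days: {stale['id']}")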
Use when:
sdk.analyze_impact(feature_id)
Understand what completing a task unlocks.
impact = sdk.analyze_impact("feat-001")
# Returns:
{
    "unlocks_count": 3,
    "unlocks": ["feat-002", "feat-003", "feat-004"],
    "transitive_impact": 7,  # Total downstream tasks
    "critical_path": True    # On longest dependency chain
}
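A minimal sketch (assuming the return shape above, with hypothetical feature IDs) that picks the highest-leverage task from a set of candidates:

from htmlgraph import SDK

sdk = SDK(agent="claude")

# Sketch only: rank candidate tasks by transitive impact. IDs are hypothetical.
candidates = ["feat-001", "feat-002", "feat-003"]
impacts = {fid: sdk.analyze_impact(fid) for fid in candidates}
best = max(candidates, key=lambda fid: impacts[fid]['transitive_impact'])
print(f"Highest leverage: {best} "
      f"(affects {impacts[best]['transitive_impact']} downstream tasks)")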
Use when:
sdk = SDK(agent="claude")

# Quick context
info = sdk.get_session_start_info()
print("📊 Project Status:")
print(f"  In-progress: {info['status']['wip_count']}")
print(f"  Bottlenecks: {len(info['bottlenecks'])}")
print(f"  Parallel capacity: {info['parallel']['max_parallelism']}")

# What to work on
if info['recommendations']:
    rec = info['recommendations'][0]
    print(f"\n💡 Start with: {rec['title']}")
# Find what's causing the block
bottlenecks = sdk.find_bottlenecks(top_n=5)
for bn in bottlenecks:
    if bn['status'] == 'todo':
        print(f"🎯 Unblock by completing: {bn['title']}")
        print(f"   This will enable {bn['blocks_count']} tasks")
        break
# Check if parallelization makes sense
parallel = sdk.get_parallel_work(max_agents=3)
risks = sdk.assess_risks()

if parallel['max_parallelism'] >= 2 and risks['high_risk_count'] == 0:
    print("✅ Safe to parallelize")
    print(f"   Dispatch up to {parallel['max_parallelism']} agents")

    # Get recommendations for each agent
    recs = sdk.recommend_next_work(agent_count=parallel['max_parallelism'])
    for i, rec in enumerate(recs):
        print(f"   Agent {i+1}: {rec['title']}")
else:
    print("⚠️ Sequential execution recommended")
    if risks['high_risk_count'] > 0:
        print(f"   Reason: {risks['high_risk_count']} high-risk items")
# Compare two potential tasks
task_a = "feat-001"
task_b = "feat-002"

impact_a = sdk.analyze_impact(task_a)
impact_b = sdk.analyze_impact(task_b)

print(f"Task A unlocks: {impact_a['unlocks_count']} tasks")
print(f"Task B unlocks: {impact_b['unlocks_count']} tasks")

if impact_a['transitive_impact'] > impact_b['transitive_impact']:
    print("💡 Prioritize Task A (higher leverage)")
else:
    print("💡 Prioritize Task B (higher leverage)")
The sdk.smart_plan() method combines these analytics:
plan = sdk.smart_plan(
    description="Real-time collaboration",
    create_spike=True,
    timebox_hours=4
)
# Returns context with:
# - bottlenecks_count
# - high_risk_count
# - parallel_capacity
# - Created spike for research
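If that context is exposed as dictionary keys matching the names above (an assumption; check the actual return type in your version), reading it might look like:

# Sketch only: assumes smart_plan() returns a dict-like context with these keys.
plan = sdk.smart_plan(
    description="Real-time collaboration",
    create_spike=True,
    timebox_hours=4
)
print(f"Bottlenecks: {plan['bottlenecks_count']}")
print(f"High-risk items: {plan['high_risk_count']}")
print(f"Parallel capacity: {plan['parallel_capacity']}")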
from htmlgraph import SDK
sdk = SDK(agent="claude")
# What's blocking us?
sdk.find_bottlenecks(top_n=5)
# What should I do?
sdk.recommend_next_work(agent_count=1)
# Can we parallelize?
sdk.get_parallel_work(max_agents=5)
# Any risks?
sdk.assess_risks()
# What does this unlock?
sdk.analyze_impact("feat-id")
# All-in-one session start
sdk.get_session_start_info()
# Smart planning
sdk.smart_plan("description")