Extract and remember knowledge across sessions
/plugin marketplace add mironmax/claude-plugins-marketplace
/plugin install mironmax-memory-memory-plugin@mironmax/claude-plugins-marketplace
This skill inherits all available tools. When active, it can use any tool Claude has access to.
The knowledge graph captures patterns, insights, and relationships worth remembering, prioritizing deep learning over trivial facts. Each entry should be atomic and linkable. The goal: maximum recovered insight per added symbol, with special value on meta-level insights that speed up future work.
Two entry types:
Nodes — Named concepts, patterns, or insights
{
"id": "silent-dependency-pattern",
"gist": "hidden load-order dependencies that fail late",
"touches": ["config.py", "db/init.py"],
"notes": ["seen three times, worth a lint rule?"]
}
Edges — Relationships between things (files, concepts, nodes)
{
"from": "config.py",
"to": "db/init.py",
"rel": "must-load-before",
"notes": ["discovered during cold-start debugging"]
}
Use short descriptive kebab-case for id and rel. Reference artifacts directly by path or path:line — no need to wrap them in nodes.
{
"from": "stage2-profile-building",
"to": "ARCHITECTURE.md:157-226",
"rel": "defined-in",
"notes": ["multi source enrichment, creating detailed profiles"]
}
Not every fact needs remembering; sometimes a pointer like path:line to the artifact is enough.
The touches field is for light, tentative references — when you sense relevance but the relationship isn't crisp enough to be an edge yet.
The notes field holds caveats, rationale, open questions, or any other context. Optional on both edges and nodes.
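The node and edge shapes above map naturally onto two lookup tables. A minimal sketch of that data model, assuming an in-memory store (the real plugin keeps this behind an MCP server; only the field names come from the JSON examples above, everything else is illustrative):

```python
# Minimal in-memory sketch of the graph model described above.
# Field names (id, gist, touches, notes / from, to, rel) follow the
# JSON examples; the storage layout is an assumption for illustration.

nodes = {}   # id -> node dict
edges = {}   # (from, to, rel) -> edge dict

def put_node(id, gist, touches=None, notes=None):
    nodes[id] = {"id": id, "gist": gist,
                 "touches": touches or [], "notes": notes or []}

def put_edge(from_, to, rel, notes=None):
    # from/to may be node ids or artifact paths like "config.py"
    edges[(from_, to, rel)] = {"from": from_, "to": to, "rel": rel,
                               "notes": notes or []}

def delete_node(id):
    # mirrors kg_delete_node: removes the node and all connected edges
    nodes.pop(id, None)
    for key in [k for k in edges if id in (k[0], k[1])]:
        del edges[key]

put_node("silent-dependency-pattern",
         "hidden load-order dependencies that fail late",
         touches=["config.py", "db/init.py"])
put_edge("config.py", "db/init.py", "must-load-before")
```

Note that artifact paths appear only as edge endpoints, never as nodes, matching the "no need to wrap them in nodes" rule above.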
Compress the meaning: use entries as short as possible while retaining maximum information, in a way humans can reconstruct fluently. Maximum recovered insight per added symbol.
Capture at two levels: tactical (project-level) and strategic (user-level; prioritize these).
Test: "Would this help me avoid a similar mistake/inefficiency/pitfall in a different situation next time?" → Capture it at user level.
Compression rule still applies, but favor depth over breadth.
Better: one architectural generalisation.
Worse: ten tactical file-path fixes.
kg_read()
Returns both user and project graphs. Active nodes only.
→ {"user": {"nodes": [...], "edges": [...]}, "project": {...}}
kg_sync(session_id)
Returns changes since session start, excluding your own writes.
→ {"since_ts": 1234567890, "changes": {...}, "total_changes": 5}
kg_put_node(level, id, gist, touches?, notes?, session_id?)
Add or update a node.
level: "user" or "project"
id: kebab-case identifier
gist: the insight itself
touches: optional list of related nodes
notes: optional context, caveats, rationale
kg_put_edge(level, from, to, rel, notes?, session_id?)
Add or update an edge.
from/to: node IDs or artifact paths
rel: relationship type (kebab-case)
kg_delete_node(level, id)
Removes node and all connected edges.
kg_delete_edge(level, from, to, rel)
Removes a specific edge.
kg_register_session()
Register for sync tracking. Returns your session_id.
kg_recall(level, id)
Reads an archived node and restores it to active context.
kg_ping()
Health check. Returns node/edge counts and active sessions.
The graph automatically manages its size to fit context windows.
Archival is evaluated on kg_read() or kg_sync(). Nodes updated within 7 days are protected and never archived, regardless of score.
For older nodes, percentile ranking across three dimensions:
Final score = recency_pct × connectedness_pct × richness_pct
Lowest scores archived first.
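The scoring rule above can be sketched directly. The product-of-percentiles score and lowest-first ordering come from the text; the raw dimension values and the percentile method (simple rank / n, no tie handling) are assumptions:

```python
# Sketch of the archival scoring described above. Only the
# "final score = recency_pct x connectedness_pct x richness_pct"
# shape and lowest-first archival are taken from the doc; the
# percentile computation is an illustrative assumption.

def percentile_ranks(values):
    # fraction of entries each value is >= (simple rank / n)
    n = len(values)
    return [sum(1 for w in values if w <= v) / n for v in values]

def archival_order(node_stats):
    """node_stats: list of (id, recency, connectedness, richness).
    Returns ids sorted lowest-score-first (archived first)."""
    rec = percentile_ranks([s[1] for s in node_stats])
    con = percentile_ranks([s[2] for s in node_stats])
    rich = percentile_ranks([s[3] for s in node_stats])
    scored = [(r * c * h, s[0])
              for s, r, c, h in zip(node_stats, rec, con, rich)]
    return [nid for _, nid in sorted(scored)]
```

With this shape, a node weak on all three dimensions scores far below one that is weak on only one, which is why well-connected or richly annotated nodes survive longer.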
When a node is archived, edges pointing to it from active nodes remain visible. You'll see relationships like:
active-node → archived-node-id (relationship)
This is intentional — it hints that relevant knowledge exists. When you encounter a memory trace that might be relevant to your current task:
Use kg_recall(level, id) to bring it back. This lets you "drill down" into dusty knowledge when you need deeper context.
Nodes stay active through recent updates and connections: if you need to preserve a node, update it occasionally or connect it to active knowledge.
All sessions share the same MCP server. Changes propagate between sessions with each write and sync (eventual consistency).
Pull updates with kg_sync(session_id). Conflicts resolve as last-write-wins, mitigated by syncing before you write.
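A hedged sketch of the last-write-wins behavior, simulated with timestamped writes (the (gist, timestamp) representation and the merge function are illustrative assumptions; only "last write wins" comes from the text):

```python
# Simulates last-write-wins merging of node writes from concurrent
# sessions. This is not the server's actual implementation, just an
# illustration of the conflict rule described above.

def merge_writes(*session_writes):
    """Each argument: dict of node_id -> (gist, timestamp).
    On conflict, the write with the later timestamp wins."""
    merged = {}
    for writes in session_writes:
        for node_id, (gist, ts) in writes.items():
            if node_id not in merged or ts > merged[node_id][1]:
                merged[node_id] = (gist, ts)
    return merged

session_a = {"config-load-order": ("old gist", 100)}
session_b = {"config-load-order": ("newer gist", 200)}
```

Because only the timestamp decides, an earlier session's refinement can be silently overwritten, which is exactly why the doc recommends syncing before you write.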
When spawning subagents/tasks that need domain context:
Subagent writes become visible to the parent through the shared server (eventually). After a subagent completes, the parent can call kg_sync to see its discoveries.
kg_put_node(
level="project",
id="config-load-order",
gist="Config must load before DB init or connections fail silently",
touches=["config.py", "db/init.py"],
notes=["Discovered debugging cold-start issue, took 2 hours"]
)
kg_put_edge(
level="project",
from="config.py",
to="db/init.py",
rel="must-load-before"
)
You see an edge: auth-module → old-security-decision (influenced-by)
kg_recall(level="project", id="old-security-decision")
→ {"recalled": true, "node": {"id": "old-security-decision", "gist": "..."}}
Now you have context for why auth works the way it does.
When capturing, prioritize by abstraction level:
Level 1: Facts (low priority)
Level 2: Patterns (medium priority)
Level 3: Principles (high priority)
Level 4: Meta-patterns (highest priority)
The graph should accumulate wisdom, not just facts: insight that is useful and practical for future work.
Capture immediately — Don't defer to end of session. Context is freshest at discovery.
Prefer edges — Connect existing things rather than creating new nodes.
Be terse — Maximum insight per symbol. Short gists, minimal notes.
Level consciously — User for personal wisdom, project for team knowledge.
Sync before push — In collaborative scenarios, pull updates first.
Follow memory traces — When you see edges to archived nodes, consider, from the node's id, whether that context matters for your current task.