---
name: implementing-database-caching
description: |
  Use when you need to implement multi-tier caching to improve database performance.
  This skill sets up Redis, in-memory caching, and CDN layers to reduce database load.
  Trigger with phrases like "implement database caching", "add Redis cache layer",
  "improve query performance with caching", or "reduce database load".
allowed-tools:
  - Read
  - Write
  - Edit
  - Grep
  - Glob
  - Bash(redis-cli:*, docker:redis:*)
version: 1.0.0
license: MIT
---

## Prerequisites
Before using this skill, ensure:
- Redis server available or ability to deploy Redis container
- Understanding of application data access patterns and hotspots
- Knowledge of which queries/data benefit most from caching
- Monitoring tools to measure cache hit rates and performance
- Development environment for testing caching implementation
- Understanding of cache invalidation requirements for data consistency

## Instructions

### Step 1: Analyze Caching Requirements
- Profile database queries to identify slow or frequently executed queries
- Determine which data is read-heavy vs write-heavy
- Identify data that can tolerate eventual consistency
- Calculate expected cache size and Redis memory requirements
- Document current database load and target performance metrics

### Step 2: Choose Caching Strategy
- **Cache-Aside (Lazy Loading)**: the application checks the cache first and loads from the database on a miss
  - Best for: read-heavy workloads, unpredictable access patterns
  - Pros: only caches requested data; simple to implement
  - Cons: each cache miss incurs a database hit; stale data is possible
- **Write-Through**: the application writes to the cache and database simultaneously
  - Best for: write-heavy workloads needing consistency
  - Pros: cache is always consistent; no stale data
  - Cons: added write latency; rarely-read data is cached unnecessarily
- **Write-Behind (Write-Back)**: the application writes to the cache, which asynchronously writes to the database
  - Best for: high write throughput requirements
  - Pros: low write latency; batched database writes
  - Cons: risk of data loss if the cache fails before flushing; more complex to implement
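As a minimal sketch of the cache-aside pattern above (the class name and the dict-backed cache are illustrative; a production version would wrap a Redis client instead):

```python
import time
from typing import Any, Callable


class CacheAside:
    """Minimal cache-aside wrapper: check the cache first, load from the DB on a miss."""

    def __init__(self, cache: dict, ttl_seconds: float = 300.0):
        self.cache = cache  # maps key -> (value, expires_at); stands in for Redis
        self.ttl = ttl_seconds

    def get(self, key: str, loader: Callable[[], Any]) -> Any:
        entry = self.cache.get(key)
        if entry is not None:
            value, expires_at = entry
            if time.monotonic() < expires_at:
                return value        # cache hit
            del self.cache[key]     # entry expired; drop it and reload
        value = loader()            # cache miss: query the database
        self.cache[key] = (value, time.monotonic() + self.ttl)
        return value
```

Note that the loader runs only on a miss, so repeated reads of a hot key cost a single database query per TTL window.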

### Step 3: Design Cache Architecture
- Set up Redis as distributed cache layer (L2 cache)
- Implement in-memory LRU cache in application (L1 cache)
- Configure CDN for static assets (images, CSS, JS)
- Design a cache key naming convention (e.g., `user:123:profile`)
- Define TTL (Time To Live) for different data types
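A hypothetical key builder illustrating the colon-delimited `entity:id:attribute` convention and per-data-type TTLs (the helper names and TTL values are assumptions for illustration, not a fixed API):

```python
# Illustrative TTLs per data type; tune these to your data's update frequency.
TTL_BY_TYPE = {
    "profile": 3600,   # user profiles change rarely: long TTL
    "session": 900,
    "feed": 60,        # feeds change often: short TTL
}


def cache_key(entity: str, entity_id, attribute: str) -> str:
    """Build a colon-delimited cache key, e.g. user:123:profile."""
    return f"{entity}:{entity_id}:{attribute}"


def ttl_for(attribute: str, default: int = 300) -> int:
    """Look up the TTL for a data type, falling back to a default."""
    return TTL_BY_TYPE.get(attribute, default)
```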

### Step 4: Implement Caching Code
- Add Redis client library to application dependencies
- Create cache wrapper functions (get, set, delete, invalidate)
- Modify database query code to check cache before DB query
- Implement cache population on cache miss
- Add error handling for cache failures (fail gracefully to database)
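The wrapper and fail-gracefully behavior above can be sketched as follows; `client` is assumed to be any object with Redis-like `get`/`set(..., ex=ttl)` methods (as in redis-py), and the function name is illustrative:

```python
import json
import logging
from typing import Any, Callable

log = logging.getLogger("cache")


def cached_query(client, key: str, ttl: int, db_query: Callable[[], Any]) -> Any:
    """Check the cache before the database; fail open to the DB on cache errors."""
    try:
        raw = client.get(key)
        if raw is not None:
            return json.loads(raw)      # cache hit
    except Exception as exc:            # cache down: degrade, don't fail the request
        log.warning("cache read failed for %s: %s", key, exc)
    value = db_query()                  # miss or cache failure: go to the database
    try:
        client.set(key, json.dumps(value), ex=ttl)
    except Exception as exc:            # cache write failure is non-fatal too
        log.warning("cache write failed for %s: %s", key, exc)
    return value
```

The key design choice is that every cache exception falls through to the database, so an unavailable cache degrades performance rather than availability.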

### Step 5: Configure Cache Invalidation
- Implement TTL-based expiration for time-sensitive data
- Add explicit cache invalidation on data updates/deletes
- Use cache tags or patterns for bulk invalidation
- Implement cache warming for critical data after deployments
- Set up cache stampede prevention (lock/queue on miss)
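Explicit invalidation on updates can be as simple as the sketch below (the `db.update` call and function name are stand-ins for your data layer, not part of this skill's shipped examples):

```python
def update_user(client, db, user_id: int, fields: dict) -> None:
    """Write-path invalidation: update the database, then drop the cache entry.

    The next read misses and repopulates the cache with fresh data
    (cache-aside); deleting is safer than overwriting, since a failed
    overwrite could leave a stale value behind.
    """
    db.update("users", user_id, fields)
    client.delete(f"user:{user_id}:profile")
```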

### Step 6: Monitor and Optimize
- Track cache hit rate, miss rate, and eviction rate
- Monitor Redis memory usage and eviction policy
- Analyze query performance improvements
- Adjust TTLs based on data update frequency
- Identify and cache additional hot data
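A minimal hit-rate counter along these lines (class name is illustrative) can feed the dashboards described below:

```python
class CacheMetrics:
    """Track hits and misses to compute the hit rate reported to dashboards."""

    def __init__(self):
        self.hits = 0
        self.misses = 0

    def record(self, hit: bool) -> None:
        if hit:
            self.hits += 1
        else:
            self.misses += 1

    @property
    def hit_rate(self) -> float:
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

A falling hit rate after a deployment is often the first signal that key naming or TTLs regressed.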

## Output
This skill produces:
- **Redis Configuration**: Docker Compose or config files for Redis deployment with appropriate memory and eviction settings
- **Caching Code**: Application code implementing cache-aside, write-through, or write-behind patterns
- **Cache Key Schema**: Documentation of cache key naming conventions and TTL settings
- **Monitoring Dashboards**: Metrics for cache hit rates, memory usage, and performance improvements
- **Cache Invalidation Logic**: Code for explicit and implicit cache invalidation on data changes

## Error Handling

### Cache Connection Failures
- Implement circuit breaker pattern to prevent cascading failures
- Fall back to database when cache is unavailable
- Log cache connection errors for monitoring
- Retry cache connections with exponential backoff
- Consider read-replica or cache cluster for high availability
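The exponential-backoff retry above can be sketched as follows (the function name and parameters are illustrative; the injectable `sleep` exists only to make the sketch testable):

```python
import time


def connect_with_backoff(connect, attempts=5, base_delay=0.1, max_delay=5.0,
                         sleep=time.sleep):
    """Retry a cache connection with exponential backoff.

    Doubles the delay after each failure, caps it at max_delay, and
    re-raises the error once the attempt budget is exhausted so callers
    can fall back to the database.
    """
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts:
                raise
            sleep(delay)
            delay = min(delay * 2, max_delay)
```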
### Cache Stampede
- Implement probabilistic early expiration (PER) for TTLs
- Use distributed locks (Redis SETNX) to prevent concurrent cache population
- Queue cache refresh requests instead of parallel execution
- Add jitter to TTLs to spread expiration times
- Use stale-while-revalidate pattern for acceptable delays
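TTL jitter is the simplest of these mitigations; a sketch (function name and the 10% spread are assumptions, and the injectable `rng` exists only for testability):

```python
import random


def jittered_ttl(base_ttl: float, spread: float = 0.1, rng=random.random) -> float:
    """Spread a TTL by +/- `spread` fraction so hot keys don't all expire at once."""
    return base_ttl * (1 + spread * (2 * rng() - 1))
```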
### Stale Data Issues
- Implement versioning in cache keys (e.g., `user:123:v2`)
- Use cache tags for related data invalidation
- Set aggressive TTLs for frequently changing data
- Implement active cache invalidation on data updates
- Monitor data consistency between cache and database
### Memory Pressure
- Configure Redis eviction policy (allkeys-lru recommended)
- Monitor Redis memory usage and set max memory limits
- Implement tiered caching (hot data in Redis, warm data in DB)
- Reduce TTLs for less critical data
- Scale Redis horizontally with cluster mode
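A minimal `redis.conf` fragment along these lines (the memory limit is illustrative; size it from the Step 1 estimate):

```conf
# Cap memory so Redis evicts instead of growing unbounded
maxmemory 2gb
# Evict least-recently-used keys across the whole keyspace
maxmemory-policy allkeys-lru
```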

## Resources

**Redis Configuration Templates:**

- Docker Compose: `{baseDir}/docker/redis-compose.yml`
- Redis config: `{baseDir}/config/redis.conf`
- Cluster config: `{baseDir}/config/redis-cluster.conf`

**Caching Code Examples** (`{baseDir}/examples/caching/`):
- Cache-aside pattern (Node.js, Python, Java)
- Write-through pattern
- Cache invalidation strategies
- Distributed locking

**Cache Key Design Guide:** `{baseDir}/docs/cache-key-design.md`

**Performance Tuning:** `{baseDir}/docs/cache-performance-tuning.md`

**Monitoring Setup:** `{baseDir}/monitoring/redis-dashboard.json`