Result backend configuration patterns for Celery including Redis, Database, and RPC backends with serialization, expiration policies, and performance optimization. Use when configuring result storage, troubleshooting result persistence, implementing custom serializers, migrating between backends, optimizing result expiration, or when user mentions result backends, task results, Redis backend, PostgreSQL results, result serialization, or backend migration.
This skill is limited to using specific tools.

Additional assets for this skill:

- examples/postgresql-backend.md
- examples/redis-backend-setup.md
- examples/result-expiration-policies.md
- scripts/migrate-backend.sh
- scripts/test-backend.sh
- templates/custom-serializers.py
- templates/db-backend.py
- templates/redis-backend.py
- templates/result-expiration.py
- templates/rpc-backend.py

Purpose: Configure and optimize Celery result backends for reliable task result storage and retrieval.
Activation Triggers:

- Configuring result storage
- Troubleshooting result persistence
- Implementing custom serializers
- Migrating between backends
- Optimizing result expiration
Key Resources:
- templates/redis-backend.py - Redis result backend configuration
- templates/db-backend.py - Database (SQLAlchemy) backend setup
- templates/rpc-backend.py - RPC (AMQP) backend configuration
- templates/result-expiration.py - Expiration and cleanup policies
- templates/custom-serializers.py - Custom serialization patterns
- scripts/test-backend.sh - Backend connection and functionality testing
- scripts/migrate-backend.sh - Safe backend migration with data preservation
- examples/ - Complete setup guides for each backend type

Redis Backend

Best for:
Characteristics:
Use template: templates/redis-backend.py
Database Backend

Best for:
Characteristics:
Use template: templates/db-backend.py
RPC Backend

Best for:
Characteristics:
Use template: templates/rpc-backend.py
Decision Matrix:
Performance Priority + Short Retention → Redis
Long-term Storage + Query Needs → Database
Immediate Consumption Only → RPC
Existing Redis Infrastructure → Redis
Existing Database Infrastructure → Database
# Copy appropriate template to your celeryconfig.py or settings
cp templates/redis-backend.py your_project/celeryconfig.py
# OR
cp templates/db-backend.py your_project/celeryconfig.py
# OR
cp templates/rpc-backend.py your_project/celeryconfig.py
Redis Example:
# Security: Use environment variables, never hardcode
import os
result_backend = f'redis://:{os.getenv("REDIS_PASSWORD", "")}@' \
                 f'{os.getenv("REDIS_HOST", "localhost")}:' \
                 f'{os.getenv("REDIS_PORT", "6379")}/0'
Database Example:
# Security: Use environment variables for credentials
import os
db_user = os.getenv("DB_USER", "celery")
db_pass = os.getenv("DB_PASSWORD", "your_password_here")
db_host = os.getenv("DB_HOST", "localhost")
db_name = os.getenv("DB_NAME", "celery_results")
result_backend = f'db+postgresql://{db_user}:{db_pass}@{db_host}/{db_name}'
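Other SQLAlchemy-style URLs the db+ backend accepts (a sketch; which driver is available depends on your environment):
# result_backend = 'db+sqlite:///results.db'               # local development
# result_backend = 'db+mysql://user:password@host/dbname'  # MySQL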
Reference templates/custom-serializers.py for advanced patterns:
# JSON (default, secure, cross-language)
result_serializer = 'json'
result_accept_content = ['json']
# Enable compression for large results
result_compression = 'gzip'
# Store extended metadata (task name, args, retries)
result_extended = True
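With result_extended enabled, that metadata is exposed on the result object; a sketch, where my_task is an illustrative task name:
# Assumes result_extended = True and a task registered as my_task
async_result = my_task.delay(1, 2)
async_result.get(timeout=10)
print(async_result.name)     # e.g. 'proj.tasks.my_task'
print(async_result.args)     # (1, 2)
print(async_result.retries)  # retry count recorded with the result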
Reference templates/result-expiration.py:
# Expire after 24 hours (default: 1 day)
result_expires = 86400
# Disable expiration for critical results
result_expires = None
# Enable automatic cleanup (requires celery beat)
from celery.schedules import crontab

beat_schedule = {
    'cleanup-results': {
        'task': 'celery.backend_cleanup',
        'schedule': crontab(hour=4, minute=0),
    },
}
# Verify backend is reachable and functional
./scripts/test-backend.sh redis
# OR
./scripts/test-backend.sh postgresql
# OR
./scripts/test-backend.sh rpc
Connection Pooling:
redis_max_connections = 50 # Adjust based on worker count
redis_socket_timeout = 120
redis_socket_keepalive = True
redis_retry_on_timeout = True
Persistence vs Performance:
# For critical results, ensure Redis persistence
# Configure in redis.conf:
# save 900 1 # Save after 900s if 1 key changed
# save 300 10 # Save after 300s if 10 keys changed
# appendonly yes # Enable AOF for durability
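For high availability, the Redis backend can also point at Sentinel; a sketch with placeholder hostnames and master name (see examples/redis-backend-setup.md for a full walkthrough):
result_backend = 'sentinel://sentinel1:26379;sentinel://sentinel2:26379/0'
result_backend_transport_options = {'master_name': 'mymaster'}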
Connection Management:
database_engine_options = {
    'pool_size': 10,
    'pool_recycle': 3600,
    'pool_pre_ping': True,  # Verify connections before use
}
# Resolve stale connections
database_short_lived_sessions = True
Table Customization:
database_table_names = {
    'task': 'celery_taskmeta',
    'group': 'celery_groupmeta',
}
# Auto-create tables at startup (Celery 5.5+)
database_create_tables_at_setup = True
MySQL Transaction Isolation:
# CRITICAL for MySQL
database_engine_options = {
    'isolation_level': 'READ COMMITTED',
}
Persistent Messages:
# Make results survive broker restarts
result_persistent = True
# Configure result exchange
result_exchange = 'celery_results'
result_exchange_type = 'direct'
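For context, a minimal RPC backend configuration (sketch); RPC results come back as messages and are normally consumable only once, by the client that launched the task:
result_backend = 'rpc://'
result_persistent = True  # survive broker restarts, at some throughput cost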
Advantages:
Limitations:
Example: See templates/custom-serializers.py
When to Use:
Implementation:
from kombu.serialization import register

def custom_encoder(obj):
    # Your encoding logic: return a str/bytes payload representing obj
    return serialized_data

def custom_decoder(data):
    # Your decoding logic: rebuild the original object from the payload
    return deserialized_obj

register(
    'myformat',
    custom_encoder,
    custom_decoder,
    content_type='application/x-myformat',
    content_encoding='utf-8',
)

# Use in config
result_serializer = 'myformat'
result_accept_content = ['myformat', 'json']
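A concrete, self-contained sketch of the pattern above: a JSON serializer that round-trips datetime objects (the datetime_json name and helper functions are illustrative, not part of Celery):
import json
from datetime import datetime
from kombu.serialization import register

def datetime_json_encode(obj):
    # Serialize to JSON, tagging datetimes as ISO-8601 strings
    def default(o):
        if isinstance(o, datetime):
            return {'__datetime__': o.isoformat()}
        raise TypeError(f'Not serializable: {type(o)!r}')
    return json.dumps(obj, default=default)

def datetime_json_decode(data):
    # Rebuild datetime objects from tagged ISO-8601 strings
    def object_hook(d):
        if '__datetime__' in d:
            return datetime.fromisoformat(d['__datetime__'])
        return d
    return json.loads(data, object_hook=object_hook)

register(
    'datetime_json',
    datetime_json_encode,
    datetime_json_decode,
    content_type='application/x-datetime-json',
    content_encoding='utf-8',
)

result_serializer = 'datetime_json'
result_accept_content = ['datetime_json', 'json']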
# Use migration script for zero-downtime migration
./scripts/migrate-backend.sh redis postgresql
# Process:
# 1. Configure new backend alongside old
# 2. Dual-write to both backends
# 3. Verify new backend functionality
# 4. Switch reads to new backend
# 5. Deprecate old backend
1. Add new backend configuration:
# Keep old backend active
result_backend = 'redis://localhost:6379/0'
# Add new backend (not active yet)
# new_result_backend = 'db+postgresql://...'
2. Deploy with dual-write capability:
# Custom backend sketch that writes to both backends during migration
class DualBackend:
    def __init__(self):
        self.old_backend = RedisBackend(...)     # existing backend
        self.new_backend = DatabaseBackend(...)  # migration target

    def store_result(self, task_id, result, state):
        # Write to both backends so either can serve reads during cutover
        self.old_backend.store_result(task_id, result, state)
        self.new_backend.store_result(task_id, result, state)
3. Verify and switch:
# Test new backend
./scripts/test-backend.sh postgresql
# Update config to use new backend
result_backend = 'db+postgresql://...'
# Global setting
task_ignore_result = True
# Per-task override
@app.task(ignore_result=True)
def fire_and_forget_task():
    # Results not stored
    pass
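If results are ignored but you still want failures recorded for debugging, Celery provides a setting for that (sketch):
task_store_errors_even_if_ignored = True  # store task errors even when results are ignored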
Redis:
redis_max_connections = None # No limit (use with caution)
# OR
redis_max_connections = worker_concurrency * 2 # Rule of thumb
Database:
database_engine_options = {
    'pool_size': 20,
    'max_overflow': 10,
}
# Compress large results
result_compression = 'gzip' # or 'bzip2'
# Only compress results over threshold
result_compression = 'gzip'
result_compression_level = 6 # 1-9, higher = more compression
# Retrieve multiple results efficiently
from celery import group

job = group(task.s(i) for i in range(100))()  # job is a celery.result.GroupResult
results = job.get(timeout=10, propagate=False)
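Backends that support it (e.g. Redis) also provide a backend-optimized join; a sketch using the job object from above:
results = job.join_native(timeout=10, propagate=False)  # optimized path for Redis/memcached backends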
Check:
- ignore_result is not set globally

Debug:
./scripts/test-backend.sh <backend-type>
Symptoms:
- TypeError: Object of type X is not JSON serializable
- pickle.PicklingError

Solutions:
- Use a custom serializer (templates/custom-serializers.py)
- result_serializer = 'pickle' (security risk!)

Redis:
- Adjust redis_max_connections

Database:
- Index the task_id column
- Enable database_short_lived_sessions

Check expiration settings:
# View current setting
print(app.conf.result_expires)
# Extend retention
result_expires = 7 * 86400 # 7 days
Enable automatic cleanup:
# Requires celery beat
from celery.schedules import crontab

beat_schedule = {
    'cleanup-results': {
        'task': 'celery.backend_cleanup',
        'schedule': crontab(hour=4, minute=0),
    },
}
Connection Credentials:
- Provide a .env.example with placeholders
- Add .env to .gitignore

Network Security:
- Use TLS for Redis connections (rediss://)

Serialization:
All backend configurations have complete examples in examples/:
- redis-backend-setup.md - Complete Redis setup with Sentinel and Cluster
- postgresql-backend.md - PostgreSQL configuration with migrations
- result-expiration-policies.md - Expiration strategies and cleanup patterns

Templates: Complete configuration files in templates/ directory
Scripts: Testing and migration tools in scripts/ directory
Examples: Real-world setup guides in examples/ directory
Backend Support: Redis, PostgreSQL, MySQL, SQLite, MongoDB, RPC (AMQP)
Celery Version: 5.0+
Last Updated: 2025-11-16