Production deployment configurations for Celery workers and beat schedulers across Docker, Kubernetes, and systemd environments. Use when deploying Celery to production, containerizing workers, orchestrating with Kubernetes, setting up systemd services, configuring health checks, implementing graceful shutdowns, or when user mentions deployment, Docker, Kubernetes, systemd, production setup, or worker containerization.
Limited to specific tools

Additional assets for this skill

This skill is limited to using the following tools:

- examples/docker-deployment.md
- examples/kubernetes-deployment.md
- examples/systemd-setup.md
- scripts/deploy.sh
- scripts/health-check.sh
- scripts/test-deployment.sh
- templates/Dockerfile.worker
- templates/docker-compose.yml
- templates/health-checks.py
- templates/kubernetes/celery-beat.yaml
- templates/kubernetes/celery-worker.yaml
- templates/systemd/celery-beat.service
- templates/systemd/celery-worker.service

Purpose: Generate production-ready deployment configurations for Celery workers and beat schedulers across multiple deployment platforms.

Activation Triggers: deployment, Docker, Kubernetes, systemd, production setup, worker containerization (see the description above).

Key Resources:

- scripts/deploy.sh - Complete deployment orchestration
- scripts/test-deployment.sh - Validate deployment health
- scripts/health-check.sh - Comprehensive health verification
- templates/docker-compose.yml - Full Docker stack
- templates/Dockerfile.worker - Optimized worker container
- templates/kubernetes/ - K8s manifests for workers and beat
- templates/systemd/ - Systemd service units
- templates/health-checks.py - Python health check implementation
- examples/ - Complete deployment scenarios

Use Case: Local development, staging environments, simple production setups
Configuration: templates/docker-compose.yml
Services Included:
Quick Start:
# Generate Docker configuration
./scripts/deploy.sh docker --env=staging
# Start all services
docker-compose up -d
# Scale workers
docker-compose up -d --scale celery-worker=4
# View logs
docker-compose logs -f celery-worker
Key Features:
Use Case: Production environments requiring orchestration, auto-scaling, and high availability
Manifests: templates/kubernetes/
Resources:
- celery-worker.yaml - Worker Deployment with HPA
- celery-beat.yaml - Beat StatefulSet (singleton)
- celery-configmap.yaml - Environment configuration
- celery-secrets.yaml - Sensitive credentials
- celery-service.yaml - Internal service endpoints
- celery-hpa.yaml - Horizontal Pod Autoscaler

Quick Start:
# Generate K8s manifests
./scripts/deploy.sh kubernetes --namespace=production
# Apply configuration
kubectl apply -f kubernetes/
# Scale workers
kubectl scale deployment celery-worker --replicas=10
# Monitor status
kubectl get pods -l app=celery-worker
kubectl logs -f deployment/celery-worker
Key Features:
Use Case: Traditional server deployments, VPS, dedicated servers
Service Units: templates/systemd/
Services:
- celery-worker.service - Worker daemon
- celery-beat.service - Beat scheduler daemon
- celery-flower.service - Monitoring dashboard

Quick Start:
# Generate systemd units
./scripts/deploy.sh systemd --workers=4
# Install services
sudo cp systemd/*.service /etc/systemd/system/
sudo systemctl daemon-reload
# Enable and start
sudo systemctl enable celery-worker@{1..4}.service celery-beat.service
sudo systemctl start celery-worker@{1..4}.service celery-beat.service
# Check status
sudo systemctl status celery-worker@*.service
sudo journalctl -u celery-worker@1.service -f
Key Features:
Location: templates/health-checks.py
Capabilities:
Usage:
from health_checks import CeleryHealthCheck
# Initialize checker
health = CeleryHealthCheck(app)
# Run all checks
status = health.run_all_checks()
# Individual checks
broker_ok = health.check_broker()
workers_ok = health.check_workers()
queues_ok = health.check_queue_depth(threshold=1000)
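The methods above could be backed by an implementation along these lines — an illustrative sketch of what templates/health-checks.py might contain; the real template's method bodies and return shapes may differ:

```python
# Sketch of a Celery health checker (assumed API, not the shipped template).
class CeleryHealthCheck:
    def __init__(self, app):
        self.app = app

    def check_broker(self):
        # Open a broker connection and verify it is actually usable.
        try:
            with self.app.connection_or_acquire() as conn:
                conn.ensure_connection(max_retries=1)
            return True
        except Exception:
            return False

    def check_workers(self):
        # At least one worker must answer a control-plane ping.
        return bool(self.app.control.ping(timeout=2.0))

    def check_queue_depth(self, threshold=1000):
        # Count reserved (prefetched but unstarted) tasks as a backlog proxy.
        reserved = self.app.control.inspect(timeout=2.0).reserved() or {}
        depth = sum(len(tasks) for tasks in reserved.values())
        return depth < threshold

    def run_all_checks(self):
        results = {
            "broker": self.check_broker(),
            "workers": self.check_workers(),
            "queues": self.check_queue_depth(),
        }
        results["healthy"] = all(results.values())
        return results
```

`connection_or_acquire`, `control.ping`, and `control.inspect` are standard Celery app APIs, so the sketch works against any configured app instance.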
Integration:
# Flask endpoint
@app.route('/health')
def health_check():
    checker = CeleryHealthCheck(celery_app)
    result = checker.run_all_checks()
    return jsonify(result), 200 if result['healthy'] else 503

# FastAPI endpoint
@app.get("/health")
async def health_check():
    checker = CeleryHealthCheck(celery_app)
    result = checker.run_all_checks()
    return result if result['healthy'] else JSONResponse(
        status_code=503, content=result
    )
Location: scripts/health-check.sh
Features:
Usage:
# Basic health check
./scripts/health-check.sh
# With custom timeout
./scripts/health-check.sh --timeout=30
# JSON output
./scripts/health-check.sh --json
# Specific checks
./scripts/health-check.sh --check=broker,workers
Docker Integration:
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
CMD /app/scripts/health-check.sh || exit 1
Kubernetes Integration:
livenessProbe:
  exec:
    command: ["/app/scripts/health-check.sh"]
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  exec:
    command: ["/app/scripts/health-check.sh", "--check=workers"]
  initialDelaySeconds: 10
  periodSeconds: 5
Script: scripts/deploy.sh
Capabilities:
Usage:
# Deploy to Docker
./scripts/deploy.sh docker --env=production
# Deploy to Kubernetes
./scripts/deploy.sh kubernetes --namespace=prod --replicas=10
# Deploy systemd services
./scripts/deploy.sh systemd --workers=4 --user=celery
# Dry run (generate configs only)
./scripts/deploy.sh kubernetes --dry-run
# With custom configuration
./scripts/deploy.sh docker --config=custom-config.yml
Pre-deployment Checks:
Script: scripts/test-deployment.sh
Test Coverage:
Usage:
# Test Docker deployment
./scripts/test-deployment.sh docker
# Test Kubernetes deployment
./scripts/test-deployment.sh kubernetes --namespace=prod
# Test systemd services
./scripts/test-deployment.sh systemd
# Verbose output
./scripts/test-deployment.sh docker --verbose
# Continuous monitoring
./scripts/test-deployment.sh docker --watch --interval=60
Test Scenarios:
File: templates/docker-compose.yml
Highlights:
Key Sections:
services:
  redis:
    # Broker configuration with persistence
  postgres:
    # Result backend with backup volumes
  celery-worker:
    # Worker with auto-scaling support
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
  celery-beat:
    # Singleton scheduler
    deploy:
      replicas: 1
  flower:
    # Monitoring dashboard
File: templates/Dockerfile.worker
Optimization:
Stages:
Size Optimization:
Worker Deployment: templates/kubernetes/celery-worker.yaml
Features:
Beat StatefulSet: templates/kubernetes/celery-beat.yaml
Features:
Autoscaling: templates/kubernetes/celery-hpa.yaml
Metrics:
Scaling Behavior:
minReplicas: 2
maxReplicas: 50
behavior:
  scaleUp:
    stabilizationWindowSeconds: 60
    policies:
      - type: Percent
        value: 50
        periodSeconds: 60
  scaleDown:
    stabilizationWindowSeconds: 300
    policies:
      - type: Pods
        value: 2
        periodSeconds: 120
Worker Service: templates/systemd/celery-worker.service
Configuration:
[Unit]
Description=Celery Worker Instance %i
After=network.target redis.service
[Service]
Type=exec
User=celery
Group=celery
EnvironmentFile=/etc/celery/celery.conf
WorkingDirectory=/opt/celery
ExecStart=/opt/celery/venv/bin/celery -A myapp worker \
    --loglevel=info \
    --logfile=/var/log/celery/worker-%i.log \
    --pidfile=/var/run/celery/worker-%i.pid \
    --hostname=worker%i@%%h \
    --concurrency=4
ExecStop=/bin/kill -s TERM $MAINPID
ExecReload=/bin/kill -s HUP $MAINPID
Restart=always
RestartSec=10s
# Resource limits
CPUQuota=100%
MemoryMax=1G
[Install]
WantedBy=multi-user.target
Beat Service: templates/systemd/celery-beat.service
Singleton Management:
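Beat must run as exactly one instance, or schedules fire in duplicate. A sketch of what templates/systemd/celery-beat.service might contain — a plain (non-templated) unit, so only one copy can be enabled per host; paths and the app module `myapp` are assumptions:

```ini
[Unit]
Description=Celery Beat Scheduler
After=network.target redis.service

[Service]
# A single, non-templated unit: no @instance suffix means at most
# one beat process can be started via systemd on this host.
Type=exec
User=celery
Group=celery
EnvironmentFile=/etc/celery/celery.conf
WorkingDirectory=/opt/celery
ExecStart=/opt/celery/venv/bin/celery -A myapp beat \
    --loglevel=info \
    --pidfile=/var/run/celery/beat.pid \
    --schedule=/var/lib/celery/beat-schedule
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
```

Across multiple hosts, singleton enforcement needs an external lock (or run beat on only one server), since systemd can only guarantee uniqueness per machine.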
CRITICAL: Never hardcode credentials in configurations!
Docker Compose:
services:
  celery-worker:
    environment:
      CELERY_BROKER_URL: ${CELERY_BROKER_URL}
      CELERY_RESULT_BACKEND: ${CELERY_RESULT_BACKEND}
    env_file:
      - .env  # Never commit this file!
Kubernetes:
# Use Secrets, not ConfigMaps
env:
  - name: CELERY_BROKER_URL
    valueFrom:
      secretKeyRef:
        name: celery-secrets
        key: broker-url
Systemd:
EnvironmentFile=/etc/celery/secrets.env # Mode 0600, owned by celery user
Docker:
RUN useradd -m -u 1000 celery
USER celery
Kubernetes:
securityContext:
  runAsUser: 1000
  runAsNonRoot: true
  readOnlyRootFilesystem: true
Systemd:
User=celery
Group=celery
Expose Metrics:
from prometheus_client import start_http_server, Counter, Gauge
task_counter = Counter('celery_task_total', 'Total tasks', ['name', 'state'])
worker_gauge = Gauge('celery_workers_active', 'Active workers')
# Start metrics server
start_http_server(8000)
Kubernetes ServiceMonitor:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: celery-metrics
spec:
  selector:
    matchLabels:
      app: celery-worker
  endpoints:
    - port: metrics
Configuration:
# docker-compose.yml
flower:
  image: mher/flower:latest
  command: celery flower --broker=redis://redis:6379/0
  ports:
    - "5555:5555"
  environment:
    FLOWER_BASIC_AUTH: ${FLOWER_BASIC_AUTH}  # e.g. user:password, set in .env
Docker:
logging:
  driver: "json-file"
  options:
    max-size: "10m"
    max-file: "3"
Kubernetes: workers log to stdout/stderr and the cluster's logging stack collects them (tail with kubectl logs -f deployment/celery-worker)
Systemd:
journalctl -u celery-worker@1.service -f --output=json
Worker Shutdown:
import logging

from celery.signals import worker_shutdown

logger = logging.getLogger(__name__)

@worker_shutdown.connect
def graceful_shutdown(sender, **kwargs):
    logger.info("Worker shutting down gracefully...")
    # Finish current tasks
    # Close connections
    # Release resources
Docker:
STOPSIGNAL SIGTERM
Kubernetes:
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "sleep 15"]
terminationGracePeriodSeconds: 60
Systemd:
KillSignal=SIGTERM
TimeoutStopSec=60
Late ACK Pattern:
task_acks_late = True
task_reject_on_worker_lost = True
Ensures tasks are re-queued if worker dies during execution.
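Because a late-acked task can be delivered twice after a worker crash, the task body should be idempotent. A minimal stdlib sketch of a dedupe guard — in production the seen-set would be an atomic check-and-set shared across workers (e.g. Redis SETNX); the function and variable names here are hypothetical:

```python
# Hypothetical idempotency guard for a late-acked task.
# The in-memory set stands in for a shared atomic store.
_processed_orders = set()

def mark_order_paid(order_id):
    # A redelivered task calls this a second time after a crash;
    # the guard turns the duplicate run into a no-op.
    if order_id in _processed_orders:
        return "duplicate"
    _processed_orders.add(order_id)
    # ... perform the real side effect exactly once ...
    return "paid"
```

The same pattern applies whether the task touches a database row, sends an email, or charges a card: record the task's logical key before (or atomically with) the side effect, and skip work when the key is already present.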
File: examples/docker-deployment.md
Scenario: Deploy full Celery stack with Redis, PostgreSQL, 3 workers, beat, and Flower
Steps:
File: examples/kubernetes-deployment.md
Scenario: Production-grade K8s deployment with autoscaling, monitoring, and HA
Components:
Advanced Features:
File: examples/systemd-setup.md
Scenario: Multi-server deployment with systemd management
Architecture:
Management:
# Start all workers across servers
ansible celery-workers -m shell -a "systemctl start celery-worker@{1..4}.service"
# Rolling restart
for i in {1..4}; do
systemctl restart celery-worker@$i.service
sleep 30
done
# Health check all workers
ansible celery-workers -m shell -a "/opt/celery/scripts/health-check.sh"
Workers not starting:
Tasks not executing:
Beat not scheduling:
High memory usage:
Container restarts:
Scripts: All deployment and health check scripts in scripts/ directory
Templates: Production-ready configuration templates in templates/ directory
Examples: Complete deployment scenarios with step-by-step instructions in examples/ directory
Documentation:
This skill follows strict security rules:
- .gitignore protection documented

Version: 1.0.0
Celery Compatibility: 5.0+
Platforms: Docker, Kubernetes, Systemd