Kubernetes security policies, RBAC, and Pod Security Standards for hardened cluster deployments. Use when implementing cluster security, defining network policies, or enforcing security compliance in Kubernetes environments.
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Comprehensive guidance for implementing security policies in Kubernetes clusters, covering Pod Security Standards, Network Policies, RBAC, Security Contexts, admission control, secrets management, and runtime security for production-grade hardened deployments.
Pod Security Standards define three policies (Privileged, Baseline, Restricted), enforced by the Pod Security Admission (PSA) controller built into Kubernetes 1.23+ and stable since 1.25.
Three security levels:
- **Privileged**: unrestricted; permits known privilege escalations. Reserved for trusted, system-level workloads.
- **Baseline**: minimally restrictive; blocks known privilege escalations while admitting the default pod configuration.
- **Restricted**: heavily restricted; enforces current pod-hardening best practices.
Namespace-level enforcement:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    # Enforce restricted policy
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # Audit violations against restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: latest
    # Warn users about violations
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: latest
```
Progressive enforcement strategy:
```yaml
# Development namespace - warn only
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    pod-security.kubernetes.io/warn: baseline
    pod-security.kubernetes.io/audit: restricted
---
# Staging namespace - enforce baseline, audit restricted
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
---
# Production namespace - enforce restricted
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```
Fully hardened pod specification:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: secure-app
  template:
    metadata:
      labels:
        app: secure-app
    spec:
      # Security Context at pod level
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 1000
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: app
          image: myapp:1.0.0
          # Security Context at container level
          securityContext:
            allowPrivilegeEscalation: false
            runAsNonRoot: true
            runAsUser: 1000
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL
            seccompProfile:
              type: RuntimeDefault
          # Resource requests/limits (best practice; not mandated by Restricted)
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          # Writable volumes for read-only filesystem
          volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: cache
              mountPath: /app/cache
      volumes:
        - name: tmp
          emptyDir: {}
        - name: cache
          emptyDir: {}
```
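Before deploying, a server-side dry run against the enforcing namespace surfaces any remaining PSA violations (the manifest filename here is an assumption):

```bash
# PSA warnings and denials are reported without creating anything
kubectl apply --dry-run=server -f secure-app.yaml
```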
Audit-first migration approach:
```bash
# Step 1: Audit all namespaces
kubectl label namespace --all \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/warn=restricted

# Step 2: Identify violations with a server-side dry run
# (warnings list the workloads that would be rejected)
kubectl label --dry-run=server --overwrite namespace --all \
  pod-security.kubernetes.io/enforce=restricted

# Step 3: Fix workloads incrementally

# Step 4: Enforce baseline
kubectl label namespace production \
  pod-security.kubernetes.io/enforce=baseline

# Step 5: Eventually enforce restricted (--overwrite replaces the baseline label)
kubectl label --overwrite namespace production \
  pod-security.kubernetes.io/enforce=restricted
```
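Cluster-wide defaults and exemptions can also be configured through the API server's AdmissionConfiguration instead of per-namespace labels (a sketch; the file is supplied via the API server's --admission-control-config-file flag, so applying it depends on how the control plane is managed):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: PodSecurity
    configuration:
      apiVersion: pod-security.admission.config.k8s.io/v1
      kind: PodSecurityConfiguration
      # Applied to namespaces that carry no pod-security labels
      defaults:
        enforce: "baseline"
        enforce-version: "latest"
        audit: "restricted"
        audit-version: "latest"
        warn: "restricted"
        warn-version: "latest"
      exemptions:
        usernames: []
        runtimeClasses: []
        namespaces: ["kube-system"]
```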
Start with zero-trust default deny:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {} # Applies to all pods
  policyTypes:
    - Ingress
    - Egress
```
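With the default deny in place, a quick probe from a disposable pod confirms traffic is actually blocked (the `backend` service name and port are assumptions; substitute a real in-namespace service):

```bash
# Expect a timeout: the default-deny policy drops the connection
kubectl run netpol-test --rm -it --restart=Never --image=busybox:1.36 \
  -n production -- wget -qO- -T 3 http://backend:8080 || echo "blocked as expected"
```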
Frontend to backend communication:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Allow from frontend pods
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
    # Allow from ingress controller
    # (kubernetes.io/metadata.name is set automatically on every namespace)
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
  egress:
    # Allow to database
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432
    # Allow DNS queries (namespaceSelector + podSelector in one peer: both must match)
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
    # Allow HTTPS to external services
    # (an empty podSelector would only match pods in this namespace; use ipBlock)
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 443
```
Strict database access control:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: postgres-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Only allow from specific app pods
    - from:
        - podSelector:
            matchLabels:
              app: backend
              tier: api
      ports:
        - protocol: TCP
          port: 5432
  # Deny all egress (database shouldn't initiate connections)
  egress: []
```
Cross-namespace communication with namespace selectors:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    # Allow Prometheus pods from the monitoring namespace
    # (single peer: namespace AND pod selectors must both match)
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
          podSelector:
            matchLabels:
              app: prometheus
      ports:
        - protocol: TCP
          port: 8080 # Metrics endpoint
```
Principle of least privilege:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: production
automountServiceAccountToken: false # Explicit opt-in
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  namespace: production
spec:
  template:
    spec:
      serviceAccountName: app-sa
      automountServiceAccountToken: true # Only if needed
```
Namespace-scoped permissions:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: production
rules:
  # Read-only access to pods
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
  # Read pod logs
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-pod-reader
  namespace: production
subjects:
  - kind: ServiceAccount
    name: app-sa
    namespace: production
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```
Cluster-wide permissions (use sparingly):
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
  # Read nodes and metrics
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["metrics.k8s.io"]
    resources: ["nodes"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: monitoring-node-reader
subjects:
  - kind: ServiceAccount
    name: prometheus
    namespace: monitoring
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io
```
Application-specific permissions:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-operator
  namespace: production
rules:
  # Manage ConfigMaps for dynamic config
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "update", "patch"]
    resourceNames: ["app-config"] # Restrict to specific ConfigMap
  # Read secrets (no write)
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
    resourceNames: ["app-credentials"]
  # Create/delete ephemeral pods for batch jobs
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete", "get", "list", "watch"]
  # Access own deployment for rollout status
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]
    resourceNames: ["app"]
```
```bash
# Check what a service account can do
kubectl auth can-i --list --as=system:serviceaccount:production:app-sa

# Check specific permission
kubectl auth can-i delete pods \
  --as=system:serviceaccount:production:app-sa \
  -n production

# Audit all ClusterRoleBindings that grant to ServiceAccounts
kubectl get clusterrolebindings -o json | \
  jq -r '.items[] | select(.subjects[]?.kind=="ServiceAccount") |
    "\(.metadata.name): \(.subjects[].namespace)/\(.subjects[].name)"'
```
Comprehensive container hardening:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    # Pod-level settings
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
    fsGroupChangePolicy: "OnRootMismatch"
    seccompProfile:
      type: RuntimeDefault
    supplementalGroups: [2000]
  containers:
    - name: app
      image: app:1.0
      securityContext:
        # Container-level (overrides pod-level)
        allowPrivilegeEscalation: false
        runAsNonRoot: true
        runAsUser: 1000
        readOnlyRootFilesystem: true
        # Drop all capabilities, add only required
        capabilities:
          drop:
            - ALL
          add:
            - NET_BIND_SERVICE # Only if binding to port <1024
        # Seccomp profile
        seccompProfile:
          type: RuntimeDefault
```
Minimal capability sets:
```yaml
# Web server needing port 80/443
securityContext:
  capabilities:
    drop:
      - ALL
    add:
      - NET_BIND_SERVICE
      - CHOWN
      - SETGID
      - SETUID
---
# Application with no special privileges
securityContext:
  capabilities:
    drop:
      - ALL # Drop all, add none
```
Custom seccomp profile:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-seccomp
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/app-seccomp.json
  containers:
    - name: app
      image: app:1.0
```
Example seccomp profile (profiles/app-seccomp.json):
```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": [
        "read", "write", "open", "close", "stat",
        "fstat", "lstat", "poll", "lseek", "mmap",
        "mprotect", "munmap", "brk", "rt_sigaction",
        "rt_sigprocmask", "ioctl", "access", "socket",
        "connect", "accept", "sendto", "recvfrom"
      ],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```
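`Localhost` profiles resolve relative to the kubelet's seccomp root directory (by default `/var/lib/kubelet/seccomp`), so the JSON file must exist on every node that can schedule the pod. A minimal distribution sketch, assuming SSH access and the node names shown (a DaemonSet that copies the profile is a common alternative):

```bash
# Copy the profile into each node's kubelet seccomp directory
for node in node-1 node-2 node-3; do
  ssh "$node" 'mkdir -p /var/lib/kubelet/seccomp/profiles'
  scp profiles/app-seccomp.json "$node":/var/lib/kubelet/seccomp/profiles/
done
```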
Install Gatekeeper:
```bash
# Pinning a release tag instead of master is advisable in production
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml
```
Constraint Template:
```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("Missing required labels: %v", [missing])
        }
```
Enforce the constraint:
```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-app-labels
spec:
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment", "StatefulSet"]
    namespaces:
      - production
  parameters:
    labels:
      - "app"
      - "team"
      - "environment"
```
Deny privileged containers:
```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sdisallowprivileged
spec:
  crd:
    spec:
      names:
        kind: K8sDisallowPrivileged
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdisallowprivileged

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          container.securityContext.privileged
          msg := sprintf("Container %v is privileged", [container.name])
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDisallowPrivileged
metadata:
  name: deny-privileged-containers
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    excludedNamespaces:
      - kube-system
```
Install Kyverno:
```bash
kubectl create -f https://github.com/kyverno/kyverno/releases/download/v1.11.0/install.yaml
```
Require resource limits:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-limits
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: require-cpu-memory-limits
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "CPU and memory limits are required"
        pattern:
          spec:
            containers:
              - resources:
                  limits:
                    memory: "?*"
                    cpu: "?*"
```
Disallow latest tag:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-image-tag
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Images must specify a tag other than 'latest'"
        pattern:
          spec:
            containers:
              - image: "!*:latest"
```
Mutate to add security context:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-security-context
spec:
  rules:
    - name: add-non-root
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        patchStrategicMerge:
          spec:
            securityContext:
              runAsNonRoot: true
              runAsUser: 1000
            containers:
              - (name): "*"
                securityContext:
                  allowPrivilegeEscalation: false
                  readOnlyRootFilesystem: true
                  capabilities:
                    drop:
                      - ALL
```
Native Secrets are merely base64-encoded, not encrypted, so prefer external secret management; a plain Secret for reference:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials
  namespace: production
type: Opaque
stringData: # Use stringData for clarity
  username: admin
  password: supersecret
  database-url: postgresql://admin:supersecret@db:5432/myapp
```
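Workloads then reference the Secret rather than embedding credentials; an excerpt of a container spec consuming a key defined above:

```yaml
env:
  - name: DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: app-credentials
        key: database-url
```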
Sync from AWS Secrets Manager:
```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: aws-secretsmanager
  namespace: production
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-west-2
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-credentials
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secretsmanager
    kind: SecretStore
  target:
    name: app-credentials
    creationPolicy: Owner
  data:
    - secretKey: password
      remoteRef:
        key: prod/app/database
        property: password
```
Encrypt secrets for GitOps:
```bash
# Install sealed-secrets controller
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.24.0/controller.yaml

# Install kubeseal CLI
brew install kubeseal

# Create and seal a secret
kubectl create secret generic app-secret \
  --from-literal=api-key=secret123 \
  --dry-run=client -o yaml | \
  kubeseal -o yaml > sealed-secret.yaml

# Commit sealed-secret.yaml to Git (safe)
```
SealedSecret manifest:
```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: app-secret
  namespace: production
spec:
  encryptedData:
    api-key: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq...
  template:
    metadata:
      name: app-secret
      namespace: production
    type: Opaque
```
Vault Agent Injector:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "app-role"
        vault.hashicorp.com/agent-inject-secret-config: "secret/data/app/config"
        vault.hashicorp.com/agent-inject-template-config: |
          {{- with secret "secret/data/app/config" -}}
          export DB_PASSWORD="{{ .Data.data.password }}"
          export API_KEY="{{ .Data.data.api_key }}"
          {{- end }}
    spec:
      serviceAccountName: app
      containers:
        - name: app
          image: app:1.0
          command: ["/bin/sh"]
          args: ["-c", "source /vault/secrets/config && ./app"]
```
Enforce signed images with Kyverno:
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signature
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "registry.example.com/*"
          attestors:
            - count: 1
              entries:
                - keys:
                    publicKeys: |
                      -----BEGIN PUBLIC KEY-----
                      MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE...
                      -----END PUBLIC KEY-----
```
Immutable image digests:
```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        # BAD: Mutable tag
        - name: app
          image: app:v1.0.0
        # GOOD: Immutable digest
        - name: app
          image: app@sha256:abc123def456...
          imagePullPolicy: IfNotPresent
```
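To pin a digest, the tag can be resolved with a registry inspection tool such as crane (one option among several; the image reference follows the example above):

```bash
# Prints the sha256 digest currently behind the tag
crane digest registry.example.com/app:v1.0.0
```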
Image pull secrets:
```bash
# Create docker registry secret
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=robot \
  --docker-password=secret \
  --docker-email=team@example.com \
  -n production
```
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: production
imagePullSecrets:
  - name: regcred
---
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      serviceAccountName: app-sa
```
Pod Security: enforce the Restricted profile in production namespaces; run containers as non-root with read-only root filesystems, drop all capabilities, and use the RuntimeDefault seccomp profile.
Network Security: start from default-deny NetworkPolicies for ingress and egress, then allow only the specific pod-to-pod, DNS, and external flows each workload needs.
Access Control: apply least privilege with namespace-scoped Roles and dedicated ServiceAccounts, disable token automounting unless the workload calls the API, and audit bindings regularly with `kubectl auth can-i`.
Secrets Management: treat native Secrets as unencrypted; source credentials from external managers (External Secrets Operator, Sealed Secrets, Vault) and rotate them on a schedule.
Admission Control: enforce cluster-wide guardrails with OPA Gatekeeper or Kyverno (required labels, no privileged containers, mandatory resource limits, no `latest` tags).
Image Security: pull only signature-verified images from trusted registries, pin immutable digests, and scope registry credentials through ServiceAccount imagePullSecrets.
Runtime Security: constrain syscalls with seccomp profiles and monitor running workloads for anomalous behavior.
CIS Kubernetes Benchmark: consensus-based hardening checks for control-plane and node configuration; kube-bench automates the assessment.
NIST SP 800-190: the Application Container Security Guide, covering image, registry, orchestrator, container, and host OS risks with recommended countermeasures.
PCI-DSS for Kubernetes: cardholder-data workloads rely on the controls above, particularly network segmentation via NetworkPolicies, encrypted secrets, strict RBAC, and audit logging.