Containment · P1 · ~120 min

Isolate Compromised Kubernetes Workload

Isolate the compromised workload with NetworkPolicy, namespace quarantine, and service-account revocation without losing forensic evidence. Preserve pod state, container image digest, and node-level artifacts before termination.

Actions

  1. Preserve evidence first: capture the pod spec (`kubectl get pod <pod> -o yaml`), the container image digest, and the last N minutes of container logs; snapshot the node disk if possible.

  2. Apply a deny-all NetworkPolicy scoped to the affected namespace or to the pod's labels to block egress and internal lateral traffic.

  3. Revoke the pod's service-account bindings (e.g. `kubectl patch rolebinding --type=json` to remove the subject) and rotate the bound service-account token.

  4. Cordon the node running the pod so no new workloads schedule onto it; avoid draining until node-level evidence is preserved.

  5. Relabel the compromised pod so it no longer matches its ReplicaSet / Deployment selector; the controller starts a clean replacement while the orphaned pod keeps running for forensics.

  6. Delete the pod only after evidence is preserved; replace it with a known-clean image pulled by digest.
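Step 1 can be scripted ahead of time so responders do not improvise under pressure. A minimal sketch, assuming pod and namespace names are passed as arguments and a local `evidence/` directory is acceptable output (both are assumptions, not part of this runbook):

```shell
# Sketch of evidence capture (step 1); adjust --since and the output path to your retention needs.
capture_pod_evidence() {
  local pod=$1 ns=$2
  local out="evidence/${pod}-$(date +%Y%m%dT%H%M%S)"
  mkdir -p "$out"
  # full pod spec: node name, volumes, service account, security context
  kubectl get pod "$pod" -n "$ns" -o yaml > "$out/pod.yaml"
  # imageID pins the exact digest that ran, independent of the tag
  kubectl get pod "$pod" -n "$ns" \
    -o jsonpath='{.status.containerStatuses[*].imageID}' > "$out/image-digest.txt"
  # recent logs from all containers, plus the previous instance if one crashed
  kubectl logs "$pod" -n "$ns" --all-containers --since=60m > "$out/logs.txt" || true
  kubectl logs "$pod" -n "$ns" --all-containers --previous > "$out/logs-previous.txt" || true
  echo "$out"
}
# usage: capture_pod_evidence <pod> <ns>
```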
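For step 2, a namespace-wide deny-all manifest looks like the following; the namespace name `prod` is a placeholder. An empty `podSelector` matches every pod in the namespace, and listing both `policyTypes` with no rule blocks is what makes it deny-all; narrow `podSelector` with `matchLabels` to quarantine a single workload instead.

```shell
# Deny-all NetworkPolicy manifest for a quarantined namespace (name is a placeholder)
cat > quarantine-deny-all.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: quarantine-deny-all
  namespace: prod
spec:
  podSelector: {}       # all pods in the namespace; narrow with matchLabels to target one workload
  policyTypes:
    - Ingress
    - Egress
EOF
# kubectl apply -f quarantine-deny-all.yaml
```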
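Step 3's patch needs a concrete JSON-patch path. A sketch, assuming the compromised service account sits at subject index 0 of the RoleBinding; verify the index with `kubectl get rolebinding <rb> -n <ns> -o yaml` first, because JSON-patch paths are positional:

```shell
# Remove one subject from a RoleBinding (step 3); index defaults to 0 as an assumption
revoke_sa_from_binding() {
  local rb=$1 ns=$2 index=${3:-0}
  kubectl patch rolebinding "$rb" -n "$ns" --type=json \
    -p "[{\"op\":\"remove\",\"path\":\"/subjects/${index}\"}]"
}
# usage: revoke_sa_from_binding <rolebinding> <ns> [subject-index]
```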
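Steps 4 and 5 combine into one quarantine action. A sketch, assuming the controller's selector uses an `app` label key (an assumption; check the actual selector before relabeling): overwriting that label orphans the pod, so the ReplicaSet starts a clean replacement while the original keeps running for forensics.

```shell
# Cordon the node and relabel the pod out of its controller's selector (steps 4-5)
quarantine_pod() {
  local pod=$1 ns=$2 node
  node=$(kubectl get pod "$pod" -n "$ns" -o jsonpath='{.spec.nodeName}')
  kubectl cordon "$node"   # no new pods schedule here; do NOT drain yet
  # "app" is an assumed selector key; use the key from the controller's matchLabels
  kubectl label pod "$pod" -n "$ns" app=quarantined --overwrite
}
# usage: quarantine_pod <pod> <ns>
```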

Queries

kubectl get networkpolicies -A; kubectl describe networkpolicy -n <ns> <policy>
kubectl logs <pod> -n <ns> --previous --tail=-1 > pod_logs.txt
kubectl get rolebindings,clusterrolebindings -A -o json | jq '.items[] | select(any(.subjects[]?; .kind == "ServiceAccount" and .name == "<sa>"))'

Notes

Deleting the pod without evidence preservation is the most common mistake; controller replacement is designed to be fast, so delete only after capture.

NetworkPolicy scope must match what the CNI actually enforces; Calico, Cilium, and AWS VPC CNI differ in defaults and feature support.

Rotating a service-account token does not immediately invalidate previously-issued bound tokens; audit their TTL.
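One way to audit that TTL is to decode the `exp` claim of the bound token itself. A sketch using only shell tools; the token in the example is fabricated (its payload is just `{"exp":1700000000}`), and in practice you would read the real token from the pod's projected volume at `/var/run/secrets/kubernetes.io/serviceaccount/token`:

```shell
# Print the exp claim (Unix seconds) of a JWT; base64url payload is re-padded before decoding
jwt_exp() {
  local payload pad
  payload=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  pad=$(( (4 - ${#payload} % 4) % 4 ))
  while [ "$pad" -gt 0 ]; do payload="${payload}="; pad=$((pad-1)); done
  printf '%s' "$payload" | base64 -d | jq -r .exp
}
# fabricated example token; payload decodes to {"exp":1700000000}
jwt_exp 'x.eyJleHAiOjE3MDAwMDAwMDB9.y'
```

Compare the printed epoch to the current time to see how long the stolen token stays valid.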
