Kubernetes Service Account Token Usage
Location
K8s API audit log (authentication and authorization decisions), cluster secret store, bound token volume projections inside pods
Description
Evidence of service-account token creation, rotation, and usage. Includes bound service-account tokens (projected into pods) and legacy non-expiring tokens stored as Secret resources in the cluster.
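A bound token's claims (expiry, audience, and the pod it is bound to) can be read directly by decoding its JWT payload. A minimal sketch; a fabricated single-claim token stands in for the real projected file so the snippet is self-contained:

```shell
# On a live pod the token sits at the default projection path:
#   TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Fabricated stand-in token (header.payload.signature):
TOKEN="header.$(printf '%s' '{"sub":"system:serviceaccount:default:demo"}' \
  | base64 | tr '+/' '-_' | tr -d '=\n').signature"

# The payload is the second dot-separated segment, base64url-encoded.
payload=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
# Restore the '=' padding that JWT encoding strips.
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
printf '%s\n' "$payload" | base64 -d
```

On a real bound token, look for the `exp`, `aud`, `iss`, and `kubernetes.io` (pod/secret binding) claims in the decoded JSON.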
Forensic Value
Attackers commonly exfiltrate service-account tokens from compromised pods and replay them from outside the cluster. Correlating token-use events in the API audit log against expected pod scheduling and source IP addresses identifies token theft and replay.
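The "expected origin" side of that correlation can be assembled from pod metadata. A sketch using a trimmed inline sample shaped like `kubectl get pods -o json` output; the `billing-sa` service account and pod names are illustrative:

```shell
# Inline sample standing in for: kubectl get pods -n <ns> -o json
pods_json='{"items":[
 {"metadata":{"name":"billing-7d4f"},
  "spec":{"serviceAccountName":"billing-sa","nodeName":"node-2"},
  "status":{"podIP":"10.42.0.17"}},
 {"metadata":{"name":"web-1"},
  "spec":{"serviceAccountName":"default","nodeName":"node-1"},
  "status":{"podIP":"10.42.0.9"}}
]}'

# Pod name, node, and pod IP for every pod running under the target SA;
# audit-log source IPs outside this set are replay candidates.
printf '%s' "$pods_json" | jq -r '.items[]
    | select(.spec.serviceAccountName == "billing-sa")
    | [.metadata.name, .spec.nodeName, .status.podIP] | @tsv'
```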
Tools Required
Collection Commands
kubectl
kubectl get serviceaccounts -A -o json > serviceaccounts.json
kubectl get secrets -A --field-selector type=kubernetes.io/service-account-token -o json > sa_secrets.json
K8s audit log query
Query the audit log for events where `user.username` matches `system:serviceaccount:<ns>:<sa>`, grouped by `sourceIPs`, to detect token use from unexpected origin IPs
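That grouping can be done with `jq` over a JSON-lines audit log. A sketch; the two-line `audit.log` sample below is fabricated (field names follow the standard K8s audit event schema), so point the filter at the real audit log path configured on the API server:

```shell
# Fabricated two-line audit sample, one JSON event per line.
cat > audit.log <<'EOF'
{"user":{"username":"system:serviceaccount:payments:billing-sa"},"sourceIPs":["10.42.0.17"]}
{"user":{"username":"system:serviceaccount:payments:billing-sa"},"sourceIPs":["203.0.113.50"]}
EOF

# Count events per (service account, source IP); a source IP outside the
# pod CIDR is the replay signal.
jq -r 'select(.user.username | startswith("system:serviceaccount:"))
       | [.user.username, (.sourceIPs // [] | join(","))] | @tsv' audit.log \
  | sort | uniq -c | sort -rn
```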
Collection Constraints
- Bound tokens have a short TTL and do not persist as Secrets; audit-log trails are the primary evidence
- Non-expiring tokens stored as Secrets remain valid indefinitely unless explicitly rotated
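The second constraint makes the age of legacy token Secrets worth triaging. A sketch over the `sa_secrets.json` export produced by the collection command above; an inline sample stands in so the snippet runs on its own:

```shell
# Inline sample shaped like the sa_secrets.json export (names fabricated).
sa_secrets='{"items":[
 {"metadata":{"creationTimestamp":"2021-03-02T10:00:00Z","namespace":"kube-system","name":"old-sa-token-abc12"}},
 {"metadata":{"creationTimestamp":"2024-06-01T08:30:00Z","namespace":"payments","name":"billing-sa-token-x9f2z"}}
]}'

# Creation time, namespace, and name of each legacy token Secret, oldest
# first; these stay valid until deleted or rotated.
printf '%s' "$sa_secrets" \
  | jq -r '.items[] | [.metadata.creationTimestamp, .metadata.namespace, .metadata.name] | @tsv' \
  | sort
```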
MITRE ATT&CK Techniques
References
Used in Procedures
Related Blockers
Compromised Image Pulled from Untracked Registry
The running container image cannot be traced to a specific, approved build: it was pulled from an external or unapproved registry, built outside standard CI/CD, or carries a non-deterministic tag like `:latest`. Provenance is missing, no SBOM is available, and malicious content cannot be distinguished from a legitimate base.
Serverless Workload Cannot Host EDR Agent
The compromised workload is serverless (AWS Lambda, GCP Cloud Functions, Azure Functions, Cloudflare Workers) and cannot host a traditional EDR agent. Execution environments are ephemeral and container-isolated; evidence must come from cloud-provider execution logs, function code/config, trigger/event sources, and attached IAM role activity.
Evidence Spans Multiple Clouds and On-Premises
The incident crosses two or more cloud providers (AWS, Azure, GCP) and/or on-premises infrastructure. Each environment has different evidence formats, retention policies, and access patterns. Investigation time is lost to evidence-normalization and timeline-alignment rather than analysis.