Amazon EKS Kubernetes Audit Logs
Location
CloudWatch Logs group /aws/eks/<cluster>/cluster, log type: audit
Description
Kubernetes audit records emitted by the EKS-managed API server when audit logging is enabled. Captures authenticated requests for pods, secrets, service accounts, configmaps, RBAC objects, exec sessions, and other cluster resources.
Forensic Value
Audit logs are the highest-fidelity source for reconstructing attacker actions in the cluster control plane. They reveal secret reads, pod exec abuse, role-binding changes, privilege escalation through service accounts, and direct access to sensitive resources that may never appear in node logs or application telemetry.
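That fidelity comes from the structure of each audit record. A minimal sketch of triaging one record, with field names following the Kubernetes audit.k8s.io/v1 schema (the sample values and the SENSITIVE set are illustrative, not an official list):

```python
# Illustrative audit event in the audit.k8s.io/v1 shape
# (values are made up for demonstration).
sample_event = {
    "kind": "Event",
    "apiVersion": "audit.k8s.io/v1",
    "stage": "ResponseComplete",
    "verb": "get",
    "user": {"username": "system:serviceaccount:default:app-sa"},
    "objectRef": {"resource": "secrets", "namespace": "prod", "name": "db-creds"},
    "responseStatus": {"code": 200},
}

# Resource types whose reads are high-signal during an intrusion
# (an analyst-chosen starting set, not exhaustive).
SENSITIVE = {"secrets", "serviceaccounts", "rolebindings", "clusterrolebindings"}

def is_sensitive_read(event):
    """Flag successful reads of sensitive resource types."""
    ref = event.get("objectRef") or {}
    status = event.get("responseStatus") or {}
    return (
        event.get("verb") in {"get", "list", "watch"}
        and ref.get("resource") in SENSITIVE
        and status.get("code", 0) < 300
    )

print(is_sensitive_read(sample_event))  # True for this sample
```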
Collection Commands
AWS CLI
aws logs filter-log-events --log-group-name "/aws/eks/<cluster-name>/cluster" --log-stream-name-prefix "kube-apiserver-audit" --start-time 1709251200000 --end-time 1709856000000 > eks_audit_logs.json
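The export produced by filter-log-events wraps each audit record's JSON in a string `message` field inside an `events` array. A hedged post-processing sketch for pulling pod exec sessions out of that file (the function name and output tuple are the author's choices here; the export shape is as filter-log-events emits it):

```python
import json

def exec_sessions(export):
    """Pull pod exec audit events out of a CloudWatch filter-log-events export.

    `export` is the parsed JSON from eks_audit_logs.json; each entry in
    "events" carries a raw audit record as a JSON string in "message".
    """
    hits = []
    for entry in export.get("events", []):
        try:
            audit = json.loads(entry["message"])
        except (KeyError, ValueError):
            continue  # skip non-JSON control-plane lines
        ref = audit.get("objectRef") or {}
        if ref.get("resource") == "pods" and ref.get("subresource") == "exec":
            hits.append((
                audit.get("requestReceivedTimestamp"),
                (audit.get("user") or {}).get("username"),
                ref.get("namespace"),
                ref.get("name"),
            ))
    return hits

# Usage against the collected file:
# with open("eks_audit_logs.json") as f:
#     for ts, user, ns, pod in exec_sessions(json.load(f)):
#         print(ts, user, f"{ns}/{pod}")
```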
CloudWatch Logs Insights
fields @timestamp, @message | filter @logStream like /audit/ | sort @timestamp desc | limit 200
kubectl
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp > eks_k8s_events.txt  # supplementary: Kubernetes events, not audit records, and retained only briefly (about an hour by default)
Collection Constraints
- Audit visibility depends on EKS logging being enabled and retained for the cluster before the incident.
- Audit events must be correlated with node and workload evidence to prove what happened after the API action completed.
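One minimal way to do that correlation is time-window matching, assuming both sources carry RFC 3339 timestamps (the field names `ts`, `action`, and `process` are illustrative placeholders for whatever your audit export and node telemetry actually use):

```python
from datetime import datetime, timedelta

def parse_ts(ts):
    # Kubernetes audit timestamps are RFC 3339, e.g. "2024-03-01T12:00:05Z".
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def correlate(audit_events, node_events, window_seconds=30):
    """Pair each audit event with node-level events inside a time window."""
    window = timedelta(seconds=window_seconds)
    pairs = []
    for a in audit_events:
        a_ts = parse_ts(a["ts"])
        for n in node_events:
            if abs(parse_ts(n["ts"]) - a_ts) <= window:
                pairs.append((a["action"], n["process"]))
    return pairs

audit = [{"ts": "2024-03-01T12:00:05Z", "action": "pods/exec on prod/web-0"}]
node = [{"ts": "2024-03-01T12:00:12Z", "process": "/bin/sh spawned in web-0"}]
print(correlate(audit, node))  # one matched pair, 7 seconds apart
```

Real correlation usually also keys on pod name and node identity; the time window alone is a coarse first pass.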
Related Blockers
Cloud or Container Logging Coverage Missing
The investigation depends on cloud-control-plane or container telemetry that was never enabled, was retained too briefly, or was routed to an unavailable destination. This creates blind spots around identity misuse, cluster administration, and workload behavior.
SaaS Audit Logging Not Enabled or Not Licensed
The investigation depends on SaaS audit evidence that was never enabled, is unavailable under the current subscription tier, or requires a higher-privilege admin role than the response team currently has. This creates blind spots for identity abuse, collaboration-platform misuse, and source-code access.
SaaS Audit Retention Expired Before Collection
The response started after the native retention window for Google Workspace, Okta, Slack, GitHub, or similar SaaS evidence had already passed. The necessary events are no longer available in the vendor UI or API even though the underlying accounts and content may still exist.