Collect Kubernetes Control-Plane Audit Trail
Collect API server audit logs, CNI flow records, and admission-controller decisions covering the incident window. These are the core control-plane evidence sources for any Kubernetes incident.
Actions
1. For EKS: export the `/aws/eks/<cluster>/cluster` CloudWatch log group; filter for the `api`, `audit`, `authenticator`, `controllerManager`, and `scheduler` log types.
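A sketch of pulling each component's streams for the window, assuming the default EKS stream-name prefixes (verify with `aws logs describe-log-streams`); the cluster name and epoch-millisecond timestamps are placeholders:

```shell
# Placeholders: replace the cluster name and the window bounds (epoch ms).
GROUP="/aws/eks/my-cluster/cluster"
START=1714550400000   # incident start
END=1714636800000     # incident end

# Assumed stream-name prefixes, one per enabled control-plane log type.
for prefix in kube-apiserver-audit kube-apiserver authenticator \
              kube-controller-manager kube-scheduler; do
  aws logs filter-log-events \
    --log-group-name "$GROUP" \
    --log-stream-name-prefix "$prefix" \
    --start-time "$START" --end-time "$END" \
    > "eks-${prefix}.json"
done
```

For large windows, `aws logs create-export-task` to an S3 bucket avoids paging through `filter-log-events` results.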
2. For GKE: `gcloud logging read 'resource.type="k8s_cluster"'`, scoped to the incident window (via `--freshness` or an explicit timestamp filter); Admin Activity audit logs are on by default, but Data Access logs may need explicit enablement.
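A sketch using an explicit timestamp window rather than `--freshness`; the project ID and times are placeholders:

```shell
# Placeholders: project ID and incident window.
gcloud logging read \
  'resource.type="k8s_cluster"
   AND timestamp>="2024-05-01T00:00:00Z"
   AND timestamp<="2024-05-02T00:00:00Z"' \
  --project=my-project --format=json > gke-control-plane.json
```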
3. For AKS: query Log Analytics: `AzureDiagnostics | where Category in ("kube-audit","kube-apiserver","kube-controller-manager")`; clusters configured for resource-specific diagnostics may write to dedicated tables (e.g. `AKSAudit`) instead of `AzureDiagnostics`.
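Expanded into a bounded query (the time range is a placeholder; adjust table and categories to match your diagnostic settings):

```kusto
AzureDiagnostics
| where Category in ("kube-audit", "kube-apiserver", "kube-controller-manager")
| where TimeGenerated between (datetime(2024-05-01) .. datetime(2024-05-02))
| project TimeGenerated, Category, log_s
```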
4. For self-managed: collect `/var/log/kube-apiserver-audit.log*` (including rotated files) from every control-plane node, along with the audit policy file referenced by the API server's `--audit-policy-file` flag.
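Once the files are copied off the nodes, standard tools give a fast first pass; a minimal triage sketch (the sample events below are fabricated for illustration):

```shell
# Fabricated sample standing in for /var/log/kube-apiserver-audit.log
# (one JSON audit event per line).
cat > audit-sample.log <<'EOF'
{"kind":"Event","verb":"create","objectRef":{"resource":"pods"},"user":{"username":"system:serviceaccount:ci:deployer"}}
{"kind":"Event","verb":"delete","objectRef":{"resource":"secrets"},"user":{"username":"admin@example.com"}}
{"kind":"Event","verb":"create","objectRef":{"resource":"pods"},"user":{"username":"admin@example.com"}}
EOF

# Count pod-creation events: a quick signal before deeper parsing.
grep -c '"verb":"create"' audit-sample.log    # prints 2
```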
5. Collect admission-controller decisions (OPA Gatekeeper, Kyverno) for the window; the admission audit trail shows both what was allowed and what was denied.
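A sketch of snapshotting both controllers' state; the namespaces and label selectors assume a default install and may differ in your cluster:

```shell
# Gatekeeper: constraint objects record violations in .status; the webhook
# pod logs capture deny decisions.
kubectl get constraints -o json > gatekeeper-constraints.json
kubectl logs -n gatekeeper-system -l control-plane=controller-manager \
  --since=24h --prefix > gatekeeper.log

# Kyverno: PolicyReports summarise pass/fail per resource.
kubectl get policyreports -A -o json > kyverno-policyreports.json
kubectl logs -n kyverno -l app.kubernetes.io/component=admission-controller \
  --since=24h > kyverno-admission.log
```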
6. Collect CNI flow records (Calico, Cilium Hubble, AWS VPC CNI) covering pod-to-pod and pod-to-external traffic for the window.
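For a Cilium cluster, Hubble can export flows directly (requires the Hubble CLI and a reachable relay; the pod selector is a placeholder):

```shell
# All flows from the last 24h, newline-delimited JSON.
hubble observe --since 24h -o json > hubble-flows.json

# Narrowed to one workload's pod-to-pod and pod-to-external traffic.
hubble observe --since 24h --pod production/payments-api -o json
```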
Queries
aws logs filter-log-events --log-group-name /aws/eks/<cluster>/cluster --filter-pattern "{ $.verb = \"create\" && $.objectRef.resource = \"pods\" }"

gcloud logging read 'protoPayload.methodName="io.k8s.core.v1.pods.create"' --freshness=7d
Notes
Managed-cluster control-plane logging is often disabled by default; your first finding may be that the logs do not exist.
Audit policy determines what is captured; a policy that logs most resources at the Metadata level rather than RequestResponse may have dropped exactly the request and response bodies you need.
CNI flow records are the network-layer equivalent of VPC flow logs but scoped to pod identity.
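The audit-policy caveat above can be made concrete. A minimal illustrative policy keeps full bodies for the most evidence-rich resources and metadata for everything else (the rules are a sketch, not a recommendation):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Full request/response bodies where payloads matter for forensics.
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["secrets", "configmaps", "serviceaccounts"]
  # Everything else: record who did what, without payloads.
  - level: Metadata
```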