Cloud or Container Logging Coverage Missing
The investigation depends on cloud-control-plane or container telemetry that was never enabled, was retained too briefly, or was routed to an unavailable destination. This creates blind spots around identity misuse, cluster administration, and workload behavior.
Signals
- CloudTrail exists only as the 90-day event history, and no retained trail covers the full investigation window (a trail-coverage sketch follows this list)
- EKS control-plane logs, Kubernetes audit logs, Container Insights, or Session Manager transcripts were disabled before the incident
- VPC Flow Logs, Route 53 Resolver query logs, or GuardDuty were not enabled for the affected AWS accounts or VPCs (see the enablement-audit sketch below)
- Managed Kubernetes nodes rotate pod and kubelet logs before the IR team can export them
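Where permissions allow, the trail-coverage signal above can be checked programmatically. A minimal sketch, assuming boto3 with credentials and region taken from the environment; the incident-window constants are placeholders for your actual investigation window:

```python
# Sketch: confirm whether any retained CloudTrail trail could have covered the
# incident window. If no trails exist, only the 90-day event history remains.
from datetime import datetime, timezone

import boto3

INCIDENT_START = datetime(2024, 1, 1, tzinfo=timezone.utc)   # placeholder
INCIDENT_END = datetime(2024, 1, 15, tzinfo=timezone.utc)    # placeholder

cloudtrail = boto3.client("cloudtrail")

trails = cloudtrail.describe_trails(includeShadowTrails=True)["trailList"]
if not trails:
    print("No trails configured: only the 90-day event history exists.")

for trail in trails:
    status = cloudtrail.get_trail_status(Name=trail["TrailARN"])
    # StartLoggingTime / StopLoggingTime bound what the trail could capture;
    # compare them against the incident window above.
    print(
        trail["Name"],
        "multi-region" if trail.get("IsMultiRegionTrail") else "single-region",
        "logging" if status.get("IsLogging") else "STOPPED",
        "started:", status.get("StartLoggingTime"),
        "stopped:", status.get("StopLoggingTime"),
    )
```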
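EKS control-plane logging, VPC Flow Log coverage, and GuardDuty enablement can be audited in one pass under the same boto3 assumptions. A sketch scoped to a single region; repeat per region and account, and note that subnet- or ENI-scoped flow logs would need a finer check than the VPC-level comparison here:

```python
# Sketch: record which EKS log types, VPC Flow Logs, and GuardDuty detectors
# were enabled at collection time. Single-region; pagination omitted for brevity.
import boto3

eks = boto3.client("eks")
for name in eks.list_clusters()["clusters"]:
    logging_cfg = eks.describe_cluster(name=name)["cluster"].get("logging", {})
    for entry in logging_cfg.get("clusterLogging", []):
        # entry["types"] lists control-plane log types (api, audit, ...).
        print(name, entry["types"], "enabled" if entry["enabled"] else "disabled")

ec2 = boto3.client("ec2")
vpcs = {v["VpcId"] for v in ec2.describe_vpcs()["Vpcs"]}
covered = {f["ResourceId"] for f in ec2.describe_flow_logs()["FlowLogs"]}
# VPC-level comparison only; subnet/ENI-scoped flow logs are not counted here.
print("VPCs without Flow Logs:", sorted(vpcs - covered))

guardduty = boto3.client("guardduty")
if not guardduty.list_detectors()["DetectorIds"]:
    print("GuardDuty not enabled in this region.")
```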
Pivot Actions
1. Preserve proof of the logging gap itself: export service configuration showing which log types and destinations were enabled or disabled during the incident window (a preservation sketch follows this list)
2. Prioritize surviving compensating evidence such as CloudTrail, load-balancer logs, ECR activity, kubelet logs, host runtime state, and provider threat-detection findings
3. Collect point-in-time Kubernetes inventories and container runtime state before ephemeral workloads or snapshots disappear (see the inventory sketch below)
4. Escalate the gap into the incident record and assign a post-incident owner to enable the missing cloud and container telemetry
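For action 1, the same configuration queries can be written straight into the incident record with a collection timestamp, so the record shows when the gap was observed. A sketch assuming boto3; the `evidence/` directory and file name are arbitrary choices:

```python
# Sketch: preserve proof of the logging gap itself by exporting the relevant
# service configurations, timestamped, as incident-record evidence.
import json
import pathlib
from datetime import datetime, timezone

import boto3

evidence = {
    "collected_at": datetime.now(timezone.utc).isoformat(),
    "cloudtrail_trails": boto3.client("cloudtrail").describe_trails()["trailList"],
    "vpc_flow_logs": boto3.client("ec2").describe_flow_logs()["FlowLogs"],
    "guardduty_detectors": boto3.client("guardduty").list_detectors()["DetectorIds"],
}

out = pathlib.Path("evidence") / "logging-coverage.json"
out.parent.mkdir(exist_ok=True)
# default=str serializes the datetime values embedded in boto3 responses.
out.write_text(json.dumps(evidence, default=str, indent=2))
print(f"Wrote {out}")
```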
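For action 3, a point-in-time inventory can be as simple as scripted kubectl dumps taken before workloads churn. A sketch assuming kubectl is installed and the current context targets the affected cluster; the resource list and output directory are illustrative and should be extended with any CRDs relevant to the environment:

```python
# Sketch: capture a point-in-time Kubernetes inventory before ephemeral
# workloads disappear. Each resource type is dumped as JSON into a
# timestamped directory for later review.
import pathlib
import subprocess
from datetime import datetime, timezone

outdir = pathlib.Path(
    "k8s-inventory-" + datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
)
outdir.mkdir()

for resource in ("pods", "deployments", "daemonsets", "services", "events"):
    dump = subprocess.run(
        ["kubectl", "get", resource, "-A", "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    (outdir / f"{resource}.json").write_text(dump)

nodes = subprocess.run(
    ["kubectl", "get", "nodes", "-o", "json"],
    capture_output=True, text=True, check=True,
).stdout
(outdir / "nodes.json").write_text(nodes)
print(f"Inventory written to {outdir}/")
```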
Alternate Evidence Sources
- CloudTrail, IAM credential reports, and EC2 metadata when EKS or Session Manager detail is absent (a credential-report sketch follows this list)
- Load-balancer, WAF, CDN, VPC Flow, or Route 53 logs that still bound workload communication
- Host-level kubelet, containerd, Docker, and application logs preserved from the affected nodes
- GuardDuty, EDR, and SIEM detections that retain a summarized record of suspicious behavior (see the findings-export sketch below)
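The IAM credential report named in the first bullet is generated asynchronously and then fetched as CSV. A minimal sketch, assuming boto3 and the iam:GenerateCredentialReport / iam:GetCredentialReport permissions:

```python
# Sketch: pull the IAM credential report as compensating identity evidence.
import time

import boto3

iam = boto3.client("iam")

# Report generation is asynchronous; poll until the state is COMPLETE.
while iam.generate_credential_report()["State"] != "COMPLETE":
    time.sleep(2)

report = iam.get_credential_report()
# CSV with one row per user: password and access-key last-used/rotation data.
with open("credential-report.csv", "wb") as fh:
    fh.write(report["Content"])
print("Generated at:", report["GeneratedTime"])
```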
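GuardDuty findings can likewise be exported as a summarized record of the behavior the missing logs would have captured. A sketch assuming boto3; the window bounds are placeholders, and the `updatedAt` criterion is expressed in epoch milliseconds:

```python
# Sketch: export GuardDuty findings updated during the incident window.
import json

import boto3

WINDOW_START_MS = 1_704_067_200_000  # placeholder: incident start, epoch ms
WINDOW_END_MS = 1_705_276_800_000    # placeholder: incident end, epoch ms

gd = boto3.client("guardduty")
for detector_id in gd.list_detectors()["DetectorIds"]:
    finding_ids = gd.list_findings(
        DetectorId=detector_id,
        FindingCriteria={
            "Criterion": {
                "updatedAt": {
                    "GreaterThanOrEqual": WINDOW_START_MS,
                    "LessThanOrEqual": WINDOW_END_MS,
                }
            }
        },
    )["FindingIds"]
    if finding_ids:
        # get_findings accepts up to 50 IDs per call; chunk for larger sets.
        findings = gd.get_findings(DetectorId=detector_id,
                                   FindingIds=finding_ids[:50])
        print(json.dumps(findings["Findings"], default=str, indent=2))
```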