AWS CloudTrail Management Events

Cloud & SaaS, Cloud Infrastructure, AWS, CloudTrail, Cloud Control Plane, SIEM / Log Aggregator

Location

AWS CloudTrail > Event history (last 90 days) or trail delivery in S3 / CloudWatch Logs

Description

AWS control-plane audit records for management events including console activity, API calls, IAM changes, role assumptions, service configuration updates, and destructive actions. Event history provides recent management events, while a trail is required for retained delivery to S3 or CloudWatch Logs.
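Each management event is a single JSON record whose top-level fields identify the caller and the action. As a minimal sketch, the field names below follow the CloudTrail record schema, but the values and the triage summary shape are hypothetical:

```python
import json

# Abbreviated CloudTrail management-event record. Field names match the
# real record schema; the values here are hypothetical.
sample = json.loads("""
{
  "eventTime": "2026-03-02T14:07:11Z",
  "eventSource": "iam.amazonaws.com",
  "eventName": "AttachUserPolicy",
  "sourceIPAddress": "203.0.113.50",
  "userAgent": "aws-cli/2.15.0",
  "userIdentity": {"type": "IAMUser", "arn": "arn:aws:iam::111122223333:user/alice"}
}
""")

# The fields most investigations triage first: when, who, what, and where from.
summary = {
    "when": sample["eventTime"],
    "who": sample["userIdentity"].get("arn", sample["userIdentity"]["type"]),
    "what": f'{sample["eventSource"]}:{sample["eventName"]}',
    "from": sample["sourceIPAddress"],
}
print(summary)
```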

Forensic Value

CloudTrail is the primary source for reconstructing attacker activity across AWS accounts. It identifies the calling principal, source IP, user agent, request parameters, and affected resources for changes to IAM, EC2, EKS, ECR, S3, and to the logging configuration itself. It also reveals anti-forensics such as trail deletion or suspension (DeleteTrail, StopLogging), narrowing a multi-region trail to a single region, or tampering with guardrail services such as GuardDuty.
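The anti-forensics patterns above reduce to a small set of event names. A minimal sketch of flagging them in a batch of records; the event-name list is illustrative rather than exhaustive, and the sample records are hypothetical:

```python
# Event names commonly associated with CloudTrail tampering.
# Illustrative, not exhaustive.
ANTI_FORENSICS_EVENTS = {
    "StopLogging", "DeleteTrail", "UpdateTrail", "PutEventSelectors",
}

def flag_tampering(records):
    """Return (eventTime, eventName, principal ARN) for suspicious records."""
    hits = []
    for r in records:
        if r.get("eventName") in ANTI_FORENSICS_EVENTS:
            who = r.get("userIdentity", {}).get("arn", "<unknown>")
            hits.append((r.get("eventTime"), r["eventName"], who))
    return hits

# Hypothetical records for illustration.
records = [
    {"eventTime": "2026-03-02T14:07:11Z", "eventName": "DescribeInstances",
     "userIdentity": {"arn": "arn:aws:iam::111122223333:user/alice"}},
    {"eventTime": "2026-03-02T14:09:30Z", "eventName": "StopLogging",
     "userIdentity": {"arn": "arn:aws:iam::111122223333:user/mallory"}},
]
print(flag_tampering(records))
```

In practice the same filter is usually run server-side (a `lookup-events` call per event name, or an Athena/Insights query), but the post-processing logic is the same.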

Tools Required

AWS Console, AWS CLI, Athena, CloudWatch Logs Insights, SIEM

Collection Commands

AWS CLI

aws cloudtrail lookup-events --start-time 2026-03-01T00:00:00Z --end-time 2026-03-07T23:59:59Z --output json > cloudtrail_event_history.json

AWS CLI

aws s3 cp s3://<trail-bucket>/AWSLogs/<account-id>/CloudTrail/<region>/ ./cloudtrail/ --recursive
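Trail files delivered to S3 are gzip-compressed JSON objects with a top-level `Records` array. After the copy above, a minimal sketch of flattening them for timeline work; the local directory and function name are assumptions:

```python
import gzip
import json
from pathlib import Path

def load_trail_records(directory):
    """Yield individual event records from CloudTrail .json.gz delivery files."""
    for path in sorted(Path(directory).rglob("*.json.gz")):
        with gzip.open(path, "rt", encoding="utf-8") as fh:
            # Each delivery file holds one JSON object with a "Records" array.
            for record in json.load(fh).get("Records", []):
                yield record

# Example: build a unified timeline across all delivered files.
# events = sorted(load_trail_records("./cloudtrail"), key=lambda r: r["eventTime"])
```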

CloudWatch Logs Insights

fields @timestamp, eventSource, eventName, userIdentity.type, sourceIPAddress, userAgent | sort @timestamp desc | limit 200

Collection Constraints

  • Event history alone is short-lived and management-event focused; durable investigations require a retained trail or exported sink data.
  • Data events and organization-wide visibility depend on pre-incident CloudTrail configuration across the accounts and regions in scope.

MITRE ATT&CK Techniques

T1098, T1078.004, T1578, T1562

Related Blockers

Critical Logs Rotated/Overwritten Before Collection

Key log files (Security EVTX, web server access logs, syslog) have been rotated out or overwritten due to aggressive retention settings, high volume, or attacker manipulation. The evidence window for those sources is now closed.

SIEM Not Ingesting Relevant Log Sources

The SIEM does not ingest logs from the affected systems, applications, or network segments. Correlation, alerting, and historical search capabilities are unavailable for the evidence sources most relevant to this incident.

Legal Requesting Preservation Conflicts with Containment

Legal counsel has issued a preservation hold requiring that certain systems, mailboxes, or data stores remain untouched. This directly conflicts with containment actions like reimaging hosts, resetting accounts, or blocking network segments.

Attacker Used Timestomping, Log Clearing, or Other Anti-Forensics

Evidence of deliberate anti-forensic activity has been found: timestamps modified, event logs cleared, prefetch/shimcache wiped, or tools designed to defeat forensic analysis were executed. Standard timeline analysis may be unreliable.

Cloud or Container Logging Coverage Missing

The investigation depends on cloud-control-plane or container telemetry that was never enabled, was retained too briefly, or was routed to an unavailable destination. This creates blind spots around identity misuse, cluster administration, and workload behavior.

SaaS Audit Logging Not Enabled or Not Licensed

The investigation depends on SaaS audit evidence that was never enabled, is unavailable under the current subscription tier, or requires a higher-privilege admin role than the response team currently has. This creates blind spots for identity abuse, collaboration-platform misuse, and source-code access.

SaaS Audit Retention Expired Before Collection

The response started after the native retention window for Google Workspace, Okta, Slack, GitHub, or similar SaaS evidence had already passed. The necessary events are no longer available in the vendor UI or API even though the underlying accounts and content may still exist.