# 🔗 Supply Chain Attack — Starter Kit

> Complete starter bundle for supply chain attack incident response.

**Generated:** 2026-04-18

---

## Part 1: IR Cheatsheet

### Triage
- **Supply-Chain Scope** (P1, ~90m)
  1. Pull the vendor advisory and record: affected product, affected versions, published IoCs (hashes, domains, certificates), and the earliest known malicious release timestamp.
  2. Query SBOMs and package registries (npm ls, pip freeze, go list -m all, Trivy SBOM, Syft output) across build systems and runtime hosts for the affected package and version range.
  3. Across EDR telemetry, search for the malicious hash(es) both as a file on disk and as an executed process: `DeviceFileEvents | where SHA256 in~ (<hashes>)` joined with `DeviceProcessEvents | where SHA256 in~ (<hashes>)` to separate "present on disk" from "actually executed".
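
The SBOM sweep in step 2 can be sketched with `jq` over CycloneDX output (the format produced by `syft -o cyclonedx-json`). The package name and versions below are hypothetical stand-ins for the advisory's IoCs:

```shell
# Build a tiny stand-in SBOM with the same shape as CycloneDX JSON;
# "evil-pkg" and its version are fabricated placeholders.
cat > sbom.json <<'EOF'
{"components":[
  {"name":"left-pad","version":"1.3.0","purl":"pkg:npm/left-pad@1.3.0"},
  {"name":"evil-pkg","version":"2.1.4","purl":"pkg:npm/evil-pkg@2.1.4"}
]}
EOF
# List any component matching the advisory's affected package name.
jq -r '.components[] | select(.name=="evil-pkg") | "\(.name)@\(.version)"' sbom.json
```

Run the same filter across every stored SBOM to build the "who has it" host list before touching EDR telemetry.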

### Containment
- **Supply-Chain Rollback** (P1, ~120m)
  1. Pin safe-version constraints in package manifests: npm (`"pkg": "<version-safe>"` with lockfile update), pip requirements, Go modules `replace` directives; commit lockfile updates and block merges that revert.
  2. Publish internal proxy/repo denylist for the malicious version(s): JFrog Artifactory `excluded patterns`, Nexus routing rules, internal PyPI mirror blocklists, npm registry `deprecate` of tenant-mirrored copies.
  3. Freeze CI/CD pipelines that may redeploy infected containers: disable auto-deploy on affected repos; add a temporary admission check that rejects images built from the malicious release window.
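
A minimal sketch of the manifest pin in step 1, assuming an npm project; the package name and versions are illustrative:

```shell
# Stand-in manifest with a caret range that could resolve into the
# malicious release; "evil-pkg" and versions are placeholders.
cat > package.json <<'EOF'
{"dependencies":{"evil-pkg":"^2.1.0"}}
EOF
# Pin the exact known-safe version.
jq '.dependencies["evil-pkg"] = "2.1.3"' package.json > package.json.tmp \
  && mv package.json.tmp package.json
jq -r '.dependencies["evil-pkg"]' package.json
```

After pinning, regenerate the lockfile (`npm install --package-lock-only`) and commit both files so CI cannot silently resolve back into the malicious range.
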
- **Serverless Containment** (P1, ~90m)
  1. Preserve the current deployed package: `aws lambda get-function --function-name <name>` and download via the returned time-limited URL; save environment variables separately.
  2. Disable all event-source mappings (`aws lambda list-event-source-mappings` then `delete-event-source-mapping`) to stop inbound invocations.
  3. Swap the function's execution role for a minimum-privilege role that only allows log writes during the investigation (`aws lambda update-function-configuration --role <arn>`); Lambda requires a role, so replace it rather than detach it.
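
Step 2 can be rehearsed offline: the stub below mirrors the shape of `list-event-source-mappings` output (UUIDs fabricated) and turns it into delete commands an analyst can review before running anything.

```shell
# Stand-in for the real API response; same field names, fabricated values.
cat > mappings.json <<'EOF'
{"EventSourceMappings":[
  {"UUID":"11111111-aaaa-4bbb-8ccc-000000000001","EventSourceArn":"arn:aws:sqs:us-east-1:123456789012:queue-1"},
  {"UUID":"22222222-aaaa-4bbb-8ccc-000000000002","EventSourceArn":"arn:aws:sqs:us-east-1:123456789012:queue-2"}
]}
EOF
# Emit (do not execute) one delete command per mapping so the analyst
# can see exactly which event sources will be disabled.
jq -r '.EventSourceMappings[].UUID
       | "aws lambda delete-event-source-mapping --uuid " + .' mappings.json
```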

### Collection
- **K8s Audit Collection** (P1, ~120m)
  1. For EKS: export `/aws/eks/<cluster>/cluster` CloudWatch Logs; filter for api, audit, authenticator, controllerManager, scheduler log types.
  2. For GKE: `gcloud logging read 'resource.type="k8s_cluster"'` with appropriate freshness; separate Data Access logs may need explicit enablement.
  3. For AKS: Query Log Analytics `AzureDiagnostics | where Category in ("kube-audit","kube-apiserver","kube-controller-manager")`.
- **Serverless Collection** (P1, ~90m)
  1. Export execution logs: Lambda via CloudWatch Logs, GCF via Cloud Logging, Azure Functions via Application Insights / Log Analytics.
  2. Export management-plane events for the function ARN / resource: CloudTrail for Lambda (`UpdateFunctionCode`, `UpdateFunctionConfiguration`), Cloud Audit Logs for GCF, Activity Log for Azure Functions.
  3. Preserve the currently-deployed package: `aws lambda get-function`, `gcloud functions describe`, equivalent for Azure; download before any rollback.

### Analysis
- **Backdoor Analysis** (P2, ~240m)
  1. Extract the malicious artifact from the quarantined copy preserved during containment; verify hash against vendor advisory and calculate additional hashes (ssdeep, TLSH) for fuzzy matching.
  2. Static analysis first: `file`, `strings`, `binwalk`, entropy analysis, disassembly (Ghidra/IDA). For JS/Python supply-chain attacks, inspect package.json scripts, post-install hooks, and any `eval`/`exec` calls with obfuscated input.
  3. Dynamic analysis in an isolated lab: run in a disconnected VM with fake network services (INetSim, FakeNet-NG); capture process creation, file drops, registry changes, network I/O, and DNS queries.
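
For the JS case in step 2, install-time lifecycle hooks are the first place to look; this sketch flags them in a fabricated manifest (the obfuscated payload string is a placeholder):

```shell
# Stand-in manifest for a backdoored npm package; contents are fabricated.
cat > package.json <<'EOF'
{"name":"evil-pkg","version":"2.1.4",
 "scripts":{"preinstall":"node -e \"eval(Buffer.from('...','base64').toString())\"",
            "test":"node test.js"}}
EOF
# Print only the lifecycle hooks that run code at install time
# (install, preinstall, postinstall) -- the classic backdoor vector.
jq -r '.scripts | to_entries[]
       | select(.key|test("^(pre|post)?install$"))
       | "\(.key): \(.value)"' package.json
```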

### Post-Incident Review
- **Vendor Review** (P2, ~180m)
  1. Write a short timeline: vendor compromise, malicious release, internal ingestion, detection, containment. Identify the earliest control that could have caught the issue and why it did not.
  2. Assess SBOM coverage: which systems produced SBOMs, how frequently, whether they were stored in a queryable form, and whether you could answer "who has this package version" in minutes rather than days.
  3. Assess artifact signing and verification: was signature verification enforced on install and on runtime? Were allowlists used for acceptable signers? Was there drift?

## Part 2: Key Artifacts

### AmCache.hve
**Location:** `C:\Windows\appcompat\Programs\Amcache.hve`
**Value:** AmCache records SHA1 hashes for binaries inventoried on the system, enabling immediate VirusTotal lookups even after the attacker deletes the original file. Entry timestamps help establish when a tool was first introduced, though they indicate presence rather than proven execution. Entries persist across reboots and are more resistant to anti-forensic tampering than Prefetch.

### Full Memory Dump
**Location:** `Acquired via live capture (RAM)`
**Value:** Memory analysis is the only reliable method to detect fileless malware, process injection, and reflective DLL loading that leave no disk artifacts. Active network connections with owning process context, decrypted credential material from LSASS, and in-memory-only scripts are all recoverable. Volatility plugins can reconstruct the full process tree, open handles, and loaded modules.

### Unified Audit Log (UAL)
**Location:** `Microsoft Purview > Audit > Search (or Search-UnifiedAuditLog cmdlet)`
**Value:** The UAL is the single most important artifact for M365 investigations. It captures mailbox access, file downloads, sharing changes, admin role assignments, and OAuth app consents in one searchable location. Correlating ClientIP and UserAgent across operations reveals session hijacking: the same session token appearing from two different geolocations strongly indicates token theft. Default retention is 180 days (Audit Standard) or 365 days (E5 / Audit Premium).

### AWS CloudTrail Management Events
**Location:** `AWS CloudTrail > Event history (last 90 days) or trail delivery in S3 / CloudWatch Logs`
**Value:** CloudTrail is the primary source for reconstructing attacker activity across AWS accounts. It identifies the calling principal, source IP, user agent, request parameters, and affected resources for changes to IAM, EC2, EKS, ECR, S3, and logging configuration itself. It also reveals anti-forensics such as trail deletion, region disabling, or tampering with guardrail services.
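
A quick way to rehearse CloudTrail triage is to pull the calling principal, action, source IP, and user agent from a single record. The event below is fabricated and trimmed, but follows the real record schema:

```shell
# Trimmed stand-in CloudTrail record; account ID, IP, and ARN are fabricated.
cat > event.json <<'EOF'
{"userIdentity":{"arn":"arn:aws:iam::123456789012:user/builder"},
 "eventName":"UpdateFunctionCode",
 "sourceIPAddress":"198.51.100.7",
 "userAgent":"aws-cli/2.15.0"}
EOF
# One tab-separated line per event: who, what, from where, with what client.
jq -r '[.userIdentity.arn, .eventName, .sourceIPAddress, .userAgent] | @tsv' event.json
```

The same filter works unchanged across a directory of exported trail files for a fast first pass.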

### GitHub Enterprise Audit Log Events
**Location:** `GitHub Enterprise or organization audit log UI and REST API`
**Value:** GitHub audit logs are essential for source-code and CI/CD investigations. They show who changed org membership, created or rotated tokens, modified repository settings, added apps, or accessed administration features that could enable code theft or supply-chain abuse.

### Kubernetes API Server Audit Log
**Location:** `Kubernetes API server audit log (--audit-log-path) or managed-cluster equivalent (AKS diagnostic settings, EKS control-plane logging, GKE Cloud Logging)`
**Value:** The K8s API audit log reconstructs the attacker's control-plane activity: what objects were created or modified, which service accounts were used, what images were deployed, which secrets were accessed. On managed clusters, enabling and forwarding control-plane logs is a prerequisite to meaningful investigation.

### Kubelet Node-Level Logs
**Location:** Kubelet systemd journal on each node (`journalctl -u kubelet`) and kubelet log files (`/var/log/kubelet.log`)
**Value:** Kubelet logs fill gaps between the control-plane audit log and container runtime events: they show the node's perspective on pod startup, image pulls, sidecar injection, and health-probe failures. Critical when an ephemeral container has been evicted and only node-level records remain.

### Container Runtime State and Events
**Location:** `containerd or Docker daemon log (journalctl -u containerd / journalctl -u docker), runtime state directory (/var/lib/containerd, /var/lib/docker)`
**Value:** When a compromised container has been evicted or replaced, the runtime state directory may still hold the container configuration (CRI-O/containerd JSON files), recent log tail, and layer references. Combined with the image registry, these reconstruct what actually ran and for how long.

### AWS Lambda Execution Logs
**Location:** CloudWatch Logs log group `/aws/lambda/<function-name>`
**Value:** Lambda execution logs show what each invocation did: inputs, outputs, error traces, timing anomalies, and any stdout-printed attacker activity. Combined with CloudTrail management events for the function, they reconstruct both the "who deployed" and "what happened during execution" dimensions.

### AWS Lambda Function Code and Configuration
**Location:** Lambda function code (downloadable via `get-function`) and function configuration (environment variables, layers, execution role, triggers)
**Value:** Attackers frequently modify Lambda code (UpdateFunctionCode) or environment (UpdateFunctionConfiguration) as a persistence mechanism. Downloading the current deployed package and diffing against the expected CI/CD build identifies malicious drift. Environment variables often carry secrets worth rotating.
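
A sketch of the drift check, assuming the package was already downloaded during containment. Lambda's `CodeSha256` is the base64-encoded SHA-256 of the deployment zip; `function.zip` below is a local stand-in, and the "reported" value is copied from the observed one so the comparison logic can run end to end.

```shell
# Local stand-in for the downloaded deployment package.
printf 'stand-in package bytes' > function.zip
# Compute the digest the same way Lambda does: raw SHA-256, base64-encoded.
openssl dgst -sha256 -binary function.zip | base64 | tr -d '\n' > observed.b64
# In a real case, reported.b64 holds the CodeSha256 field from
# `aws lambda get-function-configuration`; here we copy the observed value
# so the sketch is runnable offline.
cp observed.b64 reported.b64
if cmp -s observed.b64 reported.b64; then echo MATCH; else echo DRIFT; fi
```

Any `DRIFT` result means the deployed code no longer matches the expected build and warrants a full diff of the package contents.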

### GCP Cloud Functions Execution Logs
**Location:** Cloud Logging with resource type `cloud_function`
**Value:** Execution logs reconstruct individual function invocations, identify abnormal invocation patterns, and capture any stdout-printed attacker output. Combined with Cloud Audit Logs for the function resource, they cover both deployment and runtime phases.

### /proc Filesystem (Live Process Data)
**Location:** `/proc/<pid>/ (cmdline, exe, fd/, maps, environ, net/)`
**Value:** /proc is essential for live triage when a memory dump is not feasible. `/proc/<pid>/exe` reveals the true binary path even if the process renamed itself. `/proc/<pid>/cmdline` shows launch arguments. `/proc/<pid>/fd/` exposes deleted-but-open files that can still be recovered with `cp`. `/proc/net/tcp` provides a live connection table whose socket inodes map back to owning processes via `/proc/<pid>/fd`, useful for identifying C2 connections.
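
The deleted-but-open recovery trick can be demonstrated end to end in any Linux shell (file names here are arbitrary):

```shell
# Hold a file open, delete it, then recover the content through /proc.
echo 'dropper payload' > /tmp/ir_demo_dropper
exec 3< /tmp/ir_demo_dropper    # keep an open descriptor on fd 3
rm /tmp/ir_demo_dropper         # file is now unlinked from disk
# The fd entry still points at the live inode; $$ is this shell's PID.
cp /proc/$$/fd/3 /tmp/ir_demo_recovered
exec 3<&-                       # release the descriptor
cat /tmp/ir_demo_recovered      # prints: dropper payload
```

In a real case the holding process is the malware itself, so the copy must happen before the process exits or is killed.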

### Systemd Journal (Persistent Binary Logs)
**Location:** `/var/log/journal/<machine-id>/*.journal`
**Value:** The systemd journal aggregates logs from all sources into a single queryable binary format that may contain entries not present in traditional text log files. Forward-secure sealing (FSS) cryptographically protects log integrity, making tamper detection possible. Journal entries include structured metadata fields (unit name, PID, UID) that enable precise filtering. Persistent journals in /var/log/journal survive reboots and may retain longer history than rotated text logs.

## Part 3: Key Queries

### Supply-Chain Scope
```
DeviceFileEvents | where SHA256 in~ ("<hash1>","<hash2>") | summarize first_seen=min(Timestamp), hosts=make_set(DeviceName) by SHA256, FileName
```

```
DeviceProcessEvents | where SHA256 in~ ("<hash1>","<hash2>") or InitiatingProcessSHA256 in~ ("<hash1>","<hash2>") | project Timestamp, DeviceName, AccountName, ProcessCommandLine, InitiatingProcessCommandLine
```

### Supply-Chain Rollback
```
DeviceProcessEvents | where SHA256 in~ (<malicious-hashes>) | summarize executions=count(), first_seen=min(Timestamp), last_seen=max(Timestamp) by DeviceName
```

```
kubectl get pods -A -o json | jq '.items[] | select(.spec.containers[].image | contains("<bad-image-digest>")) | {ns:.metadata.namespace, name:.metadata.name}'
```

### Backdoor Analysis
```
DeviceRegistryEvents | where RegistryKey has_any ("<malicious-reg-path-1>","<malicious-reg-path-2>") | project Timestamp, DeviceName, RegistryKey, RegistryValueName, RegistryValueData
```

```
DeviceFileEvents | where FolderPath has_any ("<drop-path-1>","<drop-path-2>") or FileName in~ ("<dropped-file-1>","<dropped-file-2>") | summarize by DeviceName, FolderPath, FileName
```

### Vendor Review
```
Does the CI/CD pipeline fail closed or fail open when SBOM generation or signature verification fails?
```

```
How many of our critical vendors have documented forensic-readiness and customer-notification commitments?
```

### Serverless Containment
```
aws lambda list-event-source-mappings --function-name <name> | jq -r '.EventSourceMappings[].UUID'
```

```
aws iam get-role-policy --role-name <exec-role> --policy-name <policy>
```

### K8s Audit Collection
```
aws logs filter-log-events --log-group-name /aws/eks/<cluster>/cluster --filter-pattern '{ ($.verb = "create") && ($.objectRef.resource = "pods") }'
```

```
gcloud logging read 'protoPayload.methodName="io.k8s.core.v1.pods.create"' --freshness=7d
```

### Serverless Collection
```
aws logs filter-log-events --log-group-name /aws/lambda/<function> --start-time $(date -d "-7 days" +%s)000
```

```
gcloud logging read 'resource.type="cloud_function" AND resource.labels.function_name="<name>"' --freshness=7d
```

---
*Generated by DFIR Assist*