Supply Chain Attack Investigation Assessment
Assessment template for supply chain compromises covering vendor-advisory ingestion, cohort separation (installed/executed/C2-contacted), rollback, signing-material rotation, and post-incident SBOM governance improvements.
Triage
Confirm that the investigation window has been defined with clear T-start (earliest known indicator) and T-end boundaries. The timeframe should include a safety buffer of at least 48 hours before the first detected IOC to account for pre-compromise reconnaissance.
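As a minimal sketch of the windowing rule above (function and field names are illustrative, not part of any standard tooling), the buffered T-start can be derived from the earliest indicator timestamp:

```python
from datetime import datetime, timedelta

# Pad T-start by a 48-hour buffer before the earliest known indicator
# to cover pre-compromise reconnaissance, per the checklist item.
RECON_BUFFER = timedelta(hours=48)

def investigation_window(first_ioc_seen: datetime, t_end: datetime):
    """Return (t_start, t_end) with the reconnaissance buffer applied."""
    t_start = first_ioc_seen - RECON_BUFFER
    return t_start, t_end

start, end = investigation_window(
    datetime(2024, 3, 10, 14, 0),  # hypothetical first-IOC timestamp
    datetime(2024, 3, 15, 0, 0),   # hypothetical containment time
)
print(start.isoformat())  # 2024-03-08T14:00:00
```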
Verify that the initially compromised system, account, or entry point has been identified and documented. Patient zero determination should be supported by corroborating evidence from multiple log sources such as EDR, authentication logs, and email gateway records.
Ensure the incident has been assigned a severity level based on observed impact, affected asset criticality, and potential data exposure. The classification should follow the organization's incident severity matrix and be reflected in all communications and ticket metadata.
Confirm that the vendor advisory has been ingested, including affected product, version range, published IoCs (hashes, C2 domains, certificates), and the earliest malicious release timestamp. Private IoC sharing from ISAC or vendor channels has been requested if the public advisory is incomplete.
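One way to keep ingested advisory data queryable is to normalize it into a single record; the schema below is an assumption for illustration, not a standard advisory format:

```python
from dataclasses import dataclass, field

# Illustrative normalized-advisory schema; every field name here is
# an assumption chosen to mirror the checklist item.
@dataclass
class Advisory:
    product: str
    affected_versions: list
    file_hashes: set = field(default_factory=set)
    c2_domains: set = field(default_factory=set)
    earliest_malicious_release: str = ""  # ISO 8601 timestamp

adv = Advisory(
    product="example-pkg",                     # hypothetical package
    affected_versions=["2.3.6", "2.3.7"],
    c2_domains={"evil.example.net"},
    earliest_malicious_release="2024-03-01T00:00:00Z",
)
```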
Confirm that affected-asset enumeration distinguishes assets with the compromised package present, assets where the payload executed, and assets that contacted attacker C2 infrastructure. These three cohorts have different response requirements.
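The three-cohort separation above reduces to set algebra once each population is enumerated. A minimal sketch, assuming the input sets come from software inventory, EDR execution telemetry, and DNS/proxy logs respectively (hostnames are fabricated):

```python
# Hypothetical cohort inputs from inventory, EDR, and network logs.
installed = {"host-a", "host-b", "host-c", "host-d"}
executed = {"host-b", "host-c"}
contacted_c2 = {"host-c"}

present_only = installed - executed        # rollback/patch only
executed_only = executed - contacted_c2    # forensic triage
active_compromise = contacted_c2           # full IR: isolate, image, rebuild

# Cohorts should nest: C2 contact implies execution implies presence.
assert active_compromise <= executed <= installed
print(sorted(present_only))  # ['host-a', 'host-d']
```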
Containment
Confirm that all systems identified as compromised have been isolated from the network. Isolation should be verified through network-level controls (VLAN segmentation, firewall rules, or EDR network quarantine) rather than simply disabling accounts on the host.
Verify that all accounts known or suspected to be compromised have been disabled or had their credentials forcibly rotated. This includes service accounts, shared accounts, and any accounts with elevated privileges that the attacker may have accessed.
Assess whether the containment boundary is comprehensive enough to cover all known attacker footholds. Review lateral movement evidence, C2 communication logs, and authentication patterns to confirm no alternate access paths remain outside the containment perimeter.
Verify that all affected services have been rolled back to a verified safe version of the package with lockfile updates committed. CI/CD pipelines are frozen against redeploying malicious versions; internal registries block the known-bad versions.
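Verifying the rollback can include scanning committed lockfiles for the known-bad versions. A hedged sketch, assuming an npm v2+-style `package-lock.json` layout (a top-level `"packages"` map whose keys end in `node_modules/<name>`); the package name and versions are fabricated:

```python
import json

# Known-bad (name, version) pairs from the vendor advisory (fabricated).
BAD = {("example-pkg", "2.3.6"), ("example-pkg", "2.3.7")}

def find_bad_packages(lockfile_text: str):
    """Return (name, version) hits for advisory-listed versions."""
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        name = path.rsplit("node_modules/", 1)[-1]
        if (name, meta.get("version")) in BAD:
            hits.append((name, meta["version"]))
    return hits
```

Run against every service's committed lockfile; any hit means the rollback is incomplete for that service.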
Preservation
Confirm that volatile memory (RAM) has been captured from all key compromised systems before any reboot or remediation action. Memory dumps should be acquired using forensically sound tools and stored with proper chain of custody documentation.
Verify that all critical log sources have been snapshotted or exported to a tamper-proof location. This includes SIEM data, Windows Event Logs, authentication logs, email gateway logs, and cloud audit trails that fall within the investigation timeframe.
Ensure that a formal chain of custody record exists for every piece of evidence collected. Each record must include the evidence hash, collector identity, collection timestamp, storage location, and any transfers between custodians.
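A minimal sketch of such a record, using only the fields the checklist item requires (the dictionary schema itself is an assumption, not a prescribed format):

```python
import hashlib
from datetime import datetime, timezone

def custody_record(evidence: bytes, collector: str, location: str) -> dict:
    """Build a chain-of-custody record for one piece of evidence."""
    return {
        "sha256": hashlib.sha256(evidence).hexdigest(),
        "collector": collector,
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "storage_location": location,
        "transfers": [],  # append one entry per custodian hand-off
    }

# Hypothetical usage with fabricated evidence bytes and identifiers.
rec = custody_record(b"memory.dmp contents", "analyst-1", "evidence-vault/case-42")
```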
Collection
Confirm that endpoint detection and response telemetry has been collected from all in-scope systems for the investigation timeframe. Telemetry should include process execution trees, file modifications, network connections, and registry changes.
Validate that evidence has been gathered from every relevant log source including EDR, SIEM, cloud audit logs, email gateway, proxy, DNS, VPN, and authentication systems. Cross-reference the log source inventory against the incident scope to identify any gaps.
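The cross-reference above is a set difference between the required inventory and the sources actually collected; a minimal sketch with illustrative source names:

```python
# Log-source inventory from the checklist item; names are illustrative.
REQUIRED_SOURCES = {"edr", "siem", "cloud_audit", "email_gateway",
                    "proxy", "dns", "vpn", "auth"}

def coverage_gaps(collected: set) -> set:
    """Return required sources with no evidence collected yet."""
    return REQUIRED_SOURCES - collected

gaps = coverage_gaps({"edr", "siem", "dns", "auth"})
print(sorted(gaps))  # ['cloud_audit', 'email_gateway', 'proxy', 'vpn']
```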
Analysis
Verify that all lateral movement activity has been identified and mapped across the environment. Analysis should cover RDP sessions, SMB connections, WMI/PSRemoting, pass-the-hash/pass-the-ticket activity, and any anomalous authentication patterns between systems.
Confirm that the root cause of the incident has been identified, including the initial attack vector, any exploited vulnerabilities, and the conditions that allowed the compromise to succeed. The root cause should be documented with supporting evidence from forensic analysis.
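The lateral-movement mapping above can be sketched as a directed graph built from parsed authentication events; real input would come from EDR/SIEM queries, and the hostnames and events here are fabricated:

```python
from collections import defaultdict

# Fabricated (source, destination, protocol) authentication events.
events = [
    ("wkstn-7", "srv-db1", "smb"),
    ("wkstn-7", "srv-file", "rdp"),
    ("srv-file", "dc-01", "wmi"),
]

graph = defaultdict(set)
for src, dst, proto in events:
    graph[src].add((dst, proto))

# Hosts reachable from patient zero via observed movement edges.
frontier, seen = ["wkstn-7"], {"wkstn-7"}
while frontier:
    host = frontier.pop()
    for dst, _proto in graph.get(host, ()):
        if dst not in seen:
            seen.add(dst)
            frontier.append(dst)
print(sorted(seen - {"wkstn-7"}))  # ['dc-01', 'srv-db1', 'srv-file']
```

Every host in the reachable set belongs inside the containment boundary, which also feeds the comprehensiveness check in the Containment section.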
Eradication
Confirm that all attacker-deployed malware, scripts, remote access tools, and utilities have been identified and removed from every affected system. Removal should be validated through post-remediation scans and manual verification of common persistence locations.
Verify that all attacker persistence mechanisms have been identified and removed. This includes scheduled tasks, registry run keys, startup folder entries, WMI subscriptions, service installations, DLL hijacks, and any modified Group Policy Objects.
Ensure that all credentials known or suspected to be compromised have been reset, including user passwords, service account passwords, API keys, certificates, and Kerberos tickets. The KRBTGT account should be reset twice if domain compromise is suspected.
If the attacker had access to signing material (code-signing keys, Cosign keys, Authenticode certificates), confirm that the compromised key has been revoked, a new key provisioned, and the revocation published through internal and external channels.
Recovery
Confirm that compromised systems have been rebuilt from known-clean images or installation media rather than simply cleaned in place. The rebuild process should include verifying the integrity of the baseline image and applying all current security patches before reconnecting to the network.
Verify that business services have been restored in a controlled, phased manner with validation at each step. Service restoration should include functional testing, security monitoring confirmation, and a defined rollback plan if anomalous activity is detected post-restoration.
Post-Incident Review
Confirm that a formal lessons-learned review has been conducted with all participating teams. The review should document what worked well, what failed, timeline gaps, tooling shortcomings, and specific improvement actions with assigned owners and deadlines.
Verify that detection rules, SIEM correlations, and EDR policies have been updated based on the TTPs observed during the incident. New detections should cover the initial access vector, lateral movement techniques, and any persistence mechanisms used by the attacker.
Confirm that a post-incident governance review has identified specific control improvements (SBOM coverage, signature verification, provenance attestation, dependency anomaly detection) with named owners and dates, and that a vendor-contract review adding forensic-cooperation clauses is underway.
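One concrete SBOM-coverage check is scanning stored SBOMs for the compromised component. A hedged sketch assuming CycloneDX-style JSON, which carries a top-level `"components"` array with `"name"` and `"version"` fields; the package name and versions are fabricated:

```python
import json

def sbom_contains(sbom_text: str, name: str, bad_versions: set) -> bool:
    """True if the SBOM lists an advisory-affected component version."""
    sbom = json.loads(sbom_text)
    return any(
        c.get("name") == name and c.get("version") in bad_versions
        for c in sbom.get("components", [])
    )

# Hypothetical usage against a fabricated single-component SBOM.
doc = json.dumps({"components": [{"name": "example-pkg", "version": "2.3.7"}]})
print(sbom_contains(doc, "example-pkg", {"2.3.6", "2.3.7"}))  # True
```

Running this across all service SBOMs both validates eradication and exposes services with no SBOM at all, which is itself a coverage gap to record in the governance review.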