Review Cloud Hardening Gaps After Identity Compromise
Review identity-plane and cloud-control-plane weaknesses that allowed attacker persistence or tenant abuse, including conditional access, service principals, OAuth grants, and AKS control-plane visibility.
Actions
1. Document every tenant control that failed or was bypassed: MFA coverage gaps, weak conditional access scoping, stale break-glass accounts, excessive app consent, and unmanaged workload identities.
2. Review all service principals, managed identities, and OAuth grants touched during the incident. Remove unnecessary permissions and add monitoring for privileged credential changes.
3. Assess Azure infrastructure logging coverage, including AKS diagnostic settings, Kubernetes audit retention, resource-level activity logs, and ACR access telemetry.
4. Define a hardening backlog for conditional access, application governance, workload identity restrictions, and cloud admin break-glass procedures.
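Step 3's coverage assessment can be grounded in the workspace itself: if an in-scope AKS cluster never appears in recent audit telemetry, its diagnostic settings are a gap to remediate. A minimal KQL sketch, assuming AKS control-plane logs are routed to the standard AzureDiagnostics table (category names as documented for AKS diagnostic settings):

// Which AKS clusters have emitted control-plane audit events recently?
// In-scope clusters absent from this result likely have a diagnostic-settings gap.
AzureDiagnostics
| where TimeGenerated > ago(7d)
| where Category in ("kube-audit", "kube-audit-admin")
| summarize LastEvent = max(TimeGenerated), Events = count() by Resource
| order by LastEvent desc

If clusters route logs to a resource-specific table instead of AzureDiagnostics, the table name will differ; the coverage logic is the same.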
Queries
AuditLogs
| where TimeGenerated > ago(30d)
| where OperationName has_any ("Add service principal credentials", "Consent to application", "Add member to role")
| project TimeGenerated, OperationName, InitiatedBy, TargetResources

AzureActivity
| where TimeGenerated > ago(30d)
| where ResourceProvider has "Microsoft.ContainerService" or ResourceProvider has "Microsoft.ContainerRegistry"
| summarize count() by OperationNameValue, Caller, ResourceGroup
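For step 1, sign-in telemetry can surface MFA and conditional access gaps directly. A hedged sketch against the SigninLogs table (column names assume the standard Entra ID sign-in log schema; the single-factor filter is one heuristic, not a complete coverage test):

// Successful sign-ins that completed with only a single factor:
// these accounts and apps sit outside current MFA / conditional access enforcement.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| where AuthenticationRequirement == "singleFactorAuthentication"
| summarize SignIns = count() by UserPrincipalName, AppDisplayName
| order by SignIns desc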
Notes
- Cloud hardening reviews should include workload identity and cluster control-plane telemetry, not just user account abuse.
- If AKS or container services are in scope, missing audit settings are evidence gaps that need explicit remediation owners.
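Step 2's monitoring goal can be seeded with a baseline of workload identity sign-in behavior. A sketch assuming the optional AADServicePrincipalSignInLogs diagnostic category is enabled (the dcount threshold is an illustrative starting point, not a tuned detection):

// Baseline service principal sign-in activity; flag identities
// authenticating from an unusually wide set of source IPs.
AADServicePrincipalSignInLogs
| where TimeGenerated > ago(30d)
| summarize SignIns = count(), SourceIPs = dcount(IPAddress) by ServicePrincipalName, AppId
| where SourceIPs > 3
| order by SourceIPs desc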
Where to Go Next
Related Artifacts
Azure AD (Entra ID) Audit Logs
Azure Portal > Entra ID > Monitoring > Audit logs (or Microsoft Graph API /auditLogs/directoryAudits)
Conditional Access Policy Logs
Azure Portal > Entra ID > Monitoring > Sign-in logs > Conditional Access tab
Service Principal & App Registration Activity
Azure Portal > Entra ID > App registrations and Enterprise applications > Audit logs (or Microsoft Graph API)
Azure Kubernetes Service (AKS) Activity Logs
Azure Portal > Monitor > Activity Log filtered to Microsoft.ContainerService/managedClusters
AKS Kubernetes Audit Logs
Azure Monitor diagnostic settings for AKS (categories: kube-audit, kube-audit-admin)
AWS CloudTrail Management Events
AWS CloudTrail > Event history (last 90 days) or trail delivery in S3 / CloudWatch Logs
AWS IAM Credential Report & Access Key Metadata
AWS IAM > Credential report plus IAM API responses for users and access keys
Amazon EKS Control Plane Logs
CloudWatch Logs group /aws/eks/<cluster>/cluster for api, controllerManager, and scheduler log types
Amazon EKS Kubernetes Audit Logs
CloudWatch Logs group /aws/eks/<cluster>/cluster log type: audit
Amazon ECR CloudTrail and Registry Events
CloudTrail events for ecr.amazonaws.com plus repository and image metadata from ECR APIs
Google Workspace Admin Audit Events
Google Admin Console > Reporting > Audit and investigation > Admin log events
Google Workspace OAuth Token and App Access Audit Events
Google Admin Console > Reporting > Audit and investigation > OAuth log events
Google Cloud Audit Logs
Google Cloud Logging > Logs Explorer > cloudaudit.googleapis.com/*
Google Kubernetes Engine Audit Logs
Cloud Logging > GKE cluster audit streams and Kubernetes API audit entries
Okta System Log
Okta Admin Console > Reports > System Log or Okta System Log API
Slack Audit Logs
Slack Admin > Audit Logs or Slack Audit Logs API
GitHub Enterprise Audit Log Events
GitHub Enterprise or organization audit log UI and REST API
Common Blockers
Cloud or Container Logging Coverage Missing
The investigation depends on cloud-control-plane or container telemetry that was never enabled, was retained too briefly, or was routed to an unavailable destination. This creates blind spots around identity misuse, cluster administration, and workload behavior.
SaaS Audit Logging Not Enabled or Not Licensed
The investigation depends on SaaS audit evidence that was never enabled, is unavailable under the current subscription tier, or requires a higher-privilege admin role than the response team currently has. This creates blind spots for identity abuse, collaboration-platform misuse, and source-code access.
SaaS Audit Retention Expired Before Collection
The response started after the native retention window for Google Workspace, Okta, Slack, GitHub, or similar SaaS evidence had already passed. The necessary events are no longer available in the vendor UI or API even though the underlying accounts and content may still exist.