[EXP] Detection Strategy for AI-Accelerated Mass Exploitation Operations
Report Type: Exploitation Report (EXP)
Threat Category: Mass Exploitation / Opportunistic Internet-Scale Exploitation
Assessment Date: April 17, 2026
Primary Impact Domain: Data Exposure and Operational Disruption
Secondary Impact Domains: Regulatory Impact; Incident Response Cost; Reputational Impact
Affected Asset Class: Internet-Facing Services and Cloud-Hosted Workloads
Threat Objective Classification: Initial Access; Execution; Collection; Exfiltration
BLUF
AI-accelerated mass exploitation increases enterprise risk by enabling attackers to scale compromise and data loss far beyond traditional attack volumes. This risk is driven by automated exploitation that rapidly identifies vulnerable systems, executes payloads, and aggregates sensitive data with minimal human involvement. Most organizations remain exposed due to incomplete visibility and insufficient detection maturity, limiting their ability to identify attacks early. Executive action is required to strengthen detection capability, enforce visibility across systems, and reduce exposure to large-scale exploitation.
Executive Risk Translation
AI-driven exploitation shifts cyber risk from isolated incidents to continuous, high-volume attack conditions that increase both the likelihood and impact of compromise. The primary business risk is rapid progression from initial access to data loss and operational disruption before detection occurs. Organizations without sufficient visibility face delayed response, allowing attackers to operate at scale across multiple systems. This creates sustained exposure to financial loss, regulatory consequences, and operational instability.
S3 — Why This Matters Now
Attackers are increasingly using automation to accelerate exploitation across large numbers of targets simultaneously. This reduces the time between vulnerability exposure and active attack, increasing the likelihood of compromise. At the same time, enterprise environments continue to expand, increasing the number of exposed systems and potential attack paths. This combination creates a near-term risk where organizations may be compromised and experience data loss before detection mechanisms can respond effectively.
S4 — Key Judgments
• AI-accelerated exploitation increases both the scale and speed of attacks beyond traditional defensive capabilities
• Most enterprises lack sufficient visibility and detection maturity to identify high-speed, multi-stage attacks early
• Detection gaps are most significant during execution and data collection stages
• Delayed detection increases the likelihood of data loss and operational disruption
• Improving visibility and detection maturity is required to reduce exposure to large-scale attacks
S5 — Executive Risk Summary
Business Risk
Automated exploitation increases the likelihood of widespread compromise, rapid data loss, and disruption across multiple systems.
Technical Cause
The risk is driven by automated attack activity that increases the scale and speed of compromise and data access, combined with insufficient visibility and detection capability.
Threat Posture
Adversaries can execute high-frequency attacks across multiple targets and progress rapidly from initial access to data collection.
Executive Decision Requirement
Executives must prioritize improving detection capability, enforcing visibility across environments, and reducing exposure to large-scale exploitation.
S6 — Executive Cost Summary
AI-accelerated exploitation creates financial impact that varies based on detection speed, exposure level, and containment effectiveness.
Low Impact Scenario
Rapid detection and containment limit attacker activity to a small number of systems with minimal data exposure
Estimated Cost: $250K – $750K
Moderate Impact Scenario (Most Likely)
Delayed detection allows compromise of multiple systems with partial data exposure and limited operational disruption
Estimated Cost: $1.2M – $3.5M
High Impact Scenario
Widespread exploitation results in significant data loss, extended response operations, and operational disruption
Estimated Cost: $5M – $12M
S6A — Key Cost Drivers
• Exposure of internet-facing systems and exploitable services
• Detection delay and time to containment
• Number of systems affected during exploitation
• Volume and sensitivity of data accessed or exfiltrated
• Regulatory obligations and breach notification requirements
Most likely scenario selection is based on:
• moderate detection delay due to partial detection maturity
• presence of exposed services increasing likelihood of compromise
• limited but not fully contained lateral movement potential
• moderate data sensitivity across enterprise systems
• partial visibility reducing early-stage detection effectiveness
S6B — Compliance and Risk Context
Compliance Exposure Indicator
Moderate to High depending on data sensitivity and regulatory obligations
Risk Register Entry
Risk Title
AI-Accelerated Mass Exploitation Leading to Data Loss and Operational Disruption
Risk Description
Automated exploitation campaigns can rapidly compromise multiple systems and access sensitive data before detection, increasing the likelihood of data breach and operational impact.
Likelihood
High
Impact
High
Risk Rating
High
Annualized Risk Exposure
$2M – $6M based on most likely scenario and exposure conditions
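Illustrative reconciliation only, not a methodology stated in this report: if the annualized figure is read as the most likely scenario cost scaled by an assumed rate of roughly 1.7 qualifying events per year, then $1.2M × 1.7 ≈ $2.0M and $3.5M × 1.7 ≈ $6.0M, which reproduces the stated range. The occurrence rate is an assumption; substitute organization-specific scenario costs and frequency estimates.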
S7 — Risk Drivers
• Increased use of automation in cyber attacks
• Expansion of internet-facing systems
• Incomplete visibility across enterprise environments
• Limited detection capability for high-speed attack activity
• Delays in identifying execution and data access behaviors
S8 — Bottom Line for Executives
AI-driven exploitation increases both the speed and scale of cyber attacks, reducing the effectiveness of traditional detection approaches. Organizations without sufficient visibility will experience delayed detection and greater impact during attacks. The primary risk is rapid progression from compromise to multi-system data loss. Strengthening detection capability and enforcing visibility are critical to reducing exposure.
S9 — Board-Level Takeaway
AI-accelerated attacks are increasing enterprise risk by enabling large-scale compromise and financial impact in shorter timeframes. Without investment in detection and visibility, organizations face increased likelihood of significant data loss and disruption. This risk requires executive prioritization to ensure appropriate oversight and resource allocation. Strengthening detection capability is essential to maintaining control over enterprise risk.
S10 — Threat Overview
AI-accelerated mass exploitation operations represent a scalable attack model in which adversaries use automation to identify vulnerable systems and execute exploit activity across multiple targets simultaneously. The threat is defined by high-frequency interaction, rapid execution cycles, and the ability to operate continuously across exposed environments. Unlike targeted intrusion campaigns, this model prioritizes speed and volume over persistence on a single system. The primary risk is compressed time between exposure, compromise, and impact.
S11 — Threat Classification and Type
Threat Type
External Threat
Threat Sub-Type
Mass Exploitation Operations
Operational Classification
Automated, scalable, multi-target exploitation campaign
Primary Function
Rapid exploitation of exposed systems followed by execution and data access activity
S12 — Campaign or Activity Overview
These operations begin with continuous identification of exposed targets, followed by rapid exploit execution across available entry points. Successful access leads to immediate execution activity and progression into data access or staging behavior. The campaign model minimizes dwell time by accelerating movement from initial access to impact. Activity is repeated across multiple targets in parallel, enabling broad compromise within short timeframes.
S13 — Targets and Exposure Surface
Primary targets are externally accessible systems that provide direct interaction paths from untrusted sources. These systems represent the initial entry points for exploitation.
The exposure surface includes:
· Web applications
· Public APIs
· Remote access services
· Internet-facing application infrastructure
· Cloud services with external interfaces
Primary exposure conditions include:
· Unpatched vulnerabilities
· Misconfigured services
· Exposed administrative functionality
· Inconsistent monitoring on externally accessible systems
S14 — Sectors / Countries Affected
Sectors Affected
· Technology
· Financial Services
· Healthcare
· Retail and E-commerce
· Manufacturing
· Public Sector
Countries Affected
· Global distribution with no geographic restriction
· Higher concentration in regions with dense enterprise infrastructure
S15 — Adversary Capability Profiling
Capability Level
High
Technical Sophistication
Moderate to High. Automation enables effective exploitation without requiring advanced manual tradecraft.
Infrastructure Maturity
Moderate. Sufficient infrastructure is required to support scanning and repeated exploit execution across targets.
Operational Scale
High. The capability to execute across multiple targets simultaneously defines the threat model.
Escalation Likelihood
Moderate. Escalation is dependent on access to sensitive systems or valuable data following initial compromise.
S16 — Targeting Probability Assessment
Overall Targeting Probability
High
Targeting Drivers
· Presence of externally accessible systems
· Availability of exploitable weaknesses
· Automation enabling broad target coverage
· Reduced detection capability in partially monitored environments
Most Likely Targets
· Organizations with exposed application or service interfaces
· Environments with incomplete monitoring coverage
· Systems lacking consistent vulnerability management
S17 — MITRE ATT&CK Chain Flow Mapping
Initial Access
· Exploit Public-Facing Application (T1190)
Execution
· Command and Scripting Interpreter (T1059)
Persistence (Conditional)
· Create Account (T1136)
Discovery
· System Information Discovery (T1082)
Collection
· Data from Local System (T1005)
Exfiltration
· Exfiltration Over Web Service (T1567)
S18 — Attack Path Narrative (Signal-Aligned Execution Flow)
The attack begins with interaction against externally accessible applications, APIs, or services that expose exploitable functionality to untrusted sources. Because exploitation methods are standardized and repeatable, attackers can reliably initiate this interaction across multiple targets without requiring custom development or environment-specific preparation.
The attacker submits requests designed to trigger vulnerable functionality, introducing externally controlled input into the application or service execution path. These requests may be repeated, varied, or concentrated within short intervals to identify a successful execution path.
Upon successful exploitation, the targeted system processes the supplied input in a way that deviates from normal operational behavior. This results in execution activity occurring within the context of the exposed application or service.
Execution occurs using the existing privileges of the compromised service or application. Depending on the execution path, activity may remain within application-level behavior or extend to system-level command execution through native runtime capabilities.
Following execution, the compromised system may be used to access and aggregate locally available data. This includes repeated access to files, objects, or other accessible data sources within short time windows to accelerate progression toward impact.
Once data has been accessed or staged, outbound communication is initiated to transfer information to external destinations. This enables transition from successful execution to material impact without requiring prolonged dwell time.
Because exploitation methods are standardized and repeatable, the attack can be executed consistently across multiple targets, increasing the likelihood of broad, parallel compromise where exposure exists.
The attack may complete without establishing persistence, leaving limited observable artifacts beyond exposed service interaction, execution activity, and any associated outbound communication.
S19 — Attack Chain Risk Amplification Summary
· Standardized exploitation methods increase likelihood of consistent compromise across exposed systems
· Repeated interaction against exposed services increases probability of successful execution
· Execution within application or service context accelerates post-access activity
· Rapid progression from execution to data access reduces effective response time
· Short-window collection activity increases likelihood of material data exposure
· Outbound communication enables immediate transition from compromise to impact
· Limited reliance on persistent artifacts reduces early-stage detection visibility
· Parallel targeting increases overall exposure across environments
S20 — Tactics, Techniques, and Procedures
Purpose
Defines adversary behavior using MITRE ATT&CK-aligned tactics and techniques. This section captures behavior independent of attack flow and includes only techniques directly supported by observed behavior and required attack conditions.
Initial Access
· Abuse of externally accessible applications or services
· Repeated interaction with exposed endpoints to identify exploitable conditions
Techniques: T1190 — Exploit Public-Facing Application
Execution
· Execution triggered through exploited application or service functionality
· Execution within application or service runtime context
Techniques:
· T1059 — Command and Scripting Interpreter
Persistence (Conditional)
· Establishment of access through creation of new accounts where required
Techniques:
· T1136 — Create Account
Discovery
· Enumeration of system-level information to support follow-on activity
Techniques:
· T1082 — System Information Discovery
Collection
· Access and aggregation of locally accessible data
Techniques:
· T1005 — Data from Local System
Exfiltration
· Transfer of collected data using standard outbound communication channels
Techniques:
· T1567 — Exfiltration Over Web Service
S20A — Adversary Tradecraft Summary
· Leverages standardized and repeatable exploitation methods to eliminate need for custom exploit development
· Uses legitimate application or service functionality to execute actions without introducing traditional malware
· Operates within runtime context to reduce reliance on persistent artifacts
· Prioritizes speed and repeatability over long-term persistence
· Enables scalable targeting across multiple exposed systems
· Exploits incomplete monitoring and visibility gaps to reduce detection likelihood
S21 — Detection Strategy Overview
Detection Philosophy
· Detection MUST anchor to behavioral amplification patterns created by AI-assisted operations
· Detection MUST NOT attempt to identify AI usage directly
· Detection MUST focus on observable attack behaviors that scale with automation:
o exploit iteration
o execution density
o data aggregation
o exfiltration volume
· Detection MUST prioritize post-exploitation and execution stages where attacker activity becomes:
o more deterministic
o less obfuscatable
o more telemetry-rich
· Detection MUST NOT rely on:
o CVE-specific indicators
o static IOCs
o exploit payload signatures
· Core enforcement:
o Detection MUST identify mass exploitation behavior patterns, not individual exploit success events
Primary Detection Anchors
· Exploit attempt density (see the sliding-window sketch after this list)
o Rapid, repeated interaction with exposed services
o Multiple exploit attempts within constrained time windows
· Execution burst behavior
o High-frequency process or script execution
o Multiple child processes spawned within short intervals
· Data access and aggregation spikes
o Sudden increase in data reads or queries
o Bulk data collection inconsistent with established baseline
· Outbound data transfer anomalies
o Sustained or high-volume outbound communication
o Transfer behavior deviating from normal operational patterns
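These anchors share one measurable primitive: an entity-bound event count inside a short sliding window compared against a baseline-derived threshold. The sketch below is illustrative only and is not part of any rule in S25; the event stream shape, the 60-second window, and the threshold value are assumptions to be replaced with environment-specific choices.

from collections import defaultdict, deque

WINDOW_SECONDS = 60          # placeholder: align to observed attack execution speed
BURST_THRESHOLD = 8          # placeholder: derive from the environment-specific baseline

windows = defaultdict(deque)  # entity (host, user, or source IP) -> recent event timestamps

def observe(entity: str, ts: float) -> bool:
    # Record one event (exploit attempt, process creation, data read) for the entity
    # and report whether the count inside the sliding window now exceeds the threshold.
    # Assumes events for an entity arrive in timestamp order.
    q = windows[entity]
    q.append(ts)
    while q and ts - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > BURST_THRESHOLD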
Detection Prioritization Model
· Tier 1 — Primary Detection
o MUST detect multi-stage behavioral relationships consisting of exploit attempt, execution, and data access
o MUST produce high-confidence alerts with low false positive rates
· Tier 2 — Supporting Detection
o Detect isolated but high-signal behaviors:
§ execution bursts
§ abnormal data access
§ outbound anomalies
· Tier 3 — Contextual Signals
o Detect low-confidence indicators:
§ scanning activity
§ incomplete exploit attempts
· Enforcement:
o Detection design MUST prioritize behavioral chaining over isolated signal detection
o No detection may be included if it does not contribute meaningful signal strength
Correlation Strategy (Strict Enforcement)
· Correlation MUST be:
o time-bound
o entity-bound using host, user, or source IP
· Correlation windows MUST be:
o short, measured in seconds to minutes
o aligned to attack execution speed (see the correlation sketch after this list)
· Correlation MUST operate across:
o network telemetry
o endpoint telemetry
o data access telemetry
· Enforcement constraints:
o Each detection MUST remain independently valid without correlation
o Correlation MUST NOT be required for baseline detection capability
o Correlation MUST only increase confidence and MUST NOT define detection existence
· Prohibited:
o cross-rule dependency
o chained detection logic requiring multiple rules to trigger
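A minimal sketch of time-bound, entity-bound correlation under the constraints above: both inputs are detections that are already independently valid, and the pairing only raises confidence. The field layout and the 120-second window are illustrative assumptions, not values from this report.

CORRELATION_WINDOW_SECONDS = 120  # placeholder: seconds-to-minutes, aligned to attack speed

def correlate(exploit_events, execution_events, window=CORRELATION_WINDOW_SECONDS):
    # exploit_events and execution_events are iterables of (entity, timestamp) tuples,
    # where entity is a normalized host, user, or source IP shared across telemetry sources.
    attempts = {}
    for entity, ts in exploit_events:
        attempts.setdefault(entity, []).append(ts)
    escalations = []
    for entity, exec_ts in execution_events:
        for exploit_ts in attempts.get(entity, []):
            if 0 <= exec_ts - exploit_ts <= window:
                escalations.append((entity, exploit_ts, exec_ts))
                break
    return escalations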
Telemetry Prioritization
· High priority (required)
o Endpoint process execution telemetry
o Parent-child process relationships
o Command-line visibility
o Network ingress and egress telemetry
· Medium priority (conditionally required)
o File and data access telemetry
o Authentication and session telemetry
· Conditional telemetry
o Web application logs
o Application-layer telemetry
· Enforcement:
o If high-priority telemetry is unavailable, primary detections MUST NOT be deployed
o Detection design MUST assume partial visibility and remain functional under degraded telemetry conditions
Detection Design Constraints
· Detection MUST be:
o behavior-based
o platform-native
o deployable without external enrichment dependencies
· Detection MUST operate under:
o real-world enterprise telemetry limitations
o partial logging scenarios
· Detection MUST NOT:
o depend on specific vulnerabilities
o assume attacker tooling
o rely on fragile artifacts
· Detection MUST include at least one rule that:
o anchors to behavioral relationships and not to volume thresholds alone
Baseline and Deployment Requirements
· Baselines MUST be established for:
o process execution rates per host
o scripting activity per user or system role
o outbound data transfer volumes
· Environment MUST support:
o command-line logging
o process lineage tracking
o normalized host and user identifiers
· Deployment constraints:
o Thresholds MUST be derived from environment-specific baselines (see the derivation sketch after this list)
o Generic thresholds MUST NOT be used
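One way to meet the baseline requirement, shown as an illustrative sketch only: collect per-window counts for each host or role over a representative period, then set the threshold above a high percentile of observed behavior rather than using a generic default. The percentile and headroom multiplier are assumptions to tune per environment.

def derive_threshold(baseline_counts, percentile=99, headroom=1.5):
    # baseline_counts: observed per-window event counts for one host, user, or role.
    ordered = sorted(baseline_counts)
    idx = min(len(ordered) - 1, round((percentile / 100) * (len(ordered) - 1)))
    return max(1, int(ordered[idx] * headroom))

# One threshold per entity, never a shared generic value:
# thresholds = {host: derive_threshold(counts) for host, counts in observed_counts.items()}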
Variant Resilience Requirements
· Detection MUST remain effective under:
o reduced exploit frequency
o script variation or obfuscation
o use of native system tools
· Detection MUST NOT rely solely on:
o execution volume
o signature-based indicators
· At least one detection MUST:
o anchor to exploit-to-execution behavioral transition
Operational Detection Model
· Detection MUST cover:
o exploitation attempts
o execution layer
o data access and exfiltration
· Detection MUST follow escalation logic:
o progression from initial signal to execution confirmation to data impact escalation
· SOC handling model:
o Tier 1 detections require immediate triage
o Tier 2 detections require contextual investigation
o Tier 3 detections are used for enrichment only
· Detection output MUST:
o provide sufficient context for analyst action
o avoid requiring additional correlation for initial triage
Explicit Non-Deployment Guardrails
· Detection MUST NOT be deployed if:
o telemetry requirements are not met
o baseline is not established
o false positive rate cannot be controlled
· Detection MUST NOT:
o rely solely on volume thresholds without baseline context
o generate high-noise alerts in administrative environments
o require unavailable telemetry sources
· Enforcement:
o If a detection cannot meet deployment standards, it MUST be excluded
o Detection completeness MUST NOT be forced at the expense of quality
S22 — Primary Detection Signals
Primary Detection Signals
Exploit attempt concentration
· Repeated inbound requests targeting the same service or endpoint from a single source or correlated sources within a defined time window
· Multiple malformed or exploit-pattern requests within the same session or connection group
Exploit-to-execution transition
· Network-facing service process spawning a shell or scripting interpreter
· Execution initiation within a short interval following inbound request activity from the same source or session
Execution burst activity
· Process or script execution count exceeding host baseline within a defined time window
· High-frequency child process creation originating from a single parent process
Data aggregation behavior
· Data access operations exceeding baseline thresholds for the host or user within a defined time window
· Bulk retrieval or query execution across multiple data objects within a constrained time window
Outbound transfer escalation
· Outbound data transfer volume exceeding baseline thresholds within a defined time window
· Sustained outbound data transfer exceeding expected duration for the host or user role
Supporting Detection Signals
Irregular service interaction patterns
· Request frequency, size, or structure deviating from established service baseline within a defined time window
· Repeated access attempts to non-standard or rarely used application endpoints
Elevated script interpreter usage
· Script interpreter invocation frequency exceeding baseline for host or user within a defined time window
· Script execution occurring outside defined administrative or operational contexts
Abnormal process lineage
· Service or non-interactive processes spawning shell or interpreter processes
· Parent-child relationships not present in baseline process lineage
File system activity anomalies
· File creation or access volume exceeding baseline within a defined time window
· File operations inconsistent with established system role patterns
Exploit Attempt and Instability Signals
Repeated exploit attempt failures
· Multiple failed or error-generating requests targeting the same service within a defined time window
· Sequences of request failures followed by variation in request structure
Application instability indicators
· Service crashes, restarts, or fault events occurring immediately after inbound request activity
· Repeated instability events associated with the same source or request pattern
Payload variation behavior
· Variation in request size, encoding, or parameter structure within a defined session or time window
· Repeated modification of request inputs within a constrained time window
Outbound Communication Signals
High-volume outbound transfer
· Outbound data transfer exceeding established baseline thresholds for the host or network segment within a defined time window
· Sustained outbound sessions with continuous data flow over a defined duration
Unusual destination targeting
· Outbound connections to previously unseen or low-frequency external destinations relative to established baseline
· Destination patterns deviating from established baseline for the host or user
Protocol and service deviation
· Use of protocols not associated with the host or application role
· Use of non-standard ports for outbound data transfer
Persistence and Post-Exploitation Signals (Conditional)
Unauthorized persistence creation
· Creation or modification of persistence mechanisms outside established baseline
· Persistence artifacts created within a short interval following execution activity
Credential and session manipulation
· Account creation or privilege changes exceeding baseline administrative patterns
· Authentication patterns deviating from established baseline following execution activity
Long-duration unauthorized processes
· Processes exceeding expected execution duration for system role
· Background processes lacking valid parent or operational context
Lateral Movement and Expansion Signals
Internal network probing
· Connection attempts to multiple internal hosts exceeding baseline within a defined time window
· Enumeration patterns targeting internal services or systems
Remote execution activity
· Command execution across hosts using administrative protocols
· Remote execution frequency exceeding baseline administrative activity
Access expansion behavior
· Authentication or session activity extending to additional systems within a defined time window
· Access patterns exceeding baseline scope for user or service
Signal Usage Constraints
· Primary detection signals MUST:
o map directly to observable telemetry events
o produce high-confidence detection capability when independently triggered
o be defined using measurable thresholds derived from environment-specific baselines
o represent direct attacker-controlled actions or deterministic system responses
o maintain high confidence without requiring correlation with other signals
· Primary detection signals MUST NOT:
o rely solely on anomaly detection without bounded thresholds
o depend on unsupported or unavailable telemetry
· Supporting and conditional signals MUST:
o NOT be used as standalone detection triggers
o be used only for validation or enrichment
· All signals MUST:
o be bounded by time window, entity context, and measurable baseline deviation
o use thresholds that are explicitly quantifiable within the deployment environment
· Signals MUST NOT:
o rely on single-event anomalies without supporting context
o assume exploit success without corresponding execution or impact evidence
· Detection design MUST:
o prioritize signals resistant to attacker-controlled variation
o avoid dependence on fragile or easily modified artifacts
S23 — Telemetry Requirements
Endpoint and Process Execution Telemetry
• Process creation telemetry MUST capture the following (a minimal record sketch follows this subsection):
• process name
• parent process name
• process ID and parent process ID
• execution timestamps with sufficient precision for short-window analysis
• Parent-child process relationships MUST be:
• complete
• consistent across endpoints
• preserved for service-originated execution chains
• Command-line visibility MUST be:
• enabled
• untruncated
• attributable to the executing process
• Telemetry MUST support:
• burst detection within seconds-to-minutes windows
• grouping by host, process, or user
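A minimal sketch of the record shape these requirements imply; this is an illustrative structure, not a vendor schema, and every field name here is an assumption.

from dataclasses import dataclass

@dataclass
class ProcessCreationEvent:
    host: str                  # normalized host identifier used for grouping
    user: str                  # executing user context
    timestamp_ms: int          # millisecond precision supports seconds-to-minutes windows
    process_name: str
    process_id: int
    parent_process_name: str   # lineage preserved for service-originated chains
    parent_process_id: int
    command_line: str          # enabled, untruncated, attributable to this process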
Memory and Execution Telemetry
• Memory execution telemetry is conditionally required for:
• fileless execution detection
• reflective or in-memory payload execution
• Where available, telemetry SHOULD support:
• script execution tracing
• decoded execution artifacts
• Limitation:
• absence of memory telemetry MUST be treated as a hard detection constraint
Crash and Fault Telemetry
• Systems MUST capture:
• application crashes
• service restarts
• fault and exception events
• Telemetry MUST support:
• correlation between inbound interaction and instability events
• time-aligned analysis with request activity
File and Persistence Telemetry
• File telemetry MUST capture:
• file creation
• file access
• file modification
• Persistence telemetry MUST capture:
• creation or modification of persistence mechanisms
• attribution to process and user
• Data access telemetry MUST support:
• measurable access volume
• attribution to host, user, or process
Network and Outbound Communication Telemetry
• Ingress telemetry MUST capture:
• source IP
• destination IP
• destination port
• request frequency
• Egress telemetry MUST capture:
• destination
• protocol and port
• session duration
• data transfer volume
• Telemetry MUST support:
• attribution to host or egress segment
• sustained transfer analysis
Web and Application Telemetry (Conditional Availability)
• Where available, telemetry SHOULD capture:
• HTTP request logs
• API interaction logs
• request structure and parameters
• Telemetry MUST support:
• request frequency measurement
• identification of repeated interaction patterns
• Limitation:
• absence reduces exploit-attempt detection fidelity
Telemetry Availability Requirements
• REQUIRED for primary detection:
• process execution telemetry with lineage
• command-line visibility
• network ingress and egress telemetry
• CONDITIONALLY REQUIRED:
• data access telemetry
• application-layer telemetry
• memory telemetry
• Enforcement:
• detections MUST NOT be deployed without required telemetry
• detection scope MUST match telemetry reality
Detection Capability Alignment
• Exploit-stage detection requires:
• network ingress telemetry
• application-layer request visibility where available
• Execution-stage detection requires:
• process creation telemetry
• parent-child relationships
• command-line visibility
• Collection-stage detection requires:
• data access telemetry
• attribution to user or process
• Exfiltration-stage detection requires:
• outbound network telemetry
• session duration and data transfer metrics
• Enforcement:
• absence of telemetry at any stage directly removes detection capability for that stage
Telemetry Limitations and Gaps
• Absence of memory telemetry:
• prevents fileless execution detection
• Incomplete process lineage:
• prevents exploit-to-execution attribution
• Lack of command-line visibility:
• reduces execution classification accuracy
• Insufficient data access telemetry:
• prevents detection of aggregation behavior
• Weak network attribution:
• reduces exploit and exfiltration detection confidence
• Baseline immaturity:
• prevents threshold-based detection enforcement
S24 — Detection Opportunities and Gaps
Detection Opportunities
Exploit-to-Execution Transition Detection
• Detection is reliable when:
• process lineage telemetry is complete
• command-line visibility is available
• Detection degrades when:
• parent-child relationships are incomplete
• execution context attribution is unavailable
• This represents:
• deterministic transition from external interaction to internal execution
Execution Burst Behavior Detection
• Detection is reliable when:
• process execution telemetry is complete
• baseline execution thresholds are defined
• Detection degrades when:
• baseline thresholds are not established
• execution variability cannot be bounded
• Automation introduces:
• execution density not typical in normal operations
Exploit Attempt Concentration Detection
• Detection is reliable when:
• network or application telemetry captures request frequency
• source attribution is consistent
• Detection degrades when:
• application-layer visibility is absent
• source attribution is inconsistent
Data Aggregation Behavior Detection
• Detection is reliable when:
• data access telemetry is complete
• access patterns are attributable to user or process
• baseline access thresholds are defined
• Detection degrades when:
• data access telemetry is incomplete
• legitimate administrative activity cannot be distinguished
Outbound Transfer Escalation Detection
• Detection is reliable when:
• outbound telemetry captures session duration and transfer volume
• baseline outbound behavior is defined
• Detection degrades when:
• outbound traffic variability is high
• attribution to host or egress segment is unreliable
Detection Gaps
Absence of Memory Execution Telemetry
• Prevents detection of:
• fileless execution
• reflective loading
Incomplete Process Lineage Telemetry
• Prevents:
• attribution of execution origin
• exploit-to-execution detection
Lack of Command-Line Visibility
• Prevents:
• execution intent classification
• script behavior identification
Insufficient Data Access Telemetry
• Prevents:
• detection of data aggregation
• visibility into collection stage
Baseline Absence or Immaturity
• Prevents:
• threshold definition
• enforcement of detection logic
High Variability in Outbound Traffic
• Reduces:
• exfiltration detection reliability
• confidence in outbound anomalies
Gap Impact and Risk Alignment
Execution-Stage Gaps
• Impact:
• exploit-to-execution transitions not reliably detected
• execution burst detection weakened
• Risk:
• delayed detection
• increased dwell time
Data Visibility Gaps
• Impact:
• aggregation activity not observable
• Risk:
• undetected data collection
• increased data loss exposure
Network Visibility Gaps
• Impact:
• exploit attempts not detected early
• outbound detection reduced
• Risk:
• delayed response
• missed early-stage activity
Detection Improvement Requirements
Process Telemetry MUST Be Enforced
• Full process logging with:
• lineage
• command-line visibility
Baselines MUST Be Established
• Required for:
• execution rates
• network activity
• data access
• MUST be:
• measurable
• environment-specific
• continuously validated
Data Access Telemetry MUST Be Enabled
• MUST include:
• user attribution
• process attribution
Network Telemetry MUST Be Strengthened
• MUST capture:
• session duration
• transfer volume
• destination attribution
Telemetry Normalization MUST Be Enforced
• Required across:
• endpoint
• network
• application
• MUST support:
• consistent entity mapping
S25 — Ultra-Tuned Detection Engineering Rules
System: Suricata
Rule name
Request Concentration Against Customer-Defined Exposed Services
Rule objective
· Detect repeated request activity targeting customer-defined exposed services within a bounded time window
· Detect attacker-controlled request concentration against explicitly scoped internet-facing services
· Provide early-stage network visibility into exploit-attempt behavior without relying on CVE-specific or payload-specific artifacts
Native format
Suricata rule format
Behavioral anchor
· Request concentration against exposed services
· Automation-driven request density against customer-defined attack surfaces
Detection strength conditions
· This rule is valid only when:
o protected service scope is explicitly defined through customer-maintained Suricata variables, service groups, address groups, port groups, or equivalent deployment segmentation
o request thresholds are derived from observed service baseline
o approved scanners and synthetic monitoring sources are excluded
o application-layer visibility exists through cleartext inspection or approved TLS inspection
Engineering Implementation Instructions
Customer data required
· Customer-defined exposed service inventory
· Customer-defined Suricata variables or deployment groupings for:
o exposed server addresses
o exposed service ports
o protected application service groups, where applicable
· Baseline request rate by protected service group
· Authorized scanner, monitoring, and integration source list
· Network architecture map showing reverse proxy, WAF, CDN, and load balancer flow path
Deployment preparation required
· Implement protected service scoping through customer-defined Suricata variables or equivalent deployment segmentation
· Scope the rule to:
o internet-facing applications
o business-relevant exposed services
o service groups with meaningful exploit exposure
· Tune detection thresholds using observed baseline behavior, not generic defaults
· Exclude:
o authorized vulnerability scanners
o synthetic monitoring systems
o approved integrations generating burst traffic
Field validation required
· Confirm Suricata reliably captures:
o source IP
o destination IP
o destination port
o HTTP transaction visibility for scoped services
· Confirm source attribution is preserved through:
o reverse proxies
o load balancers
o CDNs
o WAFs
· Confirm protected service scoping in rule variables matches actual exposed service inventory
Non-deployment guardrails
· Do NOT deploy this rule if:
o service scoping is not defined in Suricata variables or equivalent deployment controls
o baseline request rates are not established
o scanner exclusion cannot be enforced
o source attribution is unreliable
o application-layer visibility is absent
DRI assessment
· Target DRI: up to 8.5 only when:
o customer-defined service scoping is accurate and maintained
o baseline thresholds are validated
o non-malicious burst sources are excluded
o visibility is consistent and complete
· DRI degradation conditions:
o service inventory drift
o incomplete scanner exclusion
o high-volume legitimate burst traffic
o inconsistent application-layer visibility
Detection logic
alert http $EXTERNAL_NET any -> $CDX_EXPOSED_HTTP_SERVERS $CDX_EXPOSED_HTTP_PORTS (
msg:"CDX EXP Suricata Request Concentration Against Customer-Defined Exposed Services";
flow:to_server,established;
detection_filter:track by_src, count 8, seconds 60;
classtype:web-application-attack;
sid:410001;
rev:8;
)
Detection logic implementation notes
· $CDX_EXPOSED_HTTP_SERVERS MUST contain only customer-approved exposed service addresses or address groups
· $CDX_EXPOSED_HTTP_PORTS MUST contain only customer-approved exposed application ports
· count and seconds MUST be derived from customer baseline before production deployment (a worked example follows these notes)
· Primary rule precision is enforced through explicit Suricata service scoping plus baseline-derived concentration thresholds
· This rule intentionally avoids payload dependence to preserve resilience across exploit variants
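Worked tuning example, using assumed baseline numbers only: if the observed per-source peak against the scoped service group is roughly 2–3 requests in any 60-second window during the baseline period, then count 8, seconds 60 (the placeholder values above) sits about three times above that peak. Environments with API polling or chatty clients will need a higher count or a different window, which is why these values MUST come from the customer baseline rather than from this report.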
Rule name
Sustained Outbound Transfer Escalation From Controlled Egress Segments
Rule objective
· Detect sustained outbound transfer behavior exceeding baseline from controlled egress segments
· Identify probable exfiltration-stage activity under mature baseline and allowlisting conditions
Native format
Suricata rule format
Behavioral anchor
· Outbound transfer escalation
· Sustained outbound data movement inconsistent with segment baseline
Detection strength conditions
· This rule is valid only when:
o egress visibility is centralized and stable
o outbound baseline exists by segment or host class
o sanctioned bulk-transfer destinations are allowlisted
o outbound variability is controlled
Engineering Implementation Instructions
Customer data required
· Baseline outbound transfer behavior by:
o host class
o subnet
o egress segment
· Approved high-volume destinations
· Approved backup, synchronization, replication, and update services
· Egress architecture map identifying monitored choke points
· Customer-defined Suricata variables or deployment groupings for controlled egress segments, where used
Deployment preparation required
· Scope the rule to controlled egress segments through customer-defined variables or monitored egress placement
· Tune thresholds using real outbound baseline
· Apply allowlists for:
o backup platforms
o cloud storage
o software update services
o enterprise synchronization tools
· Restrict deployment to:
o controlled egress segments
o monitored network boundaries
Field validation required
· Confirm Suricata visibility into:
o source IP
o destination IP
o protocol
o session behavior
· Confirm NAT and proxy behavior preserves usable source attribution
· Confirm rule placement covers the intended egress choke points
Non-deployment guardrails
· Do NOT deploy this rule if:
o outbound baseline is not defined
o allowlisting is incomplete
o egress visibility is fragmented
o source attribution is unreliable
DRI assessment
· Target DRI: up to 7.5 only when:
o baseline maturity is high
o allowlisting is complete
o egress visibility is consistent
· DRI degradation conditions:
o high variability in outbound traffic
o incomplete allowlisting
o fragmented visibility
Detection logic
alert ip $CDX_CONTROLLED_EGRESS_NETS any -> $EXTERNAL_NET any (
msg:"CDX EXP Suricata Sustained Outbound Transfer Escalation From Controlled Egress Segments";
flow:established,to_server;
dsize:>1200;
detection_filter:track by_src, count 1500, seconds 300;
classtype:policy-violation;
sid:410002;
rev:5;
)
Detection logic implementation notes
· $CDX_CONTROLLED_EGRESS_NETS MUST contain only customer-approved controlled egress segments or monitored source ranges
· All thresholds MUST be baseline-derived before deployment (a worked reading of the shipped placeholders follows these notes)
· Rule MUST be restricted to controlled egress segments with stable visibility
· This rule is secondary and MUST NOT be used as primary detection for this campaign
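Worked reading of the shipped placeholders, for tuning context only: dsize:>1200 combined with count 1500, seconds 300 means the rule fires only after at least 1500 packets, each larger than 1200 bytes, from one source within five minutes, a sustained floor of roughly 1.8 MB (1500 × 1200 bytes) in that window. These values remain placeholders and MUST be replaced with baseline-derived figures for the monitored egress segments.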
System: SentinelOne
Primary Rule
Rule name
Service-Originated High-Risk Child Process Burst
Rule objective
· Detect exploit-success execution by identifying customer-defined exposed service processes spawning repeated high-risk child processes within a bounded time window
· Detect automation-driven post-exploitation execution density without depending solely on classic shell or script interpreters
· Preserve resilience against interpreter-avoidance and mixed-tool execution variants
Native format
SentinelOne Deep Visibility query format
Behavioral anchor
· Exploit-to-execution transition
· Service-originated child-process burst
· Automation-driven execution density
· Interpreter and LOLBin variant coverage
Detection strength conditions
· This rule is valid only when:
o exposed service parent process inventory is accurate
o process lineage telemetry is complete
o execution timestamps support short-interval analysis
o approved service-driven automation is allowlisted
o high-risk child process classes are customer-tuned for the environment
Engineering Implementation Instructions
Customer data required
· Customer-defined exposed service parent process inventory
· Customer-defined allowlist for legitimate service-driven child execution
· Baseline child-process frequency by service parent
· Customer-defined high-risk child-process class list, including:
o shell interpreters
o scripting engines
o LOLBins
o environment-relevant native execution utilities
Deployment preparation required
· Define parent process scope to exposed service processes only
· Tune burst threshold using observed service-child baseline, not generic defaults
· Define high-risk child-process classes for the environment
· Exclude:
o approved middleware launchers
o sanctioned application-management tooling
o known maintenance or deployment automation triggered by service parents
Field validation required
· Confirm SentinelOne reliably captures:
o process creation
o parent process identity
o child process identity
o timestamps with short-interval fidelity
o command-line fields, where enabled
· Confirm exposed service parents are consistently named and stable across the environment
· Confirm lineage is preserved for service-originated execution chains
Non-deployment guardrails
· Do NOT deploy this rule if:
o service parent inventory is incomplete
o process lineage is unreliable
o service-driven automation cannot be allowlisted
o burst thresholds are not baseline-derived
o environment has high benign service-originated child-process variability that cannot be bounded
DRI assessment
· Target DRI: up to 8.6 only when:
o service scoping is accurate
o child-process class tuning is complete
o baseline thresholds are validated
o allowlisting is complete
· DRI degradation conditions:
o incomplete service inventory
o incomplete LOLBin coverage
o high benign variability in service-originated execution
o weak allowlisting maturity
Detection logic
event.type = "PROCESS_CREATION"
AND tgt.process.parent.name IN ("CUSTOM_EXPOSED_SERVICE_1","CUSTOM_EXPOSED_SERVICE_2","CUSTOM_EXPOSED_SERVICE_3")
AND tgt.process.name IN ("cmd.exe","powershell.exe","pwsh.exe","bash","sh","python.exe","perl.exe","mshta.exe","rundll32.exe","regsvr32.exe","wscript.exe","cscript.exe","certutil.exe","bitsadmin.exe","curl.exe")
| timebucket 1m
| group by agent.uuid, tgt.process.parent.name
| filter count() > CUSTOM_BASELINE_BURST_THRESHOLD
Detection logic implementation notes
· Parent process placeholders MUST be replaced with real customer-defined exposed service parent processes
· Child process list MUST be tuned to the environment and MAY be expanded or reduced based on validated LOLBin and interpreter exposure (a tuning sketch follows these notes)
· Thresholds MUST be baseline-derived before deployment
· This rule is designed to cover both classic interpreter chains and mixed native-tool execution bursts from exposed services
· If customer environment has meaningful low-child exploit-success patterns, those paths MUST be addressed through tuned survivor logic or environment-specific primary rule refinement
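A tuning sketch for the high-risk child-process list, illustrative only and assuming the baseline period can be exported as (parent_name, child_name) pairs: children that appear frequently under exposed service parents during normal operations need allowlisting or removal, while children that never appear benignly can stay in the list as written.

from collections import Counter

CANDIDATE_CHILDREN = {"cmd.exe", "powershell.exe", "pwsh.exe", "bash", "sh", "python.exe",
                      "perl.exe", "mshta.exe", "rundll32.exe", "regsvr32.exe", "wscript.exe",
                      "cscript.exe", "certutil.exe", "bitsadmin.exe", "curl.exe"}

def review_child_list(baseline_pairs, exposed_parents, noisy_cutoff=5):
    # baseline_pairs: iterable of (parent_name, child_name) observed during the baseline period.
    benign = Counter(child for parent, child in baseline_pairs
                     if parent in exposed_parents and child in CANDIDATE_CHILDREN)
    return {
        "keep": sorted(CANDIDATE_CHILDREN - set(benign)),
        "allowlist_or_remove": sorted(c for c, n in benign.items() if n >= noisy_cutoff),
        "review": sorted(c for c, n in benign.items() if n < noisy_cutoff),
    }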
Rule name
High-Risk Execution From Non-User-Driven Parent Classes Outside Approved Baseline
Rule objective
· Detect high-risk interpreter or LOLBin execution when launched from service, scheduled-task, remote-administration, or other non-user-driven parent processes outside approved baseline
· Detect attacker-controlled automation paths that do not require a large burst from a service parent
· Preserve variant coverage for:
o scheduled execution
o remote execution
o background execution
o reduced-child-process exploit-success paths
Native format
SentinelOne Deep Visibility query format
Behavioral anchor
· Non-user-driven post-exploitation execution
· Interpreter and LOLBin abuse outside approved baseline
· Reduced-child-process execution variant coverage
Detection strength conditions
· This rule is valid only when:
o approved administrative tooling is allowlisted
o baseline non-user-driven interpreter and LOLBin usage is defined
o parent process classes representing service, scheduled, remote, or automation origins are identified
o approved scheduled and background automation is understood
Engineering Implementation Instructions
Customer data required
· Baseline non-user-driven execution patterns by host role
· Approved administrative tooling list
· Approved scheduled task, remote management, and background automation parent process inventory
· Customer-defined high-risk process class list for non-user-driven execution monitoring
Deployment preparation required
· Define parent-process scope for:
o service parents
o scheduled-task parents
o remote-management parents
o approved automation parents
· Exclude:
o approved management tools
o sanctioned automation frameworks
o validated scheduled task and service-maintenance activity
· Tune rule scope by:
o host role
o user context
o parent-process class
Field validation required
· Confirm SentinelOne reliably captures:
o process creation
o parent process
o user context
o command-line fields, where enabled
· Confirm environment can operationally distinguish user-driven parent processes from non-user-driven parent processes with acceptable confidence
Non-deployment guardrails
· Do NOT deploy this rule if:
o administrative allowlisting is incomplete
o scheduled, remote, or background automation is not understood
o parent-process class scoping cannot be maintained
o baseline maturity is insufficient for role-based tuning
DRI assessment
· Target DRI: up to 7.8 only when:
o baseline maturity is high
o admin and automation allowlisting is complete
o parent-process scoping is reliable
· DRI degradation conditions:
o poorly understood administrative automation
o unstable parent-process class mapping
o environments with frequent legitimate non-user-driven interpreter use
Detection logic
event.type = "PROCESS_CREATION"
AND tgt.process.parent.name IN ("CUSTOM_SERVICE_PARENT_1","CUSTOM_SCHEDULED_PARENT_1","CUSTOM_REMOTE_ADMIN_PARENT_1","CUSTOM_AUTOMATION_PARENT_1")
AND tgt.process.name IN ("cmd.exe","powershell.exe","pwsh.exe","bash","sh","python.exe","perl.exe","mshta.exe","rundll32.exe","regsvr32.exe","wscript.exe","cscript.exe","certutil.exe","bitsadmin.exe","curl.exe")
AND tgt.process.parent.name NOT IN ("CUSTOM_APPROVED_PARENT_1","CUSTOM_APPROVED_PARENT_2","CUSTOM_APPROVED_PARENT_3")
Detection logic implementation notes
· Parent-process placeholders MUST be replaced with real customer-defined parent classes and allowlisted parents
· High-risk child process list MUST be environment-tuned and justified by baseline evidence
· This rule is intentionally a survivor rule and MUST NOT replace the primary exploit-success burst coverage anchor
· If the environment cannot maintain reliable parent-class scoping, this rule MUST be withheld rather than loosely deployed
System: Splunk
Rule name
Exploit Attempt Followed by Service-Originated High-Risk Execution and Data Aggregation
Rule objective
· Detect multi-stage campaign progression from exploit-attempt telemetry to host execution and into collection-stage behavior
· Detect attacker-controlled progression across exploit-attempt, exploit-success, and data-access stages within bounded correlation windows
· Maximize Splunk’s cross-telemetry value without relying on a single fragile artifact
Native format
Splunk SPL correlation search
Behavioral anchor
· Exploit attempt concentration
· Service-originated high-risk execution
· Data aggregation behavior
· Multi-stage campaign progression
Detection strength conditions
· This rule is valid only when:
o exploit-attempt telemetry is ingested from trusted network, IDS, WAF, or reverse-proxy sources
o endpoint telemetry supports service-originated execution identification
o data-access telemetry exists and is attributable
o a customer-normalized asset correlation key exists across sources
o timestamps are synchronized well enough for bounded correlation windows
Engineering Implementation Instructions
Customer data required
· Customer-normalized correlation key for exposed assets, hosts, or protected service entities
· Normalized field mapping for:
o asset_id
o host
o src_ip
o dest_ip
o user
o process_name
o parent_process_name
o bytes_read or equivalent access metric
· Customer-defined exposed service parent list
· Customer-defined exploit-attempt signal source definition
· Baseline execution burst thresholds by service parent
· Baseline data-access thresholds by host role, user role, or asset class
· Customer allowlists for:
o approved scanners
o approved service-driven automation
o approved high-volume administrative data operations
Deployment preparation required
· Normalize asset identity across exploit, endpoint, and data-access telemetry before deployment
· Define bounded correlation windows for:
o exploit-attempt to execution
o execution to data aggregation
· Scope execution logic to:
o customer-defined exposed service parents
o customer-tuned high-risk child process classes
· Exclude:
o approved scanner activity
o approved service-driven automation
o known bulk administrative data jobs
Field validation required
· Confirm reliable ingestion of:
o exploit-attempt source telemetry
o process creation telemetry
o parent-child process relationships
o data-access telemetry
· Confirm asset identity is stable across all sources
· Confirm timestamps are synchronized well enough for bounded cross-source correlation
· Confirm data-access metrics are attributable to the same protected asset or host entity used in execution correlation
Non-deployment guardrails
· Do NOT deploy this rule if:
o exploit-attempt telemetry is absent or untrusted
o asset identity cannot be normalized across sources
o data-access telemetry is absent or not attributable
o service-origin execution cannot be distinguished from approved automation
o correlation windows cannot be bounded with operational confidence
DRI assessment
· Target DRI: up to 8.8 only when:
o all three telemetry stages are available
o entity mapping is reliable
o thresholds are baseline-derived
o allowlisting is complete
· DRI degradation conditions:
o weak asset normalization
o absent exploit telemetry
o absent data-access attribution
o immature allowlisting
o timestamp drift across sources
Detection logic
search index=network_exploit
signal_type="exploit_attempt"
asset_id=*
NOT src_ip IN ($CUSTOM_APPROVED_SCANNER_IPS$)
| stats count as exploit_count earliest(_time) as exploit_time by asset_id
| where exploit_count > $CUSTOM_EXPLOIT_COUNT_THRESHOLD$
| join type=inner asset_id
[
search index=endpoint_process
event_type="process_creation"
asset_id=*
parent_process_name IN ("CUSTOM_EXPOSED_SERVICE_1","CUSTOM_EXPOSED_SERVICE_2","CUSTOM_EXPOSED_SERVICE_3")
process_name IN ("cmd.exe","powershell.exe","pwsh.exe","bash","sh","python.exe","perl.exe","mshta.exe","rundll32.exe","regsvr32.exe","wscript.exe","cscript.exe","certutil.exe","bitsadmin.exe","curl.exe")
| stats count as exec_count earliest(_time) as exec_time by asset_id
| where exec_count > $CUSTOM_EXEC_BURST_THRESHOLD$
]
| join type=inner asset_id
[
search index=data_access
asset_id=*
| stats sum(bytes_read) as total_bytes_read earliest(_time) as data_time by asset_id
| where total_bytes_read > $CUSTOM_DATA_ACCESS_THRESHOLD$
]
| where exec_time >= exploit_time
| where data_time >= exec_time
| where exec_time - exploit_time <= $CUSTOM_EXPLOIT_TO_EXEC_WINDOW$
| where data_time - exec_time <= $CUSTOM_EXEC_TO_DATA_WINDOW$
Detection logic implementation notes
· asset_id MUST be replaced with the customer’s normalized protected-asset correlation field (a normalization sketch follows these notes)
· Placeholder indexes, thresholds, and field names MUST be replaced with customer-normalized sources and values
· This rule MUST be deployed only where exploit, endpoint, and data-access telemetry can be correlated to the same protected asset or host entity
· If exploit telemetry is not mature, this rule MUST be withheld rather than approximated with weaker logic
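A minimal sketch of the asset-normalization precondition, illustrative only: every telemetry source must resolve to the same correlation key before the SPL above can join reliably. The inventory shape, example values, and field names are assumptions; use whichever authoritative asset inventory the environment already maintains.

def build_asset_resolver(inventory):
    # inventory: iterable of dicts such as
    # {"asset_id": "web-prod-01", "hostnames": ["web01.corp.example"], "ips": ["10.0.1.5"]}
    lookup = {}
    for asset in inventory:
        for name in asset.get("hostnames", []):
            lookup[name.lower()] = asset["asset_id"]
            lookup[name.lower().split(".")[0]] = asset["asset_id"]
        for ip in asset.get("ips", []):
            lookup[ip] = asset["asset_id"]
    return lookup

def attach_asset_id(record, lookup, candidate_fields=("host", "src_ip", "dest_ip")):
    # Attach the shared asset_id to a telemetry record when any candidate field resolves.
    for field in candidate_fields:
        value = str(record.get(field, "")).lower()
        if value in lookup:
            record["asset_id"] = lookup[value]
            break
    return record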
Rule name
Non-User-Driven High-Risk Execution Followed by Data Aggregation and Outbound Transfer Escalation
Rule objective
· Detect late-stage attacker progression from non-user-driven high-risk execution to data collection and outbound transfer
· Preserve meaningful campaign coverage when exploit-attempt telemetry is weak, absent, or not correlation-safe
· Detect collection-to-egress progression with stronger specificity than pure outbound anomaly detection
Native format
Splunk SPL correlation search
Behavioral anchor
· Non-user-driven high-risk execution
· Data aggregation behavior
· Outbound transfer escalation
· Late-stage campaign progression
Detection strength conditions
· This rule is valid only when:
o endpoint telemetry supports parent-class scoping for non-user-driven execution
o data-access telemetry exists and is attributable
o outbound telemetry exists and is attributable to host, segment, or egress entity
o entity and timestamp mapping between sources is reliable
o approved automation and bulk-transfer operations are allowlisted
Engineering Implementation Instructions
Customer data required
· Customer-normalized field mapping for:
o host
o asset_id
o egress_entity
o process_name
o parent_process_name
o bytes_read or equivalent access metric
o bytes_out or equivalent outbound transfer metric
· Customer-defined parent process classes for:
o exposed services
o scheduled-task parents
o remote-management parents
o automation parents
· Customer-defined high-risk child process classes
· Baseline data-access thresholds by role
· Baseline outbound thresholds by host class, subnet, or egress segment
· Allowlists for:
o approved automation
o approved bulk data operations
o approved high-volume outbound destinations
Deployment preparation required
· Scope execution stage to non-user-driven parent classes only
· Define bounded correlation windows for:
o execution to data aggregation
o data aggregation to outbound transfer
· Exclude:
o sanctioned service maintenance
o approved batch data operations
o approved replication, backup, sync, and update flows
· Validate host-to-egress attribution before enabling production alerting
Field validation required
· Confirm reliable ingestion of:
o process execution telemetry
o data-access telemetry
o outbound network telemetry
· Confirm host, asset, or egress-entity identity is stable across sources
· Confirm outbound attribution remains usable after NAT, proxy, or brokered egress translation
· Confirm parent-class scoping is operationally maintainable
Non-deployment guardrails
· Do NOT deploy this rule if:
o data-access telemetry is absent or not attributable
o outbound telemetry is too noisy for stable thresholding
o host-to-egress attribution is unreliable
o parent-class scoping cannot be maintained
o allowlisting for automation, bulk data operations, or outbound destinations is incomplete
DRI assessment
· Target DRI: up to 7.9 only when:
o execution, data, and outbound telemetry are all mature
o thresholds are baseline-derived
o outbound allowlisting is complete
o attribution across stages is reliable
· DRI degradation conditions:
o absent data attribution
o noisy outbound environments
o incomplete outbound allowlisting
o weak host-to-egress attribution
o unstable parent-class scoping
Detection logic
search index=endpoint_process
event_type="process_creation"
host=*
parent_process_name IN ("CUSTOM_SERVICE_PARENT_1","CUSTOM_SCHEDULED_PARENT_1","CUSTOM_REMOTE_ADMIN_PARENT_1","CUSTOM_AUTOMATION_PARENT_1")
process_name IN ("cmd.exe","powershell.exe","pwsh.exe","bash","sh","python.exe","perl.exe","mshta.exe","rundll32.exe","regsvr32.exe","wscript.exe","cscript.exe","certutil.exe","bitsadmin.exe","curl.exe")
| stats count as exec_count earliest(_time) as exec_time by host
| where exec_count > $CUSTOM_EXEC_THRESHOLD$
| join type=inner host
[
search index=data_access
host=*
| stats sum(bytes_read) as total_bytes_read earliest(_time) as data_time by host
| where total_bytes_read > $CUSTOM_DATA_ACCESS_THRESHOLD$
]
| join type=inner host
[
search index=network_egress
host=*
| stats sum(bytes_out) as total_bytes_out earliest(_time) as outbound_time by host
| where total_bytes_out > $CUSTOM_OUTBOUND_THRESHOLD$
]
| where data_time >= exec_time
| where outbound_time >= data_time
| where data_time - exec_time <= $CUSTOM_EXEC_TO_DATA_WINDOW$
| where outbound_time - data_time <= $CUSTOM_DATA_TO_EGRESS_WINDOW$
Detection logic implementation notes
· This rule MUST remain a survivor rule and MUST NOT replace the primary exploit-attempt-to-execution coverage anchor
· Placeholder indexes, fields, and thresholds MUST be replaced with customer-normalized sources and baseline-derived values
· If outbound attribution cannot be maintained across NAT or proxy boundaries, the rule MUST be withheld or rewritten to egress-segment attribution
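Illustrative preparation sketch (Python, non-normative)
The note above allows rewriting the rule to egress-segment attribution when per-host outbound attribution breaks behind NAT or proxies. That rewrite depends on a deterministic, maintained host-to-segment mapping; a minimal sketch of one, where the CIDR ranges and segment names are placeholders rather than customer values:
# egress_segment_map.py -- illustrative sketch; CIDRs and segment names are placeholders
import ipaddress

# Customer-maintained mapping of internal subnets to egress segments.
SEGMENT_MAP = {
    ipaddress.ip_network("10.10.0.0/16"): "dmz-egress",
    ipaddress.ip_network("10.20.0.0/16"): "corp-egress",
}

def egress_segment(host_ip: str) -> str:
    # Resolve a host IP to its egress segment, or "unmapped" if attribution is unknown.
    addr = ipaddress.ip_address(host_ip)
    for network, segment in SEGMENT_MAP.items():
        if addr in network:
            return segment
    return "unmapped"

# Hosts whose per-host outbound attribution is lost behind NAT or proxy egress
# can still be grouped by segment before thresholding.
print(egress_segment("10.10.4.25"))   # dmz-egress
print(egress_segment("192.168.1.5"))  # unmapped
If a meaningful share of hosts resolve to "unmapped", the non-deployment guardrails above apply and the rule should remain withheld.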
System: Elastic
Rule name
Repeated Service-Originated High-Risk Execution Followed by Data Aggregation
Rule objective
· Detect exploit-success progression by identifying repeated high-risk child-process execution from customer-defined exposed service parents followed by data aggregation behavior within bounded sequence windows
· Detect attacker-controlled execution-to-collection progression without relying on a single tool family or fragile artifact
· Preserve resilience against interpreter-avoidance and mixed-tool execution variants
Native format
Elastic EQL detection rule
Behavioral anchor
· Exploit-to-execution transition
· Service-originated repeated high-risk execution
· Execution-to-collection progression
· Interpreter and LOLBin variant coverage
Detection strength conditions
· This rule is valid only when:
o exposed service parent process inventory is accurate
o process lineage telemetry is complete
o data-access telemetry exists and is attributable
o timestamps support bounded sequence analysis
o approved service-driven automation and bulk data operations are allowlisted
Engineering Implementation Instructions
Customer data required
· Customer-defined exposed service parent process inventory
· Customer-defined high-risk child process class list, including:
o shell interpreters
o scripting engines
o LOLBins
o environment-relevant native execution utilities
· Baseline repeated high-risk execution thresholds by exposed service parent
· Baseline data-access thresholds by host role, user role, or asset class
· Approved service-driven automation allowlist
· Approved bulk data operation allowlist
· Customer-normalized field mapping for:
o host.name
o process.name
o process.parent.name
o user.name
o CUSTOMER_DATA_ACCESS_METRIC
Deployment preparation required
· Define parent process scope to customer-exposed service parents only
· Tune repeated high-risk execution thresholds to the environment
· Define bounded execution-to-data windows using real telemetry timings
· Exclude:
o approved service maintenance
o sanctioned middleware launchers
o validated bulk administrative or ETL-style data operations
Field validation required
· Confirm Elastic reliably captures:
o process start events
o process parent identity
o child process identity
o timestamps with short-interval fidelity
o data-access telemetry mapped to the same host or protected asset
· Confirm exposed service parents are stable and consistently named across the deployment
· Confirm process lineage is preserved for service-originated execution chains
Non-deployment guardrails
· Do NOT deploy this rule if:
o service parent inventory is incomplete
o process lineage is unreliable
o data-access telemetry is absent or not attributable
o approved service-driven automation cannot be allowlisted
o execution-to-data sequence windows cannot be bounded with operational confidence
DRI assessment
· Target DRI: up to 8.5 only when:
o service scoping is accurate
o child-process class tuning is complete
o repeated-execution thresholds are baseline-derived
o data-access attribution is reliable
o allowlisting is complete
· DRI degradation conditions:
o incomplete service inventory
o weak data-access attribution
o incomplete LOLBin coverage
o immature allowlisting
o timestamp drift across relevant sources
Detection logic
sequence by host.name with maxspan=10m
[process where event.type == "start" and
process.parent.name in ("CUSTOM_EXPOSED_SERVICE_1","CUSTOM_EXPOSED_SERVICE_2","CUSTOM_EXPOSED_SERVICE_3") and
process.name in ("cmd.exe","powershell.exe","pwsh.exe","bash","sh","python","python.exe","perl","perl.exe","mshta.exe","rundll32.exe","regsvr32.exe","wscript.exe","cscript.exe","certutil.exe","bitsadmin.exe","curl.exe")]
[process where event.type == "start" and
process.parent.name in ("CUSTOM_EXPOSED_SERVICE_1","CUSTOM_EXPOSED_SERVICE_2","CUSTOM_EXPOSED_SERVICE_3") and
process.name in ("cmd.exe","powershell.exe","pwsh.exe","bash","sh","python","python.exe","perl","perl.exe","mshta.exe","rundll32.exe","regsvr32.exe","wscript.exe","cscript.exe","certutil.exe","bitsadmin.exe","curl.exe")]
[any where event.category in ("file","database") and
host.name != null and
CUSTOMER_DATA_ACCESS_METRIC > CUSTOMER_DATA_ACCESS_THRESHOLD]
Detection logic implementation notes
· Exposed service parent placeholders MUST be replaced with real customer-defined exposed service parents
· High-risk child process list MUST be environment-tuned and justified by baseline evidence
· CUSTOMER_DATA_ACCESS_METRIC MUST be replaced with the customer’s normalized data-access volume or aggregation indicator
· maxspan MUST be tuned to the customer’s validated execution-to-collection timing
· If reliable data-access attribution does not exist, this rule MUST be withheld rather than loosely approximated
Rule name
Data Aggregation Followed by Outbound Transfer Escalation
Rule objective
· Detect late-stage attacker progression from data collection into outbound transfer behavior within bounded sequence windows
· Preserve meaningful campaign coverage when exploit-attempt telemetry is absent and execution-stage coverage is partially degraded
· Detect collection-to-egress progression with stronger specificity than pure outbound anomaly detection
Native format
Elastic EQL detection rule
Behavioral anchor
· Data aggregation behavior
· Outbound transfer escalation
· Late-stage collection-to-egress progression
Detection strength conditions
· This rule is valid only when:
o data-access telemetry exists and is attributable
o outbound telemetry exists and is attributable to host, asset, or egress entity
o entity identity and timestamps are stable across sources
o approved bulk data operations and high-volume outbound destinations are allowlisted
Engineering Implementation Instructions
Customer data required
· Customer-normalized field mapping for:
o host.name
o destination.ip or egress entity
o CUSTOMER_DATA_ACCESS_METRIC
o CUSTOMER_OUTBOUND_METRIC
· Baseline data-access thresholds by role or asset class
· Baseline outbound thresholds by host class, subnet, or egress segment
· Approved bulk data operation allowlist
· Approved high-volume outbound destination allowlist
Deployment preparation required
· Define bounded data-to-egress sequence windows using real telemetry timings
· Exclude:
o approved ETL and batch export jobs
o validated replication, backup, synchronization, and update flows
o sanctioned high-volume cloud or partner destinations
· Validate host-to-egress attribution before production alerting
Field validation required
· Confirm Elastic reliably captures:
o attributable data-access telemetry
o attributable outbound network telemetry
o timestamps consistent enough for bounded sequence analysis
· Confirm host, asset, or egress-entity identity remains usable after NAT, proxy, or brokered egress translation
· Confirm allowlisted bulk-transfer and batch-operation sources are well understood
Non-deployment guardrails
· Do NOT deploy this rule if:
o data-access telemetry is absent or not attributable
o outbound telemetry is too noisy for stable thresholding
o host-to-egress attribution is unreliable
o allowlisting for bulk data operations or outbound destinations is incomplete
o sequence windows cannot be bounded with operational confidence
DRI assessment
· Target DRI: up to 7.8 only when:
o data and outbound telemetry are mature
o thresholds are baseline-derived
o outbound allowlisting is complete
o attribution across stages is reliable
· DRI degradation conditions:
o absent data attribution
o noisy outbound environments
o incomplete outbound allowlisting
o weak host-to-egress attribution
o unstable sequence timing
Detection logic
sequence by host.name with maxspan=15m
[any where event.category in ("file","database") and
host.name != null and
CUSTOMER_DATA_ACCESS_METRIC > CUSTOMER_DATA_ACCESS_THRESHOLD]
[network where event.category == "network" and
host.name != null and
network.direction in ("egress","outbound") and
CUSTOMER_OUTBOUND_METRIC > CUSTOMER_OUTBOUND_THRESHOLD]
Detection logic implementation notes
· CUSTOMER_DATA_ACCESS_METRIC and CUSTOMER_OUTBOUND_METRIC MUST be replaced with customer-normalized attributable threshold fields
· maxspan MUST be tuned to validated collection-to-egress timing in the customer environment
· This rule MUST remain a survivor rule and MUST NOT replace the primary execution-to-collection coverage anchor
· If outbound attribution cannot be maintained after NAT or proxy translation, this rule MUST be withheld or rewritten to segment-level attribution
System: QRadar
Rule name
Exploit Attempt Followed by Repeated Service-Originated High-Risk Execution and Data Aggregation
Rule objective
· Detect multi-stage campaign progression from exploit-attempt telemetry to repeated host execution and into collection-stage behavior
· Detect attacker-controlled progression across exploit-attempt, exploit-success, and data-access stages within bounded correlation windows
· Maximize QRadar correlation value without relying on single artifacts or weak anomaly chaining
Native format
QRadar CRE correlation rule logic
Behavioral anchor
· Exploit-attempt concentration
· Repeated service-originated high-risk execution
· Data aggregation behavior
· Multi-stage campaign progression
Detection strength conditions
· This rule is valid only when:
o exploit-attempt telemetry is ingested from trusted network, IDS, WAF, or reverse-proxy sources
o endpoint telemetry supports service-originated execution identification
o data-access telemetry exists and is attributable
o asset identity is normalized across sources
o event timing is sufficiently synchronized for bounded stage progression
Engineering Implementation Instructions
Customer data required
· QRadar-normalized asset identity mapping across:
o exploit-attempt events
o endpoint execution events
o data-access events
· Custom properties or normalized fields for:
o asset identity
o source and destination context
o parent process name
o process name
o data-access metric or bytes-read equivalent
· Customer-defined exposed service parent list
· Customer-defined exploit-attempt event source definition
· Baseline repeated high-risk execution thresholds by exposed service parent
· Baseline data-access thresholds by host role, user role, or asset class
· Reference sets or building blocks for:
o approved scanners
o approved service-driven automation
o approved high-volume administrative data operations
Deployment preparation required
· Normalize asset identity across all three telemetry stages before deployment
· Define bounded timing logic for:
o exploit-attempt to execution
o execution to data aggregation
· Scope execution stage to:
o customer-defined exposed service parents
o customer-defined high-risk child process classes
· Configure repeated execution thresholding using:
o customer-baselined execution count by exposed service parent
o bounded exploit-to-execution timing window
· Exclude:
o approved scanner activity
o approved service-driven automation
o approved bulk administrative or ETL-style data jobs
Field validation required
· Confirm QRadar reliably parses and normalizes:
o exploit-attempt event category
o parent process name
o process name
o asset identity
o data-access metric
· Confirm timestamps are sufficiently aligned across relevant log sources
· Confirm repeated execution events can be tied to the same protected asset and exposed service parent within the configured timing window
· Confirm data-access events can be tied to the same protected asset or host entity used in execution correlation
Non-deployment guardrails
· Do NOT deploy this rule if:
o exploit-attempt telemetry is absent or untrusted
o asset identity cannot be normalized across sources
o data-access telemetry is absent or not attributable
o service-origin execution cannot be distinguished from approved automation
o repeated execution thresholds cannot be baseline-derived
o bounded timing windows cannot be enforced with operational confidence
DRI assessment
· Target DRI: up to 8.7 only when:
o all three telemetry stages are available
o entity mapping is reliable
o repeated execution thresholds are baseline-derived
o allowlisting is complete
· DRI degradation conditions:
o weak asset normalization
o absent exploit telemetry
o absent data-access attribution
o immature allowlisting
o timestamp drift across sources
o weak repeated-execution threshold tuning
Detection logic
Rule Type:
Event Rule
Rule Test Stack:
when BB:CDX:EXP:Exploit_Attempt_Telemetry on an event is true
and when the event QID is one of the QIDs in BB:CDX:EXP:Trusted_Exploit_Attempt_QIDs
and when the source IP is not in Reference Set: CDX_Approved_Scanner_IPs
and when the event has Asset Identity Custom Property populated
followed by at least CUSTOM_EXEC_EVENT_COUNT events within CUSTOM_EXPLOIT_TO_EXEC_WINDOW minutes
where BB:CDX:EXP:Service_Originated_High_Risk_Execution on an event is true
and where Asset Identity Custom Property matches the same Asset Identity Custom Property from the preceding exploit-attempt event
and where Parent Process Name Custom Property is one of Reference Set: CDX_Exposed_Service_Parents
and where Process Name Custom Property is one of Reference Set: CDX_High_Risk_Child_Processes
and where the event does not match BB:CDX:EXP:Approved_Service_Automation
followed by at least 1 event within CUSTOM_EXEC_TO_DATA_WINDOW minutes
where BB:CDX:EXP:Data_Aggregation_Event on an event is true
and where Asset Identity Custom Property matches the same Asset Identity Custom Property from the preceding execution stage
and where Data Access Metric Custom Property > CUSTOM_DATA_ACCESS_THRESHOLD
and where the event does not match BB:CDX:EXP:Approved_Bulk_Data_Operations
Rule Response:
create an offense indexed by Asset Identity Custom Property
set magnitude to High
add log source, source IP, destination IP, Asset Identity Custom Property, Parent Process Name Custom Property, Process Name Custom Property, Execution Event Count, and Data Access Metric Custom Property to offense details
Detection logic implementation notes
· Asset Identity Custom Property MUST be replaced by the customer’s normalized protected-asset correlation property
· CUSTOM_EXEC_EVENT_COUNT, CUSTOM_EXPLOIT_TO_EXEC_WINDOW, CUSTOM_EXEC_TO_DATA_WINDOW, and CUSTOM_DATA_ACCESS_THRESHOLD MUST be replaced with customer-baselined values
· BB:CDX:EXP:* building blocks MUST be implemented using customer-normalized custom properties, log source types, and reference data
· This rule MUST be deployed only where exploit, endpoint, and data-access telemetry can be correlated to the same protected asset or host entity
· If exploit telemetry is not mature enough for reliable stage-1 confidence, this rule MUST be withheld rather than approximated
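Illustrative preparation sketch (Python, non-normative)
The customer-baselined values required above, particularly CUSTOM_EXEC_EVENT_COUNT, can be derived from historical execution counts rather than set by hand. A minimal sketch of deriving a per-parent repeated-execution threshold; the export file name, column names, and the percentile-plus-margin approach are assumptions, not requirements:
# baseline_exec_thresholds.py -- illustrative sketch; input format and margin are assumptions
import csv
import math
from collections import defaultdict

counts_by_parent = defaultdict(list)

# exec_counts.csv is a hypothetical export with columns:
# parent_process,hour,high_risk_child_count
with open("exec_counts.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        counts_by_parent[row["parent_process"]].append(int(row["high_risk_child_count"]))

for parent, counts in sorted(counts_by_parent.items()):
    ordered = sorted(counts)
    # Nearest-rank p99 of observed hourly counts, plus a one-event margin,
    # as a candidate CUSTOM_EXEC_EVENT_COUNT for this exposed service parent.
    index = max(0, math.ceil(0.99 * len(ordered)) - 1)
    print(f"{parent}: candidate repeated-execution threshold = {ordered[index] + 1}")
Thresholds derived this way still need a sanity check against known maintenance and automation bursts before they are treated as baseline-derived.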
Rule name
Data Aggregation Followed by Outbound Transfer Escalation
Rule objective
· Detect late-stage attacker progression from data collection into outbound transfer behavior within bounded correlation windows
· Preserve meaningful campaign coverage when exploit-attempt telemetry is absent or not correlation-safe
· Detect collection-to-egress progression with stronger specificity than pure outbound anomaly detection
Native format
QRadar CRE correlation rule logic
Behavioral anchor
· Data aggregation behavior
· Outbound transfer escalation
· Late-stage collection-to-egress progression
Detection strength conditions
· This rule is valid only when:
o data-access telemetry exists and is attributable
o outbound telemetry exists and is attributable to host, asset, or egress entity
o asset or egress identity is stable across sources
o approved bulk data operations and high-volume outbound destinations are allowlisted
Engineering Implementation Instructions
Customer data required
· QRadar-normalized field mapping for:
o host or asset identity
o egress identity
o data-access metric
o outbound transfer metric
· Baseline data-access thresholds by role or asset class
· Baseline outbound thresholds by host class, subnet, or egress segment
· Reference sets or building blocks for:
o approved bulk data operations
o approved high-volume outbound destinations
Deployment preparation required
· Define bounded timing logic for:
o data aggregation to outbound transfer
· Exclude:
o approved ETL and batch export jobs
o validated replication, backup, synchronization, and update flows
o sanctioned high-volume cloud or partner destinations
· Validate host-to-egress attribution before production alerting
Field validation required
· Confirm QRadar reliably parses and normalizes:
o attributable data-access telemetry
o attributable outbound telemetry
o stable asset or egress identity
· Confirm identity remains usable after NAT, proxy, or brokered egress translation
· Confirm allowlisted bulk-transfer and batch-operation sources are well understood
Non-deployment guardrails
· Do NOT deploy this rule if:
o data-access telemetry is absent or not attributable
o outbound telemetry is too noisy for stable thresholding
o host-to-egress attribution is unreliable
o allowlisting for bulk data operations or outbound destinations is incomplete
o bounded timing logic cannot be enforced with operational confidence
DRI assessment
· Target DRI: up to 7.8 only when:
o data and outbound telemetry are mature
o thresholds are baseline-derived
o outbound allowlisting is complete
o attribution across stages is reliable
· DRI degradation conditions:
o absent data attribution
o noisy outbound environments
o incomplete outbound allowlisting
o weak host-to-egress attribution
o unstable sequence timing
Detection logic
Rule Type:
Event Rule
Rule Test Stack:
when BB:CDX:EXP:Data_Aggregation_Event on an event is true
and when the event has Asset Identity Custom Property populated
and when Data Access Metric Custom Property > CUSTOM_DATA_ACCESS_THRESHOLD
and when the event does not match BB:CDX:EXP:Approved_Bulk_Data_Operations
followed by at least 1 event within CUSTOM_DATA_TO_EGRESS_WINDOW minutes
where BB:CDX:EXP:Outbound_Transfer_Event on an event is true
and where Asset Identity Custom Property matches the same Asset Identity Custom Property from the preceding data-aggregation event
and where Outbound Transfer Metric Custom Property > CUSTOM_OUTBOUND_THRESHOLD
and where Destination IP is not in Reference Set: CDX_Approved_High_Volume_Destinations
and where the event does not match BB:CDX:EXP:Approved_Replication_Backup_Update_Flows
Rule Response:
create an offense indexed by Asset Identity Custom Property
set magnitude to Medium-High
add Asset Identity Custom Property, Data Access Metric Custom Property, Outbound Transfer Metric Custom Property, Destination IP, and relevant log source details to offense details
Detection logic implementation notes
· CUSTOM_DATA_ACCESS_THRESHOLD, CUSTOM_OUTBOUND_THRESHOLD, and CUSTOM_DATA_TO_EGRESS_WINDOW MUST be replaced with baseline-derived values
· Asset Identity Custom Property MUST be replaced with the customer’s normalized host, asset, or protected-entity correlation property
· This rule MUST remain a survivor rule and MUST NOT replace the primary exploit-to-collection coverage anchor
· If outbound attribution cannot be maintained after NAT or proxy translation, this rule MUST be withheld or rewritten to segment-level egress attribution
System: Sigma
Rule name
Service-Originated High-Risk Execution
Rule objective
· Detect exploit-success execution by identifying exposed service processes spawning high-risk child processes
· Provide strong, portable exploit-success detection anchored in parent-child relationships
· Cover interpreter and LOLBin execution without relying on fragile command-line signatures
Native format
Sigma rule format
Behavioral anchor
· Exploit-to-execution transition
· Service-originated high-risk execution
· Interpreter and LOLBin variant coverage
Detection strength conditions
· This rule is valid only when:
o service parent inventory is accurate
o process lineage is available
o backend normalization is stable
o approved automation can be suppressed
Engineering Implementation Instructions
Customer data required
· Customer-defined exposed service parent process inventory
· Customer-defined high-risk child process class list including:
o shell interpreters
o scripting engines
o LOLBins
o environment-relevant native execution utilities
· Approved service-driven automation inventory
· Backend field mapping for:
o Image
o ParentImage
o host identity
o user identity
Deployment preparation required
· Replace placeholder service parent values with real customer-defined exposed service parents
· Tune high-risk child process classes for the environment
· Implement allowlisting outside the Sigma rule using:
o backend exclusions
o reference lists
o suppression logic
· Validate parent-child process normalization before production deployment
Field validation required
· Confirm the target backend reliably provides:
o Image or process name
o ParentImage or parent process name
o host identity
o user identity, where relevant
· Confirm exposed service parents are stable and consistently mapped across the data source
Non-deployment guardrails
· Do NOT deploy this rule if:
o service parent inventory is incomplete
o parent-child process telemetry is unreliable
o backend field normalization is inconsistent
o approved service-driven automation cannot be operationally suppressed
DRI assessment
· Target DRI: up to 8.2 only when:
o service scoping is accurate
o child-process class tuning is complete
o field normalization is stable
o suppression of approved automation is complete
· DRI degradation conditions:
o incomplete service inventory
o incomplete LOLBin coverage
o weak backend normalization
o immature suppression or allowlisting
Detection logic
title: Service-Originated High-Risk Execution
id: cdx-sigma-001
status: experimental
logsource:
    category: process_creation
detection:
    selection_parent_windows:
        ParentImage|endswith:
            - '\CUSTOM_EXPOSED_SERVICE_1.exe'
            - '\CUSTOM_EXPOSED_SERVICE_2.exe'
            - '\CUSTOM_EXPOSED_SERVICE_3.exe'
    selection_child_windows:
        Image|endswith:
            - '\cmd.exe'
            - '\powershell.exe'
            - '\pwsh.exe'
            - '\python.exe'
            - '\perl.exe'
            - '\mshta.exe'
            - '\rundll32.exe'
            - '\regsvr32.exe'
            - '\wscript.exe'
            - '\cscript.exe'
            - '\certutil.exe'
            - '\bitsadmin.exe'
            - '\curl.exe'
    selection_parent_unix:
        ParentImage|endswith:
            - '/CUSTOM_EXPOSED_SERVICE_1'
            - '/CUSTOM_EXPOSED_SERVICE_2'
            - '/CUSTOM_EXPOSED_SERVICE_3'
    selection_child_unix:
        Image|endswith:
            - '/bash'
            - '/sh'
            - '/python'
            - '/perl'
            - '/curl'
    condition: (selection_parent_windows and selection_child_windows) or (selection_parent_unix and selection_child_unix)
falsepositives:
    - Approved service automation
level: high
Detection logic implementation notes
· Placeholder parent process values MUST be replaced with customer-defined exposed service parents
· Windows and Unix path handling MUST be tuned to the customer backend’s actual field normalization
· High-risk child process list MUST be tuned to the environment
· False-positive handling MUST be implemented through backend-native suppression, filtering, or exception management
· This primary Sigma rule intentionally favors portability and exploit-success strength over backend-fragile multi-stage correlation
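Illustrative preparation sketch (Python, non-normative)
Because the exposed-service parent placeholders in cdx-sigma-001 must be replaced with real customer values, generating the list entries from the same maintained inventory reduces drift between the rule and its source of truth. A minimal sketch; the inventory file name, and the assumption that the same process names apply to both the Windows and Unix selections, are illustrative only:
# render_sigma_parents.py -- illustrative sketch; inventory file name is an assumption
# Emits ParentImage|endswith list entries for cdx-sigma-001 from a maintained
# exposed-service parent inventory (one process name per line, without path or extension).

with open("exposed_service_parents.txt") as fh:
    parents = [line.strip() for line in fh if line.strip()]

print("Windows entries:")
for parent in parents:
    print(f"            - '\\{parent}.exe'")

print("Unix entries:")
for parent in parents:
    print(f"            - '/{parent}'")
Emitting the entries from the inventory keeps the rule aligned with the same data the non-deployment guardrails reference; any platform-specific naming differences still have to be resolved manually.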
Rule name (Conditional — Aggregation Support Required)
Service-Originated Repeated High-Risk Execution
Rule objective
· Detect automation-driven exploit-success behavior through repeated execution of high-risk child processes from exposed service parents
· Strengthen detection of scripted or batch exploitation patterns where backend aggregation is supported
Native format
Sigma correlation rule format
Behavioral anchor
· Service-originated repeated execution
· Automation-driven exploit-success behavior
Detection strength conditions
· This rule is valid only when:
o the target backend supports Sigma correlation or equivalent translated aggregation logic
o execution frequency baselines are available
o service parents are well scoped
o approved batch or maintenance activity can be suppressed
o referenced rule fields and normalized host identity are preserved through backend translation
Engineering Implementation Instructions
Customer data required
· Customer-defined exposed service parent process inventory
· Customer-defined high-risk child process class list
· Baseline repeated-execution threshold by exposed service parent
· Customer-normalized host identity field used by the target backend for Sigma correlation grouping
· Backend support validation for:
o Sigma correlation rule translation
o aggregation by normalized host identity and parent process
o production-safe thresholding
Deployment preparation required
· Validate that the target Sigma backend supports aggregation and time-window counting before deployment
· Replace placeholder service parent values with real customer-defined exposed service parents
· Replace grouping field placeholders with the customer-normalized host identity field preserved by backend translation
· Tune repeated-execution thresholds using customer baseline, not generic defaults
· Implement suppression for approved service batch operations and maintenance workflows
Field validation required
· Confirm the target backend reliably provides:
o process image
o parent process image
o normalized host identity
o timestamps with short-interval fidelity
· Confirm backend translation preserves:
o count aggregation
o grouping by normalized host identity
o grouping by parent process or equivalent preserved parent field
o bounded time-window behavior
o referenced rule field integrity from cdx-sigma-001 into the correlation layer
Non-deployment guardrails
· Do NOT deploy this rule if:
o Sigma correlation or equivalent aggregation support is unavailable
o repeated-execution thresholds are not baseline-derived
o service parent scoping is incomplete
o approved batch or maintenance activity cannot be suppressed
o normalized host identity is not preserved through backend translation
o referenced rule fields are not preserved into the correlation layer
DRI assessment
· Target DRI: up to 7.8 only when:
o backend aggregation support is validated
o service scoping is accurate
o thresholds are baseline-derived
o suppression is complete
o normalized grouping fields are preserved through translation
· DRI degradation conditions:
o weak backend aggregation support
o incomplete service scoping
o poor threshold tuning
o immature suppression
o unstable normalized host identity or parent field preservation
Detection logic
title: Service-Originated Repeated High-Risk Execution
id: cdx-sigma-002
status: experimental
correlation:
    type: event_count
    rules:
        - cdx-sigma-001
    group-by:
        - CUSTOM_NORMALIZED_HOST_ID
        - ParentImage
        - Image
    timespan: 1m
    condition:
        gte: CUSTOM_EXEC_THRESHOLD
level: high
Detection logic implementation notes
· This rule MUST be implemented only on backends that support Sigma correlation rules or equivalent translated aggregation logic
· CUSTOM_NORMALIZED_HOST_ID MUST be replaced with the customer-normalized host identity field preserved by the target backend
· CUSTOM_EXEC_THRESHOLD MUST be replaced with a customer-baselined repeated-execution threshold
· If backend aggregation support or grouping-field preservation is not validated, this rule MUST be withheld rather than approximated
Rule name
Non-User-Driven High-Risk Execution From Service, Scheduled, or Remote Contexts
Rule objective
· Detect exploit-success execution outside strict exposed-service scope
· Provide fallback coverage for:
o scheduled tasks
o remote execution
o automation abuse
Native format
Sigma rule format
Behavioral anchor
· Non-user-driven execution
· Reduced-child-process exploit-success variant coverage
Detection strength conditions
· This rule is valid only when:
o parent-class scoping is accurate
o backend suppression is implemented
o automation is well understood
Engineering Implementation Instructions
Customer data required
· Customer-defined parent process inventories for:
o service parents
o scheduled-task parents
o remote-management parents
o automation parents
· Customer-defined high-risk child process class list
· Approved management and automation tooling inventory
· Backend field mapping for:
o Image
o ParentImage
o host identity
o user identity
Deployment preparation required
· Replace parent placeholders with customer-defined non-user-driven parent classes
· Tune high-risk child process classes to the environment
· Implement backend-native suppression for:
o approved management tools
o sanctioned automation frameworks
o validated scheduled maintenance activity
Field validation required
· Confirm the target backend reliably provides:
o process image
o parent process image
o host identity
· Confirm parent class scoping is stable across the deployment
· Confirm non-user-driven parent categories are operationally meaningful in the environment
Non-deployment guardrails
· Do NOT deploy this rule if:
o parent-class scoping cannot be maintained
o approved management and automation tooling cannot be suppressed
o normalized process creation telemetry is unstable
o the environment has excessive legitimate non-user-driven high-risk execution that cannot be bounded
DRI assessment
· Target DRI: up to 7.6 only when:
o parent-class scoping is accurate
o high-risk child process classes are tuned
o backend suppression is complete
· DRI degradation conditions:
o poorly understood automation
o unstable parent-class mapping
o high benign use of non-user-driven interpreters or LOLBins
Detection logic
title: Non-User-Driven High-Risk Execution From Service, Scheduled, or Remote Contexts
id: cdx-sigma-003
status: experimental
logsource:
    category: process_creation
detection:
    selection_parent_windows:
        ParentImage|endswith:
            - '\CUSTOM_SERVICE_PARENT_1.exe'
            - '\CUSTOM_SCHEDULED_PARENT_1.exe'
            - '\CUSTOM_REMOTE_PARENT_1.exe'
            - '\CUSTOM_AUTOMATION_PARENT_1.exe'
    selection_child_windows:
        Image|endswith:
            - '\cmd.exe'
            - '\powershell.exe'
            - '\pwsh.exe'
            - '\python.exe'
            - '\perl.exe'
            - '\mshta.exe'
            - '\rundll32.exe'
            - '\regsvr32.exe'
            - '\wscript.exe'
            - '\cscript.exe'
            - '\certutil.exe'
            - '\bitsadmin.exe'
            - '\curl.exe'
    selection_parent_unix:
        ParentImage|endswith:
            - '/CUSTOM_SERVICE_PARENT_1'
            - '/CUSTOM_SCHEDULED_PARENT_1'
            - '/CUSTOM_REMOTE_PARENT_1'
            - '/CUSTOM_AUTOMATION_PARENT_1'
    selection_child_unix:
        Image|endswith:
            - '/bash'
            - '/sh'
            - '/python'
            - '/perl'
            - '/curl'
    condition: (selection_parent_windows and selection_child_windows) or (selection_parent_unix and selection_child_unix)
falsepositives:
    - Approved admin activity
    - Scheduled maintenance
    - Approved automation frameworks
level: medium
Detection logic implementation notes
· Parent placeholders MUST be replaced with real customer-defined non-user-driven parent classes
· Windows and Unix path handling MUST be tuned to the customer backend’s actual field normalization
· This rule MUST remain a survivor rule and MUST NOT replace the primary exposed-service exploit-success coverage anchor
· If the environment cannot maintain reliable parent-class scoping, this rule MUST be withheld rather than loosely deployed
System: YARA
Rule name
Multi-Capability Collection-and-Staging Automation Artifact
Rule objective
· Detect durable dropped or staged attacker artifacts that combine execution support, discovery or collection support, and staging or transfer-preparation support in a single script or tool
· Detect reusable automation tooling associated with attacker-controlled collection and staging activity
· Avoid dependence on single strings, filenames, or one-off campaign markers
Native format
YARA rule format
Behavioral anchor
· Multi-capability automation artifact
· Collection-and-staging helper tooling
· Reusable post-exploitation utility content
Detection strength conditions
· This rule is valid only when:
o capability clustering is based on durable content patterns, not single campaign strings
o benign administrative and deployment automation with overlapping functions has been tested and excluded
o scanning coverage includes realistic script and dropped-artifact persistence locations
o file or content scoping is limited to realistic artifact classes for the environment
Engineering Implementation Instructions
Customer data required
· Representative malicious artifact samples or validated emulations for:
o dropped scripts
o bundled utilities
o staged helper tools
· Benign comparison corpus including:
o administrative scripts
o deployment tooling
o backup and export automation
o sanctioned collection or transfer scripts
· File types, encodings, and scan locations in scope for scanning
· Operational scan timing model for file-based detection
Deployment preparation required
· Tune the rule against a benign corpus before production deployment
· Limit deployment to realistic artifact classes such as:
o script files
o dropped text artifacts
o staged utility content
o unpacked transient tool artifacts where scanning coverage exists
· Validate coverage across:
o plain-text scripts
o encoded or transformed scripts only where inspection workflow supports them
o dropped utilities and temporary staging artifacts
· Exclude sanctioned automation families that legitimately combine similar capabilities
Field validation required
· Confirm scanning pipeline can reliably inspect:
o relevant script types
o dropped text artifacts
o staged utility files
o temporary directories or repositories in scope
· Confirm file normalization, decoding, or unpacking workflow is sufficient for scanned content types
· Confirm scan timing is frequent enough to catch short-lived artifacts where required
Non-deployment guardrails
· Do NOT deploy this rule if:
o malicious and benign corpus comparison has not been performed
o the rule relies on one-off strings, filenames, or path markers
o scanning coverage does not include realistic artifact persistence locations
o sanctioned automation overlap cannot be bounded
DRI assessment
· Target DRI: up to 7.6 only when:
o capability clustering is durable
o benign overlap testing is mature
o scan coverage includes realistic persistence locations
o rule logic avoids one-off artifact dependence
· DRI degradation conditions:
o corpus overfitting
o weak benign testing
o dependence on single markers
o poor scan timing for short-lived artifacts
Detection logic
rule CDX_EXP_MultiCapability_Collection_and_Staging_Automation_Artifact
{
    meta:
        description = "Detects durable multi-capability collection-and-staging automation artifacts associated with attacker post-exploitation tooling"
        author = "OpenAI"
        scope = "file"
        version = "1.1"
    strings:
        $exec_1 = /powershell(\.exe)?/ nocase ascii wide
        $exec_2 = /cmd(\.exe)?/ nocase ascii wide
        $exec_3 = /bash|sh/ nocase ascii wide
        $disc_1 = /whoami|hostname|ipconfig|ifconfig|systeminfo|tasklist|wmic/ nocase ascii wide
        $disc_2 = /net user|net group|ps aux/ nocase ascii wide
        $collect_1 = /get-childitem|copy-item|dir |findstr|grep |select .* from/ nocase ascii wide
        $collect_2 = /compress-archive|7z |zip |tar / nocase ascii wide
        $stage_1 = /appdata|programdata|\/tmp\/|temp|tmp/ nocase ascii wide
        $stage_2 = /invoke-webrequest|curl |wget |bitsadmin|certutil.*-urlcache/ nocase ascii wide
        $stage_3 = /http:\/\/|https:\/\/|ftp:\/\// nocase ascii wide
    condition:
        filesize < 2MB and
        1 of ($exec_*) and
        1 of ($disc_*) and
        1 of ($collect_*) and
        2 of ($stage_*)
}
Detection logic implementation notes
· The final rule MUST be tuned against customer benign corpora before production use
· Capability groups MUST remain intact and MUST NOT be collapsed into generic keyword matching
· If customer environment has heavy legitimate automation overlap, this rule MUST be narrowed or withheld rather than loosely deployed
· File-scope deployment SHOULD be restricted to realistic script and dropped-artifact classes supported by the customer scan pipeline
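Illustrative preparation sketch (Python, non-normative)
The benign-corpus tuning required above can be checked mechanically before the rule reaches production. A minimal sketch, assuming the yara-python bindings are installed and that the compiled rule file and a directory of benign administrative and deployment scripts exist at the illustrative paths shown:
# benign_corpus_check.py -- illustrative sketch; paths are assumptions, requires yara-python
import pathlib
import yara

rules = yara.compile(filepath="cdx_exp_collection_staging.yar")

corpus = pathlib.Path("benign_corpus")
scanned = 0
hits = []

for path in corpus.rglob("*"):
    if not path.is_file():
        continue
    scanned += 1
    # Any match against the benign corpus is a tuning signal, not a detection.
    if rules.match(str(path)):
        hits.append(path)

print(f"Scanned {scanned} benign files, {len(hits)} matched")
for path in hits:
    print(f"  benign match: {path}")
A non-trivial benign match rate indicates the capability groups need narrowing before the rule can satisfy the guardrails above.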
Rule name
Memory-Resident Collection-and-Staging Orchestration Artifact
Rule objective
· Detect durable in-memory orchestration or staging artifacts associated with attacker automation, where memory scanning is operationally supported
· Provide support coverage for reusable in-memory helper content coordinating collection, staging, transfer preparation, or encoded execution support
· Preserve value without claiming direct behavioral progression detection
Native format
YARA rule format
Behavioral anchor
· Memory-resident orchestration artifact
· Reusable in-memory collection-and-staging helper content
· Support detection for post-exploitation tooling families
Detection strength conditions
· This rule is valid only when:
o memory acquisition and scanning are operationally supported
o target content classes are durable enough to survive mutation testing
o benign in-memory automation overlap is tested and bounded
o the rule is tuned to capability combinations, not single strings
Engineering Implementation Instructions
Customer data required
· Representative malicious memory samples or extracted in-memory content from validated emulations
· Benign comparison corpus for:
o in-memory admin tooling
o management agents
o legitimate automation frameworks
· Operational memory scanning coverage model
· Supported process classes and memory regions in scope for scanning
Deployment preparation required
· Validate rule behavior against real memory snapshots before production deployment
· Scope scanning to process classes and memory regions with realistic attacker tooling exposure
· Exclude benign agent and management tooling known to contain overlapping content traits
· Tune for durable capability combinations rather than one-off script fragments
Field validation required
· Confirm memory-scanning pipeline can:
o acquire target memory content reliably
o preserve relevant string material
o scan within operational time bounds
o attribute detections to process or host context
· Confirm memory content is not systematically stripped, packed, or normalized in a way that invalidates signature assumptions
Non-deployment guardrails
· Do NOT deploy this rule if:
o memory scanning is unavailable or inconsistent
o benign overlap testing is incomplete
o memory samples are too sparse to validate durable content traits
o rule logic depends on short, generic, or one-off strings
DRI assessment
· Target DRI: up to 7.0 only when:
o memory scanning maturity is high
o benign overlap testing is complete
o capability clustering is durable across validated samples
· DRI degradation conditions:
o sparse sample coverage
o weak benign-memory corpus testing
o unstable or highly mutated in-memory artifacts
o inconsistent memory acquisition coverage
Detection logic
rule CDX_EXP_Memory_Resident_Collection_and_Staging_Orchestration_Artifact
{
    meta:
        description = "Detects durable in-memory collection-and-staging orchestration content associated with reusable attacker automation"
        author = "OpenAI"
        scope = "memory"
        version = "1.1"
    strings:
        $mem_exec_1 = /frombase64string|base64_decode|powershell -enc|base64 -d/ nocase ascii wide
        $mem_disc_1 = /whoami|hostname|systeminfo|wmic|tasklist|ipconfig/ nocase ascii wide
        $mem_collect_1 = /compress-archive|7z |zip |tar / nocase ascii wide
        $mem_stage_1 = /invoke-webrequest|curl |wget |bitsadmin|certutil.*-urlcache/ nocase ascii wide
        $mem_stage_2 = /http:\/\/|https:\/\/|ftp:\/\// nocase ascii wide
        $mem_stage_3 = /appdata|programdata|\/tmp\/|temp|tmp/ nocase ascii wide
    condition:
        1 of ($mem_exec_*) and
        1 of ($mem_disc_*) and
        1 of ($mem_collect_*) and
        2 of ($mem_stage_*)
}
Detection logic implementation notes
· This rule MUST be deployed only in environments with validated memory-scanning capability
· Capability groups MUST be tuned using real malicious and benign memory corpora before production deployment
· This rule is a survivor support rule and MUST NOT be used as a substitute for stronger progression-detection systems
· If memory-scanning fidelity is inconsistent, this rule MUST be withheld rather than loosely deployed
System: AWS
Rule name
AssumeRole or Explicitly New Principal Access Followed by Bulk S3 Object Collection
Rule objective
· Detect attacker-controlled identity pivot followed by collection-stage access to S3 objects
· Detect cloud-native progression from role assumption or explicitly new principal activation into high-volume object retrieval within bounded time windows
· Preserve high signal by tying identity transition directly to attributable S3 collection behavior
Native format
AWS CloudTrail detection logic with Athena SQL implementation
Behavioral anchor
· Identity pivot
· Cloud-native collection behavior
· Bulk S3 object access by the same principal
· Cross-account or newly activated role use as a hardening component where applicable
Detection strength conditions
· This rule is valid only when:
o CloudTrail management events are enabled
o CloudTrail S3 data events are enabled for relevant buckets
o principal identity can be normalized across STS, IAM, and S3 access events
o approved automation, replication, analytics, and backup roles are allowlisted
o object-access thresholds are baseline-derived by role, user, or bucket sensitivity class
o explicitly new-principal logic is implemented through customer-maintained history or reference data and not through vague anomaly language
Engineering Implementation Instructions
Customer data required
· CloudTrail management event coverage status
· CloudTrail S3 data event coverage for protected buckets
· Customer-normalized principal identity mapping for:
o useridentity.arn
o useridentity.principalid
o assumed-role session identity where applicable
· Allowlist for:
o backup roles
o replication roles
o analytics roles
o ETL roles
o sanctioned cross-account access patterns
· Baseline object-access thresholds by:
o role
o principal class
o bucket sensitivity class
· Protected bucket inventory
· Customer-maintained prior-seen principal reference model for explicitly new-principal logic, where used
Deployment preparation required
· Enable and validate S3 data events for protected buckets
· Normalize assumed-role identity so role sessions can be tied to subsequent S3 access
· Scope detection to:
o protected buckets
o non-allowlisted principals
o principals matching one of:
§ AssumeRole management activity
§ customer-defined explicitly new-principal criteria derived from maintained historical identity reference data
· Exclude:
o approved replication
o backup workflows
o ETL pipelines
o sanctioned bulk export activity
Field validation required
· Confirm CloudTrail reliably captures:
o eventname
o eventsource
o useridentity.type
o useridentity.arn
o useridentity.principalid
o requestparameters.bucketname
o requestparameters.key where applicable
· Confirm assumed-role events and S3 object-access events can be tied to the same effective principal identity
· Confirm protected bucket inventory aligns to enabled data-event coverage
· Confirm explicitly new-principal logic is backed by stable historical reference data or prior-seen principal tracking
Non-deployment guardrails
· Do NOT deploy this rule if:
o S3 data events are not enabled for protected buckets
o assumed-role identity cannot be normalized
o allowlisted high-volume roles are not bounded
o object-access thresholds are not baseline-derived
o protected bucket coverage is incomplete
o explicitly new-principal logic cannot be enforced through maintained historical identity reference data
DRI assessment
· Target DRI: up to 8.2 only when:
o identity normalization is reliable
o S3 data-event coverage is complete for protected buckets
o high-volume legitimate roles are allowlisted
o thresholds are baseline-derived
o explicitly new-principal logic is enforceable through maintained historical identity reference data
· DRI degradation conditions:
o incomplete S3 data-event coverage
o weak principal normalization
o immature allowlisting
o broad legitimate bulk-access overlap
o weak historical identity reference coverage
Detection logic
· when a CloudTrail management event for the same normalized principal identity matches one of:
o eventsource=sts.amazonaws.com and eventname=AssumeRole
o customer-defined explicitly new-principal criteria enforced through maintained prior-seen principal reference data
· and when subsequent CloudTrail S3 data events for that same normalized principal identity show:
o repeated GetObject activity
o against customer-defined protected buckets
o at or above the customer-defined bulk object-access count threshold
o within the customer-defined identity-to-collection time window
· and when the principal is not present in approved replication, backup, ETL, analytics, or sanctioned cross-account allowlists
· then generate a high-confidence AWS collection alert for identity-pivot-to-S3-object-access progression
AWS-native implementation layer
Implementation type
Athena SQL query
Required data sources
· CloudTrail management events table
· CloudTrail S3 data events table for protected buckets
· customer-maintained allowlist table or reference dataset for approved principals
· customer-maintained historical principal reference dataset for explicitly new-principal logic, where used
Required field normalization
· normalized_principal
o preferred source order:
§ useridentity.arn
§ useridentity.principalid
· bucket_name
o requestparameters.bucketname
Athena SQL query
WITH principal_pivots AS (
SELECT
COALESCE(useridentity.arn, useridentity.principalid) AS normalized_principal,
MIN(from_iso8601_timestamp(eventtime)) AS pivot_time
FROM cloudtrail_management_events
WHERE (
(eventsource = 'sts.amazonaws.com' AND eventname = 'AssumeRole')
OR (
COALESCE(useridentity.arn, useridentity.principalid) IN (
SELECT normalized_principal
FROM customer_new_principal_reference
WHERE is_explicitly_new = true
)
)
)
AND COALESCE(useridentity.arn, useridentity.principalid) NOT IN (
SELECT normalized_principal
FROM customer_allowlisted_principals
)
GROUP BY COALESCE(useridentity.arn, useridentity.principalid)
),
s3_collection AS (
SELECT
COALESCE(useridentity.arn, useridentity.principalid) AS normalized_principal,
requestparameters.bucketname AS bucket_name,
COUNT(*) AS getobject_count,
MIN(from_iso8601_timestamp(eventtime)) AS first_get_time,
MAX(from_iso8601_timestamp(eventtime)) AS last_get_time
FROM cloudtrail_s3_data_events
WHERE eventsource = 's3.amazonaws.com'
AND eventname = 'GetObject'
AND requestparameters.bucketname IN (
SELECT bucket_name
FROM customer_protected_buckets
)
GROUP BY
COALESCE(useridentity.arn, useridentity.principalid),
requestparameters.bucketname
),
bulk_collection AS (
SELECT
normalized_principal,
MIN(first_get_time) AS first_collection_time,
SUM(getobject_count) AS total_getobject_count,
COUNT(DISTINCT bucket_name) AS distinct_bucket_count
FROM s3_collection
GROUP BY normalized_principal
HAVING SUM(getobject_count) >= CAST(:customer_bulk_getobject_threshold AS BIGINT)
)
SELECT
p.normalized_principal,
p.pivot_time,
b.first_collection_time,
b.total_getobject_count,
b.distinct_bucket_count
FROM principal_pivots p
JOIN bulk_collection b
ON p.normalized_principal = b.normalized_principal
WHERE b.first_collection_time >= p.pivot_time
AND date_diff(
'second',
p.pivot_time,
b.first_collection_time
) <= CAST(:customer_identity_to_collection_window_seconds AS BIGINT);
Implementation notes
· cloudtrail_management_events and cloudtrail_s3_data_events MUST be replaced with customer Athena table names
· customer_new_principal_reference MUST be a deterministic historical reference dataset, not an inferred anomaly layer
· customer_allowlisted_principals MUST include approved replication, backup, analytics, ETL, and sanctioned cross-account principals
· customer_protected_buckets MUST include only protected buckets with enabled S3 data-event coverage
· :customer_bulk_getobject_threshold MUST be baseline-derived and role-sensitive
· :customer_identity_to_collection_window_seconds MUST be tuned to customer-validated identity-to-collection timing
· If Athena deployment is not available, this rule MUST be implemented in an equivalent AWS-native correlation pipeline that preserves the same identity normalization, thresholding, and timing logic
Detection logic implementation notes
· This rule MUST use normalized principal identity across both management and data events
· “Explicitly new principal” logic MUST be implemented through customer-maintained history, prior-seen reference data, or equivalent deterministic identity-state tracking
· “Explicitly new principal” logic MUST NOT be approximated using vague low-frequency or baseline language
· If identity normalization cannot be maintained across AssumeRole and S3 data events, this rule MUST be withheld rather than loosely deployed
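Illustrative preparation sketch (Python, non-normative)
The deterministic explicitly-new-principal requirement above can be met with a small prior-seen reference dataset that is updated on a schedule. A minimal sketch, assuming normalized principal identities are exported per evaluation run and the reference is kept in SQLite; the file names, storage choice, and first-seen model are assumptions:
# prior_seen_principals.py -- illustrative sketch; file names and storage choice are assumptions
import sqlite3
import time

conn = sqlite3.connect("principal_reference.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS prior_seen ("
    "  normalized_principal TEXT PRIMARY KEY,"
    "  first_seen_epoch INTEGER NOT NULL)"
)

now = int(time.time())
newly_seen = []

# principals_today.txt is a hypothetical export of normalized principal identities
# observed in CloudTrail during the current evaluation window.
with open("principals_today.txt") as fh:
    for line in fh:
        principal = line.strip()
        if not principal:
            continue
        row = conn.execute(
            "SELECT 1 FROM prior_seen WHERE normalized_principal = ?", (principal,)
        ).fetchone()
        if row is None:
            # Deterministic "explicitly new": never present in the maintained reference before this run.
            newly_seen.append(principal)
            conn.execute("INSERT INTO prior_seen VALUES (?, ?)", (principal, now))

conn.commit()
conn.close()

for principal in newly_seen:
    print(f"explicitly new principal: {principal}")
Output maintained this way can feed the customer_new_principal_reference dataset; what it must not do is substitute an inferred low-frequency heuristic for that dataset.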
Rule name
S3 ListBucket Followed by High-Volume GetObject Collection
Rule objective
· Detect cloud-native collection behavior by identifying bucket enumeration followed by bulk object retrieval within bounded time windows
· Preserve meaningful AWS-native collection coverage even when identity-pivot signals are absent, weak, or not retained
· Detect collection progression with stronger specificity than raw object-access thresholds alone
Native format
AWS CloudTrail detection logic with CloudWatch Logs Insights implementation
Behavioral anchor
· Bucket enumeration
· Bulk object collection
· Collection-stage progression against protected buckets
Detection strength conditions
· This rule is valid only when:
o S3 data events are enabled for relevant buckets
o principal identity is attributable across ListBucket and GetObject events
o protected bucket scope is defined
o legitimate bulk-enumeration and export workflows are allowlisted
o GetObject density thresholds are defined as a count within a bounded time window, not as vague bulk-access language alone
Engineering Implementation Instructions
Customer data required
· Protected bucket inventory
· S3 data-event coverage map
· Allowlist for:
o backup or export workflows
o analytics jobs
o sanctioned inventory or synchronization processes
· Baseline thresholds for:
o GetObject count within time window
o object volume retrieved where available
o collection timing window
· Principal normalization fields for S3 access events
Deployment preparation required
· Enable and validate S3 data events for protected buckets
· Scope the rule to protected buckets only
· Define bounded timing windows from enumeration to retrieval
· Define GetObject density thresholds as a count within a bounded time window
· Exclude:
o sanctioned bulk export workflows
o inventory jobs
o analytics and reporting jobs
o synchronization and backup tooling
Field validation required
· Confirm CloudTrail reliably captures:
o eventname
o requestparameters.bucketname
o object access events for protected buckets
o normalized principal identity across S3 access records
· Confirm ListBucket and GetObject events can be attributed to the same principal with operational confidence
· Confirm GetObject count thresholds can be measured consistently within the retained timing window
Non-deployment guardrails
· Do NOT deploy this rule if:
o S3 data-event coverage is absent or incomplete
o protected bucket scope is not maintained
o high-volume legitimate export workflows cannot be allowlisted
o object-access thresholds are not baseline-derived
o GetObject density thresholds cannot be enforced within bounded time windows
DRI assessment
· Target DRI: up to 7.7 only when:
o S3 data-event coverage is mature
o protected bucket scope is accurate
o bulk export allowlisting is complete
o thresholds are baseline-derived
· DRI degradation conditions:
o missing object-level logging
o incomplete allowlisting
o legitimate high-volume export overlap
o weak principal attribution
o weak GetObject density enforcement
Detection logic
· when CloudTrail S3 access events for the same normalized principal identity show:
o eventname=ListBucket
o against a customer-defined protected bucket
· and when subsequent CloudTrail S3 data events for that same normalized principal identity show:
o repeated GetObject
o against the same protected bucket
o at or above the customer-defined GetObject count threshold within the customer-defined list-to-get time window
· and when the activity does not match approved backup, analytics, synchronization, inventory, or export allowlists
· then generate a medium-high confidence AWS collection alert for enumeration-to-bulk-object-retrieval progression
AWS-native implementation layer
Implementation type
CloudWatch Logs Insights query
Required log sources
· CloudTrail S3 data events log group for protected buckets
Required field normalization
· normalized_principal
o preferred source order:
§ useridentity.arn
§ useridentity.principalid
· bucket_name
o requestparameters.bucketname
CloudWatch Logs Insights query
fields @timestamp, eventSource, eventName, userIdentity.arn, userIdentity.principalId, requestParameters.bucketName
| filter eventSource = "s3.amazonaws.com"
| filter requestParameters.bucketName in [CUSTOM_PROTECTED_BUCKETS]
| fields coalesce(userIdentity.arn, userIdentity.principalId) as normalized_principal
| stats
count_if(eventName = "ListBucket") as list_count,
count_if(eventName = "GetObject") as get_count,
min(if(eventName = "ListBucket", @timestamp, null)) as first_list_time,
min(if(eventName = "GetObject", @timestamp, null)) as first_get_time,
max(if(eventName = "GetObject", @timestamp, null)) as last_get_time
by normalized_principal, requestParameters.bucketName
| filter list_count >= 1
| filter get_count >= CUSTOM_GETOBJECT_COUNT_THRESHOLD
| filter first_get_time >= first_list_time
| filter last_get_time - first_list_time <= CUSTOM_LIST_TO_GET_WINDOW_SECONDS * 1000
| filter normalized_principal not in [CUSTOM_ALLOWLISTED_S3_EXPORT_PRINCIPALS]
Implementation notes
· CUSTOM_PROTECTED_BUCKETS MUST be restricted to protected buckets with enabled S3 data-event coverage
· CUSTOM_GETOBJECT_COUNT_THRESHOLD MUST be baseline-derived per bucket sensitivity class or principal class
· CUSTOM_ALLOWLISTED_S3_EXPORT_PRINCIPALS MUST include approved backup, analytics, synchronization, inventory, and export workflows
· If CloudWatch Logs Insights aggregation limits or function availability prevent stable deployment, the same logic MUST be implemented through Athena, scheduled detection pipeline, or equivalent AWS-native analytics layer without weakening the rule conditions
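Illustrative preparation sketch (Python, non-normative)
Both AWS rules assume S3 data-event coverage for protected buckets, so that coverage is worth verifying rather than assumed. A minimal sketch using boto3 that inspects classic CloudTrail event selectors only; the protected-bucket names are placeholders, and advanced event selectors or organization trails would need separate handling:
# s3_data_event_coverage.py -- illustrative sketch; bucket names are placeholders,
# and only classic event selectors are inspected (advanced selectors need extra handling)
import boto3

PROTECTED_BUCKETS = ["example-protected-bucket-1", "example-protected-bucket-2"]

cloudtrail = boto3.client("cloudtrail")
covered = set()

for trail in cloudtrail.describe_trails()["trailList"]:
    selectors = cloudtrail.get_event_selectors(TrailName=trail["TrailARN"])
    for selector in selectors.get("EventSelectors", []):
        for resource in selector.get("DataResources", []):
            if resource.get("Type") != "AWS::S3::Object":
                continue
            for value in resource.get("Values", []):
                if value == "arn:aws:s3":
                    # All-bucket object-level logging covers every protected bucket.
                    covered.update(PROTECTED_BUCKETS)
                else:
                    for bucket in PROTECTED_BUCKETS:
                        if value.startswith(f"arn:aws:s3:::{bucket}"):
                            covered.add(bucket)

missing = [bucket for bucket in PROTECTED_BUCKETS if bucket not in covered]
print("Covered:", sorted(covered))
print("Missing data-event coverage:", missing)
If any protected bucket reports missing coverage, the non-deployment guardrails for both S3 rules apply until data events are enabled for it.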
System: Azure
Rule name
Privilege Elevation Followed by Protected Blob Collection
Rule objective
· Detect attacker-controlled identity pivot followed by collection-stage access to protected Azure Blob Storage
· Detect cloud-native progression from explicit privilege elevation into high-volume blob read behavior within bounded time windows
· Preserve high signal by tying deterministic identity-state change directly to attributable protected storage collection
Native format
Azure Monitor / Log Analytics KQL detection logic
Behavioral anchor
· Identity pivot
· Protected storage collection behavior
· Bulk blob read activity by the same normalized principal
Detection strength conditions
· This rule is valid only when:
o Entra audit visibility is enabled for relevant principals
o protected storage access logging is enabled
o normalized principal identity can be maintained across identity and storage telemetry
o approved automation, backup, analytics, synchronization, and export principals are allowlisted
o blob-read thresholds are baseline-derived by principal class, storage sensitivity class, or storage account role
o privilege-elevation logic is based on explicit audit events and not vague anomaly language
o storage telemetry is normalized to a deterministic schema before rule deployment
Engineering Implementation Instructions
Customer data required
· Entra audit log coverage status
· Protected storage log coverage for protected storage accounts and containers
· Customer-normalized principal identity mapping for:
o user principal
o app or service principal
o managed identity
o role-assignment or privilege-change records
· Allowlist for:
o backup identities
o synchronization identities
o analytics identities
o ETL or export identities
o sanctioned cross-tenant and cross-subscription access patterns
· Baseline blob-read thresholds by:
o principal class
o storage sensitivity class
o storage account or container class
· Protected storage account and container inventory
Deployment preparation required
· Enable and validate protected storage access logging
· Normalize principal identity so privilege-elevation events can be tied to subsequent storage access
· Restrict detection to:
o protected storage accounts and containers
o non-allowlisted principals
o explicit privilege-elevation events only
· Exclude:
o approved replication
o backup workflows
o ETL pipelines
o sanctioned export or synchronization activity
Field validation required
· Confirm Azure logs reliably capture:
o normalized principal identity
o explicit role or privilege change events
o storage account name
o container name
o blob-read operations
o timestamps suitable for bounded correlation
· Confirm identity-state events and blob-access events can be tied to the same effective principal identity
· Confirm protected storage scope aligns to enabled log coverage
· Confirm storage telemetry is normalized into one retained deployment schema before production use
Non-deployment guardrails
· Do NOT deploy this rule if:
o protected storage access logging is absent or incomplete
o principal normalization cannot be maintained across identity and storage events
o allowlisted high-volume principals are not bounded
o blob-read thresholds are not baseline-derived
o protected storage scope is incomplete
o explicit privilege-elevation events are not available or trustworthy
o storage telemetry has not been normalized into a deterministic deployment schema
DRI assessment
· Target DRI: up to 8.0 only when:
o identity normalization is reliable
o protected storage logging is complete
o high-volume legitimate principals are allowlisted
o thresholds are baseline-derived
o privilege-elevation events are explicit and attributable
o storage telemetry schema is normalized and stable
· DRI degradation conditions:
o incomplete storage log coverage
o weak principal normalization
o immature allowlisting
o broad legitimate bulk-access overlap
o unstable storage-schema mapping
Detection logic
· when Azure identity telemetry for the same normalized principal shows explicit privilege-elevation activity
· and when subsequent protected storage access telemetry for that same normalized principal shows:
o repeated blob-read activity
o against customer-defined protected storage accounts or containers
o at or above the customer-defined bulk blob-read threshold
o within the customer-defined elevation-to-collection time window
· and when the principal is not present in approved backup, analytics, ETL, synchronization, export, or sanctioned cross-tenant allowlists
· then generate a high-confidence Azure collection alert for privilege-elevation-to-protected-storage progression
Azure-native implementation layer
Implementation type
Azure Monitor / Log Analytics KQL query
Required data sources
· Entra audit log tables
· protected storage access log tables
· customer allowlist watchlist for approved principals
Required field normalization
· normalized_principal
o preferred source order:
§ user principal name
§ app or service principal identifier
§ managed identity identifier
· storage_scope
o storage account and container pairing
Required schema control
· Before deployment, storage logs MUST be normalized into one retained schema model, such as:
o resource-specific storage tables
o or normalized AzureDiagnostics extraction layer
· The deployed rule MUST reference only the retained normalized schema
· Mixed-schema assumptions MUST NOT be used in production logic
KQL query
let ElevationToCollectionWindow = 10m;
let BulkBlobReadThreshold = toint(CUSTOM_BLOBREAD_THRESHOLD);
let ProtectedStorage = datatable(storageAccount:string, container:string)
[
"CUSTOM_STORAGE_ACCOUNT_1","CUSTOM_CONTAINER_1",
"CUSTOM_STORAGE_ACCOUNT_2","CUSTOM_CONTAINER_2"
];
let AllowlistedPrincipals =
materialize(
_GetWatchlist('CUSTOM_ALLOWLISTED_AZURE_PRINCIPALS')
| project normalized_principal = tostring(SearchKey)
);
let PrivilegeElevationEvents =
AuditLogs
| where ActivityDisplayName has_any ("Add member to role", "Add app role assignment", "Add eligible member", "Activate eligible assignment")
| extend normalized_principal = tostring(InitiatedBy.user.userPrincipalName)
| where isnotempty(normalized_principal)
| project normalized_principal, pivot_time = TimeGenerated, pivot_reason = ActivityDisplayName;
let BlobReads =
StorageBlobLogs
| where OperationName has_any ("GetBlob", "ReadFile")
| extend normalized_principal = tostring(CallerObjectId)
| where isnotempty(normalized_principal)
| join kind=inner ProtectedStorage on $left.AccountName == $right.storageAccount, $left.ContainerName == $right.container
| summarize blob_read_count = count(),
first_blob_read = min(TimeGenerated),
last_blob_read = max(TimeGenerated)
by normalized_principal, AccountName, ContainerName;
PrivilegeElevationEvents
| where normalized_principal !in (AllowlistedPrincipals)
| join kind=inner BlobReads on normalized_principal
| where blob_read_count >= BulkBlobReadThreshold
| where first_blob_read >= pivot_time and first_blob_read <= pivot_time + ElevationToCollectionWindow
| project normalized_principal, pivot_time, pivot_reason, AccountName, ContainerName, blob_read_count, first_blob_read, last_blob_read
Implementation notes
· Table and field names MUST be aligned to the customer’s retained normalized Azure logging schema before production deployment
· CUSTOM_BLOBREAD_THRESHOLD MUST be baseline-derived and sensitivity-aware
· Protected storage scope MUST be limited to monitored sensitive containers
· If identity-state change events cannot be tied to protected storage access using a stable normalized principal, this rule MUST be withheld rather than approximated
Detection logic implementation notes
· This rule MUST use normalized principal identity across both identity and storage-access events
· This rule MUST rely only on explicit privilege-elevation events and MUST NOT depend on vague new-principal anomaly logic
· If identity normalization cannot be maintained across privilege-elevation and storage events, this rule MUST be withheld rather than loosely deployed; an illustrative normalization-and-correlation sketch follows below
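As a companion to the withholding condition above, the following minimal Python sketch shows what maintaining a normalized principal across identity and storage telemetry means in practice: the Entra audit record identifies the initiator by user principal name, while storage logs typically attribute reads to a caller object ID, so a maintained mapping between the two is assumed. The mapping, threshold, window, and allowlist values shown are illustrative placeholders, not prescribed values, and the sketch aggregates across all protected scope for brevity.
Illustrative Python sketch
# Minimal sketch of elevation-to-collection correlation on pre-normalized events.
# principal_map, thresholds, and the time window are hypothetical placeholders; in
# production the mapping must come from the customer's identity normalization layer.
from datetime import timedelta

ELEVATION_TO_COLLECTION_WINDOW = timedelta(minutes=10)   # placeholder
BULK_BLOB_READ_THRESHOLD = 200                           # placeholder: baseline-derived
ALLOWLIST = {"backup-svc@example.com"}                   # placeholder

# Hypothetical mapping of storage caller object IDs to the same normalized principal
# used in the identity audit telemetry (for example, a user principal name).
principal_map = {"00000000-0000-0000-0000-000000000000": "alice@example.com"}

def correlate(elevations, blob_reads):
    """elevations: [{principal, time}]; blob_reads: [{caller_object_id, account, container, time}]"""
    alerts = []
    for ev in elevations:
        if ev["principal"] in ALLOWLIST:
            continue
        window_end = ev["time"] + ELEVATION_TO_COLLECTION_WINDOW
        reads = [r for r in blob_reads
                 if principal_map.get(r["caller_object_id"]) == ev["principal"]
                 and ev["time"] <= r["time"] <= window_end]
        if len(reads) >= BULK_BLOB_READ_THRESHOLD:
            alerts.append({"principal": ev["principal"],
                           "pivot_time": ev["time"],
                           "blob_read_count": len(reads)})
    return alerts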
Rule name
Container Enumeration Followed by High-Volume Blob Read Collection
Rule objective
· Detect cloud-native collection behavior by identifying protected container enumeration followed by bulk blob retrieval within bounded time windows
· Preserve meaningful Azure-native collection coverage even when identity-pivot signals are absent, weak, or not retained
· Detect collection progression with stronger specificity than raw blob-read thresholds alone
Native format
Azure Monitor / Log Analytics KQL detection logic
Behavioral anchor
· Container enumeration
· Bulk blob collection
· Collection-stage progression against protected storage scope
Detection strength conditions
· This rule is valid only when:
o protected storage access logging is enabled
o principal identity is attributable across enumeration and blob-read events
o protected storage scope is defined
o legitimate enumeration and export workflows are allowlisted
o blob-read density thresholds are defined as count within bounded time window, not vague bulk-access language alone
o storage telemetry is normalized into one deterministic deployment schema before production use
Engineering Implementation Instructions
Customer data required
· Protected storage account and container inventory
· Storage log coverage map
· Allowlist for:
o backup or export workflows
o analytics jobs
o sanctioned inventory or synchronization processes
· Baseline thresholds for:
o blob-read count within time window
o object volume where available
o enumeration-to-read timing window
· Principal normalization fields for storage access events
Deployment preparation required
· Enable and validate protected storage access logging
· Scope the rule to protected storage only
· Define bounded timing windows from enumeration to retrieval
· Define blob-read density thresholds as count within bounded time window
· Exclude:
o sanctioned bulk export workflows
o inventory jobs
o analytics and reporting jobs
o synchronization and backup tooling
Field validation required
· Confirm Azure logs reliably capture:
o enumeration operations
o blob-read operations
o normalized principal identity
o storage account and container identity
o timestamps suitable for bounded correlation
· Confirm enumeration and blob-read events can be attributed to the same principal with operational confidence
· Confirm blob-read count thresholds can be measured consistently within the retained timing window
· Confirm storage telemetry is normalized into one retained deployment schema before production use
Non-deployment guardrails
· Do NOT deploy this rule if:
o protected storage logging is absent or incomplete
o protected storage scope is not maintained
o high-volume legitimate export workflows cannot be allowlisted
o blob-read thresholds are not baseline-derived
o blob-read density thresholds cannot be enforced within bounded time windows
o storage telemetry has not been normalized into a deterministic deployment schema
DRI assessment
· Target DRI: up to 7.7 only when:
o protected storage logging is mature
o protected storage scope is accurate
o bulk export allowlisting is complete
o thresholds are baseline-derived
o storage schema is normalized and stable
· DRI degradation conditions:
o missing object-level logging
o incomplete allowlisting
o legitimate high-volume export overlap
o weak principal attribution
o weak blob-read density enforcement
o unstable storage-schema mapping
Detection logic
· when protected storage access telemetry for the same normalized principal identity shows:
o container enumeration activity
o against a customer-defined protected storage account or container
· and when subsequent protected storage access telemetry for that same normalized principal identity shows:
o repeated blob-read activity
o against the same protected storage scope
o at or above the customer-defined blob-read count threshold within the customer-defined enumeration-to-read time window
· and when the activity does not match approved backup, analytics, synchronization, inventory, or export allowlists
· then generate a medium-high confidence Azure collection alert for enumeration-to-bulk-blob-retrieval progression
Azure-native implementation layer
Implementation type
Azure Monitor / Log Analytics KQL query
Required data sources
· protected storage access log tables
Required field normalization
· normalized_principal
o preferred source order:
§ caller object ID
§ service principal identifier
§ mapped user identity if available
· storage_scope
o storage account and container pairing
Required schema control
· Before deployment, storage logs MUST be normalized into one retained schema model, such as:
o resource-specific storage tables
o or normalized AzureDiagnostics extraction layer
· The deployed rule MUST reference only the retained normalized schema
· Mixed-schema assumptions MUST NOT be used in production logic
KQL query
let EnumerationToReadWindow = 10m;
let BlobReadThreshold = toint(CUSTOM_BLOBREAD_COUNT_THRESHOLD);
let ProtectedStorage = datatable(storageAccount:string, container:string)
[
"CUSTOM_STORAGE_ACCOUNT_1","CUSTOM_CONTAINER_1",
"CUSTOM_STORAGE_ACCOUNT_2","CUSTOM_CONTAINER_2"
];
let AllowlistedPrincipals =
materialize(
_GetWatchlist('CUSTOM_ALLOWLISTED_AZURE_STORAGE_EXPORT_PRINCIPALS')
| project normalized_principal = tostring(SearchKey)
);
let Enumerations =
StorageBlobLogs
| where OperationName has_any ("ListBlobs", "ListContainers", "ListPaths")
| extend normalized_principal = tostring(CallerObjectId)
| where isnotempty(normalized_principal)
| join kind=inner ProtectedStorage on $left.AccountName == $right.storageAccount, $left.ContainerName == $right.container
| summarize first_enum_time = min(TimeGenerated) by normalized_principal, AccountName, ContainerName;
let BlobReads =
StorageBlobLogs
| where OperationName has_any ("GetBlob", "ReadFile")
| extend normalized_principal = tostring(CallerObjectId)
| where isnotempty(normalized_principal)
| join kind=inner ProtectedStorage on $left.AccountName == $right.storageAccount, $left.ContainerName == $right.container
| summarize blob_read_count = count(),
first_blob_read = min(TimeGenerated),
last_blob_read = max(TimeGenerated)
by normalized_principal, AccountName, ContainerName;
Enumerations
| where normalized_principal !in (AllowlistedPrincipals)
| join kind=inner BlobReads on normalized_principal, AccountName, ContainerName
| where blob_read_count >= BlobReadThreshold
| where first_blob_read >= first_enum_time and first_blob_read <= first_enum_time + EnumerationToReadWindow
| project normalized_principal, AccountName, ContainerName, first_enum_time, first_blob_read, last_blob_read, blob_read_count
Implementation notes
· Table and field names MUST be aligned to the customer’s retained normalized Azure logging schema
· CUSTOM_BLOBREAD_COUNT_THRESHOLD MUST be baseline-derived per storage sensitivity class or principal class
· Protected storage scope MUST be restricted to monitored sensitive containers
· If storage logging does not provide stable principal attribution, this rule MUST be withheld rather than approximated
Detection logic implementation notes
· This rule MUST remain a survivor rule and MUST NOT replace the primary identity-pivot-to-collection coverage anchor
· Thresholds MUST be customer-baselined before production deployment (a minimal baseline-derivation sketch follows after these notes)
· If protected storage coverage is incomplete, this rule MUST be withheld rather than approximated
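The following sketch illustrates one reasonable way a baseline-derived blob-read threshold could be computed per principal class or storage sensitivity class from historical read counts. The percentile, headroom multiplier, and sample values are illustrative assumptions, not prescribed parameters.
Illustrative Python sketch
# Minimal sketch of deriving a per-class bulk-read threshold from historical counts.
# The percentile, headroom multiplier, and sample data are illustrative assumptions.
import math

def percentile(sorted_values, p):
    """Nearest-rank percentile over a sorted list (p in 0..100)."""
    if not sorted_values:
        raise ValueError("no baseline observations")
    rank = max(1, math.ceil(p / 100 * len(sorted_values)))
    return sorted_values[rank - 1]

def baseline_threshold(daily_read_counts, p=99, headroom=1.5):
    """daily_read_counts: historical per-principal daily blob-read counts for one class."""
    base = percentile(sorted(daily_read_counts), p)
    return int(base * headroom)

# Example: historical counts for a hypothetical "interactive-user" principal class.
history = [3, 5, 2, 40, 7, 12, 4, 6, 9, 15]
print(baseline_threshold(history))   # threshold set above normally observed behavior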
System: GCP
Rule name
Privilege-Bearing IAM Change Followed by Protected Cloud Storage Collection
Rule objective
· Detect attacker-controlled identity pivot followed by collection-stage access to protected GCS objects
· Detect cloud-native progression from explicit privilege-bearing IAM change into high-volume object read behavior within bounded time windows
· Preserve high signal by tying deterministic identity-state change directly to attributable protected storage collection
Native format
GCP BigQuery SQL detection logic
Behavioral anchor
· Identity pivot
· Protected storage collection behavior
· Bulk object read activity by the same normalized principal
Detection strength conditions
· This rule is valid only when:
o Cloud Audit Logs are enabled for relevant IAM and storage activity
o data-access logging is enabled for protected buckets
o normalized principal identity can be maintained across IAM and storage telemetry
o approved automation, backup, analytics, synchronization, and export principals are allowlisted
o object-read thresholds are baseline-derived by principal class, bucket sensitivity class, or project role
o privilege-change logic is based on explicit customer-defined privilege-bearing IAM change events and not vague anomaly language
o retained GCP telemetry is normalized into a deterministic deployment schema before production use
Engineering Implementation Instructions
Customer data required
· Cloud Audit Logs coverage status for IAM and Storage
· Protected bucket log coverage for protected buckets
· Customer-normalized retained fields for:
o normalized_principal
o normalized_bucket_name
o normalized_method_name
o normalized_event_time
· Customer-defined privilege-bearing IAM change event set
· Allowlist for:
o backup principals
o synchronization principals
o analytics principals
o ETL or export principals
o sanctioned cross-project access patterns
· Baseline object-read thresholds by:
o principal class
o bucket sensitivity class
o project or bucket role
· Protected bucket inventory
Deployment preparation required
· Enable and validate data-access logging for protected buckets
· Normalize principal identity so privilege-change events can be tied to subsequent storage access
· Normalize bucket identity into one retained field before deployment
· Restrict detection to:
o protected buckets
o non-allowlisted principals
o explicit privilege-bearing IAM change events only
· Exclude:
o approved replication
o backup workflows
o ETL pipelines
o sanctioned export or synchronization activity
Field validation required
· Confirm retained normalized schema reliably captures:
o normalized principal identity
o explicit IAM binding or privilege-bearing change events
o normalized bucket identity
o object-read operations
o timestamps suitable for bounded correlation
· Confirm IAM state-change events and storage-access events can be tied to the same effective principal identity
· Confirm protected bucket scope aligns to enabled log coverage
· Confirm retained schema is stable across the deployed query pipeline
Non-deployment guardrails
· Do NOT deploy this rule if:
o protected bucket data-access logging is absent or incomplete
o principal normalization cannot be maintained across IAM and storage events
o allowlisted high-volume principals are not bounded
o object-read thresholds are not baseline-derived
o protected bucket scope is incomplete
o explicit privilege-bearing IAM change events are not defined and validated by the customer
o telemetry has not been normalized into a deterministic retained deployment schema
DRI assessment
· Target DRI: up to 7.9 only when:
o identity normalization is reliable
o protected bucket logging is complete
o high-volume legitimate principals are allowlisted
o thresholds are baseline-derived
o privilege-bearing IAM change logic is explicit and customer-validated
o retained schema is normalized and stable
· DRI degradation conditions:
o incomplete storage log coverage
o weak principal normalization
o immature allowlisting
o broad legitimate bulk-access overlap
o unstable retained schema mapping
o incomplete privilege-bearing IAM change coverage
Detection logic
· when retained GCP IAM telemetry for the same normalized principal shows a customer-defined privilege-bearing IAM change event
· and when subsequent protected Cloud Storage access telemetry for that same normalized principal shows:
o repeated object-read activity
o against customer-defined protected buckets
o at or above the customer-defined bulk object-read threshold
o within the customer-defined privilege-change-to-collection time window
· and when the principal is not present in approved backup, analytics, ETL, synchronization, export, or sanctioned cross-project allowlists
· then generate a high-confidence GCP collection alert for privilege-change-to-protected-storage progression
GCP-native implementation layer
Implementation type
BigQuery SQL query
Required data sources
· customer-retained normalized GCP IAM audit table
· customer-retained normalized GCP Cloud Storage data-access table
· customer allowlist reference table for approved principals
· customer protected-bucket reference table
· customer privilege-bearing IAM change reference table
Required field normalization
· normalized_principal
· normalized_bucket_name
· normalized_method_name
· normalized_event_time
Required schema control
· Before deployment, GCP audit logs MUST be normalized into retained tables with stable normalized fields
· The deployed rule MUST reference only retained normalized fields
· Raw source-field assumptions MUST NOT be used in production logic
BigQuery SQL query
WITH privilege_changes AS (
SELECT
normalized_principal,
MIN(normalized_event_time) AS pivot_time
FROM `CUSTOM_PROJECT.CUSTOM_DATASET.gcp_iam_normalized`
WHERE normalized_method_name IN (
SELECT normalized_method_name
FROM `CUSTOM_PROJECT.CUSTOM_DATASET.customer_privilege_bearing_iam_change_methods`
)
AND normalized_principal IS NOT NULL
GROUP BY normalized_principal
),
object_reads AS (
SELECT
normalized_principal,
normalized_bucket_name,
COUNT(*) AS object_read_count,
MIN(normalized_event_time) AS first_read_time,
MAX(normalized_event_time) AS last_read_time
FROM `CUSTOM_PROJECT.CUSTOM_DATASET.gcp_storage_access_normalized`
WHERE normalized_method_name IN ("storage.objects.get", "storage.objects.list")
AND normalized_bucket_name IN (
SELECT normalized_bucket_name
FROM `CUSTOM_PROJECT.CUSTOM_DATASET.customer_protected_buckets`
)
AND normalized_principal IS NOT NULL
GROUP BY normalized_principal, normalized_bucket_name
),
bulk_reads AS (
SELECT
normalized_principal,
MIN(first_read_time) AS first_collection_time,
SUM(object_read_count) AS total_object_read_count,
COUNT(DISTINCT normalized_bucket_name) AS distinct_bucket_count
FROM object_reads
GROUP BY normalized_principal
HAVING SUM(object_read_count) >= @customer_bulk_objectread_threshold
)
SELECT
p.normalized_principal,
p.pivot_time,
b.first_collection_time,
b.total_object_read_count,
b.distinct_bucket_count
FROM privilege_changes p
JOIN bulk_reads b
ON p.normalized_principal = b.normalized_principal
LEFT JOIN `CUSTOM_PROJECT.CUSTOM_DATASET.customer_allowlisted_gcp_principals` a
ON p.normalized_principal = a.normalized_principal
WHERE a.normalized_principal IS NULL
AND b.first_collection_time >= p.pivot_time
AND TIMESTAMP_DIFF(b.first_collection_time, p.pivot_time, SECOND) <= @customer_privchange_to_collection_window_seconds
Implementation notes
· Table names MUST be replaced with customer-deployed retained normalized table names
· customer_privilege_bearing_iam_change_methods MUST contain only customer-validated privilege-bearing IAM change events
· @customer_bulk_objectread_threshold MUST be baseline-derived and sensitivity-aware (see the parameter-binding sketch after these notes)
· Protected bucket scope MUST be limited to monitored sensitive buckets
· If IAM state-change events cannot be tied to protected storage access using a stable normalized principal, this rule MUST be withheld rather than approximated
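The named query parameters referenced in the notes above must be supplied at execution time. The following minimal sketch shows one way to bind them when the rule runs as a scheduled job using the google-cloud-bigquery client; the threshold and window values shown are placeholders, not recommendations.
Illustrative Python sketch
# Minimal sketch of binding the named parameters used by the detection query.
# Threshold and window values below are placeholders; derive them from customer baselines.
from google.cloud import bigquery

def run_detection(client: bigquery.Client, sql: str):
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter(
                "customer_bulk_objectread_threshold", "INT64", 500),               # placeholder
            bigquery.ScalarQueryParameter(
                "customer_privchange_to_collection_window_seconds", "INT64", 900),  # placeholder
        ]
    )
    # Returns the correlated privilege-change-to-collection rows for alerting.
    return list(client.query(sql, job_config=job_config).result())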
Detection logic implementation notes
· This rule MUST use normalized principal identity across both IAM and storage-access events
· This rule MUST rely only on explicit customer-defined privilege-bearing IAM change events
· This rule MUST NOT depend on vague new-principal anomaly logic
· If identity normalization cannot be maintained across privilege-change and storage events, this rule MUST be withheld rather than loosely deployed
Rule name
Bucket Enumeration Followed by High-Volume Object Read Collection
Rule objective
· Detect cloud-native collection behavior by identifying protected bucket enumeration followed by bulk object retrieval within bounded time windows
· Preserve meaningful GCP-native collection coverage even when identity-pivot signals are absent, weak, or not retained
· Detect collection progression with stronger specificity than raw object-read thresholds alone
Native format
GCP BigQuery SQL detection logic
Behavioral anchor
· Bucket enumeration
· Bulk object collection
· Collection-stage progression against protected storage scope
Detection strength conditions
· This rule is valid only when:
o protected bucket access logging is enabled
o principal identity is attributable across enumeration and object-read events
o protected bucket scope is defined
o legitimate enumeration and export workflows are allowlisted
o object-read density thresholds are defined as count within bounded time window, not vague bulk-access language alone
o retained storage telemetry is normalized into one deterministic deployment schema before production use
Engineering Implementation Instructions
Customer data required
· Protected bucket inventory
· Storage log coverage map
· Allowlist for:
o backup or export workflows
o analytics jobs
o sanctioned inventory or synchronization processes
· Baseline thresholds for:
o object-read count within time window
o object volume where available
o enumeration-to-read timing window
· Retained normalized fields for:
o normalized_principal
o normalized_bucket_name
o normalized_method_name
o normalized_event_time
Deployment preparation required
· Enable and validate protected bucket access logging
· Scope the rule to protected buckets only
· Define bounded timing windows from enumeration to retrieval
· Define object-read density thresholds as count within bounded time window
· Exclude:
o sanctioned bulk export workflows
o inventory jobs
o analytics and reporting jobs
o synchronization and backup tooling
Field validation required
· Confirm retained normalized schema reliably captures:
o enumeration operations
o object-read operations
o normalized principal identity
o normalized bucket identity
o timestamps suitable for bounded correlation
· Confirm enumeration and object-read events can be attributed to the same principal with operational confidence
· Confirm object-read count thresholds can be measured consistently within the retained timing window
· Confirm retained storage schema is stable across the deployed query pipeline
Non-deployment guardrails
· Do NOT deploy this rule if:
o protected bucket logging is absent or incomplete
o protected bucket scope is not maintained
o high-volume legitimate export workflows cannot be allowlisted
o object-read thresholds are not baseline-derived
o object-read density thresholds cannot be enforced within bounded time windows
o storage telemetry has not been normalized into a deterministic retained deployment schema
DRI assessment
· Target DRI: up to 7.7 only when:
o protected bucket logging is mature
o protected bucket scope is accurate
o bulk export allowlisting is complete
o thresholds are baseline-derived
o retained storage schema is normalized and stable
· DRI degradation conditions:
o missing object-level logging
o incomplete allowlisting
o legitimate high-volume export overlap
o weak principal attribution
o weak object-read density enforcement
o unstable retained storage-schema mapping
Detection logic
· when protected bucket access telemetry for the same normalized principal identity shows:
o bucket enumeration activity
o against a customer-defined protected bucket
· and when subsequent protected bucket access telemetry for that same normalized principal identity shows:
o repeated object-read activity
o against the same protected bucket
o at or above the customer-defined object-read count threshold within the customer-defined enumeration-to-read time window
· and when the activity does not match approved backup, analytics, synchronization, inventory, or export allowlists
· then generate a medium-high confidence GCP collection alert for enumeration-to-bulk-object-retrieval progression
GCP-native implementation layer
Implementation type
BigQuery SQL query
Required data sources
· customer-retained normalized GCP Cloud Storage data-access table
Required field normalization
· normalized_principal
· normalized_bucket_name
· normalized_method_name
· normalized_event_time
Required schema control
· Before deployment, GCP audit logs MUST be normalized into one retained schema model in BigQuery
· The deployed rule MUST reference only the retained normalized schema
· Mixed-schema assumptions MUST NOT be used in production logic
BigQuery SQL query
WITH enumerations AS (
SELECT
normalized_principal,
normalized_bucket_name,
MIN(normalized_event_time) AS first_enum_time
FROM `CUSTOM_PROJECT.CUSTOM_DATASET.gcp_storage_access_normalized`
WHERE normalized_method_name IN ("storage.objects.list", "storage.buckets.list")
AND normalized_bucket_name IN (
SELECT normalized_bucket_name
FROM `CUSTOM_PROJECT.CUSTOM_DATASET.customer_protected_buckets`
)
AND normalized_principal IS NOT NULL
GROUP BY normalized_principal, normalized_bucket_name
),
object_reads AS (
SELECT
normalized_principal,
normalized_bucket_name,
COUNT(*) AS object_read_count,
MIN(normalized_event_time) AS first_read_time,
MAX(normalized_event_time) AS last_read_time
FROM `CUSTOM_PROJECT.CUSTOM_DATASET.gcp_storage_access_normalized`
WHERE normalized_method_name = "storage.objects.get"
AND normalized_bucket_name IN (
SELECT normalized_bucket_name
FROM `CUSTOM_PROJECT.CUSTOM_DATASET.customer_protected_buckets`
)
AND normalized_principal IS NOT NULL
GROUP BY normalized_principal, normalized_bucket_name
)
SELECT
e.normalized_principal,
e.normalized_bucket_name,
e.first_enum_time,
r.first_read_time,
r.last_read_time,
r.object_read_count
FROM enumerations e
JOIN object_reads r
ON e.normalized_principal = r.normalized_principal
AND e.normalized_bucket_name = r.normalized_bucket_name
LEFT JOIN `CUSTOM_PROJECT.CUSTOM_DATASET.customer_allowlisted_gcp_storage_principals` a
ON e.normalized_principal = a.normalized_principal
WHERE a.normalized_principal IS NULL
AND r.object_read_count >= @customer_objectread_count_threshold
AND r.first_read_time >= e.first_enum_time
AND TIMESTAMP_DIFF(r.first_read_time, e.first_enum_time, SECOND) <= @customer_enum_to_read_window_seconds
Implementation notes
· Table names MUST be replaced with customer-deployed retained normalized table names
· @customer_objectread_count_threshold MUST be baseline-derived per bucket sensitivity class or principal class
· Protected bucket scope MUST be restricted to monitored sensitive buckets
· If storage logging does not provide stable principal attribution, this rule MUST be withheld rather than approximated
· Enumeration method mapping MUST be validated against the customer’s retained schema before production deployment (a minimal validation sketch follows these notes)
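The schema-validation requirement above can be exercised with a simple pre-deployment check over sampled rows from the retained normalized table. The sketch below is illustrative: the required fields mirror the normalization defined for this rule, while the completeness bar and expected enumeration methods are assumptions to be replaced with customer-validated values.
Illustrative Python sketch
# Minimal pre-deployment validation sketch for the retained normalized storage schema.
# The 99% completeness bar and expected enumeration methods are illustrative assumptions.
REQUIRED_FIELDS = ("normalized_principal", "normalized_bucket_name",
                   "normalized_method_name", "normalized_event_time")
EXPECTED_ENUM_METHODS = {"storage.objects.list", "storage.buckets.list"}  # assumption

def validate_retained_schema(sample_rows, min_completeness=0.99):
    """sample_rows: list of dicts sampled from the retained normalized table."""
    if not sample_rows:
        return ["retained table returned no sample rows"]
    problems = []
    for field in REQUIRED_FIELDS:
        populated = sum(1 for r in sample_rows if r.get(field) not in (None, ""))
        if populated / len(sample_rows) < min_completeness:
            problems.append(f"{field} populated in only {populated}/{len(sample_rows)} rows")
    observed_methods = {r.get("normalized_method_name") for r in sample_rows}
    if not EXPECTED_ENUM_METHODS & observed_methods:
        problems.append("no expected enumeration methods observed in sample")
    return problems  # deploy only if this list is empty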
Detection logic implementation notes
· This rule MUST remain a survivor rule and MUST NOT replace the primary identity-pivot-to-collection coverage anchor
· Thresholds MUST be customer-baselined before production deployment
· If protected bucket coverage is incomplete, this rule MUST be withheld rather than approximated
S26 — Threat-to-Rule Traceability Matrix
Purpose
Provides direct traceability between confirmed threat behaviors and surviving detection rules across all systems. Ensures detection coverage is explicit, non-inferred, and accurately reflects both strengths and gaps.
All behaviors in this section are derived from validated exploit execution flow and real-world attacker activity patterns observed in this campaign.
External Exploit Interaction Against Exposed Service
Description
Untrusted interaction with exposed service or application endpoint used to initiate exploit delivery
Detection Coverage:
Suricata
• Request Concentration Against Customer-Defined Exposed Services
AWS
• Conditional pre-execution detection via identity-linked service access telemetry
Azure
• Conditional pre-execution detection via control-plane or access telemetry
GCP
• Conditional pre-execution detection via audit-based service access visibility
Coverage Classification:
Strong (network-layer inspection)
Conditional (cloud-native pre-execution visibility)
Exploit Invocation Triggering Execution Path
Description
Execution of exploit mechanism initiating attacker-controlled logic
Detection Coverage:
Suricata
• Request Concentration Against Customer-Defined Exposed Services
Coverage Classification:
Strong
Service-Originated Execution Following Exploit
Description
Execution initiated by service or non-interactive parent process as a result of exploit success
Detection Coverage:
SentinelOne
• Service-Originated High-Risk Child Process Burst
Splunk
• Exploit Attempt Followed by Service-Originated High-Risk Execution and Data Aggregation
Elastic
• Repeated Service-Originated High-Risk Execution Followed by Data Aggregation
QRadar
• Exploit Attempt Followed by Repeated Service-Originated High-Risk Execution and Data Aggregation
Coverage Classification:
Strong
Non-User Execution Outside Approved Parent Baselines
Description
Execution originating from non-user context outside expected operational parent-process baseline
Detection Coverage:
SentinelOne
• High-Risk Execution From Non-User-Driven Parent Classes Outside Approved Baseline
Splunk
• Non-User-Driven High-Risk Execution Followed by Data Aggregation and Outbound Transfer Escalation
Elastic
• Repeated Service-Originated High-Risk Execution Followed by Data Aggregation
QRadar
• Exploit Attempt Followed by Repeated Service-Originated High-Risk Execution and Data Aggregation
Coverage Classification:
Strong
Payload Retrieval and Secondary Tooling Staging
Description
Retrieval and staging of additional payloads or tooling following initial execution
Detection Coverage:
SentinelOne
• Service-Originated High-Risk Child Process Burst
Splunk
• Non-User-Driven High-Risk Execution Followed by Data Aggregation and Outbound Transfer Escalation
Coverage Classification:
Partial
Data Access and Aggregation Activity
Description
Collection and aggregation of sensitive data following successful execution
Detection Coverage:
Splunk
• Exploit Attempt Followed by Service-Originated High-Risk Execution and Data Aggregation
Elastic
• Repeated Service-Originated High-Risk Execution Followed by Data Aggregation
QRadar
• Exploit Attempt Followed by Repeated Service-Originated High-Risk Execution and Data Aggregation
Coverage Classification:
Strong
Cloud Identity Pivot via Privilege or Role Change
Description
Privilege escalation or role assumption enabling expanded access to protected resources
Detection Coverage:
AWS
• AssumeRole or Explicitly New Principal Access Followed by Bulk S3 Object Collection
Azure
• Privilege Elevation Followed by Protected Blob Collection
GCP
• Privilege-Bearing IAM Change Followed by Protected Cloud Storage Collection
Coverage Classification:
Strong
Storage Resource Enumeration Prior to Collection
Description
Enumeration of storage resources to identify accessible data targets
Detection Coverage:
AWS
• S3 ListBucket Followed by High-Volume GetObject Collection
Azure
• Container Enumeration Followed by High-Volume Blob Read Collection
GCP
• Bucket Enumeration Followed by High-Volume Object Read Collection
Coverage Classification:
Partial
Bulk Data Collection from Storage Services
Description
High-volume retrieval of data from storage services following enumeration or privilege escalation
Detection Coverage:
AWS
• AssumeRole or Explicitly New Principal Access Followed by Bulk S3 Object Collection
Azure
• Privilege Elevation Followed by Protected Blob Collection
GCP
• Privilege-Bearing IAM Change Followed by Protected Cloud Storage Collection
Coverage Classification:
Strong
Enumeration Followed by Bulk Collection Sequence
Description
Sequential behavior of resource enumeration immediately followed by bulk data retrieval
Detection Coverage:
AWS
• S3 ListBucket Followed by High-Volume GetObject Collection
Azure
• Container Enumeration Followed by High-Volume Blob Read Collection
GCP
• Bucket Enumeration Followed by High-Volume Object Read Collection
Coverage Classification:
Strong (Azure, GCP)
Partial (AWS)
Outbound Data Transfer Following Collection
Description
Transfer of collected data outside the environment following aggregation
Detection Coverage:
Suricata
• Sustained Outbound Transfer Escalation From Controlled Egress Segments
Splunk
• Non-User-Driven High-Risk Execution Followed by Data Aggregation and Outbound Transfer Escalation
Elastic
• Data Aggregation Followed by Outbound Transfer Escalation
QRadar
• Data Aggregation Followed by Outbound Transfer Escalation
Coverage Classification:
Partial
Collection and Staging Tooling Artifacts
Description
Use of tooling for collection and staging, including file-based and memory-resident variants
Detection Coverage:
YARA
• Multi-Capability Collection-and-Staging Automation Artifact
YARA
• Memory-Resident Collection-and-Staging Orchestration Artifact
Coverage Classification:
Partial
Execution Without Observable Telemetry Artifacts
Description
Execution paths that do not generate observable network or process telemetry
Detection Coverage:
No surviving rules across systems
Coverage Classification:
Gap
S27 — Behavior & Log Artifacts
Purpose
Defines the observable telemetry artifacts associated with each confirmed threat behavior. Establishes the exact log-level evidence required to support S25 detections and S26 traceability. This section is strictly telemetry-grounded and does not include detection logic or strategy.
External Exploit Interaction Against Exposed Service
Description
Untrusted interaction with exposed service endpoint used to initiate exploit delivery
Log Artifacts:
Suricata
• HTTP request logs targeting exposed service endpoint
• URI access values associated with service interface
• Source IP addresses interacting with destination service
• Request counts over defined time intervals
AWS
• CloudTrail events showing API or service access activity
• Source identity and request origin fields
Azure
• Activity log entries for service access attempts
• Caller identity and operation fields
GCP
• Audit log entries for service access
• Principal identity and request metadata fields
Exploit Invocation Triggering Execution Path
Description
Execution of exploit mechanism through request-level invocation parameters
Log Artifacts:
Suricata
• Request payload content fields
• Parameter values included in request body or URI
• Protocol fields associated with request handling
Service-Originated Execution Following Exploit
Description
Execution initiated by service-level or non-interactive parent process following exploit success
Log Artifacts:
SentinelOne
• Process creation events
• Parent process name and process ID
• Child process name and process ID
Splunk
• Process execution logs
• Parent process fields
• Command-line fields
Elastic
• Process start events
• Parent-child process relationship fields
QRadar
• Process execution events
• Parent and child process attributes
Non-User Execution Outside Approved Parent Baselines
Description
Execution originating from non-user parent processes
Log Artifacts:
SentinelOne
• Parent process name fields
• Process classification fields
Splunk
• Parent process attributes
• Command-line execution fields
Elastic
• Process lineage fields
• Parent process identifiers
QRadar
• Correlated process execution attributes
Payload Retrieval and Secondary Tooling Staging
Description
Retrieval and staging of additional payloads following initial execution
Log Artifacts:
SentinelOne
• Process execution with network connection events
• Command-line fields referencing external resources
• File creation events
Splunk
• Network connection logs linked to process execution
• File creation or write events
Elastic
• Network activity associated with process execution
• File access and creation events
QRadar
• Correlated network and process activity events
Data Access and Aggregation Activity
Description
Collection and aggregation of data following successful execution
Log Artifacts:
Splunk
• File access logs
• Object read or access events
Elastic
• Data access events
• Repeated file or object read events
QRadar
• Correlated data access activity events
Cloud Identity Pivot via Privilege or Role Change
Description
Privilege escalation or role assumption enabling expanded access
Log Artifacts:
AWS
• AssumeRole events
• Role modification events
• Principal identity fields
Azure
• Role assignment events
• Permission change events
• Caller identity fields
GCP
• IAM policy change events
• Principal identity fields
Storage Resource Enumeration Prior to Collection
Description
Enumeration of storage resources to identify accessible data
Log Artifacts:
AWS
• S3 ListBucket events
• Bucket listing operations
Azure
• Container listing operations
• Metadata access events
GCP
• Bucket listing events
• Object listing operations
Bulk Data Collection from Storage Services
Description
High-volume retrieval of data from storage services
Log Artifacts:
AWS
• S3 GetObject events
• Object access event records
Azure
• Blob read events
• Object access records
GCP
• Object read events
• Storage access records
Enumeration Followed by Bulk Collection Sequence
Description
Sequential enumeration followed by data retrieval
Log Artifacts:
AWS
• ListBucket events followed by GetObject events
• Timestamp fields showing event order
Azure
• Container listing followed by blob read events
• Event timestamp correlation
GCP
• Bucket listing followed by object read events
• Event timestamp correlation
Outbound Data Transfer Following Collection
Description
Transfer of collected data outside the environment
Log Artifacts:
Suricata
• Outbound network flow records
• Destination IP and port fields
• Byte count fields
Splunk
• Network connection logs
• Data transfer metrics
Elastic
• Network flow events
• Outbound traffic records
QRadar
• Correlated outbound network activity events
Collection and Staging Tooling Artifacts
Description
Presence of tooling used for collection and staging
Log Artifacts:
YARA
• File scan match results
• Memory scan match results
Execution Without Observable Telemetry Artifacts
Description
Execution paths that do not generate observable logs
Log Artifacts:
No reliable telemetry artifacts available across systems
Figure 5
S28 — Detection Strategy and SOC Implementation Guidance
Purpose
Defines how S25 detections should be operationalized within a SOC environment. Focuses on deployment, tuning, triage, and response alignment without redefining detection logic.
Detection Strategy Overview
• Prioritize detections anchored to:
• service-originated execution
• identity pivot events
• storage collection activity
• Maintain strict allowlist control for:
• backup processes
• analytics workloads
• synchronization activity
• Preserve rule integrity by avoiding expansion beyond validated S25 detections
SOC Implementation Guidance
Alert Triage Approach
• Validate parent-child process relationships for service-originated execution alerts
• Confirm identity transition events prior to storage access activity
• Correlate storage access events with preceding identity or execution events
False Positive Reduction
• Maintain allowlists for:
• known service processes
• approved storage access identities
• scheduled data movement workflows
Escalation Criteria
• Identity pivot events followed by storage access
• Service-originated execution with subsequent data access
• Enumeration events followed by object retrieval
Response Actions
• Disable or restrict affected identities
• Terminate or isolate processes associated with execution events
• Review storage access activity for affected resources
• Audit related identity and access changes
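The escalation criteria above can be encoded directly once upstream correlation has tied identity, execution, enumeration, and storage-access context together. The following minimal sketch is illustrative only; the context keys are assumed names, not fields produced by any specific platform.
Illustrative Python sketch
# Minimal sketch of applying the escalation criteria above to correlated alert context.
# The context keys are illustrative; they assume upstream correlation has already tied
# identity, execution, enumeration, and storage-access events together.
def should_escalate(ctx: dict) -> bool:
    identity_pivot_then_storage = ctx.get("identity_pivot") and ctx.get("storage_access")
    service_exec_then_data = ctx.get("service_originated_execution") and ctx.get("data_access")
    enum_then_retrieval = ctx.get("enumeration") and ctx.get("object_retrieval")
    return bool(identity_pivot_then_storage or service_exec_then_data or enum_then_retrieval)

# Example (illustrative): an identity pivot followed by protected storage access escalates.
print(should_escalate({"identity_pivot": True, "storage_access": True}))  # True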
S30 — Intelligence Maturity Assessment
Purpose
Assesses the maturity and reliability of detection and intelligence alignment for this campaign.
Maturity Assessment
Detection Design
• Behavior-aligned detection coverage anchored to exploit execution and collection activity
Telemetry Alignment
• Dependent on:
• endpoint process telemetry
• cloud audit logging
• storage access logging
Operational Readiness
• Dependent on:
• allowlist definition and enforcement
• correlation capability across telemetry sources
• stable log normalization
Intelligence Confidence
• High confidence in:
• service-originated execution behavior
• identity pivot events
• storage collection activity
• Moderate confidence in:
• outbound transfer attribution
• initial exploit interaction outside network visibility
Maturity Conclusion
Detection and intelligence alignment is strongest where execution, identity, and storage telemetry are available and correlated.
Limitations are tied to telemetry availability rather than detection design.
S31 — Telemetry Dependencies
Purpose
Defines required telemetry presence, structure, attribution, and normalization necessary for S25 detection logic.
Endpoint Telemetry Dependencies
· Process creation telemetry must capture process identity, parent process identity, and execution timestamps with sufficient precision for short-window analysis
· Process lineage must be complete and preserved for all execution chains, including service-originated processes
· Command-line telemetry must be enabled, untruncated, and attributable to the executing process
· Telemetry must support grouping by host, process, and user identity
Network Telemetry Dependencies
· Inbound telemetry must capture source IP, destination service, and request frequency within bounded time windows
· Outbound telemetry must capture destination, protocol, session duration, and data transfer volume
· Source attribution must remain consistent across reverse proxies, load balancers, CDNs, and NAT boundaries
· Telemetry must allow attribution of activity to a stable source entity
Data Access Telemetry Dependencies
· File or object access telemetry must capture user or process attribution
· Telemetry must include measurable access volume such as bytes read or equivalent metric
· Data access events must be attributable to the same host, user, or process used in execution-stage telemetry
Cloud Telemetry Dependencies
Identity Telemetry
· Role assumption, privilege elevation, or IAM change events must be captured
· Principal identity must be consistently represented across identity and resource-access events
Storage Telemetry
· Object-level access events must be captured for protected storage resources
· Enumeration activity must be captured where supported by platform telemetry
· Storage access events must be attributable to the same principal identity used in identity telemetry
Telemetry Normalization Dependencies
· Host, asset, and principal identity must be consistently normalized across telemetry sources
· Process and parent process fields must be normalized for reliable matching
· Timestamp consistency must support bounded detection windows across telemetry sources
Baseline Dependencies
· Environment-specific baselines must exist for process execution frequency, service-originated child-process behavior, outbound transfer volume, and data access volume
· Baselines must be measurable and maintained per host role, service role, or identity class
· Detection thresholds must be derived from these baselines
Telemetry Availability Enforcement
· Absence of process lineage removes exploit-to-execution detection capability
· Absence of command-line telemetry reduces execution classification capability
· Absence of data access telemetry removes aggregation detection capability
· Absence of storage access telemetry removes cloud collection detection capability
S32 — Detection Limitations
Purpose
Defines conditions under which detection capability is reduced or removed due to telemetry constraints, execution characteristics, or adversary-controlled variation.
Execution Visibility Limitations
· Incomplete process lineage prevents attribution of execution origin
· Execution paths without child process creation prevent detection of service-originated execution patterns
· Absence of command-line telemetry prevents classification of execution behavior
Memory and Artifact Limitations
· Absence of memory telemetry prevents detection of fileless or in-memory execution
· Execution that does not generate process, file, or observable telemetry artifacts cannot be detected
Data Collection Limitations
· Absence of data access telemetry prevents detection of aggregation behavior
· High-volume legitimate data access reduces differentiation between normal and abnormal activity
Network Visibility Limitations
· Encrypted or uninspected traffic reduces visibility into inbound exploit-attempt activity
· Network intermediaries may alter or obscure source attribution
Cloud Detection Limitations
· Absence of storage access logging removes collection-stage detection capability
· Inconsistent principal identity representation prevents correlation between identity events and resource access
Behavioral and Evasion Limitations
· Reduced-frequency exploit attempts may not meet request concentration thresholds
· Low-frequency execution may not meet execution burst thresholds
· Use of native application functionality without observable execution artifacts reduces visibility
Operational Limitations
· Absence of baselines prevents threshold-based detection enforcement
· Incomplete allowlisting prevents stable deployment of detection logic
S33 — Defensive Control & Hardening Improvements
Purpose
Defines required improvements to restore or enable detection capability based on identified telemetry and detection limitations.
Process Lineage Enforcement
· Absence of process lineage removes exploit-to-execution detection capability
· Improvement requires complete collection and preservation of parent-child process relationships
Command-Line Telemetry Enforcement
· Absence of command-line telemetry reduces execution classification capability
· Improvement requires enabling and preserving command-line data for all process execution events
Non-User Execution Visibility
· Execution originating from non-user contexts without visibility reduces detection of post-exploitation behavior
· Improvement requires visibility into service, scheduled, and remote execution contexts
Data Access Telemetry Enablement
· Absence of data access telemetry removes aggregation detection capability
· Improvement requires enabling access logging with user or process attribution and measurable access volume
Network Attribution Preservation
· Loss of source attribution reduces reliability of exploit-attempt detection
· Improvement requires preservation of source identity across network infrastructure
Storage Access Logging Enablement
· Absence of storage access telemetry removes cloud collection detection capability
· Improvement requires enabling object-level access logging for protected storage
Identity Normalization Enforcement
· Inconsistent principal identity prevents correlation between identity and resource-access events
· Improvement requires normalization of identity across telemetry sources
Baseline Establishment
· Absence of baselines prevents threshold-based detection enforcement
· Improvement requires measurable and maintained baselines for execution, data access, and outbound activity
S34 — Defensive Control & Hardening Architecture
Figure 6
Purpose
Defines deterministic deployment requirements for telemetry collection and detection support.
Ingress Telemetry Collection
· Network telemetry must be collected at externally exposed service boundaries where inbound traffic enters controlled infrastructure
· Collection points must preserve source attribution prior to transformation by internal systems
Endpoint Telemetry Collection
· Endpoint agents must capture process creation, parent-child relationships, and command-line execution across all monitored systems
· Collection must include service processes and non-user-driven execution contexts
Data Access Telemetry Collection
· Data access telemetry must be enabled on systems storing sensitive or high-volume data
· Collection must attribute access events to process or user identity
Egress Telemetry Collection
· Outbound telemetry must be collected at controlled egress points where traffic exits the environment
· Collection must capture session duration and data transfer volume
Cloud Telemetry Collection
· Identity telemetry must be collected from cloud control-plane logging systems
· Storage telemetry must be collected from object-level access logging systems
· Identity and storage telemetry must share a normalized principal identity
Normalization and Correlation Layer
· Telemetry must be normalized into a consistent schema across all sources
· Entity mapping must support correlation across endpoint, network, and cloud telemetry
· Timestamp alignment must support bounded detection windows
S35 — Defensive Control Mapping
Purpose
Maps required telemetry and control capabilities to detection behavior coverage.
Exploit Attempt Behavior Support
· Requires inbound network telemetry capturing request frequency and source attribution
Exploit-to-Execution Behavior Support
· Requires process lineage telemetry capturing parent-child execution relationships
Execution Behavior Support
· Requires process creation telemetry and command-line visibility
Data Aggregation Behavior Support
· Requires data access telemetry with attribution and measurable access volume
Identity Pivot Behavior Support (Cloud)
· Requires identity telemetry capturing role assumption or privilege changes
· Requires normalized principal identity across telemetry sources
Storage Collection Behavior Support (Cloud)
· Requires object-level storage access telemetry and enumeration visibility
Outbound Transfer Behavior Support
· Requires outbound telemetry capturing session duration and data transfer volume
S36 — CyberDax Intelligence Maturity Assessment
Purpose
Evaluates detection capability based on telemetry availability, normalization, and deployment constraints.
Telemetry Availability Conditions
· Detection capability exists only where required telemetry sources are present and attributable
· Absence of required telemetry removes detection capability for the corresponding attack stage
Normalization Conditions
· Detection capability requiring cross-source correlation depends on consistent identity and asset normalization
· Inconsistent normalization reduces correlation reliability
Correlation Conditions
· Detection requiring multi-stage visibility depends on alignment of entity identifiers and timestamps across telemetry sources
· Misalignment reduces correlation feasibility
Baseline Conditions
· Detection logic using thresholds depends on existence of measurable baselines
· Absence of baselines prevents enforcement of threshold-based detection logic
Deployment Conditions
· Detection deployment depends on ability to enforce allowlisting and maintain stable telemetry collection
· Inability to enforce allowlisting reduces deployability
Maturity Conclusion
· Detection capability is constrained by telemetry availability, normalization consistency, and baseline presence
· Detection gaps are directly attributable to absence or degradation of these conditions
S37 — Strategic Defensive Improvements
Purpose
Defines prioritized improvements based on detection loss and telemetry gaps.
Restore Exploit-to-Execution Visibility
· Absence of process lineage removes detection of execution following exploit activity
· Improvement requires enforcement of process lineage telemetry
Restore Execution Classification Capability
· Absence of command-line telemetry reduces classification of execution behavior
· Improvement requires enabling command-line visibility
Restore Data Aggregation Detection
· Absence of data access telemetry removes visibility into collection-stage activity
· Improvement requires enabling attributed data access logging
Restore Identity-to-Collection Correlation (Cloud)
· Inconsistent principal identity prevents linking identity events to storage access
· Improvement requires normalization of identity across telemetry sources
Restore Exploit Attempt Attribution
· Loss of source attribution reduces reliability of inbound exploit detection
· Improvement requires preservation of source identity across network infrastructure
Enable Threshold-Based Detection
· Absence of baselines prevents enforcement of execution, data access, and outbound thresholds
· Improvement requires establishment and maintenance of measurable baselines
S38 — Attack Economics & Organizational Impact Model
Attacker Cost Structure
Initial cost:
· exploit development or acquisition
· infrastructure setup for scanning and delivery
Marginal cost per target:
· automated execution reduces per-target cost
· reuse of infrastructure across multiple targets
Scaling behavior:
· parallel exploitation across exposed systems
· cost remains stable while impact scales with number of targets
Defender Cost Structure
Impact cost:
· data exposure or loss
· operational disruption
· regulatory and legal impact
Response cost:
· investigation and triage
· containment and remediation
· system recovery and validation
Economic Amplifiers
Detection delay:
· increases number of affected systems
· increases response scope
Data aggregation:
· increases volume of exposed data
· increases regulatory impact
Execution spread:
· increases number of compromised systems
· increases operational disruption
Economic Constraints
Detection during execution:
· limits attacker progression
· reduces ability to scale
Detection during collection:
· limits data extraction
· reduces overall impact
Economic Model Outcome
· Low attacker cost combined with automation enables large-scale exploitation
· Organizational impact increases with detection delay and data exposure
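The outcome above can be made concrete with a toy model in which attacker cost grows slowly with target count while defender impact grows with the number of compromised systems and detection delay. All coefficients below are illustrative assumptions, not measured values.
Illustrative Python sketch
# Toy model of the cost asymmetry described above. All coefficients are illustrative
# assumptions, not measured values.
def attacker_cost(targets, fixed_cost=50_000, marginal_cost=5):
    # Exploit development plus near-flat automated per-target cost.
    return fixed_cost + marginal_cost * targets

def defender_impact(compromised, detection_delay_hours,
                    per_system_cost=20_000, exposure_rate=3_000):
    # Response and remediation scale with systems hit; exposure impact scales with delay.
    return compromised * per_system_cost + compromised * detection_delay_hours * exposure_rate

for delay in (1, 24, 72):
    print(delay, attacker_cost(1_000), defender_impact(50, delay))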
S39 — Economic Impact & Organizational Exposure
Figure 7
Exposure Factors
External exposure:
· number of internet-facing systems
· accessibility of exposed services
Detection timing:
· time from exploit attempt to detection
· time from execution to containment
Data sensitivity:
· presence of sensitive or regulated data
· volume of accessible data
Impact Conditions
Early detection:
· limits execution success
· reduces number of affected systems
Delayed detection:
· allows execution across multiple systems
· increases likelihood of data aggregation
No detection during collection:
· allows full data extraction
· increases impact severity
Organizational Impact Outcomes
Multi-system compromise:
· increased response scope
· increased remediation complexity
Data exposure:
· regulatory and legal consequences
· reputational impact
Operational disruption:
· system downtime
· degraded operations
Exposure Amplification Conditions
Absence of telemetry:
· prevents detection at key stages
Inconsistent normalization:
· prevents correlation across telemetry sources
Absence of baselines:
· prevents threshold-based detection
Most Likely Impact Alignment
· Detection during execution limits full-scale data extraction
· Partial visibility allows multi-system impact under defined detection delay conditions
S40 — References
Security Vendor Analysis
• https[:]//www.greynoise.io/resources/2025-mass-internet-exploitation-report
• https[:]//www.vulncheck.com/blog/opportunistic-exploitation
• https[:]//www.rapid7.com/research/attack-intelligence-report/
• https[:]//www.shadowserver.org/what-we-do/network-reporting/
• https[:]//unit42.paloaltonetworks.com/threat-research/
• https[:]//www.crowdstrike.com/global-threat-report/
• https[:]//www.fortinet.com/content/dam/fortinet/assets/threat-reports/threat-landscape-report-2025.pdf
Analytical Framework
• MITRE ATT&CK Framework — https[:]//attack.mitre.org