Rule-based detection has a ceiling

Most detection engineering follows a predictable cycle: analysts learn about techniques, create rules to identify them, deploy those rules, then repeat. This generates a collection of detection rules effective against recognised threats but ineffective against novel approaches.

The fundamental weakness in this model: adversaries need not create entirely new techniques; they simply operate in the spaces between existing rules. A compromised service account executing API calls that are individually authorised but collectively abnormal won’t activate a rule designed for specific attack signatures. Lateral movement mimicking legitimate administrative activity won’t generate alerts. Data theft occurring at rates indistinguishable from normal traffic patterns won’t breach thresholds.

The challenge isn’t poor rule writing. Rather, the rule-based methodology demands defenders predict adversary behaviour before it occurs — a competition defenders typically lose.

Alert fatigue intensifies this problem. Mature detection programs often execute hundreds or thousands of rules simultaneously. The resulting alert volume surpasses triage capacity, forcing analysts to make difficult choices: lowering rule sensitivity, deprioritising lower-severity notifications, or deciding based on intuition rather than evidence. Threats that escape detection are precisely those engineered to appear unremarkable.

Behavioural profiling is environment-specific

Behavioural detection inverts the traditional approach. Rather than identifying malicious activity patterns and searching for them, you establish baselines of normal activity and alert on deviations.

Though straightforward conceptually, this requires constructing detailed models of how your particular environment functions — not generic models derived from external datasets.

What constitutes normal? Context determines everything:

  • Identity behaviour — user authentication patterns, timing, geographic sources, system access targets. Role-based resource permissions and access frequency. Service account operation windows, invoked APIs, execution sequences.
  • Network and API patterns — typical traffic between services, expected API call characteristics, standard data transfer volumes. Baseline internal network activity, not merely perimeter traffic.
  • Workload behaviour — processes running on specific systems, expected resource consumption, standard deployment patterns. Container orchestration characteristics, serverless function frequency, task scheduling.
  • Data access patterns — data store user access, inter-system data movement volumes, typical database and object storage queries.

Environment-specific behavioural profiles work where static rules fail. An action that is normal in one organisation (developers extracting production database snapshots, say) is a critical anomaly in another. Static rules cannot capture this distinction; behavioural profiles can.

Baselines require ongoing adjustment. Environments evolve — staff expands, workloads migrate, fresh services launch. Effective behavioural modelling adapts to change rather than generating expanding false positive lists as environments shift.

How automated detection agents work

An automated detection agent is a continuously running system that observes environment telemetry, establishes and refreshes behavioural profiles, identifies anomalies relative to those profiles, and escalates findings that warrant investigation.

The agent maintains an iterative process:

Observe. The agent ingests telemetry from across the infrastructure: cloud provider logs, identity system events, network flow data, application logs, SaaS audit records. It standardises these into a uniform event structure regardless of origin.
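As an illustration of that normalisation step, here is a minimal sketch mapping a CloudTrail-style record into a common schema. The `Event` fields are assumptions about what a uniform structure might hold, not any particular product's format:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    # Minimal common schema every telemetry source maps into.
    timestamp: datetime
    identity: str
    action: str
    resource: str
    source_ip: str
    origin: str  # which telemetry source produced the event

def normalise_cloudtrail(raw: dict) -> Event:
    """Map a CloudTrail-style record into the common Event schema."""
    return Event(
        timestamp=datetime.fromisoformat(raw["eventTime"].replace("Z", "+00:00")),
        identity=raw["userIdentity"]["arn"],
        action=raw["eventName"],
        # requestParameters may be absent or null; fall back gracefully.
        resource=(raw.get("requestParameters") or {}).get("bucketName", "unknown"),
        source_ip=raw["sourceIPAddress"],
        origin="cloudtrail",
    )
```

A normaliser like this per source (identity provider, network flows, SaaS audit logs) lets every downstream stage reason over one event shape.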

Profile. From observed events, the agent establishes and continuously refreshes behavioural baselines. These profiles capture patterns across multiple dimensions: individual users, service accounts, workloads, network segments. The agent captures temporal patterns (a batch job running at 2 AM nightly), relational patterns (a user who typically accesses three specific systems), and volumetric patterns (an API endpoint handling 500 requests per hour during business hours).
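The profiling step can be sketched minimally as a per-identity structure tracking those three dimensions. The class shape and fields here are illustrative assumptions, not a specific product's schema:

```python
from collections import Counter, defaultdict

class IdentityProfile:
    """Rolling behavioural baseline for a single identity."""
    def __init__(self):
        self.actions = Counter()       # relational: which API calls, how often
        self.source_ips = set()        # where this identity's calls come from
        self.active_hours = Counter()  # temporal: when activity occurs

    def update(self, action: str, source_ip: str, hour: int) -> None:
        """Fold one observed event into the baseline."""
        self.actions[action] += 1
        self.source_ips.add(source_ip)
        self.active_hours[hour] += 1

# One profile per identity, created lazily as events arrive.
profiles = defaultdict(IdentityProfile)
```

A real agent would add decay so old behaviour ages out, but the idea is the same: the baseline is just accumulated structure over normalised events.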

Detect. The agent continuously evaluates live telemetry against established profiles. Deviations are scored on multiple components: magnitude of deviation from baseline, number of simultaneous profile violations, and sensitivity of the resources accessed. A single unusual API call scores differently from multiple anomalous behaviours from one identity within a brief timeframe.
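One way to sketch that compound scoring idea, where several small deviations add up. The 0.4/0.3 weights and the dict-of-sets baseline are illustrative assumptions, not a standard:

```python
def score_event(baseline: dict, action: str, source_ip: str, hour: int) -> float:
    """Compound anomaly score for one event against an identity's baseline.

    baseline holds observed-normal sets, e.g.
    {"actions": {...}, "ips": {...}, "hours": {...}}.
    """
    score = 0.0
    if action not in baseline["actions"]:
        score += 0.4  # never-before-seen API call for this identity
    if source_ip not in baseline["ips"]:
        score += 0.3  # unfamiliar source address
    if hour not in baseline["hours"]:
        score += 0.3  # outside this identity's normal operating hours
    return round(score, 2)
```

An event that is anomalous on all three axes at once scores 1.0, while a familiar call from a familiar place at a familiar time scores 0.0, which is the "multiple simultaneous violations matter more" intuition in miniature.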

Escalate. High-confidence findings are packaged with supporting context (the specific anomaly, the relevant baseline, the underlying events, correlated activity) for investigation. The agent gives analysts evidence that supports rapid decisions, rather than alerts requiring extensive manual investigation.

This isn't opaque. Each detection includes its reasoning: which baseline was violated, by how much, and what the expected behaviour was. Analysts validate findings, provide feedback, and the agent improves its accuracy accordingly.

What this catches that rules miss

Behavioural detection demonstrates greatest value where adversaries deliberately remain beneath rule-based detection thresholds.

Compromised service account

Service account credentials leak through misconfigured CI/CD systems. The attacker makes authorised API calls — bucket listing, instance description, parameter retrieval. Each individual action respects the account’s permissions.

Rule systems recognise authorised calls from valid identities and generate no alerts.

A behavioural agent observes that this account normally executes 12 specific calls in predictable deployment sequences. It’s now making broad reconnaissance queries — ListBuckets, DescribeInstances, GetParameter — outside normal windows, from previously unseen IP ranges. Timing, call pattern, and source anomalies trigger escalation.

Slow data exfiltration

An insider or compromised account gradually copies sensitive information externally — hundreds of megabytes daily, distributed across business hours, leveraging legitimate data store queries within authorised permissions.

Individual transfers don’t exceed established limits. Queries remain syntactically standard. Access respects permissions.

The behavioural agent tracks this user's data access history. Downloads from this source typically total around 50MB per week. Recent weeks show roughly 200MB per day (about 1.4GB weekly, a 28x jump), alongside broader queries than usual. The escalating volume and expanded query scope together trigger a compound anomaly flag.

Lateral movement via legitimate tools

An attacker achieves initial compromise on developer infrastructure, moving laterally through SSH, RDP, PowerShell remoting — common environmental tools. They access systems for which the compromised user maintains legitimate access.

Lateral movement detection usually targets specific tool usage or recognised frameworks. Native tool usage within authorised access patterns avoids triggering rules.

The behavioural agent knows this user typically connects to three specific systems during business hours. Current activity shows connections to unfamiliar systems, outside normal hours, displaying exploratory rather than task-focused patterns. Access scope breadth, connection timing, and systematic exploration sequence substantially deviate from established patterns.

Cloud privilege escalation

An attacker possessing limited credentials pursues privilege enhancement through IAM policy establishment, role assumption, or permission adjustment. They proceed incrementally — minor permission modifications appearing routine.

The behavioural agent maps IAM alteration patterns across the infrastructure. It identifies which principals typically modify policies, how often, and what kinds of changes they make. A developer account with no history of IAM changes that suddenly creates inline policies and modifies role permissions triggers immediate escalation, even though each individual IAM action is syntactically valid and within its permissions.

Why cloud environments are ideal for this approach

Cloud and SaaS systems generate organised, API-driven telemetry with volume and consistency particularly suited to behavioural profiling.

Every action in a cloud environment is an API call. Every API call logs the invoking identity, the targeted resource, the operation timestamp, the source IP, and the outcome. This structured, complete audit trail provides the foundation a behavioural agent needs for profiling.

On-premises infrastructure generates inconsistent logs across numerous systems with variable formatting, uneven coverage, and visibility gaps. Cloud platforms furnish unified, standardised event streams covering complete control planes.

Cloud environments feature elevated automation baselines. Service accounts, Lambda functions, Step Functions, scheduled operations — these automated workloads demonstrate highly predictable behavioural patterns where deviations become immediately obvious. A Lambda function executing every six hours invoking identical API sequences proves straightforward to profile. Any divergence from such patterns carries immediate significance.
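Profiling a strictly periodic workload like that Lambda function can be as simple as checking invocation gaps. A minimal sketch, assuming Unix-epoch timestamps and an illustrative 25% tolerance:

```python
def interval_anomaly(timestamps: list[float], expected_hours: float = 6.0,
                     tol: float = 0.25) -> bool:
    """Flag if any gap between invocations deviates from the expected period.

    timestamps are Unix-epoch seconds; tol is the allowed relative deviation.
    """
    gaps_hours = [(b - a) / 3600 for a, b in zip(timestamps, timestamps[1:])]
    return any(abs(g - expected_hours) / expected_hours > tol for g in gaps_hours)
```

For a human-driven identity this check would be hopelessly noisy, which is exactly the point: automation's predictability is what makes cloud workloads so profitable to profile.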

The complication: volume. A medium-sized AWS environment produces millions of CloudTrail events daily. A behavioural agent needs near-real-time processing, profile maintenance across thousands of identities and workloads, and anomaly detection that doesn't drown operators in false positives. This is why automation is necessary: no human team can sustain manual behavioural baseline management at these volumes.

Operationalising behavioural detection

Deploying behavioural detection agents doesn’t replace detection engineering — it amplifies it. The most effective detection operations merge both strategies:

Rules for known threats. Where specific attack patterns exist — particular exploits, recognised malware communications, documented techniques — detection rules provide appropriate solutions. They’re deterministic, efficient, and validation-friendly.

Agents for unknown and emergent threats. Where adversaries employ legitimate access and native tools within environments, behavioural detection represents the only scalable approach. Agents surface what rules cannot anticipate.

Humans for judgement. Automated agents flag anomalies. People determine whether those anomalies constitute threats. Agents surface findings with enough context for analysts to make rapid decisions. Fully automated response without human validation introduces its own risks: containment triggered by a false positive disrupts operations as much as responding to a real threat.

Behavioural detection agents operate continuously across infrastructure. Agents manage scale — profiling thousands of identities and workloads, processing millions of events, maintaining evolving baselines. Analyst teams manage judgement — validating escalations, conducting thorough investigation, directing response.

This structure provides machine-speed behavioural analysis coverage with human-validated response confidence. Agents surface the findings warranting investigation from millions observed. Analysts investigate those, acting on confirmed threats.

Evaluating MDR providers

Every MDR provider currently asserts AI or behavioural detection capabilities. Here’s how to assess them:

Inquire about training data. Providers that build environment-specific behavioural profiles practise actual behavioural detection. Applying generic models trained on aggregated customer data is sophisticated signature detection: it recognises known-bad patterns but misses threats unique to your environment.

Question baseline establishment periods. Meaningful behavioural profiles require two to four weeks of observation. Providers claiming immediate detection capability aren't doing behavioural profiling; they're executing rules and rebranding them as AI.

Explore drift handling methodology. Environments change constantly. A profile that was accurate six months ago generates noise if it hasn't adapted. Ask whether baselines update continuously, periodically, or only manually.
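Continuous baseline updating is commonly implemented with exponentially weighted moving averages, so that recent behaviour gradually displaces old behaviour instead of requiring a manual re-baseline. A minimal sketch; the smoothing factor is an illustrative choice:

```python
def ewma_update(baseline: float, observed: float, alpha: float = 0.1) -> float:
    """Fold one new observation into a volumetric baseline.

    alpha controls adaptation speed: higher values track change faster
    but are more easily skewed by a single anomalous observation.
    """
    return (1 - alpha) * baseline + alpha * observed
```

The design tension is visible in `alpha`: adapt too fast and a patient attacker can train the baseline upward; adapt too slowly and legitimate drift floods analysts with noise. That trade-off is what the "continuous versus periodic versus manual" question is really probing.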

Examine explainability. When systems identify anomalies, can they clarify reasoning through operational language? “Anomaly score 0.87” lacks utility. “This service account made 14 unprecedented API calls from a new source IP outside normal operating windows” provides utility. Unexplainable systems become distrusted black boxes that analysts learn to dismiss.

Assess false positive rates and feedback mechanisms. What escalation-to-confirmed-threat ratios exist? How do systems learn from analyst feedback? Systems lacking feedback mechanisms stagnate — perpetuating identical false positives indefinitely.

Where this is heading

Behavioural detection with automated agents isn’t forthcoming capability — it’s operational now. Organisations implementing this strategy develop detection capabilities that scale with environments, adapt to change, and catch threats that rule-based detection stacks never addressed.

Adversaries have always held the advantage of needing only a single gap. Behavioural detection changes that equation. Rather than defending every possible attack path with specific rules, you detect any deviation from expected behaviour, regardless of the technique employed.

For security leaders reviewing detection strategy: would your current capability catch an adversary moving through your environment with valid credentials, legitimate tools, and authorised access? If not, behavioural detection closes that gap.