NIST CSF Detect Function

A complete guide to the NIST CSF Detect function — continuous monitoring, adverse event analysis, and detection processes that surface attacks in time to respond.

What is the NIST CSF Detect function?

The Detect (DE) function covers finding and analyzing possible cybersecurity attacks and compromises in time to act on them. Detect is where the cybersecurity program proves it can see what is actually happening. No preventive control is perfect, and the gap between compromise and detection — dwell time — is one of the most decisive variables in the final impact of an attack. An adversary detected within hours is an incident; an adversary detected after six months is a breach.

Detect sits between Protect (the preventive function) and Respond (the reactive function). Telemetry from the platforms, identities, data stores, and networks protected in the Protect function flows into Detect, where continuous monitoring and event analysis turn raw signals into actionable alerts. Those alerts become the inputs to Respond.

Detect is also the function most likely to be measured badly. A detection program that produces thousands of alerts that nobody reads is not detecting anything; it is generating noise. Mature NIST CSF Detect programs are judged by mean time to detect (MTTD), true-positive rate, and coverage against relevant threat scenarios — not by alert volume.

How Detect changed in NIST CSF 2.0

NIST CSF 1.1 split the Detect function into three categories: Anomalies and Events (DE.AE), Security Continuous Monitoring (DE.CM), and Detection Processes (DE.DP). NIST CSF 2.0 consolidated these into two:

Category               | ID    | Focus
Continuous Monitoring  | DE.CM | Monitoring of networks, physical environments, personnel activity, and third parties
Adverse Event Analysis | DE.AE | Analysis of anomalies, correlation across sources, and characterization of events

The old Detection Processes category (DE.DP) was partially folded into DE.AE and partially moved into the Govern function's oversight and improvement outcomes. The net effect is a cleaner distinction: DE.CM is the telemetry layer, DE.AE is the analysis layer, and governance of the detection program itself is handled through Govern.

Continuous Monitoring (DE.CM)

DE.CM covers the collection of telemetry and the continuous monitoring of the environment for cybersecurity-relevant signals. This includes monitoring of networks, endpoints, cloud services, applications, identities, physical environments, personnel activity, and third-party connections. DE.CM outcomes are usually measured in coverage: what percentage of the environment is visible, which assets or tiers of assets are blind spots, and whether critical logs are being retained for long enough to support Respond and Recover.

A healthy DE.CM program integrates logs from:

  • Endpoints — EDR agents across workstations, servers, and mobile devices.
  • Identity providers — authentication logs, privileged access, federation, and token issuance events.
  • Cloud providers — control-plane audit logs, data-plane access logs, and configuration change logs.
  • Network — flow data, DNS logs, and network detection and response (NDR) sensors on segments where they are warranted.
  • Applications — application-layer logs for critical business systems.
  • Third parties — logs from managed service providers, SaaS vendors, and partners with privileged access.
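Integrating sources like these usually starts with normalizing each feed onto a common event schema so that DE.AE analysis can correlate across them. The sketch below illustrates the idea in Python; the `Event` fields and the identity-provider record layout are assumptions for illustration, not any specific SIEM's data model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Event:
    """Hypothetical common event schema shared by all telemetry sources."""
    timestamp: datetime  # normalized to UTC
    source: str          # e.g. "edr", "idp", "cloud_audit", "ndr"
    asset: str           # host, identity, or resource identifier
    action: str          # normalized verb, e.g. "login", "process_start"
    raw: dict            # original record, preserved for investigation

def normalize_idp_login(record: dict) -> Event:
    """Map a made-up identity-provider login record onto the schema."""
    return Event(
        timestamp=datetime.fromisoformat(record["time"]).astimezone(timezone.utc),
        source="idp",
        asset=record["user"],
        action="login",
        raw=record,
    )

sample = {"time": "2024-05-01T12:00:00+00:00",
          "user": "alice@example.com", "ip": "203.0.113.7"}
event = normalize_idp_login(sample)
print(event.source, event.asset, event.action)
```

One normalizer per source keeps the correlation logic in DE.AE independent of any vendor's log format.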

Adverse Event Analysis (DE.AE)

DE.AE takes the raw signals collected by DE.CM and turns them into characterized events. Analysts triage anomalies, correlate across sources, determine the scope and potential impact, and decide whether an event warrants escalation to the Respond function. DE.AE is where the real expertise lives. Signatures catch known-bad behavior; DE.AE analysis catches the variants, the novel techniques, and the low-and-slow activity that evades pure-signature detection.

Mature DE.AE practices include:

  • Threat-informed detection engineering — mapping detection coverage to a threat model such as MITRE ATT&CK.
  • Purple-team exercises that test whether detections actually fire against realistic attack scenarios.
  • Documented triage runbooks that produce consistent decisions regardless of which analyst is on shift.
  • Feedback loops from Respond back to DE.AE — every incident becomes an opportunity to improve future detection.
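Mapping detections to a threat model can be as simple as tagging each detection with the ATT&CK techniques it covers and diffing against the techniques the threat model requires. A minimal sketch, assuming made-up detection names (the technique IDs are real ATT&CK identifiers):

```python
# Each detection is tagged with the MITRE ATT&CK techniques it covers.
detections = {
    "new-admin-created":    {"T1098"},  # Account Manipulation
    "impossible-travel":    {"T1078"},  # Valid Accounts
    "mass-file-encryption": {"T1486"},  # Data Encrypted for Impact
}

# Techniques the threat model says must be covered.
threat_model = {"T1078", "T1098", "T1486", "T1567"}  # T1567: Exfiltration Over Web Service

covered = set().union(*detections.values())
gaps = sorted(threat_model - covered)  # each gap is a detection-engineering backlog item
print(f"coverage: {len(covered & threat_model)}/{len(threat_model)}")
print("gaps:", gaps)
```

In practice the same mapping drives purple-team planning: each covered technique is a candidate for a test that proves the detection actually fires.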

Implementation guidance

A pragmatic sequence for standing up the Detect function:

  1. Decide what must be detected. Start from the prioritized risk register in the Identify function. Pick the top threat scenarios that matter most to the business — ransomware on critical systems, credential theft of privileged identities, exfiltration of regulated data — and design detection coverage to meet them.
  2. Centralize logs. Choose a SIEM, a log analytics platform, or a managed detection service. What matters is that logs from endpoints, identities, and cloud control planes are collected, retained for a defined period, and searchable.
  3. Start with high-fidelity detections. Identity-centric detections (impossible travel, MFA bypass, new admin creation, token theft indicators) and EDR-based detections tend to produce the highest signal-to-noise ratios. Expand from there.
  4. Write and test runbooks. Every detection should have a runbook that tells an analyst how to triage it. Runbooks should be living documents updated after every incident.
  5. Tune continuously. Alert fatigue kills detection programs. Measure false-positive rates and either tune, suppress, or remove noisy detections.
  6. Measure coverage against a framework. Use MITRE ATT&CK or a similar model to track detection coverage over time. Coverage gaps become initiatives in the NIST CSF roadmap.
  7. Feed improvements back to Govern and Identify. Detection findings often change the risk picture; that information belongs in the risk register and in leadership reporting.
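As an example of the high-fidelity identity detections recommended in step 3, an impossible-travel check flags consecutive logins whose implied travel speed is physically implausible. This is a minimal sketch; the 900 km/h threshold and the login record fields are assumptions, and production implementations also account for VPNs, shared egress IPs, and geolocation error.

```python
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

MAX_PLAUSIBLE_KMH = 900  # roughly commercial-flight speed; threshold is an assumption

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(login_a, login_b):
    """Flag consecutive logins whose implied speed exceeds the threshold."""
    hours = abs((login_b["time"] - login_a["time"]).total_seconds()) / 3600
    km = haversine_km(login_a["lat"], login_a["lon"], login_b["lat"], login_b["lon"])
    return hours > 0 and km / hours > MAX_PLAUSIBLE_KMH

a = {"time": datetime(2024, 5, 1, 12, 0), "lat": 40.71, "lon": -74.01}  # New York
b = {"time": datetime(2024, 5, 1, 13, 0), "lat": 51.51, "lon": -0.13}   # London, 1h later
print(impossible_travel(a, b))  # the implied ~5,500 km/h trip is flagged
```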

Common challenges

Detect programs commonly hit these walls:

  • Tooling without tuning. A SIEM deployed and left on defaults produces a flood of low-value alerts. Investment in detection engineering is non-negotiable.
  • Coverage illusions. Dashboards that count log sources ingested rather than relevant telemetry collected can create a false sense of coverage. Measure coverage against real threat scenarios, not against log volume.
  • Logs that cannot be searched quickly. Detection value evaporates if analysts cannot query logs in seconds. Storage architecture and retention policies matter as much as collection.
  • Alert fatigue. Analysts triaging hundreds of alerts per shift will miss the important ones. Suppress noise aggressively and treat alert volume as a defect metric, not a success metric.
  • No purple-teaming. Detections that have never been tested against realistic attack simulations often fail silently when a real attack occurs. Regular purple-team exercises validate that the detections actually work.
  • Unclear escalation criteria. Analysts need a clear rule for when an adverse event becomes an incident and handoff to the Respond function begins. Ambiguity here costs minutes that matter.
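Treating alert volume as a defect metric can be made concrete with a per-detection true-positive rate and a tuning threshold. A sketch with made-up counts; the 10% floor is an assumption that each program should set for itself:

```python
# Alert and true-positive counts per detection over a review window (illustrative).
alert_stats = {
    "impossible-travel": {"alerts": 40,  "true_positives": 12},
    "dns-tunneling":     {"alerts": 900, "true_positives": 3},
    "new-admin-created": {"alerts": 15,  "true_positives": 9},
}

TP_RATE_FLOOR = 0.10  # detections below this rate are tuning candidates

for name, s in alert_stats.items():
    tp_rate = s["true_positives"] / s["alerts"]
    verdict = "keep" if tp_rate >= TP_RATE_FLOOR else "tune or suppress"
    print(f"{name}: {tp_rate:.1%} -> {verdict}")
```

Reviewing this table on a regular cadence turns "tune continuously" from an aspiration into a scheduled activity.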

Measuring Detect outcomes

Mean time to detect (MTTD) is the headline metric for the NIST CSF Detect function, but MTTD alone can be misleading. A Detect program with excellent MTTD for commodity malware but no visibility into identity-based attacks is not actually strong. Mature Detect programs report a small portfolio of metrics: MTTD by scenario class, true-positive rate per detection, alert-to-escalation time, coverage of the MITRE ATT&CK tactics most relevant to the threat model, and percentage of incidents first detected by internal telemetry rather than by a third party or an affected customer. That last metric — internal-first detection rate — is often the most honest measure of Detect maturity.
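Two of these metrics — MTTD by scenario class and the internal-first detection rate — fall directly out of incident records. A minimal sketch, assuming a made-up incident record layout:

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

# Illustrative incident records; the fields are assumptions, not a standard schema.
incidents = [
    {"scenario": "ransomware", "compromise": datetime(2024, 4, 1, 2, 0),
     "detected": datetime(2024, 4, 1, 5, 0), "detected_by": "internal"},
    {"scenario": "credential-theft", "compromise": datetime(2024, 4, 3, 9, 0),
     "detected": datetime(2024, 4, 3, 10, 30), "detected_by": "internal"},
    {"scenario": "ransomware", "compromise": datetime(2024, 4, 10, 0, 0),
     "detected": datetime(2024, 4, 11, 0, 0), "detected_by": "third_party"},
]

# Group detection lag (hours) by scenario class, then average.
by_scenario = defaultdict(list)
for i in incidents:
    hours = (i["detected"] - i["compromise"]).total_seconds() / 3600
    by_scenario[i["scenario"]].append(hours)

mttd = {s: mean(h) for s, h in by_scenario.items()}
internal_first = sum(i["detected_by"] == "internal" for i in incidents) / len(incidents)

print({s: f"{h:.1f}h" for s, h in mttd.items()})
print(f"internal-first detection rate: {internal_first:.0%}")
```

Reporting MTTD per scenario rather than as a single number surfaces exactly the blind spot the paragraph above describes: strong numbers for commodity malware can hide weak numbers for identity-based attacks.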

Detect also benefits from ongoing threat intelligence integration. Intelligence about current adversary behavior, sector-specific threats, and software supply chain compromises should flow into the detection engineering backlog and update existing detections. Without this feedback loop, DE.CM coverage and DE.AE analytics slowly drift behind what attackers are actually doing.

How episki helps

episki connects directly to your identity provider, EDR, cloud accounts, and SIEM to measure DE.CM coverage and DE.AE performance as living metrics. Coverage gaps against the risk scenarios that matter most to the business become tracked initiatives with owners and due dates. Detection engineering improvements captured in one place are automatically reflected in the NIST CSF profile and in the corresponding SOC 2, ISO 27001, HIPAA, and PCI DSS controls. Leadership sees mean time to detect trending down quarter over quarter; practitioners see the concrete work that made it happen.

Ready to turn the NIST CSF Detect function into live, measurable operations? Start a trial or book a demo.
