05.01 What is Vulnerability Management?
Vulnerability management is the discipline of finding, prioritizing, and remediating the technical weaknesses that scanners surface continuously. Here's how it differs from risk management, what NIST 800-40 prescribes, and how SimpleRisk's Core and Extra split the work.
Why this matters
Vulnerability management is the operational discipline of running scanners, reading the output, deciding what to fix, fixing it, and verifying the fix held. The scanner-and-patch loop is what produces the recurring weekly or monthly stream of "your environment has 437 new findings this scan cycle, here's what they are." A program that's running this loop well has a working answer to "are we patching what matters?"; a program that isn't is finding out about the same vulnerabilities for the third time when the auditor asks why nothing's been done about them.
The trap is conflating vulnerability management with risk management. They're related but distinct. A vulnerability is a technical weakness (a missing patch, a misconfiguration, a deprecated cryptographic algorithm) that exists on a specific asset. A risk is a threat exploiting a vulnerability against an asset, with a likelihood and an impact attached. The same vulnerability on different assets has different risk profiles; the same risk can be reduced by addressing different vulnerabilities. Programs that try to manage every vulnerability as if it were its own risk drown in scanner output; programs that ignore vulnerabilities until they materialize get caught flat-footed.
The third reason this matters: vulnerability output scales differently than risk-register output. A typical mid-sized environment surfaces hundreds to thousands of vulnerabilities in any given scan cycle. The risk register, by contrast, holds dozens to a few hundred entries at the working-program level. The two have to coexist without one drowning the other. The standard pattern is "vulnerabilities live in the vulnerability tool until they get triaged into the risk register at the level where they need executive attention" — which is exactly the model SimpleRisk's Vulnerability Management Extra implements.
How frameworks describe this
The major frameworks each describe vulnerability management as a distinct capability area, with NIST publishing the most prescriptive guidance.
- NIST Special Publication 800-40 r4, Guide to Enterprise Patch Management Planning, is the federal reference for patch and vulnerability management. The current revision (2022) describes patch management as a four-phase preventive maintenance cycle: Prepare (asset inventory, software inventory, patch sources, prioritization scheme), Plan (testing, scheduling, exception handling), Implement (deployment), and Verify (confirm patches applied, scan to validate). The 800-40 emphasis: patching is a continuous program, not a series of campaigns.
- NIST Cybersecurity Framework (CSF) v2.0 addresses vulnerability management primarily through the Identify function (ID.RA-1: identifying vulnerabilities; ID.RA-6: prioritizing risks based on threats and likelihoods) and the Protect function (PR.PS-2: maintaining software inventories; PR.IR-3: implementing protective technology). CSF doesn't prescribe a workflow but expects the capability to exist.
- NIST SP 800-53 spreads vulnerability management across RA-5 (Vulnerability Monitoring and Scanning), SI-2 (Flaw Remediation), SI-3 (Malicious Code Protection), and CM-3 (Configuration Change Control). RA-5 is the headline control: scan systems for vulnerabilities, analyze the scan reports, remediate legitimate vulnerabilities, and share information across the organization. The federal context adds rigor (mandated scanning frequencies and response timeframes per severity level), but the underlying activities are universal.
- CIS Critical Security Controls v8 covers vulnerability management as Control 7 (Continuous Vulnerability Management) in Implementation Group 1 (the baseline expected of every organization). The CIS framing emphasizes "continuous" — not "annual scan, fix the findings, declare victory" but "scan continuously, prioritize continuously, remediate continuously."
- PCI DSS v4.0 addresses vulnerability management through Requirement 6.3 (vulnerability identification and remediation timelines for in-scope systems) and Requirement 11.3 (regular scanning, including ASV scanning quarterly for external-facing systems and internal scanning at organization-defined frequencies). PCI is the most prescriptive on timelines: critical vulnerabilities have a 30-day remediation window for in-scope systems, with shorter windows for the most severe findings.
The takeaway across all five: vulnerability management isn't a one-time activity. It's a continuous loop, the loop is auditable, and the audit will check both the finding side (you scan regularly with current tools) and the fixing side (you remediate within reasonable timelines per severity).
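Taken together, the frameworks' expectations reduce to a severity-aware remediation clock: rate each finding, start a timer, and track it against a per-severity window. A minimal sketch in Python; the specific windows below are illustrative assumptions for this example, not values any framework mandates, so substitute your own policy.

```python
from datetime import date, timedelta

# Illustrative severity-to-SLA table in the spirit of PCI DSS-style
# remediation windows. These durations are this example's assumptions,
# not prescribed values -- set them from your own policy.
REMEDIATION_SLA = {
    "critical": timedelta(days=15),
    "high": timedelta(days=30),
    "medium": timedelta(days=90),
    "low": timedelta(days=180),
}

def severity_from_cvss(score: float) -> str:
    """Map a CVSS v3 base score to its standard qualitative severity band."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

def remediation_due(found_on: date, cvss_score: float) -> date:
    """Date by which a finding must be remediated under the SLA table."""
    return found_on + REMEDIATION_SLA[severity_from_cvss(cvss_score)]
```

The point of the table is auditability: given a finding date and a score, the due date is deterministic, which is exactly what an auditor checking "do you remediate within reasonable timelines per severity" wants to see.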
How SimpleRisk implements this
SimpleRisk's vulnerability management story has two distinct chapters depending on whether the Vulnerability Management Extra is installed.
Without the Extra (Core SimpleRisk): there is no native vulnerability tracking. Customers running on Core typically handle vulnerabilities through one of two patterns:
- Treat vulnerabilities as risks directly. A scanner finding gets submitted as a risk in the standard risk register, scored on the CVSS scoring methodology (one of the six SimpleRisk methodologies — see Risk Scoring Methodologies), tagged with vulnerability for filtering, and worked through the standard risk-treatment workflow. This works at low volume (dozens of vulnerabilities); it doesn't scale to the hundreds-or-thousands volumes a typical scanner produces.
- Track vulnerabilities outside SimpleRisk. The scanner's own console tracks the per-finding detail; SimpleRisk holds only the risks that the team has explicitly promoted from the scanner output. This is the operational reality for many Core customers; the trade-off is that the promotion step is manual.
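The first pattern amounts to a field-mapping step before submission. A minimal sketch, assuming a simple dict representation of a scanner finding; the payload field names below are this example's conventions, not SimpleRisk's actual submission schema.

```python
# Hedged sketch of the "treat vulnerabilities as risks" pattern: shape a
# scanner finding into a risk-register entry. All field names here are
# illustrative assumptions, not SimpleRisk's real API or schema.
def finding_to_risk(finding: dict) -> dict:
    """Build a CVSS-scored, tagged risk submission from one scanner finding."""
    return {
        "subject": finding["title"],
        "assessment": finding["description"],
        "scoring_method": "CVSS",           # one of the six methodologies
        "cvss_vector": finding.get("cvss_vector", ""),
        "tags": ["vulnerability"],          # tag used for register filtering
        "affected_assets": finding.get("assets", []),
    }
```

The tag is what keeps the pattern workable at low volume: a single filter on vulnerability separates scanner-sourced risks from the rest of the register.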
With the Extra: the Vulnerability Management Extra bolts on a full vulnerability-management workflow. Activation adds a Vulnerabilities module under /vulnerabilities/ with a Triage Vulnerabilities queue (the unprocessed-finding inbox) and a Configure page for setting up scanner connectors. The Extra ships native integrations for five scanner platforms (Qualys, Rapid7 InsightVM Cloud, Rapid7 InsightVM On-Premise, Rapid7 Nexpose, and Tenable.io), pulling vulnerability data on a configurable cron schedule and writing it to platform-specific tables (the vulnmgmt_-prefixed tables and the related junction tables linking vulnerabilities to assets).
The triage flow is opinionated. Each scanner finding lands in the triage queue with its CVSS score, title, description, and the assets it was found on. A reviewer with the Approve Vulnerability Triage to Risk permission picks the findings worth promoting, clicks Approve Selected Vulnerabilities, and the Extra creates a corresponding risk in the standard risk register (using the vulnerability's title as the subject, its description as the assessment, and its CVSS score as the impact). Findings not worth promoting get Reject Selected Vulnerabilities instead, hiding them from the queue without deleting them. The full flow is in From Vulnerability to Risk.
The Extra also supports automatic triage by CVSS threshold (configuration setting extra_vulnmgmt_triage_vulnerabilities_by_score). With auto-triage enabled, vulnerabilities with a CVSS score above a configurable cutoff get automatically converted to risks during the scanner-update cycle, without human approval. This is the right setting for programs that want to capture every CVSS-9-or-above finding as a risk by default; it's the wrong setting for programs that want a human review on every promotion.
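The auto-triage rule reduces to a threshold split over incoming findings. A minimal sketch, assuming a list-of-dicts finding format and an inclusive cutoff; both are assumptions of this example, not the Extra's internals.

```python
# Hedged sketch of CVSS-threshold auto-triage (cf. the
# extra_vulnmgmt_triage_vulnerabilities_by_score setting). The 9.0 default
# and the inclusive >= comparison are this example's assumptions.
def auto_triage(findings: list, cutoff: float = 9.0):
    """Split findings into auto-promoted risks and a human-review queue."""
    promoted = [f for f in findings if f["cvss"] >= cutoff]
    review_queue = [f for f in findings if f["cvss"] < cutoff]
    return promoted, review_queue
```

The cutoff is the whole policy: everything at or above it becomes a risk with no human in the loop, so it should sit at the level where promotion would never be a judgment call.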
The relationship between the two layers: the Vulnerability Management Extra is the right tool when scanner volume is meaningful and the team needs scanner-integrated intake. Core's risk register is the right tool when scanner volume is low or when the program wants every vulnerability to flow through full risk-management discipline (review, scoring, mitigation planning, governance approval) rather than CVSS-driven auto-conversion.
Common pitfalls
A handful of patterns recur when teams operate vulnerability management.
- Scanning without a fix discipline. A program with quarterly scans producing thousands of findings and a remediation queue that never gets shorter is doing the finding half of the work and skipping the fixing half. The scan output is operationally useful only to the extent the team is closing findings on the cadence the scan rate produces. Match the remediation capacity to the scan rate; if you can't remediate at the rate scanners produce findings, the backlog grows monotonically and the program loses credibility.
- Treating every CVSS 7+ as critical. CVSS scores describe vulnerability severity in the abstract, not exposure in your specific environment. A CVSS 9.8 on a development server isolated behind a firewall isn't operationally a 9.8 in your context; a CVSS 6.4 on the production database with internet exposure may be considerably worse than its score suggests. Use CVSS as the starting point and adjust for environmental context. CVSS itself supports this through the Environmental metric group; SimpleRisk's CVSS calculator exposes those fields.
- Elevating every finding to a risk. A scanner producing 1,000 vulnerabilities per scan cycle, with each one promoted to a risk in the risk register, produces a risk register that's 1,000-deep within a quarter and unreadable. The triage step exists for a reason: most scanner findings are routine vulnerabilities the patch process will handle without leadership attention. Elevate to the risk register only the ones that genuinely need risk-management treatment (significant residual exposure, accepted-risk decisions, mitigation plans that require coordination across teams).
- Auto-triage threshold set too low. Configuring auto-triage at CVSS 6.0 (the typical "Medium" floor) produces a risk register dominated by routine findings the team would have rejected from the triage queue if they'd seen them. Set the auto-triage threshold high enough that the auto-promoted findings are ones you'd always promote anyway (typical: CVSS 9.0+ for the very-loud-and-very-bad findings); leave the rest for human review.
- Running vulnerability remediation on the same SLA as risk treatment. A vulnerability remediation cycle is fast (days to weeks per finding); a risk treatment cycle is slow (weeks to months per risk). Mixing the two SLAs in the same workflow either rushes the risk-management work into shortcuts or lets vulnerabilities sit unremediated waiting for governance ceremonies they don't actually need. Keep the two cadences separate; vulnerabilities flow through the patch process, risks flow through the risk-management process, and the promotion step is the bridge.
- No retest after remediation. A vulnerability marked "fixed" without a follow-up scan that confirms it's actually gone produces an evidence trail that doesn't hold up. The auditor will ask "how do you know the patch was applied?"; the answer needs to be "the next scan didn't show the finding," not "the team said they applied it." Build the rescan into the remediation workflow; the loop closes only when the scanner agrees the finding is gone.
- Ignoring the asset context. A vulnerability finding without a clear linked asset is much harder to act on (which server is "host-4523" again?). When the scanner ingestion populates the asset context, use it; when it doesn't, fix the asset-discovery side before the vulnerability side, because the asset is the unit the team will actually act on.
- Silently disabling scanner integrations when they break. Scanner API credentials expire; site definitions change; rate limits get hit. When the Extra's scanner sync starts failing, the symptom is "no new vulnerabilities are appearing" — which can look like good news to a casual reader. Monitor the sync; failed syncs are a kind of incident that hides itself if not surfaced.
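Two of the pitfalls above (no retest, remediation marked done without evidence) come down to comparing consecutive scan results. A minimal sketch, assuming findings are keyed by (host, vulnerability ID) pairs; that keying is this example's convention, not any scanner's export format.

```python
# Hedged sketch of closing the remediation loop: a finding counts as fixed
# only when the follow-up scan no longer reports it. Findings are modeled
# as sets of (host, vulnerability_id) tuples -- an illustrative convention.
def verify_remediation(previous_scan: set, current_scan: set):
    """Compare consecutive scans; only scanner-confirmed absences close."""
    confirmed_fixed = previous_scan - current_scan   # gone in the rescan
    still_open = previous_scan & current_scan        # reported again
    new_findings = current_scan - previous_scan      # surfaced since last scan
    return confirmed_fixed, still_open, new_findings
```

Anything in still_open that the team had marked "fixed" is exactly the evidence gap the auditor will find; anything promoted to the register should close only from confirmed_fixed.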