01.03 Risk Scoring Methodologies

SimpleRisk ships six scoring methodologies — Classic, CVSS, DREAD, OWASP, Custom, and Contributing Risk — each suited to a different risk shape. This article explains what each is for and how to pick.

Why this matters

A risk register is a list of things that could go wrong. A useful risk register is a list of things that could go wrong, ordered by how worried you should be. The ordering is what scoring produces, and the methodology is what makes the ordering defensible. Without a method, every score is opinion; with one, every score is opinion plus a paper trail showing how the opinion was reached.

The trap is treating the methodology as the program. A program with six perfectly scored risks isn't more useful than one with sixty messily scored ones. The point of the score is to support a treatment decision. If the score is precise but nobody acts on it, the precision is wasted; if the score is rough but it correctly puts the right risks in the "do something this quarter" bucket, the program is working.

SimpleRisk supports six scoring methodologies because no single one fits every risk shape. A known software vulnerability scores naturally on CVSS; a "vendor goes out of business" risk doesn't. A web-application risk scores on OWASP; a strategic risk doesn't. Picking the methodology that matches the risk produces a useful score; forcing every risk through the same methodology produces a register where two-thirds of the rows are scored at "Medium / Medium" because the methodology doesn't have anything more useful to ask.

How frameworks describe this

Each of the six methodologies traces back to a specific framework or community.

  • Classic (Likelihood × Impact) is the qualitative ordinal scoring most security programs start with. The lineage is NIST SP 800-30 Guide for Conducting Risk Assessments, which defines a 5×5 matrix of likelihood (Very Low through Very High) and impact (Very Low through Very High), with the resulting risk level coming from a published lookup table (a small worked version of that lookup follows this list). Classic is what most non-quantitative GRC programs mean when they say "we score risks."
  • CVSS (Common Vulnerability Scoring System) is the FIRST.org community standard for scoring known software vulnerabilities. It produces a base score from 0 to 10 based on attack vector, attack complexity, privileges required, user interaction, scope, and impact to confidentiality/integrity/availability. CVSS is the score the National Vulnerability Database publishes for CVEs; if your risk is a known vulnerability, CVSS is usually the right tool because the upstream score is already published and defensible.
  • DREAD is a Microsoft-origin scoring methodology (Damage, Reproducibility, Exploitability, Affected users, Discoverability) from the early 2000s threat modeling work. DREAD has critics — the "discoverability" axis in particular is frequently dropped — but it's still in active use in some application-security communities.
  • OWASP Risk Rating is the OWASP Foundation's methodology for application-security risks, which scores likelihood and impact each as the average of multiple factors (threat agent skill, motive, opportunity, size, ease of discovery, ease of exploit, etc., on the likelihood side; loss of confidentiality/integrity/availability/accountability and financial/reputational/non-compliance/privacy damage on the impact side). It's the right tool when a risk is application-shaped and the team has the time to walk through the factor list.
  • Custom isn't a methodology in the framework sense — it's a single numeric score the program defines for itself. Custom is the right answer when an organization has its own scoring rubric (for example, a quantitative methodology like FAIR computed in a separate tool) and just needs SimpleRisk to hold the resulting number alongside the rest of the register.
  • Contributing Risk is SimpleRisk's own variant on the Classic approach, with the impact side broken into multiple "subjects" (categories like financial, operational, reputational, regulatory, etc.) that each carry their own impact score. The risk's overall impact is the maximum across the configured subjects. Useful when your program needs to track multi-dimensional impact without committing to a full quantitative analysis.
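
To make the Classic lookup concrete, here is a minimal sketch of a NIST-style 5×5 matrix in Python. The level names match the Classic bullet above; the specific cell assignments and the function name are illustrative, not SimpleRisk's seeded table.

```python
# Illustrative NIST SP 800-30-style 5x5 lookup: a qualitative likelihood and
# impact pair maps to an overall risk level via a published table. The cell
# values below are a typical assignment, not SimpleRisk's seeded matrix.
LEVELS = ["Very Low", "Low", "Medium", "High", "Very High"]

# RISK_MATRIX[likelihood_index][impact_index] -> risk level
RISK_MATRIX = [
    # impact:   VL          L           M          H          VH
    ["Very Low", "Very Low", "Low",     "Low",     "Medium"],     # likelihood VL
    ["Very Low", "Low",      "Low",     "Medium",  "Medium"],     # likelihood L
    ["Low",      "Low",      "Medium",  "Medium",  "High"],       # likelihood M
    ["Low",      "Medium",   "Medium",  "High",    "High"],       # likelihood H
    ["Medium",   "Medium",   "High",    "High",    "Very High"],  # likelihood VH
]

def classic_risk_level(likelihood: str, impact: str) -> str:
    """Look up the qualitative risk level for a likelihood/impact pair."""
    return RISK_MATRIX[LEVELS.index(likelihood)][LEVELS.index(impact)]

print(classic_risk_level("High", "Very High"))  # -> High
```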

Among the related, deeper methodologies, Factor Analysis of Information Risk (FAIR) is worth knowing about even though SimpleRisk doesn't ship a native FAIR scoring methodology. FAIR is a quantitative methodology that expresses risk as a probability distribution of dollar loss, and it's the right choice when leadership wants risk reported in monetary terms. Many SimpleRisk customers run FAIR in a separate Monte Carlo tool and use the Custom methodology to capture the resulting expected-loss number in the register.
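
To make that hand-off concrete, the sketch below shows the kind of Monte Carlo calculation a separate FAIR tool might run. The frequency and magnitude ranges are made-up inputs, and the resulting expected annual loss is what a program would then record against the risk via the Custom methodology (or band onto its 0–10 scale); none of this is computed by SimpleRisk itself.

```python
# A minimal, illustrative FAIR-style Monte Carlo: loss event frequency and
# per-event loss magnitude are sampled from hypothetical ranges, producing a
# distribution of annual loss. All inputs here are made up.
import random

def simulate_annual_loss(trials: int = 10_000) -> list[float]:
    losses = []
    for _ in range(trials):
        events = random.randint(0, 4)  # hypothetical loss event frequency for the year
        # Hypothetical per-event loss magnitude, lognormally distributed.
        loss = sum(random.lognormvariate(11, 0.8) for _ in range(events))
        losses.append(loss)
    return losses

losses = simulate_annual_loss()
expected_loss = sum(losses) / len(losses)
p90 = sorted(losses)[int(0.9 * len(losses))]
print(f"Expected annual loss: ${expected_loss:,.0f}; 90th percentile: ${p90:,.0f}")
```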

How SimpleRisk implements this

The methodology is selected per risk on the Risk Scoring Method dropdown of the Submit Risk form (see Submitting a Risk). The default methodology is configurable in admin settings; Classic is the seeded default and is the right starting point for most programs. Switching methodology on an existing risk is supported; the previous score is recorded in the audit trail before the new methodology's inputs replace it.

Each methodology has its own set of input fields:

  • Classic asks for Current Likelihood and Current Impact as Very Low / Low / Medium / High / Very High dropdowns. The mapping from the 5×5 matrix to a numeric score is published in the admin settings.
  • CVSS opens a calculator modal with the Base Metrics, Temporal Metrics, and Environmental Metrics fields from the official CVSS standard. The resulting Base Score (0.0–10.0) is what gets stored.
  • DREAD asks for the five DREAD factors (Damage, Reproducibility, Exploitability, Affected Users, Discoverability) on a 1–10 scale and computes the average.
  • OWASP asks for the eight likelihood factors and the eight impact factors on a 0–9 scale and computes the OWASP risk rating from them.
  • Custom takes a single numeric score from 0 to 10 with no further breakdown.
  • Contributing Risk asks for a likelihood score and per-subject impact scores; the overall impact is the maximum across subjects, and the score follows the same likelihood-times-impact pattern as Classic.
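
The arithmetic behind those inputs is simple enough to sketch. The snippet below mirrors the descriptions above (DREAD averages its five factors, OWASP averages its likelihood and impact factors separately, and Contributing Risk takes the maximum subject impact) as an illustration of the pattern, not SimpleRisk's exact formulas, weighting, or rounding.

```python
# Illustrative score arithmetic for three of the methodologies described in
# this list. Mirrors the descriptions in this article, not SimpleRisk's code.

def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    """DREAD: the average of five 1-10 factors."""
    return (damage + reproducibility + exploitability + affected_users + discoverability) / 5

def owasp_rating(likelihood_factors, impact_factors):
    """OWASP: likelihood and impact are each the average of their 0-9 factors."""
    likelihood = sum(likelihood_factors) / len(likelihood_factors)
    impact = sum(impact_factors) / len(impact_factors)
    return likelihood, impact

def contributing_risk_score(likelihood, subject_impacts):
    """Contributing Risk: overall impact is the maximum across subjects,
    then the same likelihood-times-impact pattern as Classic."""
    return likelihood * max(subject_impacts.values())

print(dread_score(8, 6, 7, 9, 5))                                        # 7.0
print(owasp_rating([5, 6, 4, 7, 8, 6, 3, 5], [7, 8, 6, 4, 9, 5, 6, 7]))  # (5.5, 6.5)
print(contributing_risk_score(4, {"financial": 5, "operational": 3, "reputational": 4}))  # 20
```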

Whichever methodology is in use, SimpleRisk produces both an inherent risk score (before mitigation) and a residual risk score (after the recorded mitigation reduces the impact by the mitigation percent). The thresholds for the default risk levels (Very Low / Low / Medium / High / Very High) are configurable in admin settings; the dashboards and the prioritization views read from the residual score by default, with the inherent score available for trend analysis.
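
As a rough sketch of that relationship (the exact formula depends on the methodology and the configured scale), the mitigation percent reduces the impact side and the residual score is recomputed from the reduced impact; for a multiplicative Classic-style score that works out the same as scaling the whole inherent score. The numbers below are illustrative.

```python
# Rough sketch of inherent vs. residual for a Classic-style score, following
# the description above: the mitigation percent reduces the impact, and the
# residual score is recomputed from it. Scale and numbers are illustrative.
def inherent_score(likelihood: float, impact: float) -> float:
    return likelihood * impact

def residual_score(likelihood: float, impact: float, mitigation_percent: float) -> float:
    reduced_impact = impact * (1 - mitigation_percent / 100)
    return likelihood * reduced_impact

print(inherent_score(4, 5))      # 20   inherent
print(residual_score(4, 5, 50))  # 10.0 residual after a 50% mitigation
```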

The thresholds and the matrix can be customized — the matrix in particular often needs adjustment for organizations whose impact scale doesn't fit a 5×5 grid (for example, because the highest plausible loss for a small organization is less than what the seeded matrix would call "Very High"). Both customizations live in the Admin Configuration → Risk Management settings.

Common pitfalls

A few patterns recur across customers when the choice of scoring methodology goes sideways.

  • Picking one methodology as the house standard for every risk. A program that mandates CVSS for everything will score most non-vulnerability risks at "Medium / Medium" because the CVSS axes don't have anything else useful to say about a "key person leaves" risk. The methodology should match the risk shape; choosing per risk costs a few seconds at submission and prevents meaningless scores from cluttering the register.

  • Scoring inherent and ignoring residual (or vice versa). Inherent risk tells you what the risk would look like with no controls. Residual risk tells you what it looks like with the controls you actually have. Both are useful for different conversations — inherent for "where would we be exposed if controls failed," residual for "where are we now, after the work we've done." Programs that report only one of them give leadership a partial picture.

  • Treating CVSS as a likelihood score. CVSS Base Score isn't a probability; it's a severity-of-the-vulnerability score that assumes the vulnerability is exploited. Translating "CVSS 9.8" to "very likely to happen" loses the meaning. CVSS measures how bad it would be if it happened; the likelihood of it actually being exploited in your environment is a separate analysis, which is why CVSS has Temporal and Environmental metric groups for adjusting the base score to context.

  • Calibrating once and never again. A 5×5 matrix calibrated when the register held twenty risks may not work when it holds two hundred. If the residual scores are clustering at one end of the matrix, the thresholds aren't separating the risks usefully — recalibrate in the admin settings rather than scoring everything against an outdated scale. Plan to revisit the calibration at least once a year.

  • Confusing scoring methodology with risk methodology. "We use NIST 800-30" is a sentence customers say a lot, and it doesn't pin down whether they mean the qualitative 5×5 matrix (which Classic implements), the broader risk-assessment process (which is most of the rest of the standard), or both. When a question is about how to assess a risk, the answer is usually about process, not about which scoring inputs to fill in.

  • Mixing methodologies in the same dashboard without normalizing. A register with fifty Classic-scored risks and ten CVSS-scored risks shown on the same prioritized list will rank the CVSS risks oddly because CVSS produces 0–10 scores with different distribution characteristics than a 5×5 matrix score. The dashboards do normalize to a common 0–10 scale internally, but if a particular report sums or averages raw scores across methodologies the result won't be meaningful. Filter to one methodology when running cross-risk comparisons.

  • Letting the methodology drive the program instead of the program driving the methodology choice. Picking CVSS because it's "more rigorous" without a vulnerability-management program to feed it produces the same outcome as Classic with extra steps. The methodology serves the work; pick the one that fits the work that's actually being done.

Related