06.01 Default Risk Scoring Method
SimpleRisk ships six scoring methods — Classic, CVSS, DREAD, OWASP, Custom (Customization Extra), and Contributing Risk. Pick one as the system default for new submissions, allow per-submission overrides, and recompute existing risks when the methodology shifts. The default is set at Configure → Settings → Risk Formula.
Why this matters
The scoring method is how SimpleRisk turns the qualitative judgments a submitter makes (impact and likelihood) into a quantitative score that powers everything downstream — the dashboards, the SLA cadence, the heat map, the executive readouts. Pick the wrong method and the program either over-emphasizes impact (every "could be bad" risk lands as Critical) or under-emphasizes it (only certain catastrophes ever break out of Medium). Pick the right method and the scores correlate with the program's intuition about which risks actually matter.
The honest scope to know up front: all six built-in methods are simple arithmetic over impact and likelihood, with different weightings. They aren't full implementations of CVSS, DREAD, or OWASP as defined by their respective standards bodies; they're scoring formulas inspired by those frameworks' weighting philosophies. CVSS practitioners expecting to enter Attack Vector / Attack Complexity / Privileges Required / etc. will find SimpleRisk's CVSS method scores from impact × likelihood, not from the actual CVSS metric vector. If you need true CVSS vector scoring, the Custom Risk Model (Customization Extra; see Custom Risk Scoring) is the closer fit, since you can define an arbitrary matrix that approximates CVSS-style weights.
The other thing worth knowing: the default is per-submission-overridable, not enforced. The system default applies to new submissions where the submitter doesn't pick a different method; submitters can choose any of the six on the form. Programs that want one consistent methodology should communicate the expectation; the system doesn't gate it.
The third thing: changing the system default doesn't recompute existing risks. Each risk records the methodology it was scored with; the stored calculated_risk reflects that methodology. Switching the default applies forward to new submissions; to bring existing risks onto the new methodology, run recalculate_all_risk_scores() (the Recalculate Risk Scores admin action). This is a deliberate design choice (historical scores aren't silently re-shaped by a configuration change), but it means a methodology change is two operations, not one.
Before you start
Have these in hand:
- Admin access to Configure → Settings → Risk Formula (or equivalent path that opens simplerisk/admin/configure_risk_formula.php).
- Awareness of which methodology your program will standardize on. Don't switch methodologies in flight without a plan; pick one for an extended period (a year or more) and let the program's intuition stabilize against it.
- Understanding that the methodologies are simple formulas, not the named external frameworks. Don't assume "we use the OWASP methodology" satisfies a requirement to use the actual OWASP Risk Rating Methodology.
- A test environment if you're switching from one methodology to another. The new methodology may produce visibly different scores; verify in test before flipping production.
Step-by-step
1. Understand what each built-in method weighs
The six methods, with their numeric IDs and weighting characteristics:

| ID | Method | Formula (max-score basis) | Weighting characteristic |
|----|--------|---------------------------|--------------------------|
| 1 | Classic | (L × I) + (2 × I) | Heavily impact-weighted — high impact dominates the score even with low likelihood. |
| 2 | CVSS | (L × I) + I | Moderately impact-weighted. |
| 3 | DREAD | L × I | Pure multiplicative — impact and likelihood weighted equally. |
| 4 | OWASP | (L × I) + L | Moderately likelihood-weighted. |
| 5 | Custom | Per-cell matrix lookup | Whatever you define; no formula. Requires the Customization Extra. |
| 6 | Contributing Risk | (L × I) + (2 × L) | Heavily likelihood-weighted — high likelihood dominates even with low impact. |

(L = likelihood, I = impact. The formulas listed are the maximum-score formulas; the per-risk score is computed the same way using that risk's specific impact and likelihood values.)
The score is then normalized to a 0–10 display range when `need_risk_score_normalization` is true (the default).
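To make the weighting differences concrete, here is a minimal sketch in Python. It renders the formula table above, not SimpleRisk's actual PHP implementation (`calculate_risk()` and `calculate_maximum_risk_score()` in `simplerisk/includes/functions.php`); the 1–5 impact/likelihood scale and the divide-by-maximum normalization are illustrative assumptions.

```python
# Illustrative sketch of the built-in formulas; not SimpleRisk's code.
# Assumes impact and likelihood on a 1..5 scale and normalization by the
# method's maximum score (one plausible reading of
# need_risk_score_normalization). Method 5 (Custom) is a matrix lookup,
# so it has no formula here.
MAX_L, MAX_I = 5, 5

FORMULAS = {
    1: ("Classic",           lambda L, I: (L * I) + (2 * I)),
    2: ("CVSS",              lambda L, I: (L * I) + I),
    3: ("DREAD",             lambda L, I: L * I),
    4: ("OWASP",             lambda L, I: (L * I) + L),
    6: ("Contributing Risk", lambda L, I: (L * I) + (2 * L)),
}

def normalized_score(method_id, L, I):
    """Scale the raw formula value into the 0-10 display range."""
    name, formula = FORMULAS[method_id]
    return name, 10 * formula(L, I) / formula(MAX_L, MAX_I)

# High impact (5), low likelihood (2): the impact-weighted methods score highest.
for method_id in FORMULAS:
    name, score = normalized_score(method_id, L=2, I=5)
    print(f"{name:<17} {score:.1f}")
```

For that (L=2, I=5) pair, Classic lands at 5.7, CVSS at 5.0, and DREAD, OWASP, and Contributing Risk all at 4.0; that spread is the whole substance of the choice in step 2.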
2. Pick the methodology that matches your program's posture
Some guidance:
- Risk-averse programs / regulator-facing posture — Classic. The 2× impact weighting means even moderately likely high-impact risks land in the Very High band. Appropriate when "we cannot afford a high-impact event regardless of probability" is the program's stated posture.
- Balanced security-engineering posture — DREAD. Impact and likelihood multiplied with no extra weighting; produces a clean impact-vs-likelihood matrix.
- Threat-driven security posture — OWASP or Contributing Risk. The likelihood weighting reflects "we're worried about active threats; if it's likely to happen, that matters more than the maximum theoretical impact."
- CVSS-aligned program — CVSS. The slight impact-weighting tracks the CVSS philosophy of "impact > exploitability, all else equal."
- Custom matrix — Custom. Use when none of the above fit and you need to define your own (impact, likelihood) → score table. Requires the Customization Extra.
Don't agonize over the choice. Most programs pick DREAD or Classic and never change. The methodology matters less than the consistency.
3. Set the system default
In the sidebar, go to Configure → Settings → Risk Formula. Find the Risk Model setting (`risk_model` in the settings table), pick the methodology ID, and save.
The change takes effect immediately for new submissions; submitters who hit the form after the save see the new default in the form's scoring-method dropdown.
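If you want to confirm the stored default outside the UI, the sketch below reads it straight from the database. This is a hypothetical convenience, not a supported interface: it assumes the usual MySQL/MariaDB backend and a settings table keyed by `name` holding `risk_model` (see the Reference section); the connection details are placeholders.

```python
# Hypothetical read-only check of the system default scoring method.
# Table/column names are assumptions taken from this page's Reference
# section; verify against your schema first.
import pymysql

conn = pymysql.connect(host="localhost", user="simplerisk",
                       password="CHANGEME", database="simplerisk")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT value FROM settings WHERE name = %s", ("risk_model",))
        row = cur.fetchone()
        print("Default scoring method ID:", row[0] if row else "not set")  # 1-6
finally:
    conn.close()
```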
4. Decide whether to allow per-submission overrides
The risk submission form exposes the scoring method as a dropdown; submitters can pick any of the six methods regardless of the system default. Programs vary on whether to allow this:
- Allow overrides (the default behavior): submitters can pick the method that best fits the specific risk. A vulnerability submission might use CVSS; an operational risk might use Classic. Increases per-risk fidelity at the cost of cross-risk comparability.
- Effectively disable overrides (operational discipline): communicate to submitters that they should always use the default; review submissions for compliance. SimpleRisk doesn't have a hard system gate that blocks non-default methods; the discipline is operational.
Most programs land on "allow overrides; encourage the default." Pure-vulnerability programs sometimes commit to CVSS for everything; mixed programs tend to use the default consistently.
5. Recompute existing risks if you change the methodology
If you've switched the system default, existing risks still carry their original methodology and score. To bring them onto the new methodology:
- Configure → Settings → Risk Formula → Recalculate Risk Scores (or the equivalent admin action).
- The function `recalculate_all_risk_scores()` iterates every risk, recomputes the `calculated_risk` using the risk's current methodology and impact/likelihood values, and updates the `risk_scoring` row.
- On a large install (10,000+ risks), this is meaningful database write activity; coordinate with operations.
For a methodology change (every risk should now use the new method, not just be recomputed under its existing one), you'd need to update each risk's scoring_method first, then recalculate. The standard recalculate action keeps each risk's existing methodology.
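Since the standard recalculate keeps each risk's existing methodology, a true migration is two steps: re-tag, then recalculate. Below is a hedged sketch of the re-tag step. The `risk_scoring` and `scoring_method` names come from the Reference section, the SQL is an assumption about the schema, and you should run it against a backup or test instance first.

```python
# Hypothetical first half of a methodology migration: point every risk at
# the new scoring method ID, then run the Recalculate Risk Scores admin
# action (recalculate_all_risk_scores()) so calculated_risk catches up.
# Schema details are assumptions; take a database backup first.
import pymysql

NEW_METHOD = 3  # e.g., 3 = DREAD; use the ID you standardized on

conn = pymysql.connect(host="localhost", user="simplerisk",
                       password="CHANGEME", database="simplerisk")
try:
    with conn.cursor() as cur:
        cur.execute("UPDATE risk_scoring SET scoring_method = %s", (NEW_METHOD,))
        print(f"Re-tagged {cur.rowcount} risks; now run Recalculate Risk Scores.")
    conn.commit()
finally:
    conn.close()
```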
6. Configure the score-to-level mapping
The numeric score from the formula maps to a qualitative level (Insignificant / Low / Medium / High / Very High). The mapping is configured separately at Configure → Settings → Risk Formula under the level thresholds. See The Risk Formula for the exact threshold settings.
If you change the methodology and the score range shifts, the level thresholds may need to shift too. The defaults assume the post-normalization 0–10 range; verify they still produce a reasonable distribution after a methodology change.
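To see why the thresholds and the methodology have to be verified together, here is a small sketch with made-up thresholds over the 0–10 range (the real values live on the Risk Formula page):

```python
# Hypothetical score-to-level mapping over the post-normalization 0-10
# range. These threshold values are illustrative, not SimpleRisk defaults;
# use the ones configured under Configure -> Settings -> Risk Formula.
THRESHOLDS = [          # (minimum score, level), checked highest first
    (9.0, "Very High"),
    (7.0, "High"),
    (4.0, "Medium"),
    (2.0, "Low"),
]

def level_for(score):
    for floor, level in THRESHOLDS:
        if score >= floor:
            return level
    return "Insignificant"

print(level_for(4.0))  # Medium (a DREAD raw 10/25, normalized to 4.0)
print(level_for(10))   # Very High: a raw 0-25 score misread against 0-10 thresholds
```

The second call is the pitfall in miniature: feed un-normalized 0–25 scores into thresholds that expect 0–10 and nearly everything piles into the top level.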
7. Communicate the methodology to stakeholders
Risk scores travel — into reports, dashboards, exec slides, audit responses. A change in methodology produces score changes that aren't actual changes in risk. Stakeholders interpreting "the High count dropped from 12 to 7" as "the program improved" may be misreading a methodology shift.
Communicate explicitly:
- What methodology you're using and why.
- For methodology changes: an explicit announcement, a translation table between old and new scoring, a re-baseline of period-over-period metrics.
- In reports: include the methodology name in the report header so the reader knows the basis of the scores they're seeing.
8. Verify with a few representative risks
Before announcing a methodology choice (or change) to the program, test with 5–10 representative risks:
- Note the (impact, likelihood) pair for each.
- Compute by hand what the methodology should produce.
- Compare to what SimpleRisk shows after submission or recalculation.
- Confirm the levels (Very High, High, etc.) match stakeholders' intuition.
If the scores don't track intuition, either the methodology is wrong for the program or the level thresholds need adjustment. Fix before going live.
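If you would rather check in bulk than eyeball the UI, the sketch below pulls stored scores over the `/api/v2/risks` endpoint mentioned in the Reference section and prints them for comparison against your hand computations. The authentication header name and the JSON shape are assumptions, not documented SimpleRisk API facts; adjust to match your instance.

```python
# Hedged verification sketch: list each risk's stored scoring_method and
# calculated_risk so they can be compared to hand-computed values. The
# X-API-KEY header and the response structure are assumptions.
import requests

BASE = "https://simplerisk.example.com"

resp = requests.get(f"{BASE}/api/v2/risks",
                    headers={"X-API-KEY": "CHANGEME"}, timeout=30)
resp.raise_for_status()
payload = resp.json()
# Adjust the path into the payload to match your instance's response shape.
for risk in (payload.get("data") or [])[:10]:
    print(risk.get("id"), risk.get("scoring_method"), risk.get("calculated_risk"))
```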
Common pitfalls
A handful of patterns recur with the default scoring method.
- Picking a methodology because it sounds prestigious. "We use CVSS" sounds rigorous in audit responses, but SimpleRisk's CVSS method isn't the full CVSS specification. If you need true CVSS, integrate with a vulnerability-management tool that produces CVSS vectors and import the scores; don't rely on SimpleRisk's simplified CVSS method alone.
- Switching the default mid-program without recalculating. New submissions use the new methodology; existing risks keep their old scores. Mixed-methodology registers are hard to reason about. Either recalculate when you switch, or commit to the previous methodology for existing risks and only use the new one going forward.
- Allowing per-submission overrides without explaining the consequence. A register where each risk uses a different methodology can't be sorted by score meaningfully (the scores aren't comparable across methodologies). Encourage the default.
- Forgetting that the score-to-level mapping needs to align with the methodology range. If your methodology produces scores in 0–25 and your level thresholds expect 0–10, every risk lands in one level. Verify both together.
- Treating the methodology as a permanent decision. Methodologies can change with program maturity. The conversation around "should we revisit this?" is appropriate every couple of years; the change itself just needs the recalculate-and-communicate workflow.
- Not understanding what each formula actually emphasizes. Picking "OWASP" because the program wants application-security focus doesn't necessarily produce app-sec-relevant scores; the formula just adds a likelihood weighting. Read the formula table; pick based on the weighting, not the name.
- Using Custom (method 5) without the Customization Extra. The method appears in the dropdown; without the Extra, the matrix isn't configured and the score may default to a constant. Either activate the Extra and configure the matrix, or pick a different method.
- Skipping the test-with-representative-risks step. The cost of "actually compute a few examples to confirm the methodology produces sensible scores" is low; the cost of "we picked it, now everyone's reports look weird" is high.
- Not communicating methodology changes to external stakeholders. Internal users notice; auditors and regulators may also need to know. A methodology shift between annual audits surfaces as inexplicable score changes if not explained.
- Conflating methodology with categorization. Different categories of risk (security, operational, financial, strategic) might warrant different methodologies, but operationally most programs use one. Don't try to maintain four parallel methodologies for four categories; pick one and accept the imperfect-fit trade-off, or move to a Custom matrix that captures the program's specific weighting.
Related
- The Risk Formula (score-to-level thresholds and normalization settings).
- Custom Risk Scoring (the Customization Extra's per-cell scoring matrix).
Reference
- Permission required: `check_admin` for the Risk Formula configuration page and the recalculation action.
- API endpoint(s): None specific to scoring methodology configuration; risks expose their `calculated_risk` and `scoring_method` via the standard `/api/v2/risks` endpoints.
- Implementing files: `simplerisk/admin/configure_risk_formula.php` (the configuration UI); `simplerisk/includes/functions.php` (`calculate_risk($impact, $likelihood)` ~line 6045 — the per-method dispatch; `calculate_maximum_risk_score($scoring_method)` ~line 6141 — the per-method maximum used in normalization; `get_scoring_method_name($method)` ~line 17225 — the method-ID-to-display-name mapping; `recalculate_all_risk_scores()` — the bulk recalculation; `update_risk_model()` — the system-default setter).
- Database tables: `risk_scoring` (per-risk: `scoring_method` ID, methodology-specific impact and likelihood columns, `calculated_risk` for the computed score).
- Config settings keys: `risk_model` (system default scoring method ID, 1–6); `need_risk_score_normalization` (boolean — whether to normalize scores to 0–10 for display); `default_risk_score` (fallback for invalid inputs, default `10`).
- Built-in methods: `1` Classic, `2` CVSS, `3` DREAD, `4` OWASP, `5` Custom (requires Customization Extra), `6` Contributing Risk.
- External dependencies: None.