
05.02 Custom Risk Scoring

Define your own risk-scoring formula when none of the built-in methodologies (Classic, CVSS, DREAD, OWASP, Contributing Risk) fits. The Customization Extra adds the Custom Risk Model methodology — a per-organization scoring matrix that maps your impact and likelihood scales to a quantitative risk score.

Requires: Customization Extra

The Custom Risk Model scoring methodology and the per-organization scoring matrix are added by the Customization Extra at simplerisk/extras/customization/. Without the Extra activated, only the five built-in scoring methodologies are available.

Why this matters

SimpleRisk ships five always-available scoring methodologies (Classic, CVSS, DREAD, OWASP, and Contributing Risk — see The Risk Formula), and most programs find one that fits. But there's always a tail: the program with a regulator-prescribed scoring formula, the organization that's standardized on a 5x5 matrix with custom labels, the team that's adopted FAIR-style monetary loss exposure rather than ordinal scoring. For these programs, the built-in methodologies are close-but-wrong, and forcing them into a built-in shape produces scores that everyone discounts because "that's not how we actually rate this."

The Custom Risk Model methodology added by the Customization Extra is the answer. You define your own impact scale, likelihood scale, and the matrix that maps the (impact, likelihood) pair to a numeric risk score; SimpleRisk uses that matrix wherever a risk score is computed. The result: every dashboard, report, and SLA threshold operates on scores from your formula, not from a default that doesn't match how the program actually thinks about risk.

The honest scope to know up front: the Custom Risk Model is a matrix lookup, not an arbitrary expression evaluator. You define a value for every (impact, likelihood) combination; SimpleRisk looks up the cell and uses the value. There's no support for "score = (impact × likelihood) + (vendor_tier × 0.5) - (control_maturity × 0.3)"-style formulas; if your scoring genuinely requires programmatic evaluation across many variables, the matrix lookup might not capture all of it. Most programs land at a 5x5 or 7x7 matrix though, which the methodology handles cleanly.
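
To make the scope concrete, here is a minimal sketch of what a matrix lookup means in practice. The values and the helper function are invented for illustration; the real lookup inside SimpleRisk is get_stored_risk_score() (see Reference).

```php
<?php
// Illustrative only: a 5x5 custom risk matrix as a lookup table.
// Rows are impact levels (1-5), columns likelihood levels (1-5); the cell
// value is the score the methodology produces. The numbers are invented
// for the example; your program defines its own.
$matrix = [
    1 => [1 => 1,  2 => 2,  3 => 4,  4 => 6,  5 => 9],
    2 => [1 => 2,  2 => 4,  3 => 8,  4 => 12, 5 => 18],
    3 => [1 => 4,  2 => 8,  3 => 16, 4 => 24, 5 => 36],
    4 => [1 => 6,  2 => 12, 3 => 24, 4 => 48, 5 => 64],
    5 => [1 => 9,  2 => 18, 3 => 36, 4 => 64, 5 => 100],
];

// Scoring is a plain cell lookup; no formula is evaluated at scoring time.
function custom_risk_score(array $matrix, int $impact, int $likelihood): float
{
    return (float) $matrix[$impact][$likelihood];
}

echo custom_risk_score($matrix, 4, 2), PHP_EOL; // 12
```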

The other thing worth knowing: per-risk methodology assignment is independent of the system default. Each risk records the methodology it was scored with; switching the system default doesn't recompute existing risks. A change to the custom matrix affects every risk scored with the Custom Risk Model methodology once scores are recalculated (recalculate_all_risk_scores() is what does this; see step 5). Plan for the recalculation: a 50,000-risk install can take a noticeable amount of time.

The third thing: risk levels and SLA thresholds are separate from scoring. The scoring methodology produces a numeric score; the risk-level mapping (Insignificant / Low / Medium / High / Critical) and the SLA thresholds are configured separately. Changing the methodology changes the scores; the score-to-level mapping is what determines the qualitative labels. Confirm both are aligned after a methodology change.

Before you start

Have these in hand:

  • Admin access to Configure → Settings → Risk Formula (or the equivalent path that exposes simplerisk/admin/configure_risk_formula.php).
  • Your scoring methodology defined on paper. What's the impact scale (5 levels? 7? what do they mean?). What's the likelihood scale (5 levels? probability ranges?). What numeric value does each (impact, likelihood) cell produce? Don't try to design this in the SimpleRisk UI; design it externally and then transcribe.
  • A documented mapping from scores to qualitative levels. "Score 0–25 = Low, 26–50 = Medium, 51–75 = High, 76–100 = Critical" or whatever your program uses. The SimpleRisk default risk-level configuration ships with one set of thresholds; adjust to match your scoring range.
  • A test environment to validate the matrix before pushing to production. Recalculation across all risks is reversible (change the matrix, recalculate again) but disruptive in flight; the test environment is where you confirm the matrix produces sensible scores for representative risks.
  • Awareness that recalculation is a write operation. The recalculation updates risk_scoring rows for every risk. On large installs this is meaningful database write activity; coordinate with the operations team if relevant.

Step-by-step

1. Activate the Customization Extra

If not already active, see Custom Fields → step 1 for the activation procedure. The Custom Risk Model methodology becomes available as scoring method 5 once the Extra is active.

2. Configure the Custom Risk Model matrix

Sidebar: Configure → Settings → Risk Formula (or follow the navigation that opens configure_risk_formula.php). The Custom Risk Model section exposes:

  • Impact scale labels — the qualitative labels for each impact level (e.g., "Insignificant," "Minor," "Moderate," "Major," "Catastrophic").
  • Likelihood scale labels — the qualitative labels for each likelihood level (e.g., "Rare," "Unlikely," "Possible," "Likely," "Almost Certain").
  • The matrix — for each (impact, likelihood) combination, the numeric score the methodology produces.

Fill in the matrix cell by cell. Many programs use a multiplicative pattern (impact_index × likelihood_index, scaled to a 0–100 range); others use a non-linear lookup (high-impact-low-likelihood scored higher than the multiplication would suggest, reflecting the "tail risk" weighting). The methodology supports both; the matrix is whatever you put in.
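
If the multiplicative pattern is your starting point, a few lines can generate the cell values to transcribe. This is a hypothetical design helper run outside SimpleRisk, not part of the product:

```php
<?php
// Generate a multiplicative 5x5 starting point scaled to 0-100:
// score = (impact * likelihood) / (max_impact * max_likelihood) * 100.
// Print the rows, then transcribe them into the Custom Risk Model cells.
$levels = 5;
$max    = $levels * $levels;

for ($impact = 1; $impact <= $levels; $impact++) {
    $row = [];
    for ($likelihood = 1; $likelihood <= $levels; $likelihood++) {
        $row[] = round(($impact * $likelihood) / $max * 100, 1);
    }
    echo implode("\t", $row), PHP_EOL;
}
```

Adjust individual cells afterwards for the non-linear cases (tail-risk weighting), then re-check that no cell scores above a cell with both higher impact and higher likelihood.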

Save the matrix. The cell values persist in the settings table (one row per cell; see Reference); the system is now ready to score risks with the new methodology.

3. Set the system default scoring methodology

The same settings page exposes the default scoring method for new risks. To make the Custom Risk Model the default for new submissions:

  1. Set the default to 5 (Custom Risk Model).
  2. Save.

New risk submissions will use the Custom Risk Model unless the submitter explicitly picks a different methodology on the form. Existing risks keep their original methodology until explicitly recalculated.
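
To confirm the saved default outside the UI, a quick read-back works. This sketch assumes the settings table uses a name/value layout and the default_risk_score key listed in the Reference section; connection details are placeholders:

```php
<?php
// Read back the default scoring method for new risks. Assumes a name/value
// layout for the settings table and the default_risk_score key from the
// Reference section; adjust the DSN and credentials for your install.
$pdo  = new PDO('mysql:host=localhost;dbname=simplerisk', 'simplerisk', 'secret');
$stmt = $pdo->prepare('SELECT value FROM settings WHERE name = :name');
$stmt->execute([':name' => 'default_risk_score']);
echo 'Default scoring method: ', $stmt->fetchColumn(), PHP_EOL; // expect 5
```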

4. Configure the score-to-level mapping

Sidebar: Configure → Settings for the risk levels. The relevant settings:

  • risk_level_high, risk_level_medium, risk_level_low — the score thresholds that set the boundaries between the qualitative levels (Insignificant through Critical).
  • risk_level_*_color — the display color for each level.
  • sla_threshold_* — the days within which a risk at each level should be remediated.

If your custom matrix produces scores in 0–100, the default thresholds may already work. If it produces scores in 0–25, the thresholds need to scale down. Verify the level mapping covers the actual range your matrix produces; otherwise risks will all land in one level (everything labeled "Low" because even your maximum possible score sits below the Medium threshold).
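
A small sanity check while designing: compare the largest score your matrix can produce against the thresholds you plan to configure. The numbers below are placeholders, not SimpleRisk defaults:

```php
<?php
// Placeholder values: substitute the maximum cell value from your matrix
// and the thresholds you intend to set under risk_level_low/medium/high.
$matrixMax  = 25.0;                                              // e.g. a 0-25 matrix
$thresholds = ['low' => 20.0, 'medium' => 50.0, 'high' => 80.0]; // a 0-100 scheme

foreach ($thresholds as $level => $threshold) {
    if ($threshold > $matrixMax) {
        echo "Warning: the {$level} threshold ({$threshold}) is above the maximum "
           . "possible score ({$matrixMax}); no risk can ever reach that level.\n";
    }
}
```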

5. Recalculate existing risks (optional)

If you want existing risks to use the new methodology, the function recalculate_all_risk_scores() recomputes every risk's score using its current methodology. The trigger surface is typically the Recalculate Risk Scores button on the configure_risk_formula page or a similar admin action.

What happens:

  1. Every risk in the risk table is iterated.
  2. The risk's stored impact, likelihood, and methodology are read.
  3. The score is recomputed using the current matrix (for Custom Risk Model risks) or the standard formula (for the other methodologies).
  4. The risk_scoring.calculated_risk column is updated.
  5. Audit-log entries are written for each recalculation.

On a large install, this is a substantial operation. Consider running during a low-traffic window. Backups are advisable before; if the matrix produces unexpected scores, the rollback is "reset the matrix and recalculate again" rather than a database restore, but a backup is the belt-and-suspenders option.
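
For orientation, a conceptual sketch of the loop follows. It is not the SimpleRisk source (the real pass is recalculate_all_risk_scores(), and recompute_score() below is a stand-in for the per-methodology dispatch that calculate_risk() performs); it mainly shows why the operation is write-heavy, with one UPDATE per risk:

```php
<?php
// Conceptual sketch only; the real implementation is
// recalculate_all_risk_scores() in simplerisk/includes/functions.php.
// Assumes the risk_scoring columns named in the Reference section.

// Stand-in for the per-methodology dispatch: a matrix cell lookup for
// Custom Risk Model risks, the standard formula for the others.
function recompute_score(array $risk): float
{
    return 0.0; // SimpleRisk's calculate_risk() does the real work
}

$pdo    = new PDO('mysql:host=localhost;dbname=simplerisk', 'simplerisk', 'secret');
$update = $pdo->prepare('UPDATE risk_scoring SET calculated_risk = :score WHERE id = :id');

foreach ($pdo->query('SELECT id, scoring_method FROM risk_scoring') as $risk) {
    $update->execute([':score' => recompute_score($risk), ':id' => (int) $risk['id']]);
}
```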

6. Verify the methodology with representative risks

Before announcing the new scoring to the program, pick 5–10 representative existing risks (a mix of high/medium/low historical scores) and:

  1. Note their pre-recalculation scores.
  2. Recalculate.
  3. Note their post-recalculation scores.
  4. Confirm the new scores match what the methodology was supposed to produce.

If the scores don't match expectations, the matrix is wrong; fix it and recalculate again. This is much easier to do early than after every reporting cycle has shifted to the new scores and stakeholders have started reading them.
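
One way to capture the before/after pair is a snapshot run once before recalculation and once after, then diffed. The risk IDs and connection details are placeholders; the columns are those named in the Reference section:

```php
<?php
// Snapshot the current scores of a handful of representative risks to a
// timestamped CSV. Run before and after recalculation, then compare.
$ids = [42, 107, 2311]; // your representative risk IDs

$pdo          = new PDO('mysql:host=localhost;dbname=simplerisk', 'simplerisk', 'secret');
$placeholders = implode(',', array_fill(0, count($ids), '?'));
$stmt         = $pdo->prepare(
    "SELECT id, scoring_method, calculated_risk FROM risk_scoring WHERE id IN ($placeholders)"
);
$stmt->execute($ids);

$out = fopen('score_snapshot_' . date('Ymd_His') . '.csv', 'w');
fputcsv($out, ['id', 'scoring_method', 'calculated_risk']);
foreach ($stmt as $row) {
    fputcsv($out, [$row['id'], $row['scoring_method'], $row['calculated_risk']]);
}
fclose($out);
```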

7. Communicate the change

Risk scores are how stakeholders interpret the program. A change in scoring methodology produces apparent score changes that aren't actual changes in risk — the same underlying impact and likelihood are now multiplied differently. Stakeholders who track "the high-risk count dropped from 15 to 8 this quarter" will read that as "the program improved" when it might just be "we changed the scoring."

Communicate the change explicitly:

  • Announce the methodology shift in advance, with the rationale.
  • Document the new matrix so anyone can inspect how a given (impact, likelihood) maps to a score.
  • Provide a translation table between old and new scores for the most common levels.
  • Re-baseline reporting metrics so quarter-over-quarter comparisons account for the methodology change.

8. Plan for matrix changes after going live

Once the Custom Risk Model is in production use, future matrix changes affect every Custom-Risk-Model-scored risk. Treat matrix changes like any other production configuration change:

  • Test in a non-production environment first.
  • Document the rationale.
  • Communicate the change.
  • Schedule the recalculation during a low-traffic window.

A matrix that's tweaked monthly produces a moving target that nobody trusts. Stabilize on a methodology; revisit it on a deliberate cadence (annually, at audit cycle, or when a regulator or framework genuinely requires a change).

Common pitfalls

A handful of patterns recur with custom risk scoring.

  • Designing the matrix in the SimpleRisk UI without an external sanity check. It's easy to fill in 25 cells and miss a logical inconsistency (high-impact-low-likelihood scoring lower than medium-impact-high-likelihood). Design on paper or in a spreadsheet first; transcribe into SimpleRisk after you've sanity-checked.

  • Forgetting to recalculate after a matrix change. The matrix is the lookup table; existing risks keep their stored scores until recalculated. A program that "changed the scoring" but didn't recalculate is producing reports against the old scores. Always recalculate after a methodology change.

  • Recalculating during business hours on a large install. The recalculation is a write-heavy operation; coordinate with operations.

  • Mismatched matrix range and risk-level thresholds. A matrix producing 0–25 scores against thresholds set for 0–100 puts every risk into the lowest level. Verify the thresholds match the matrix range.

  • Switching the system default without updating the documentation. New submitters use the Custom Risk Model; existing documentation says to use Classic. The mismatch produces confusion. Update onboarding materials when the default changes.

  • Keeping multiple methodologies active across the program. Risks scored with Classic, CVSS, and Custom Risk Model can't be directly compared. The dashboards display the calculated_risk score regardless of methodology, which masks the comparability problem. Either commit to one methodology program-wide and recalculate, or accept the comparability limitation explicitly (the query after this list shows how mixed the register is).

  • Treating the Custom Risk Model as a substitute for FAIR or other quantitative methodologies. The matrix lookup is ordinal scoring with custom labels and weights; it doesn't produce monetary loss exposure or probabilistic distributions. If your program needs FAIR or other true quantitative risk modeling, the Custom Risk Model captures the structure but not the analytics.

  • Defining a matrix with too many levels. A 10x10 matrix is 100 cells to design and validate; the additional granularity rarely produces decision-relevant differentiation. 5x5 is the conventional choice; 7x7 is the upper bound for most programs.

  • Not training submitters on the impact and likelihood scales. The methodology is only as accurate as the inputs. If submitters guess at "impact = 3" without understanding what 3 means in your scale, the resulting scores are noise. Document the scale definitions; train new users; revisit periodically.

  • Forgetting that the scoring methodology applies to risk only, not to assets, controls, or other entities. Asset risk scores (in the Asset Management Extra) and compliance maturity scores follow different methodologies. The Custom Risk Model affects the risk register; other entity scoring is configured separately.
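
On the multiple-methodologies pitfall above, a quick count shows how mixed the register actually is. Methodology IDs are those listed in the Reference section, and the scoring_method column is assumed as documented there:

```php
<?php
// Count risks per scoring methodology to see how mixed the register is.
// IDs per the Reference section: 1 Classic ... 5 Custom, 6 Contributing Risk.
$pdo  = new PDO('mysql:host=localhost;dbname=simplerisk', 'simplerisk', 'secret');
$rows = $pdo->query(
    'SELECT scoring_method, COUNT(*) AS risks
       FROM risk_scoring
      GROUP BY scoring_method
      ORDER BY scoring_method'
);
foreach ($rows as $row) {
    echo "method {$row['scoring_method']}: {$row['risks']} risks", PHP_EOL;
}
```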

Reference

  • Permission required: check_admin for the risk-formula configuration page and the recalculation trigger.
  • API endpoint(s): None specific to scoring methodology; the API surfaces calculated risk scores per risk via the standard /api/v2/risks/{id} endpoint.
  • Implementing files: simplerisk/admin/configure_risk_formula.php (the configuration UI); simplerisk/includes/functions.php (calculate_risk() ~line 6045 — the dispatch to per-methodology scoring; get_stored_risk_score($impact, $likelihood) for the Custom Risk Model lookup; recalculate_all_risk_scores()); simplerisk/extras/customization/index.php (enable_customization_extra(), the Custom Risk Model registration).
  • Database tables: risk_scoring (per-risk score storage including scoring_method, methodology-specific columns, and calculated_risk); the Custom Risk Model matrix lives in the settings table (one row per matrix cell).
  • Scoring methodology IDs: 1 = Classic, 2 = CVSS, 3 = DREAD, 4 = OWASP, 5 = Custom (the Customization Extra's matrix lookup), 6 = Contributing Risk.
  • config_settings keys: The matrix cells (named per the impact-likelihood combination); risk_level_high, risk_level_medium, risk_level_low (level thresholds); sla_threshold_high, sla_threshold_medium, sla_threshold_low (SLA days per level); default_risk_score (the scoring method ID for new risks).
  • External dependencies: None.