04.03 Data Classification and Asset Criticality
Two related concepts that shape how a program prioritizes risk treatment — what they mean, what SimpleRisk supports natively (Asset Valuation), and what requires the Customization Extra (data classification as a structured field).
Why this matters
Not all assets are equal, and not all data is equal. The customer-data warehouse and the developer's test laptop are both "assets," but the consequence of compromising one is wildly different from the consequence of compromising the other. Risk programs that treat all assets identically end up with risk scores that don't differentiate between high-impact and low-impact targets, which produces dashboards where everything reads the same shade of yellow and the program can't tell leadership where to focus.
Two distinct disciplines address the inequality. Asset criticality describes how important the asset is to the organization's operations: the higher the criticality, the worse it is when something goes wrong with it. Data classification describes how sensitive the data is that the asset handles: the more sensitive, the more constrained the controls that have to operate around it. The two are related but not identical — a high-criticality system (the production order-processing pipeline) might handle low-classification data, and a low-criticality system (the prototype machine-learning environment) might briefly handle high-classification data the team didn't realize was there.
The third reason both matter: every framework worth its name expects them. NIST CSF v2.0's ID.AM-5 explicitly mandates resource prioritization based on classification and criticality. ISO 27001's information-asset inventory requirement implicitly assumes both. SOC 2's Confidentiality criterion depends on knowing which data is confidential. PCI DSS scope is defined by which assets handle cardholder data. Programs that don't classify and prioritize end up unable to answer the framework's first question: "what's the high-impact stuff?"
The load-bearing fact for SimpleRisk specifically: SimpleRisk has native support for asset criticality (via the Asset Valuation field) but does NOT have native support for data classification as a structured field on assets. Adding data classification as a queryable, reportable field requires the Customization Extra. This article lays out both concepts, what SimpleRisk does natively for each, and what to do about the gap.
How frameworks describe this
The major frameworks each define classification and criticality with slightly different vocabulary. The underlying ideas are stable across them.
- NIST SP 800-60 Guide for Mapping Types of Information and Information Systems to Security Categories defines the federal model: information types are categorized by their potential impact on confidentiality, integrity, and availability, with each category set to Low, Moderate, or High. The system's overall security categorization is the high-water mark across the categories of all the information types it handles. This is the federal version of "data classification + asset criticality combined into one categorization." (A short sketch of the high-water-mark rule follows this list.)
- NIST CSF v2.0, under ID.AM-5, mandates that "assets are prioritized based on their classification, criticality, resources, and impact on the mission." CSF doesn't prescribe specific levels; it expects the organization to define them and apply them.
- ISO/IEC 27001 Annex A control A.5.12 (Classification of Information) requires that information be classified according to its confidentiality, integrity, and availability needs and that the classification scheme be documented. ISO leaves the specific levels open; common schemes are Public / Internal / Confidential / Restricted (four-level) or Public / Internal / Confidential (three-level).
- PCI DSS v4.0 doesn't classify data per se: it defines a single category (cardholder data and sensitive authentication data) and scopes the entire control set to environments that handle that data. The PCI model is "binary classification: in scope or out."
- GDPR distinguishes personal data, special-category personal data, and (informally) operational data, with progressively stricter handling requirements. The legal frame produces an effective classification scheme even though "GDPR data classes" isn't a published taxonomy.
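To make the high-water-mark rule concrete, here is a minimal Python sketch. The rule itself follows the NIST SP 800-60 description in the first bullet; the information types and their C/I/A ratings are hypothetical examples, not values from any published catalog.

```python
# Hypothetical catalog of information types handled by one system, each
# rated Low / Moderate / High for confidentiality (C), integrity (I), and
# availability (A), in the style of NIST SP 800-60.
LEVELS = {"Low": 1, "Moderate": 2, "High": 3}

information_types = {
    "customer_pii":   {"C": "High",     "I": "Moderate", "A": "Low"},
    "order_history":  {"C": "Moderate", "I": "Moderate", "A": "Moderate"},
    "public_catalog": {"C": "Low",      "I": "Moderate", "A": "Low"},
}

def system_categorization(types):
    """Overall categorization is the high-water mark: the highest rating
    across all C/I/A ratings of every information type the system handles."""
    highest = max(LEVELS[rating]
                  for ratings in types.values()
                  for rating in ratings.values())
    return next(name for name, rank in LEVELS.items() if rank == highest)

print(system_categorization(information_types))  # -> High (from customer_pii C)
```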
The takeaway across all five: pick a scheme, document it, apply it consistently. The specific levels matter less than the discipline of using the same scheme everywhere.
How SimpleRisk implements this
SimpleRisk implements asset criticality through the Asset Valuation field on each asset record. Each asset carries a numeric value referencing the asset_values table, which holds a configurable scale of valuation levels with optional human-readable names.
The seeded valuation scale is 10 levels with monetary ranges (the default scale spans $0 to $1,000,000+ in roughly logarithmic increments), each level optionally named. Common naming patterns customers adopt (a sketch after the list shows one way to map the ten numeric levels onto named bands):
- 3-level: Low / Moderate / High (matching NIST 800-60).
- 4-level: Low / Moderate / High / Critical (matching the common business-impact scale).
- 5-level: Very Low / Low / Moderate / High / Very High (matching the 5×5 risk matrix).
- Numeric: leave the level numbers as-is and document what each number means in policy (for programs that want quantitative valuation).
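As a concrete illustration of the renaming patterns, here is a minimal Python sketch that collapses a 1-10 valuation scale into the five-level naming. The band boundaries are an assumed example policy, not a SimpleRisk default.

```python
# One example mapping from ten numeric valuation levels to five names,
# mirroring the 5-level pattern above. The band boundaries are a policy
# choice shown for illustration only.
NAMED_BANDS = [
    (range(1, 3),  "Very Low"),
    (range(3, 5),  "Low"),
    (range(5, 7),  "Moderate"),
    (range(7, 9),  "High"),
    (range(9, 11), "Very High"),
]

def level_name(level):
    """Translate a 1-10 valuation level into its documented name."""
    for band, name in NAMED_BANDS:
        if level in band:
            return name
    raise ValueError(f"valuation level {level} is outside the 1-10 scale")

print(level_name(7))   # -> High
print(level_name(10))  # -> Very High
```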
Asset Valuation is configured under Configure → Asset Valuation in the admin settings. The valuation level a new asset gets defaults to the default_asset_valuation configuration setting; customers can change it per asset on the asset edit form.
The Asset Valuation field is load-bearing for risk-scoring math. When a risk is linked to an asset, the asset's valuation feeds into the risk's impact calculation: a high-valuation asset linked to a risk produces a higher impact score than a low-valuation asset linked to the same risk. This is the mechanism by which "this risk affects a high-criticality asset" automatically translates into "this risk has a high impact score" without the risk submitter having to do the calculation by hand.
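A conceptual sketch of that mechanism, in Python: the highest valuation among a risk's linked assets drives the impact contribution. This illustrates the idea described above rather than SimpleRisk's actual scoring code, and the mapping from a 10-level valuation scale onto a 5-level impact scale is a hypothetical example.

```python
# Conceptual only, not SimpleRisk's scoring implementation: the highest
# valuation among the assets linked to a risk sets the impact contribution.
def impact_from_linked_assets(valuations):
    """Map the highest linked asset valuation (1-10) onto a 1-5 impact scale."""
    if not valuations:
        return 1  # no linked assets: fall back to the lowest impact
    return (max(valuations) + 1) // 2  # 1-2 -> 1, 3-4 -> 2, ..., 9-10 -> 5

print(impact_from_linked_assets([3, 9, 5]))  # the level-9 asset dominates -> 5
```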
For data classification, SimpleRisk's native asset model does not include a built-in classification field. The string 'DataClassification' => 'Data Classification' exists in the language file, but the only place it's used is in project and governance contexts, not on the asset record itself. Programs that need a classification field on assets have three options:
- Use the Customization Extra to add a custom field. The Customization Extra lets admins add custom fields to several SimpleRisk entity types, including assets. A custom Data Classification dropdown with the program's chosen levels is straightforward to add this way; the field becomes queryable and reportable like any built-in field. This is the recommended path for programs that need first-class classification support.
- Use the asset's free-text Details field with a convention. The Details field accepts free text; programs sometimes adopt a convention like "Classification: Confidential" as the first line of every Details field. The convention is searchable but not queryable (reporting that aggregates by classification would have to parse the text), so this is a starting point, not a long-term answer.
- Use Tags as a classification proxy. The Tags field accepts free-text labels; tagging assets with their classification level (e.g., classification:confidential) produces a queryable filter without the Customization Extra. The trade-off is that tags are flat (there's no enforcement that every asset has exactly one classification tag) and the convention has to be policed by hand; the sketch below shows what that policing looks like.
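For the tag-based option, the hand-policing can at least be scripted. A minimal Python sketch, assuming asset names and tag lists pulled from a SimpleRisk export; the inline data and the tag vocabulary are hypothetical:

```python
# Enforce "exactly one classification tag per asset, drawn from the
# documented set": the enforcement the tags themselves don't provide.
VALID = {"classification:public", "classification:internal",
         "classification:confidential"}

assets = {
    "db-prod-01":  ["classification:confidential", "pci:in-scope"],
    "laptop-dev3": [],                              # missing tag
    "web-proxy":   ["classification:confidental"],  # typo variant
}

for name, tags in assets.items():
    cls = [t for t in tags if t.startswith("classification:")]
    if len(cls) != 1:
        print(f"{name}: expected exactly one classification tag, found {len(cls)}")
    elif cls[0] not in VALID:
        print(f"{name}: unknown classification tag {cls[0]!r}")
```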
For programs subject to PCI DSS specifically, the classification model is binary: tag the asset (or use a custom field) to mark it as in-scope or out-of-scope for the cardholder-data environment.
The relationship between the two: the Asset Valuation field is what feeds risk-scoring math; the data-classification field (whether implemented via custom field, tag, or convention) is what drives compliance scoping and policy applicability. Both can exist independently on the same asset and both contribute to the program's prioritization, but they answer different questions.
Common pitfalls
A handful of patterns recur when teams use (or misuse) classification and criticality.
- Setting every asset to the default Asset Valuation. A program that imports thousands of assets and leaves them all at the default valuation produces a risk register where every linked risk inherits the same impact contribution from the asset, regardless of what the asset actually is. The whole point of the field is differentiation; static defaults defeat it. Spend time on the calibration before linking risks; set the default to a low level, and explicitly bump high-value assets up (the coverage-check sketch at the end of this article flags assets left at the default).
- Treating Asset Valuation as data classification. The valuation field captures how important the asset is to operations, not how sensitive the data on it is. A development server hosting a copy of production customer data has low criticality (you can lose it without much operational impact) and high data sensitivity (the data on it deserves the same handling as production). Conflating the two produces a single number that doesn't usefully drive either prioritization. Track both separately.
- Picking a classification scheme without legal/compliance input. A scheme written by the security team in isolation tends to use security vocabulary (Public / Internal / Restricted) that doesn't map cleanly to legal vocabulary (personal data / special category / pseudonymized) or compliance vocabulary (PCI in-scope / out-of-scope). The scheme needs to satisfy all the audiences that read it; involve legal and compliance early so the scheme they need is the same scheme the security team builds against.
- Classification without enforcement. A scheme documented in policy but not actually applied to assets is theater. Auditors notice when the policy says "all assets are classified" and the asset inventory's classification field is empty for 80% of the rows. The scheme has value only to the extent the inventory reflects it; if you adopt a scheme, populate the field on every asset, and populate it on every new asset (see the coverage-check sketch after this list).
- Too many classification levels. Five-or-more-level schemes are common in policy templates and rare in working programs. The proliferation of levels produces confusion at the boundaries (is Confidential or Restricted the right call for this dataset?) without producing useful differentiation in handling. Three levels (Public / Internal / Confidential) is enough for most programs; four if there's a regulated category that genuinely needs separate treatment.
- Skipping the customization conversation and reaching for tags. Tags are convenient but unenforced; programs that use tags as a classification proxy end up with assets carrying conflicting tags, missing tags, or typo variants of tags. If classification is load-bearing for the program (compliance scoping, policy applicability, regulatory reporting), invest in the Customization Extra and add a structured field. The cost of the Extra is much lower than the audit-finding cost of a tag-based scheme that doesn't survive scrutiny.
- Using NIST 800-60's Low/Moderate/High in a non-federal context without explanation. The NIST scale is precise (each level has explicit impact-magnitude criteria) but federal-specific in tone. Adopting it in a commercial program without documenting how the levels map to commercial impact (revenue impact, customer impact, regulatory penalty) leaves auditors and stakeholders interpreting the same labels differently. If you adopt the NIST scale, write down what each level means for your organization.
- Not updating valuation as the environment changes. A system that was a low-criticality test environment two years ago may be a high-criticality production system today, with the valuation field still showing low. The valuation needs the same review cadence the rest of the asset record gets; revisit it whenever the asset's role materially changes.
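The default-valuation and enforcement pitfalls above reduce to the same periodic check: scan the inventory for assets that were never calibrated. A minimal Python sketch, assuming an exported inventory with a numeric valuation level and a classification custom field; the records and the default level shown are hypothetical:

```python
# Flag the two failure modes described in the pitfalls: assets never moved
# off the default valuation, and assets never classified. Real data would
# come from a SimpleRisk export or a custom-field report.
DEFAULT_VALUATION = 5

inventory = [
    {"name": "db-prod-01",  "valuation": 9, "classification": "Confidential"},
    {"name": "laptop-dev3", "valuation": 5, "classification": ""},
    {"name": "test-vm-12",  "valuation": 5, "classification": "Internal"},
]

unclassified = [a["name"] for a in inventory if not a["classification"]]
at_default = [a["name"] for a in inventory if a["valuation"] == DEFAULT_VALUATION]

print(f"unclassified ({len(unclassified)}/{len(inventory)}): {unclassified}")
print(f"still at default valuation ({len(at_default)}/{len(inventory)}): {at_default}")
```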