
04.01 Why an Asset Inventory is the Foundation of GRC

Every framework worth its name starts with "know what you have." The asset inventory is what makes the rest of GRC possible — risks anchor to assets, controls protect assets, and compliance evidence describes how assets are managed.

Why this matters

You can't protect what you don't know about. Every working security and risk program starts from the same artifact: a list of the things the organization owns and the data those things hold. Without that list, risk identification is guessing about a population it can't enumerate, control selection happens against systems the program doesn't realize exist, and compliance assertions ("we encrypt all production databases") quietly turn into "we encrypt all production databases we know about." The shadow inventory (the systems that aren't on the list because nobody put them there) is where most breaches live.

The other reason this matters: the asset inventory is the substrate everything else in GRC anchors to. Risks describe what could go wrong with an asset; controls describe what protects an asset; compliance evidence describes how an asset is managed. When the inventory is good, the program's other artifacts can reference it precisely ("the customer data warehouse on the east-coast cluster"); when the inventory is bad or missing, those artifacts get vague ("our production environment"), and vague is where audit findings come from. Programs that invest early in inventory hygiene save weeks of confused auditing later.

The third thing worth knowing: asset inventory is the discipline most working programs underestimate the cost of. The first cut of an inventory is easy — pull from the cloud-provider console, dump from the configuration-management database, ingest from the asset-discovery scanner. Keeping the inventory current as the environment changes is the hard part, because asset additions and retirements happen continuously and the discovery sources don't always agree. Build for the steady-state work, not just for the one-time import.

How frameworks describe this

Asset inventory is treated as foundational across every major security framework. The treatment varies in depth, but the consensus is unanimous: identify first, then everything else.

  • CIS Critical Security Controls v8 lists asset inventory as Controls 1 and 2 — Inventory and Control of Enterprise Assets (Control 1) and Inventory and Control of Software Assets (Control 2). They're first because the rest of the controls build on them; the CIS list is explicitly prioritized, and "you can't protect what you don't know about" is what put hardware and software inventory at the top. Both controls are in Implementation Group 1 (IG1), the baseline every organization is expected to implement regardless of size or risk profile.
  • NIST Cybersecurity Framework (CSF) v2.0 treats asset management as a category under the Identify function (ID.AM). The subcategories cover hardware (ID.AM-01), software, services, and systems (ID.AM-02), network communication and data flows (ID.AM-03), services provided by suppliers (ID.AM-04), prioritization based on classification/criticality (ID.AM-05), data inventory (ID.AM-07, added in v2.0), and asset life-cycle management (ID.AM-08, also added in v2.0); the v1.1 workforce-roles subcategory (ID.AM-6) was withdrawn and folded into the Govern function. The structure makes the breadth explicit: "asset" includes hardware, software, services, and data.
  • NIST SP 800-53 spreads asset management across the CM (Configuration Management) family, particularly CM-8 (System Component Inventory) which mandates a current inventory of system components including manufacturer, type, serial number, version, and physical location. The federal context adds rigor — system components have to be tracked individually and reported up to the system-level documentation — but the underlying concept is the same.
  • ISO/IEC 27001 Annex A clause A.5.9 (Inventory of information and other associated assets) is the asset-inventory requirement in ISO terms. The 2022 update made the requirement explicit at the management-system level: an organization seeking ISO 27001 certification needs a documented inventory of information assets (data, systems, services) along with assigned owners.
  • PCI DSS v4.0 Requirement 12.5.1 mandates a maintained inventory of the system components in scope for PCI DSS; Requirement 9.5.1 covers the maintained list of point-of-interaction (POI) payment devices; Requirement 6.3.2 covers the related software-inventory side: an inventory of bespoke and custom software (and the third-party components it embeds) for applications in scope of the cardholder-data environment.

The takeaway across all five: the asset inventory isn't something a program decides whether to maintain. It's something the program is going to be evaluated against, and the evaluations all start from "show us your inventory."

How SimpleRisk implements this

SimpleRisk's asset module surfaces three concrete capabilities under /assets/, plus the linkage points that connect assets to the rest of the program:

Asset records live at /assets/manage_assets.php. Each asset row captures the basics (name, IP address, location, team ownership, valuation level, and a free-text details field) along with a verification flag that gates the asset's use in risk associations. The full walk-through of the form and the verification workflow is in Managing Assets in SimpleRisk.

Asset groups live at /assets/manage_asset_groups.php. A group is a named collection of assets you want to treat as a unit for the purposes of linking risks, controls, and compliance evidence. A typical example: an asset group named "Production Database Cluster" containing the individual database server assets, so a single risk linked to the group automatically affects all the member servers. Groups are many-to-many (an asset can belong to multiple groups; a group can hold any number of assets), which is what makes "all customer-data systems" and "all east-coast systems" both work as orthogonal groupings of the same underlying assets.
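The many-to-many shape is easy to picture as plain data structures. A minimal sketch, with illustrative names rather than SimpleRisk's actual schema:

```python
# Illustrative sketch (not SimpleRisk's schema): many-to-many grouping,
# where the same assets appear in orthogonal groups.
groups = {
    "customer-data-systems": {"db-east-1", "db-east-2", "crm-app"},
    "east-coast-systems": {"db-east-1", "db-east-2", "web-east-1"},
}

def assets_in(*group_names):
    """Union of members across the named groups."""
    return set().union(*(groups[g] for g in group_names))

# A risk linked to a group affects every member, so both groupings
# resolve independently over the same underlying assets.
print(assets_in("customer-data-systems"))
print(assets_in("customer-data-systems", "east-coast-systems"))
```

Because membership is set-shaped, adding `db-east-3` to one group never disturbs the other groupings it belongs to.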

Asset valuation is the closest SimpleRisk has to a built-in criticality field. Each asset record carries a value that references the asset_values table, a configurable 10-level scale of valuation ranges, each with an optional named label (commonly Low, Moderate, High, Very High, etc.). Asset valuation feeds risk-scoring calculations downstream: a high-value asset linked to a risk produces a higher-impact score than a low-value asset linked to the same risk. The full picture is in Data Classification and Asset Criticality, which also covers the load-bearing fact that data classification is not a built-in field — adding it requires the Customization Extra.
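As an illustration of how a leveled valuation can feed scoring, here is a hypothetical weighting. The labels and the formula are assumptions made for the sketch, not SimpleRisk's actual calculation:

```python
# Hypothetical illustration: a 10-level valuation scale feeding an impact
# weight. Labels and formula are assumptions, not SimpleRisk's real math.
ASSET_VALUES = {level: label for level, label in enumerate(
    ["Negligible", "Very Low", "Low", "Moderate-Low", "Moderate",
     "Moderate-High", "High", "Very High", "Critical", "Catastrophic"],
    start=1)}

def weighted_impact(base_impact: float, asset_level: int) -> float:
    """Scale a risk's base impact by the linked asset's valuation level."""
    if asset_level not in ASSET_VALUES:
        raise ValueError(f"unknown valuation level: {asset_level}")
    return base_impact * (asset_level / 10)

# The same risk scores higher against a high-value asset...
print(weighted_impact(8.0, 9))  # 7.2
# ...than against a low-value one.
print(weighted_impact(8.0, 2))  # 1.6
```

The exact shape of the weighting matters less than the property it demonstrates: identical risks differentiate by the value of what they're linked to.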

Risk linkage lives in two join tables: risks_to_assets (risk linked to specific assets) and risks_to_asset_groups (risk linked to a group, which expands to all member assets at report time). The linkage is editable from the risk's own form (the Affected Assets field on the Submit Risk form) and is what makes asset-shaped risks roll up correctly in dashboards. The full mechanics are in Tying Risks to Assets.
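The report-time expansion can be sketched as a union over the two join tables. The join-table names follow the ones above; the membership table (assets_to_asset_groups) and the simplified columns are assumptions for the sketch, not SimpleRisk's real schema:

```python
import sqlite3

# Sketch of group expansion at report time. Column layout is illustrative.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE assets (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE assets_to_asset_groups (asset_id INTEGER, group_id INTEGER);
CREATE TABLE risks_to_assets (risk_id INTEGER, asset_id INTEGER);
CREATE TABLE risks_to_asset_groups (risk_id INTEGER, group_id INTEGER);

INSERT INTO assets VALUES (1, 'db-east-1'), (2, 'db-east-2'), (3, 'crm-app');
INSERT INTO assets_to_asset_groups VALUES (1, 10), (2, 10); -- group 10: prod DB cluster
INSERT INTO risks_to_assets VALUES (100, 3);                -- risk 100 -> one asset
INSERT INTO risks_to_asset_groups VALUES (100, 10);         -- risk 100 -> whole group
""")

# Direct links unioned with group links expanded to member assets:
rows = db.execute("""
SELECT a.name FROM risks_to_assets r JOIN assets a ON a.id = r.asset_id
WHERE r.risk_id = 100
UNION
SELECT a.name FROM risks_to_asset_groups rg
JOIN assets_to_asset_groups ag ON ag.group_id = rg.group_id
JOIN assets a ON a.id = ag.asset_id
WHERE rg.risk_id = 100
ORDER BY 1
""").fetchall()
print([name for (name,) in rows])  # ['crm-app', 'db-east-1', 'db-east-2']
```

Note that the group link is stored once and expanded per query, which is why adding a server to the group retroactively scopes every group-linked risk to it.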

The inventory connects sideways into compliance through the control_to_assets table (controls assigned to specific assets, with per-asset maturity tracking) and into governance through the document-and-policy linkage (a policy that applies to specific asset categories can reference the affected assets via the same model). Both connections strengthen the same underlying point: the inventory is the substrate everything else hangs from.
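One payoff of the control-to-asset linkage is a coverage check: which inventoried assets have no control assigned at all. A sketch with illustrative structures, not the actual control_to_assets layout:

```python
# Sketch of a coverage gap check over control-to-asset links.
# Structures are illustrative, not SimpleRisk's table layout.
inventory = {"db-east-1", "db-east-2", "crm-app", "web-east-1"}
control_to_assets = {
    ("AC-2", "db-east-1"): "managed",   # (control, asset) -> per-asset maturity
    ("AC-2", "db-east-2"): "defined",
    ("SC-13", "crm-app"): "initial",
}

covered = {asset for (_control, asset) in control_to_assets}
uncovered = sorted(inventory - covered)
print(uncovered)  # ['web-east-1']
```

The inverse question (controls claimed against assets that aren't in the inventory) is the same set difference run the other way, and is a quick smell test for inventory drift.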

Common pitfalls

A handful of patterns recur when teams build and maintain asset inventories.

  • One-time inventory followed by silence. A program imports the asset list from the cloud provider on day one, the inventory looks great, and then nobody updates it. Six months later half the assets in the list don't exist anymore and half the systems running in production aren't in the list. The inventory is operational only when there's a recurring mechanism (scheduled import, asset-creation gate, post-deployment hook) keeping it in sync with reality.

  • "All production servers" as an asset. Using a single record to represent a category of systems is what programs do when they don't want to do the inventory work. It's the asset-management equivalent of the "cybersecurity risk" antipattern: a row that can't be scored, can't be assigned an owner who can act on it, and can't be linked to a specific compliance test. Break categories into individual assets (or at least into asset groups composed of individual assets); the granularity is what makes the rest of the program work.

  • Verification as a checkbox. SimpleRisk has a verified-vs-unverified split on assets specifically to prevent the inventory from filling up with auto-discovery noise. Treating verification as "click through to dismiss the warning" defeats the point — the verified flag is supposed to mean "a human looked at this asset, confirmed it's real, and confirmed it belongs in the inventory." Treat it that way; the friction is load-bearing.

  • Inventory in the security team only. An asset inventory the security team maintains alone diverges from the asset inventory IT operations maintains in the configuration-management database, which diverges from the asset inventory finance maintains for amortization. Three inventories of the same underlying systems is worse than one. Pick one as the system of record (SimpleRisk's, IT's, or somewhere else) and have the others either feed from it or read from it; don't run them in parallel.

  • No data flow tracking. Every framework's asset-management category includes a "data flows" line item (CSF's ID.AM-3, ISO's data-mapping requirement, GDPR's processing-records mandate). Programs sometimes build out a hardware inventory and a software inventory and stop, leaving the data-flow piece untracked. The result is a program that knows what systems exist but can't answer "where does customer PII actually flow." Add data flow as an explicit dimension; SimpleRisk doesn't model it natively, but the asset details field and the custom-field option (via the Customization Extra) cover most cases.

  • Treating asset valuation as theater. The valuation level on each asset feeds risk-scoring calculations. Setting every asset to the same default valuation produces risk scores that don't differentiate between high-value and low-value assets, which produces dashboards where the prioritization is meaningless. Spend time on the valuation conventions; the calibration is what makes the downstream scoring useful.

  • Not using asset groups for the cases that need them. Programs sometimes try to link a risk to twenty individual assets when "the production database cluster" would have been one asset group with twenty members. The group abstraction reduces maintenance work as the cluster changes (add a new database server to the group; risks linked to the group automatically include it). When a set of assets routinely gets treated together, model it as a group.

  • Skipping the inventory because "we have a CMDB." A CMDB tracking thousands of items at the IT-operations level isn't automatically a usable GRC asset inventory; it's a different abstraction with different fields and different update cadences. The CMDB can feed the GRC inventory (via API ingestion or scheduled sync), but the GRC inventory needs the GRC fields (asset valuation, security ownership, control linkage) the CMDB usually doesn't carry. Use the CMDB as a source; build the GRC inventory on top.
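Two of the bullets above (the recurring sync mechanism and the CMDB-as-a-source pattern) reduce to the same steady-state step: ingest records from an upstream source, map them into GRC-shaped assets, and diff against the current inventory so additions land as unverified and disappeared assets are queued for retirement. All names and fields below are assumptions for illustration:

```python
# Illustrative sketch: a steady-state sync from an upstream source (CMDB,
# cloud API, or scanner) into a GRC inventory. Names/fields are assumptions.
def to_grc_asset(ci: dict) -> dict:
    """Map an upstream record into a GRC asset, defaulting GRC-only fields."""
    return {
        "name": ci["hostname"],
        "ip": ci.get("ip_address", ""),
        # Fields the upstream source usually doesn't carry; a human sets
        # these during verification.
        "valuation_level": None,
        "security_owner": None,
        "verified": False,
    }

def reconcile(inventory: dict[str, dict], upstream: list[dict]) -> dict:
    """Diff upstream discovery against the inventory; return the deltas."""
    seen = {ci["hostname"] for ci in upstream}
    return {
        "to_add": [to_grc_asset(ci) for ci in upstream
                   if ci["hostname"] not in inventory],  # queue as unverified
        "to_retire": sorted(set(inventory) - seen),      # retirement candidates
    }

inventory = {"db-east-1": {"verified": True}, "legacy-app": {"verified": True}}
upstream = [{"hostname": "db-east-1", "ip_address": "10.0.4.12"},
            {"hostname": "new-worker-7"}]
deltas = reconcile(inventory, upstream)
print([a["name"] for a in deltas["to_add"]], deltas["to_retire"])
# ['new-worker-7'] ['legacy-app']
```

The important property is that nothing auto-commits: additions arrive unverified and retirements arrive as candidates, which preserves the human-in-the-loop verification the pitfalls above insist on.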

Related