00.02 The GRC Program Lifecycle
The identify-assess-treat-monitor-report loop runs in every working GRC program — different frameworks name the steps differently, but the cycle is the same and SimpleRisk is built around it.
Why this matters
A GRC program is a loop, not a project. You identify what could go wrong, assess how bad each one is, decide what to do about it, monitor whether the decision is holding, and report on the result so the next iteration is informed by the last. Then the loop runs again — for the same risk a quarter later, for a new risk that surfaced this week, for a control whose test is due. The cadence is what separates a program from a binder.
The trouble with describing a loop is that the picture flattens. It looks like five tidy boxes with arrows. In practice the boxes overlap and the arrows go backward as often as forward. Identification is happening continuously; assessment kicks in whenever something gets identified or whenever a previously-assessed thing changes; treatment decisions trigger new identification ("we're moving to a new vendor, what does that surface?"); monitoring catches drift that goes back to assessment; reporting lands on a leadership desk and produces new directives that loop right back to the top. The lifecycle is the engine; it isn't a Gantt chart.
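The five stages can be sketched as a cycle with no terminal state. This is a minimal illustration, assuming nothing beyond the stage names used above; it is not a SimpleRisk API, and in a real program the arrows also run backward (monitoring kicks work back to assessment, reporting back to identification).

```python
# The lifecycle as a cycle: the stage after "report" is "identify" again.
# Stage names follow the loop described above; everything else is our own
# illustrative shorthand.
STAGES = ["identify", "assess", "treat", "monitor", "report"]

def next_stage(stage: str) -> str:
    """Advance one step around the loop; report wraps back to identify."""
    i = STAGES.index(stage)
    return STAGES[(i + 1) % len(STAGES)]
```

Running `next_stage("report")` returns `"identify"`, which is the point: the loop has no exit, only a next iteration.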
The reason it matters: every framework you'll touch is describing some version of this cycle. Once you can see the loop in your own program, mapping it to a framework's vocabulary is mostly a translation exercise. Programs that don't see the loop end up doing identification once, in a panic the quarter before an audit, and never circling back. The result is a register that ages out and a compliance posture that drifts from reality between audits.
How frameworks describe this
The four frameworks SimpleRisk customers see most often each describe the same loop in different language.
- NIST Risk Management Framework (RMF) organizes the lifecycle as seven steps: Prepare (organizational and system-level setup), Categorize (classify the system based on impact), Select (choose controls from NIST SP 800-53), Implement (deploy the controls), Assess (test that the controls work), Authorize (a senior official accepts the residual risk and approves the system to operate), and Monitor (continuous oversight). The seven steps map cleanly onto identify-assess-treat-monitor-report; RMF just splits Treatment into Select-Implement-Authorize because federal authorization-to-operate is its own discipline.
- ISO/IEC 27005 Information Security Risk Management describes the lifecycle as risk identification, analysis, and evaluation (the assessment process of clause 7), risk treatment (clause 8), and ongoing monitoring and review. The language is more abstract and less prescriptive than RMF, which is the point: ISO 27005 is methodology guidance for ISO 27001 and is meant to fit any organization rather than a specific authorization model.
- COSO Enterprise Risk Management organizes its lifecycle around the Performance component: identification, assessment, prioritization, and response, with reporting handled by the separate Information, Communication, and Reporting component. The COSO framing is broader than information-security risk; it covers strategic, operational, financial, compliance, and reporting risks under one tent. The lifecycle steps don't change meaningfully; the population of risks does.
- NIST Cybersecurity Framework (CSF) v2.0 uses six functions, Govern, Identify, Protect, Detect, Respond, and Recover, that aren't a sequential lifecycle so much as concurrent capability areas. CSF maps onto the loop differently: Identify is identification, Protect is treatment, Detect plus Respond plus Recover is monitoring (with incident response folded in), and Govern is the governance umbrella that holds the whole thing together. Worth noting because CSF v2.0 (Feb 2024) added Govern on top of the original five-function v1.1; older posts and customer documents may describe the five-function version, so verify which version the conversation assumes.
Pick whichever vocabulary matches your audience. The boardroom hears COSO, the federal customer hears RMF, the ISMS auditor hears ISO 27005, the CISO peer group hears CSF. The underlying loop is the same.
How SimpleRisk implements this
SimpleRisk is built around the lifecycle, not bolted to a framework. The core flow that ships out of the box exercises the entire loop on a per-risk basis.
Identification happens through the Submit Risk form on the Risk Management page, and through the import path for batch ingestion (see Importing and Exporting Risks). Newly submitted risks land in a review queue rather than going live immediately; the Reviewing and Approving Risks step is where assessment formally happens, with the reviewer scoring the risk using one of six scoring methodologies (Classic, CVSS, DREAD, OWASP, Custom, or Contributing Risk; see Risk Scoring Methodologies). Treatment decisions get recorded against the risk along with a treatment plan: mitigate, accept, or transfer. The workflow for mitigation lives in Mitigating a Risk. Monitoring runs through the per-risk review date: every risk in the register has a next-review date, and the dashboards surface risks whose review window has lapsed (see Tracking Risks Over Time and The Risk Dashboard). Reporting closes the loop through the report library, whose views (grouping risks by status, owner, score, or review date) feed the recurring leadership readout.
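The monitoring step described above boils down to a date comparison over the register. The sketch below is hypothetical: field names and the `overdue` helper are illustrative stand-ins, not SimpleRisk's actual schema or API.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative risk record: every risk carries a next-review date, and
# the dashboard surfaces the ones whose review window has lapsed.
@dataclass
class Risk:
    subject: str
    score: float            # from whichever scoring methodology was used
    next_review: date

def overdue(register: list[Risk], today: date) -> list[Risk]:
    """Risks past their review date, highest score first."""
    late = [r for r in register if r.next_review < today]
    return sorted(late, key=lambda r: r.score, reverse=True)
```

Sorting lapsed risks by score is one defensible choice for a dashboard; the essential part is that the review date, not a human's memory, drives the queue.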
The compliance side runs the same loop at a different cadence. A framework gets installed (identification of which controls apply); control owners get assigned and the tests get scheduled (assessment); test failures trigger remediation work in the same risk register (treatment); the test cycle itself is the monitoring step; control evidence and posture roll up into the compliance dashboard for reporting. Both loops feed the same governance forum, which is the layer where leadership accepts residual risk, approves the addition or retirement of controls, and sets the cadence for the next iteration.
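The crossover between the two loops is the important mechanic: a failed control test should open a remediation entry in the same risk register. A hypothetical sketch, with all names our own illustration rather than SimpleRisk's implementation:

```python
# A failed compliance test crosses over into the risk loop by appending
# a remediation entry to the shared register, as described above.
def record_test_result(control: str, passed: bool, register: list[dict]) -> None:
    """Compliance monitoring step; failures become risk-treatment work."""
    if not passed:
        register.append({
            "subject": f"Control failure: {control}",
            "source": "compliance test",
            "treatment": "mitigate",
        })
```

A passing test leaves the register untouched; a failure lands in the same queue the risk loop already monitors, which is what keeps the two loops from drifting into parallel, disconnected registers.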
The lifecycle isn't a SimpleRisk invention. The product is what makes the loop tractable to run once you're past a handful of risks and a handful of controls.
Common pitfalls
A handful of failure modes are specific to programs that don't internalize the loop.
- Treating the lifecycle as a project. "We're standing up GRC" lands as a six-month engagement with a kickoff meeting, a deliverable, and a sign-off. Six months later the deliverable is on a shared drive and the program is dormant. The loop never started. Identification has to keep running; assessment has to keep happening; the "stand-up" is just the first iteration.
- Identify-once-and-done. A long workshop produces a register of forty risks, the team breathes a sigh of relief, and identification stops. New systems, new vendors, new regulations, new threats: none of them feed back into the register. Six months later the register is a snapshot of what the team was worried about in the spring, not what they should be worried about today. Identification is continuous; the workshop is just a recurring high-water mark for it.
- Risks that never close. A register grows monotonically because risks get added but nothing gets accepted, transferred, or formally closed when the underlying conditions change. The result is a register where 80% of the rows are stale and the real work is hiding in the 20% that's current. Closing risks (with a recorded reason) is part of monitoring; without it, the register's signal-to-noise ratio decays.
- Monitoring without reporting. The risk dashboards are pristine and nobody outside the GRC team has seen them in months. Reporting is the part that produces governance decisions; if leadership isn't seeing the lifecycle output, the loop is half-broken. A monthly five-minute readout in the security committee is a higher-value report than a quarterly fifty-page deck nobody reads.
- Reporting without monitoring. The opposite failure: a polished quarterly report goes to the board, but the underlying risks haven't been re-reviewed in six months because nobody owns the monitoring step. The report shows what looks like a stable program; the practice underneath has drifted. The fix isn't to write more reports. Put a review cadence on the per-risk and per-control level and let the report follow the data.
- One lifecycle for risk, a different one for compliance, never the two shall meet. Risks and controls are managed in parallel registers that never compare notes. A control failure on the compliance side doesn't surface as a new risk; a high risk on the risk side doesn't trigger a control-design conversation. The loops should feed each other; that's why GRC has one acronym and not three.
- Skipping the governance layer entirely. The risk register and the compliance posture both report into "the security team" with no upward forum where risks get accepted, treatments get prioritized, and exceptions get blessed. Without a governance forum the loop never closes; identification and assessment produce a queue of unmade decisions that the security team carries on its own. See What is Governance.