10.01 The GRC Maturity Model
GRC programs go through recognizable maturity stages — from improvising in spreadsheets to running an optimized, audit-ready program. Here's the CMMI-style model, where SimpleRisk fits at each stage, and what to focus on as your program advances.
Why this matters
GRC programs don't appear fully-formed; they grow up through recognizable stages. The team tracking risks in a spreadsheet because they need something before next quarter's audit is at one stage; the team running a coordinated program with regular reviews, defined roles, and audit-ready evidence is at another. Programs that don't see the stages end up frustrated when they expect mature-program outcomes from early-stage practices ("we have a register, why isn't the audit going well?"); programs that do see the stages can plan the work that takes them from where they are to where they need to be.
The trap is treating maturity as a destination. There's no level you reach where you stop improving — every stage has its own challenges and its own opportunities. The model isn't a ladder you climb until you're done; it's a way to understand where the program is right now, what the next set of capabilities looks like, and what investment those capabilities require. Programs that fixate on "getting to Level 5" miss the operational reality that maturity is the discipline of operating well at the level you're at, with deliberate work to advance when the team has capacity.
The other reason this matters: the scope of SimpleRisk surfaces and Extras you activate should match your maturity level. A program at the early-formation stage doesn't need the ComplianceForge SCF Extra and the Workflows Extra and the AI Extra all activated; the activation overhead exceeds the operational value. A program at the optimizing stage that's still running on Core-only with one framework is leaving payoff on the table. Knowing your maturity tells you which Extras and which features actually move the needle for you. See When and How to Bring in Extras for the per-Extra adoption guidance.
Finally, the maturity model isn't a SimpleRisk invention. The five-level Capability Maturity Model Integration (CMMI) framework is the most widely-used reference, with the same five levels showing up in NIST and ISO contexts under different names. SimpleRisk's framework_controls schema even carries a maturity field that uses CMMI-style levels for per-control implementation maturity assessment.
How frameworks describe this
Several frameworks publish their own maturity models or capability frameworks; the structures are similar across them.
- Capability Maturity Model Integration (CMMI) is the canonical reference, with five levels: Initial (chaotic, ad-hoc), Managed (basic project management exists), Defined (organization-wide standard processes), Quantitatively Managed (measured, with statistical control), Optimizing (continuous improvement). The CMMI model isn't GRC-specific but maps cleanly onto GRC programs because the same capability evolution applies.
- NIST Cybersecurity Framework (CSF) v2.0 publishes Implementation Tiers — four tiers labeled Partial, Risk Informed, Repeatable, and Adaptive. The tiers describe how integrated risk management is into the organization's broader governance, with similar shape to CMMI but explicitly cybersecurity-flavored.
- NIST SP 800-53 r5 doesn't define maturity levels itself, but assessments built around it commonly rate control implementations on a similar Initial-through-Optimizing scale.
- ISO/IEC 33000 series (formerly ISO/IEC 15504, "SPICE") provides a process-assessment framework that's CMMI-aligned. ISO 27001-certified programs sometimes use this for ISMS process assessment.
- CIS Critical Security Controls publishes Implementation Groups (IG1, IG2, IG3) that aren't strictly maturity tiers but encode similar progression — IG1 is the baseline every organization should implement, IG2 adds capabilities for organizations with more sensitive data, IG3 adds capabilities for organizations with critical data and substantial resources.
The takeaway across all five: there's broad consensus that GRC programs evolve through recognizable stages, and the audit conversation routinely references where the program sits on whichever model the audience uses.
How SimpleRisk implements this
SimpleRisk doesn't enforce a maturity model — there's no "your program is currently at Level 3" indicator anywhere. What SimpleRisk does is make the operational signs of maturity legible through its surfaces. The dashboards, the reports, the audit trails are what tell you (and your auditor) where the program actually sits.
The five CMMI levels with the SimpleRisk-side signs of each:
Level 1 — Initial (chaotic, ad-hoc). The program responds to incidents and audits as they arise; there's no recurring cadence. SimpleRisk usage at this stage is typically:
- A few risks in the register (often submitted just before an audit), most without owners assigned.
- One or two frameworks installed (often just to satisfy an active audit).
- Few or no control tests scheduled; tests that have run are inconsistent in quality.
- The dashboards show empty or sparse widgets.
- No recurring program review cadence.
The next-step focus at Level 1 is establishing the discipline of recording: get every known risk into the register, assign every risk an owner, assign every control an owner, define a basic review cadence even if the cadence is "once a quarter, we look at what's overdue."
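The "once a quarter, we look at what's overdue" pass doesn't need tooling beyond the register itself. Here's a minimal sketch, assuming a CSV export of the risk register with hypothetical subject, owner, and next_review columns (your export's actual column names will differ), that flags the two gaps the Level 1 focus calls out:

```python
import csv
from datetime import date, datetime

# Hypothetical column names for a risk-register export; adjust them to match
# whatever your actual export (or Import/Export Extra output) calls these fields.
SUBJECT_COL = "subject"
OWNER_COL = "owner"
REVIEW_COL = "next_review"   # expected as YYYY-MM-DD, blank if never scheduled

def register_gaps(path, today=None):
    """Return (unowned, overdue) risk subjects from an exported register CSV."""
    today = today or date.today()
    unowned, overdue = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            subject = row.get(SUBJECT_COL, "").strip()
            if not row.get(OWNER_COL, "").strip():
                unowned.append(subject)
            review = row.get(REVIEW_COL, "").strip()
            if not review:
                overdue.append(subject)  # never scheduled counts as overdue
            elif datetime.strptime(review, "%Y-%m-%d").date() < today:
                overdue.append(subject)
    return unowned, overdue

if __name__ == "__main__":
    unowned, overdue = register_gaps("risk_register_export.csv")
    print(f"{len(unowned)} risks without an owner")
    print(f"{len(overdue)} risks with no upcoming review date")
```

The point isn't the script; it's that the quarterly review has a concrete, repeatable question to answer.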
Level 2 — Managed (basic discipline, project-managed). The program has a register that gets updated, a control set that gets tested on cadence, and a recurring review meeting. SimpleRisk usage typically:
- Risk register populated with 20–100 risks, most with owners and review dates.
- One or two frameworks installed and operating.
- Control tests scheduled per-control, with the cadence honored most months.
- The Risk Management and Compliance dashboards have meaningful content.
- A recurring (monthly or quarterly) program review (see Running a Quarterly Program Review).
The next-step focus at Level 2 is standardizing the practices: write down what "good" looks like for each workflow, train the team to that standard, build the playbooks (see The Incident Management Extra for incident playbook authoring as a Level-2-to-3 step).
Level 3 — Defined (standardized, documented practices). The program has written standards for how each workflow runs and operates against them consistently. SimpleRisk usage typically:
- Document library populated with policies justifying every active control.
- Control-to-document mappings populated (see Managing Policies and Documents).
- Multiple frameworks running with cross-mapping (the SCF Extra often shows up here — see The ComplianceForge SCF Extra).
- Incident response playbooks authored before incidents occur (Incident Management Extra, see What is Incident Response?).
- Per-team segregation of risk visibility working as designed.
- Notifications and the Workflows Extra coordinating event-driven actions (see Notifications and Email Preferences, Understanding SimpleRisk Workflows).
The next-step focus at Level 3 is measuring: what gets measured gets managed. Mean time to remediate, control test pass rates, risk-acceptance rates, incident response cycle times. The SimpleRisk reports for these (Mean Time to Remediate, Audit Remediation Cycle Time, Risk Trend, etc.) become daily reading rather than occasional curiosity.
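The built-in reports compute these figures for you; the arithmetic behind two of them is simple enough to show directly. A minimal sketch, using made-up in-memory records rather than SimpleRisk's actual export format, of control test pass rate and mean time to remediate:

```python
from datetime import date

# Made-up records standing in for exported data; SimpleRisk's built-in reports
# produce these figures for you, this just shows the arithmetic behind them.
test_results = ["Pass", "Pass", "Fail", "Pass", "Fail"]   # one entry per control test run
remediations = [                                          # (risk opened, risk closed)
    (date(2024, 1, 10), date(2024, 2, 1)),
    (date(2024, 1, 15), date(2024, 3, 1)),
    (date(2024, 2, 20), date(2024, 2, 27)),
]

pass_rate = test_results.count("Pass") / len(test_results)
mttr_days = sum((closed - opened).days for opened, closed in remediations) / len(remediations)

print(f"Control test pass rate: {pass_rate:.0%}")         # 60%
print(f"Mean time to remediate: {mttr_days:.1f} days")    # 25.0 days
```

Once the numbers are this easy to produce, the step from Level 3 to Level 4 is making them a standing agenda item rather than a one-off calculation.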
Level 4 — Quantitatively Managed (measured, with statistical control). The program runs on numbers; leadership sees current cycle times, trend lines, comparative metrics. Decisions are data-driven. SimpleRisk usage typically:
- Reports library used regularly (Mean Time to Remediate, Risk Average Over Time, Risk Appetite Report all on a recurring read schedule).
- The Risk Management Dashboard customized for the team's questions.
- The Compliance Dashboard's pass-rate trend monitored monthly.
- Risk appetite thresholds configured and enforced.
- Cross-framework mapping (SCF or UCF Extra) producing efficiency in audit cycles.
- Bulk operations (via the Import/Export Extra or scripted via the v2 API) part of routine workflow.
The next-step focus at Level 4 is continuous improvement: the metrics surface the patterns that point to improvement opportunities. Why is this control's test failing more often this quarter? What makes incident response cycle time longer for the data-exposure category? The team starts asking these questions and acting on the answers.
Level 5 — Optimizing (continuous improvement, organization-wide adaptation). The program adjusts itself based on what the metrics reveal: the workflows evolve, the playbooks update, the training cadence changes, the technology stack evolves. SimpleRisk usage typically:
- Workflows Extra customized substantially for the team's specific processes.
- Custom reports (via the Customization Extra) for organization-specific questions the built-in library doesn't answer.
- API integrations (v2 API used heavily) connecting SimpleRisk to the broader operational tooling.
- AI Extra used for recurring analytical questions (with appropriate skepticism — see Working with SimpleRisk AI).
- Quarterly retrospectives that produce program changes, not just status updates.
The next-step focus at Level 5 is staying there. Optimization isn't a final state; it's a discipline that requires continuous attention. Programs that reach Level 5 and stop investing slip back toward Level 4 as the operational drift accumulates.
Common pitfalls
A handful of patterns recur with maturity progression.
- Skipping levels. A program at Level 1 trying to jump to Level 4 by buying tools and dashboards before the underlying discipline is in place produces a Level 4 appearance with Level 1 reality. The dashboards show empty widgets, the reports show nothing meaningful, the audit conversation goes badly. Build the discipline first; let the tooling support what's already operating.
- Treating maturity as a sales pitch. "We're at Level 5" said by a program that doesn't actually do continuous improvement is a claim auditors and assessors see through within a few questions. The maturity claim has to be defensible by showing the evidence of what each level requires. If you can't produce the evidence, claim a lower level honestly.
- Investing in tooling out of proportion to the team's capacity. A two-person GRC team activating a dozen Extras produces a tooling stack the team can't actually operate. Each Extra has its own maintenance overhead — configurations to keep current, integrations to maintain, learning to absorb. Match the tooling investment to the operational capacity. See When and How to Bring in Extras.
- Ignoring the maturity field on controls. SimpleRisk's framework_controls.maturity field is one of the underused signals that tells the program where each control actually sits. Updating it as part of the test-cycle review (when a test passes consistently, bump the maturity; when it fails, drop it) turns it from decoration into operational signal. Programs that leave the field at default forever are missing the per-control maturity picture. (A sketch of this review pass follows the list.)
- Comparing to other organizations' maturity. A peer organization's claim to be at Level 4 doesn't mean your Level 2 program is failing; it means their operational reality and yours are different. The maturity model is for internal progression measurement, not for inter-organization comparison. Worry about your own next-step focus.
- Stalling at Level 2. Many programs settle at Level 2 (basic discipline, recurring cadence) and stay there for years because the operational pressure to advance isn't visible. Level 2 is genuinely good (the program is working) but the payoff from Level 3 and beyond (cross-framework mapping, playbook-driven response, measured operations) is substantial. Plan for the next stage; don't accept Level 2 as the destination by default.
- Treating "Optimizing" as a finish line. Level 5 isn't where you stop investing; it's where the investment shifts from building capability to maintaining it. Programs that reach Level 5 and reduce investment slip backward as people change roles, regulations evolve, the threat landscape shifts. Optimization is recurring work.
- Not communicating the maturity stage to leadership. Leadership often expects the program to be at a higher maturity than it is, and acts on that assumption. If the program is at Level 2, communicate that clearly along with what Level 3 would require. Leadership can either commit the resources to advance or accept the current level — both are valid responses, but neither is possible if leadership doesn't know where the program actually sits.
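On the maturity-field pitfall above: the bump-on-pass, drop-on-fail review rule is mechanical enough to script. A rough sketch, assuming direct database access, that the maturity column holds CMMI-style integer levels 1 through 5, and that framework_controls has an id primary key; the connection details and the source of recent test outcomes are placeholders, not SimpleRisk specifics:

```python
import pymysql  # assumes direct read/write access to the SimpleRisk MySQL database

# framework_controls.maturity is the field this chapter describes; the id column,
# connection details, and the recent_outcomes input are assumptions. In practice
# you would pull the outcomes from your control test results before running this.
conn = pymysql.connect(host="localhost", user="simplerisk",
                       password="...", database="simplerisk")

def adjust_maturity(control_id, recent_outcomes):
    """Bump maturity after a streak of passes, drop it after a failure (capped 1..5)."""
    with conn.cursor() as cur:
        cur.execute("SELECT maturity FROM framework_controls WHERE id = %s", (control_id,))
        row = cur.fetchone()
        if row is None:
            return
        maturity = row[0] or 1
        if len(recent_outcomes) >= 3 and all(o == "Pass" for o in recent_outcomes):
            maturity = min(maturity + 1, 5)       # consistent passes: bump one level
        elif "Fail" in recent_outcomes:
            maturity = max(maturity - 1, 1)       # any failure: drop one level
        cur.execute("UPDATE framework_controls SET maturity = %s WHERE id = %s",
                    (maturity, control_id))
    conn.commit()

adjust_maturity(42, ["Pass", "Pass", "Pass"])     # hypothetical control id and outcomes
```

Whether you would actually write to the database directly, rather than updating the field through the UI as part of the test review, depends on your change-control posture; the sketch captures the rule, not the recommended access path.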