
10.02 Running a Quarterly Program Review

A working agenda for the quarterly program review meeting — which dashboards to read, which reports to walk, which decisions to bring to leadership, and how to capture the outputs so the next quarter starts with last quarter's commitments visible.

Why this matters

The quarterly program review is the meeting that turns a year of operational work into the conversation leadership needs to make program-level decisions. Without it, the program produces a continuous stream of activity (risks reviewed, controls tested, incidents responded to) without ever pulling up to the altitude where leadership can ask the questions only leadership can answer — what's our risk appetite, are we investing in the right places, do we need to add capacity, are we satisfying the audits we're committed to. The dashboards exist in part to make the review tractable; the review exists in part to make the dashboards meaningful.

The trap is treating the review as a status update. A meeting where the GRC team reads dashboards aloud to a quiet leadership group produces no decisions and no follow-up. The review is operational only when leadership engages — asks the questions, signals priorities, accepts (or rejects) the program's recommendations, commits the resources the next quarter needs. The discipline of the meeting structure is what makes the engagement happen.

The third thing worth knowing: the quarterly cadence is what most working programs converge on. Some run monthly; some run semi-annually; some run "as-needed" (which usually means never). The right cadence depends on the program's maturity (a Level 2 program needs more frequent review to keep momentum; a Level 4 program can sustain quarterly), the regulatory environment (SOC 2 and ISO 27001 expect documented management review), and the leadership audience (a board typically reviews quarterly; an audit committee may review monthly during pre-audit windows). Pick a cadence the program can sustain; don't promise weekly and deliver monthly.

The fourth thing: SimpleRisk's surfaces are designed to support the review without requiring the GRC team to assemble bespoke materials each time. The Risk Management Dashboard, the Compliance Dashboard, the standard reports, and the per-team views together cover most of the questions the review will surface. Programs that rebuild slide decks from scratch each quarter are doing twice the work — read directly from the dashboards and reports during the meeting, supplement only where the standard surfaces don't answer the question.

Before you start

Have these in hand at least a few days before the review meeting:

  • A defined audience. Quarterly reviews can serve different audiences — the board, the audit committee, the executive risk committee, the security leadership team. The audience determines the right level of detail (board wants headlines and decisions; security leadership wants operational detail). Don't try to serve all audiences with one meeting.
  • An agenda the audience has seen in advance. A meeting agenda emailed thirty minutes before the meeting produces unprepared attendees. Send the agenda at least a week ahead, with links to the relevant dashboards/reports the meeting will reference.
  • A defined output format. Decisions made in the meeting need to be captured somewhere — minutes, action items, decision log. Pick a format the program uses consistently; answering "what did we decide last quarter?" requires those decisions to be findable.
  • The program's data current as of the meeting. Risks not reviewed in months, control tests not completed for the cycle, incidents without lessons-learned captured — these all distort the review. Schedule the data-currency push (the discipline of Tracking Risks Over Time, the test cycle cleanup) to land in the days before the meeting, not during it. A quick check like the sketch after this list can flag the stragglers.
  • Captured decisions and action items from the prior review. A review meeting that doesn't reference the prior meeting's commitments is one of two things: a fresh start, or a meeting where prior commitments are quietly being dropped. The follow-up on prior commitments should be the first substantive agenda item.
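
A minimal sketch of that pre-meeting data-currency check, run directly against the SimpleRisk database. The table and column names (risks, mgmt_reviews, mitigation_id) and the 'Closed' status value reflect SimpleRisk's MySQL schema but are assumptions here; verify them against your instance before trusting the output.

```python
# Pre-meeting data-currency check (schema names are assumptions; verify
# against your SimpleRisk install before use).
import pymysql

STALE_DAYS = 90  # flag open risks whose last review is older than the cadence

conn = pymysql.connect(host="localhost", user="simplerisk",
                       password="CHANGE_ME", database="simplerisk")
with conn.cursor() as cur:
    # Open risks whose most recent management review is stale or missing.
    cur.execute("""
        SELECT r.id
        FROM risks r
        LEFT JOIN mgmt_reviews m ON m.risk_id = r.id
        WHERE r.status != 'Closed'
        GROUP BY r.id
        HAVING COALESCE(MAX(m.submission_date), '1970-01-01')
               < DATE_SUB(NOW(), INTERVAL %s DAY)
    """, (STALE_DAYS,))
    stale = [row[0] for row in cur.fetchall()]

    # Open risks with no mitigation plan recorded.
    cur.execute("""
        SELECT id FROM risks
        WHERE status != 'Closed'
          AND (mitigation_id IS NULL OR mitigation_id = 0)
    """)
    unplanned = [row[0] for row in cur.fetchall()]

print(f"{len(stale)} open risks with stale or missing reviews: {stale}")
print(f"{len(unplanned)} open risks without mitigation plans: {unplanned}")
```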

Step-by-step

1. Open with prior-quarter commitments

The first agenda item is "what did we commit to last quarter, and what happened with each commitment?" For each commitment:

  • Done — note the completion, briefly describe the outcome.
  • In progress — describe where it stands, what's left, when it will be done.
  • Not started — describe why, decide whether to recommit or to drop.
  • Dropped — explain why and document the reason.

This sets the tone that commitments matter. Programs that skip this step end up making the same recommendations meeting after meeting because nobody's tracking the follow-through.
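
One lightweight way to keep last quarter's commitments findable is a plain machine-readable log written at the close of each review. This is an illustrative format, not a SimpleRisk feature; the file name and field names are hypothetical, and the statuses mirror the four outcomes above.

```python
# Print last quarter's commitments grouped by status, ready to read out
# as the first agenda item. File and field names are hypothetical.
import json
from collections import defaultdict

LOG_PATH = "2025-Q2-commitments.json"

with open(LOG_PATH) as f:
    # Expected shape: [{"item": ..., "owner": ..., "status": ..., "note": ...}]
    commitments = json.load(f)

by_status = defaultdict(list)
for c in commitments:
    by_status[c["status"]].append(c)

for status in ("done", "in_progress", "not_started", "dropped"):
    print(f"\n{status.upper()} ({len(by_status[status])})")
    for c in by_status[status]:
        print(f"  - {c['item']} (owner: {c['owner']}): {c.get('note', '')}")
```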

2. Walk the Risk Management posture

Open the Risk Management Dashboard (/reports/risk_management_dashboard.php) and walk the four default widgets:

  • Open vs Closed — is the register growing or being worked down? What's the trajectory?
  • Mitigation Planned vs Unplanned — what fraction of open risks have plans? Is the planning queue clearing?
  • Reviewed vs Unreviewed — how is the review cadence holding? Are new submissions getting their first review on time?
  • Risks by Month — what's the submission trend? Any anomalies (a spike from a recent assessment, a trough from team capacity issues)?

Then surface the specific items that need leadership attention:

  • High Risk Report (/reports/high.php) — high and very high risks across the org. Are any in need of leadership-level decisions (formal risk acceptance, additional resource commitment, escalation)?
  • All Open Risks Needing Review (/reports/review_needed.php) — what's the past-due count? Is the cadence slipping? What's needed to recover?
  • Risk Appetite Report (/reports/risk_appetite.php) — are we within tolerance? If not, what specifically is over and what's the plan?

The full report library is in The Built-In Reports; the four reports above are the minimum quarterly set.
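
If it helps to have the four widget figures as plain numbers in the minutes, a query sketch in the same spirit works. The same caveat as the pre-meeting check applies: the table, column, and status names are assumptions about SimpleRisk's schema to verify against your instance.

```python
# Pull the Risk Management Dashboard figures as plain numbers (schema
# names are assumptions; verify against your SimpleRisk install).
import pymysql

conn = pymysql.connect(host="localhost", user="simplerisk",
                       password="CHANGE_ME", database="simplerisk")
with conn.cursor() as cur:
    cur.execute("SELECT SUM(status != 'Closed'), SUM(status = 'Closed') FROM risks")
    open_count, closed_count = cur.fetchone()

    cur.execute("""
        SELECT SUM(mitigation_id > 0),
               SUM(mitigation_id IS NULL OR mitigation_id = 0)
        FROM risks WHERE status != 'Closed'
    """)
    planned, unplanned = cur.fetchone()

    cur.execute("""
        SELECT COUNT(DISTINCT m.risk_id)
        FROM mgmt_reviews m JOIN risks r ON r.id = m.risk_id
        WHERE r.status != 'Closed'
    """)
    reviewed = cur.fetchone()[0]

    cur.execute("""
        SELECT DATE_FORMAT(submission_date, '%Y-%m') AS month, COUNT(*)
        FROM risks GROUP BY month ORDER BY month DESC LIMIT 12
    """)
    by_month = cur.fetchall()

print(f"Open vs Closed: {open_count} / {closed_count}")
print(f"Mitigation planned vs unplanned (open): {planned} / {unplanned}")
print(f"Reviewed (open): {reviewed} of {open_count}")
for month, n in reversed(by_month):
    print(f"  {month}: {n} submitted")
```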

3. Walk the Compliance posture

Open the Compliance Dashboard (/reports/compliance_dashboard.php):

  • Controls by Framework — distribution. Any framework with disproportionately few controls might need attention (or might legitimately just reflect a smaller framework).
  • Pass Rate Trend — is the rate steady, improving, declining? What does the trajectory tell us about the program?
  • Pass/Fail Distribution — current breakdown. The Inconclusive segment in particular deserves attention — see Control Tests and Evidence Collection.

Then drill to specific compliance items:

  • Audit Timeline (/reports/audit_timeline.php) — what's coming up? Which audits are on next quarter's calendar? Are we ready?
  • Audit Remediation Cycle Time (/reports/audit_remediation_cycle_time.php) — how long is it taking us to close audit findings? Trend?
  • Control Gap Analysis (/reports/control_gap_analysis.php) — which controls are missing tests or owners, or are out of date? What's the plan for closing the gaps?
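
For the pass-rate trend, the same number is easy to recompute from an export when you want it in the minutes rather than on screen. A sketch, assuming a CSV export with hypothetical test_date and result columns:

```python
# Pass-rate trend by month from a test-result export. The file name and
# column names (test_date, result) are hypothetical; map to your export.
import csv
from collections import defaultdict

totals = defaultdict(lambda: [0, 0])  # month -> [passes, total]
with open("control_test_results.csv") as f:
    for row in csv.DictReader(f):
        month = row["test_date"][:7]  # "YYYY-MM" from an ISO date
        totals[month][1] += 1
        if row["result"].strip().lower() == "pass":
            totals[month][0] += 1

for month in sorted(totals):
    passes, total = totals[month]
    print(f"{month}: {passes}/{total} ({100 * passes / total:.0f}% pass)")
```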

4. Walk the Governance posture

Open the Governance Dashboard and the related reports:

  • Document Program Report (/reports/document_program_report.php) — how many policies, how many overdue for review, how many pending approval. The document program is operating well when the overdue count stays low.
  • Exception Report (/reports/exception_report.php) — open exceptions, expiration timeline, approval status. Are exceptions accumulating? Are renewals being handled?

Surface to leadership:

  • Exceptions that need formal acceptance. The renewal cycle is leadership's chance to push back — "we approved this exception twice already; what's the actual plan to close the underlying gap?"
  • Policy decisions that need leadership input. New policies needing approval, retired policies needing sign-off, framework adoption decisions.
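
Exception accumulation is easy to quantify before the meeting. A sketch that flags exceptions expiring before the next quarterly review, assuming an export with hypothetical name and expires columns:

```python
# Flag exceptions expiring before the next review so renewals reach the
# agenda. File and column names (name, expires) are hypothetical.
import csv
from datetime import date, timedelta

horizon = date.today() + timedelta(days=92)  # roughly one quarter out

with open("exceptions.csv") as f:
    for row in csv.DictReader(f):
        expires = date.fromisoformat(row["expires"])
        if expires <= horizon:
            print(f"{row['name']} expires {expires}: bring renewal or closure to leadership")
```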

5. Review incident activity

If the Incident Management Extra is active:

  • Number of incidents this quarter, by severity and category. What does the volume look like? Any patterns?
  • Mean time from identification to closure. Trend?
  • Incidents that surfaced new risks. Did the post-incident reviews produce additions or changes to the risk register? See From Incident to Risk and Back.
  • Lessons learned worth highlighting. What did the program learn this quarter that should change how we operate?

For programs without the Extra, an out-of-band incident summary (from the SOC's own reporting) covers the same ground.
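
Mean time from identification to closure is straightforward to compute from whatever incident export you have. A sketch with hypothetical record fields:

```python
# Mean time to closure for the quarter's incidents. The record shape
# (identified/closed dates) is hypothetical; adapt to your export.
from datetime import date

incidents = [
    {"id": 1, "identified": date(2025, 4, 2),  "closed": date(2025, 4, 9)},
    {"id": 2, "identified": date(2025, 5, 14), "closed": date(2025, 5, 16)},
    {"id": 3, "identified": date(2025, 6, 1),  "closed": date(2025, 6, 20)},
]

durations = [(i["closed"] - i["identified"]).days for i in incidents]
print(f"Incidents closed this quarter: {len(durations)}")
print(f"Mean time to closure: {sum(durations) / len(durations):.1f} days")
```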

6. Surface the cross-cutting items

Some items don't fit neatly into Risk, Compliance, Governance, or Incident — they cut across:

  • Vendor and third-party risk. Recent vendor assessments completed (see Third-Party and Vendor Risk Assessments), notable findings.
  • Vulnerability management trends. Volume coming through the Vulnerability Management Extra (if installed), patterns by scanner platform, average elevation rate.
  • Program metrics. Risk Average Over Time, Mean Time to Remediate, control test pass rate trend — quarterly comparison to prior quarters.
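
The quarterly comparison itself is trivial once the metrics are written down each quarter. A sketch with placeholder values read off the dashboards:

```python
# Quarter-over-quarter deltas for a few program metrics. The values are
# placeholders for numbers read off the dashboards and reports.
prior   = {"risk_average": 6.8, "mttr_days": 41.0, "test_pass_rate": 0.87}
current = {"risk_average": 6.3, "mttr_days": 35.0, "test_pass_rate": 0.91}

for metric, prev in prior.items():
    curr = current[metric]
    print(f"{metric}: {prev} -> {curr} ({curr - prev:+.2f})")
```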

7. Surface decisions and recommendations

The middle of the meeting is information; the end is decisions. For each recommendation the program is bringing to leadership, frame it as:

  • The situation. What we're seeing in the data.
  • The recommendation. What we recommend doing about it.
  • The trade-off. What we'd be giving up to do it (resources, opportunity cost, alternative approach).
  • The decision needed. What specifically we need leadership to approve, reject, or modify.

Common decision categories:

  • Risk acceptance. Specific high-residual risks the program is asking leadership to formally accept rather than treat further.
  • Resource investment. Headcount, tooling (specific Extras), training that the program needs to advance.
  • Scope changes. New frameworks to adopt, old frameworks to retire, scope expansion or contraction.
  • Policy updates. Policy changes needing executive sign-off.
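
The four-part framing can be captured as a simple record so every recommendation arrives at the meeting in the same shape. An illustrative structure, not a SimpleRisk feature:

```python
# Illustrative record for a recommendation brought to leadership; the
# example values are hypothetical.
from dataclasses import dataclass

@dataclass
class Recommendation:
    situation: str        # what we're seeing in the data
    recommendation: str   # what we recommend doing about it
    trade_off: str        # what we'd give up to do it
    decision_needed: str  # what leadership must approve, reject, or modify

rec = Recommendation(
    situation="Past-due review count doubled over the quarter",
    recommendation="Add one analyst to the review rotation",
    trade_off="Pulls capacity from the vendor-assessment backlog",
    decision_needed="Approve the reassignment for next quarter",
)
```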

8. Capture commitments and close

Before leaving the meeting, document:

  • Decisions made. What was approved, rejected, modified.
  • Action items. Specific things to do, with owners and deadlines.
  • Open questions. Things that need follow-up before the next review.

Send the captured outputs within 24 hours of the meeting; review the outputs at the start of the next quarter's review.
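
Capturing the outputs in a structured record during the meeting makes the 24-hour send mechanical. An illustrative shape, not a SimpleRisk feature:

```python
# Illustrative structure for the meeting's outputs; render() produces the
# text for the follow-up email and the next quarter's opening agenda item.
from dataclasses import dataclass, field

@dataclass
class ActionItem:
    item: str
    owner: str
    deadline: str  # ISO date

@dataclass
class ReviewOutputs:
    quarter: str
    decisions: list[str] = field(default_factory=list)
    actions: list[ActionItem] = field(default_factory=list)
    open_questions: list[str] = field(default_factory=list)

    def render(self) -> str:
        lines = [f"Quarterly review outputs: {self.quarter}", "", "Decisions:"]
        lines += [f"  - {d}" for d in self.decisions]
        lines += ["", "Action items:"]
        lines += [f"  - {a.item} (owner: {a.owner}, due {a.deadline})"
                  for a in self.actions]
        lines += ["", "Open questions:"]
        lines += [f"  - {q}" for q in self.open_questions]
        return "\n".join(lines)
```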

9. Schedule the next review

Schedule the next quarterly review on the calendar before everyone leaves the room. The discipline of having the next meeting on the calendar is what makes the cadence sustainable.
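
If the calendar hold goes out as a file, a minimal iCalendar event is easy to generate. A sketch; the date offset, time, summary, and UID domain are placeholders:

```python
# Write a minimal iCalendar hold for the next quarterly review. The date
# offset, time, and UID domain are placeholders.
import uuid
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
start = (now + timedelta(days=91)).replace(hour=15, minute=0,
                                           second=0, microsecond=0)

ics = "\r\n".join([
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "PRODID:-//grc-program//quarterly-review//EN",
    "BEGIN:VEVENT",
    f"UID:{uuid.uuid4()}@example.org",
    f"DTSTAMP:{now.strftime('%Y%m%dT%H%M%SZ')}",
    f"DTSTART:{start.strftime('%Y%m%dT%H%M%SZ')}",
    f"DTEND:{(start + timedelta(hours=1)).strftime('%Y%m%dT%H%M%SZ')}",
    "SUMMARY:Quarterly Program Review",
    "END:VEVENT",
    "END:VCALENDAR",
]) + "\r\n"

with open("next-quarterly-review.ics", "w", newline="") as f:
    f.write(ics)
```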

Common pitfalls

A handful of patterns recur with quarterly reviews.

  • Reading dashboards aloud without engagement. The single most common failure mode. The GRC team walks through the dashboards, leadership listens politely, no decisions are made, the meeting ends. The review is operational only when leadership engages — asks questions, pushes back, makes decisions. Build the agenda to prompt engagement (specific decisions to surface, specific recommendations to evaluate) rather than to inform passively.

  • Skipping the follow-up on prior commitments. Programs that don't open with prior-quarter commitments train leadership to treat each commitment as ephemeral. The follow-up is what makes commitments matter. Even when the news is "we didn't do what we committed to," report it honestly; the alternative is a program that quietly drops commitments without anyone noticing.

  • Bringing only good news. A review meeting where everything looks great every quarter is suspicious. Real programs have problems; surfacing them honestly is what builds leadership trust. A review that buries failures in the margins of the report produces a leadership audience that stops trusting the report.

  • Not preparing the data. Walking into the review with stale risks, missed test cycles, and uncaptured incidents produces a review where the GRC team spends the meeting explaining why the data isn't current. Land the data-currency push before the meeting.

  • Inviting too many people. A review meeting with 25 attendees can't make decisions — too much coordination cost, too much "let's take this offline." Keep the room small (the people who actually need to make decisions, plus the GRC team presenting). Send the captured outputs to a broader audience after.

  • No defined outputs. Meetings without a defined output format produce no operational follow-through. The outputs (decisions, action items, open questions) are the meeting's product; without them, the meeting is just talking.

  • Postponing reviews because "nothing has changed." The meeting cadence is what keeps the program coherent over time. Skipping a quarter because "nothing major is happening" produces a six-month gap in leadership engagement and accumulating drift between the program's reality and leadership's understanding of it. Hold the meeting even when it's brief.

  • Treating the review as the only leadership touchpoint. The quarterly review is for the major decisions; the smaller decisions and the urgent escalations need their own channels (Slack, email, ad-hoc meetings). A program that batches every leadership communication into the quarterly review is too slow for the operational pace.

  • Ignoring the meeting's role in maturity advancement. The review is one of the operational disciplines that distinguishes higher-maturity programs (see The GRC Maturity Model). A program that runs quarterly reviews well is operationally at Level 3 or beyond on that practice; one that doesn't is at Level 2 on it. The discipline matters.
