08.01 What Makes a Good GRC Report?
A good GRC report answers the question its audience actually has, with current data, presented at the level of detail the audience can act on. Here's how those needs differ across audiences and how SimpleRisk's reports and dashboards fit each one.
Why this matters
A GRC program produces a stream of artifacts (risk register entries, control test results, compliance posture, incident records, exception approvals) that aren't useful to leadership in raw form. The report is what turns the stream into a question-answering surface. A report nobody reads is the same as no report; a report read by the wrong audience is worse than no report because it produces decisions made on misunderstood data.
The trap is treating "report" as a single category. The board wants something different from what the SOC manager wants from what the auditor wants. A 60-page risk register dump is the right artifact for the auditor and exactly the wrong one for the board. A one-page dashboard with three numbers is the right artifact for the board and exactly the wrong one for an auditor verifying control operation. The discipline of GRC reporting is picking the right artifact for the audience — which means knowing the audiences before you build the artifacts.
The third thing worth knowing: GRC reports stand or fall on whether the underlying data is current. A beautifully formatted risk-trend chart that pulls from a register that hasn't been reviewed in six months is a lie told confidently. The discipline of keeping the register current (covered in Tracking Risks Over Time) is what makes the reporting layer trustworthy. Dashboards don't fix bad data; they make bad data look authoritative.
The fourth thing: SimpleRisk distinguishes reports (single-question detailed views, usually a table or a chart) from dashboards (multi-widget summary views aggregating multiple questions). The product surfaces both through the Reports Hub at /reports/dashboards.php, with three dashboards (Risk Management, Compliance, Governance) and 30+ reports. The choice of which to use depends on the question and the audience.
How frameworks describe this
Most major frameworks specify reporting requirements as part of the broader GRC program rather than as a standalone discipline.
- NIST Cybersecurity Framework (CSF) v2.0 under the Govern function (GV.OV-01, GV.OV-02, GV.OV-03) requires that risk-management performance be reviewed by the organization, with findings reported and used to improve the program. The CSF treats reporting as a feedback mechanism feeding the next governance cycle, not as an end in itself.
- NIST SP 800-53 PM-9 (Risk Management Strategy) and PM-31 (Continuous Monitoring Strategy) require the program to communicate risk-management posture to senior leadership. AU-6 (Audit Record Review, Analysis, and Reporting) covers the audit-trail side: events have to be reviewed, analyzed, and reported. The federal model assumes reporting at multiple levels (operational dashboards for the SOC, periodic readouts for senior leadership, ad-hoc reports for incident response).
- ISO/IEC 27001 clause 9.1 (Monitoring, measurement, analysis, and evaluation) and clause 9.3 (Management Review) require that the organization measure ISMS performance and report results to top management at planned intervals. The ISO model is explicit that reporting is the input to management review, which is the input to continuous improvement.
- SOC 2 doesn't prescribe specific reports, but the audit produces a SOC 2 report itself — and the underlying control operation is evaluated against documented evidence that the program reports against throughout the period.
- PCI DSS v4.0 requires multiple specific reports: vulnerability scan reports (Requirement 11.3), file integrity monitoring reports (10.5), access reviews (8.2.5), incident response reports (12.10.4). The PCI model is prescriptive about which reports the program has to produce.
The takeaway across all five: reporting is required, the audiences are tiered, and the reports produced have to be defensible against a reader who'll question them. The frameworks don't tell you what tools to use; they tell you what questions the program has to answer, on what cadence, to what audience.
How SimpleRisk implements this
SimpleRisk separates reports from dashboards both conceptually and in the data model. The Reports Hub (/reports/dashboards.php) is the entry point for both, with category filters (All, Favorites, Risk Management, Compliance, Governance) that limit the visible tiles to one domain.
Dashboards are multi-widget summary views built on the UILayout widget framework. Each dashboard has a named layout (overview for Risk Management, compliance_dashboard for Compliance, governance_dashboard for Governance) with a configurable set of widgets in $ui_layout_config. Users can add, remove, and rearrange widgets, and those choices are saved per user; the dashboard's default layout is what new users see (a rough sketch of a layout entry follows the list below). The three Core dashboards:
- Risk Management Dashboard (/reports/risk_management_dashboard.php): four default widgets covering open-vs-closed risks, mitigation-planning coverage, review coverage, and submission-rate trend. Full walk-through in The Risk Dashboard.
- Compliance Dashboard (/reports/compliance_dashboard.php): three default widgets covering controls-by-framework, pass-rate trend, and pass/fail distribution, with a framework-filter dropdown. Full walk-through in The Compliance Dashboard.
- Governance Dashboard (/reports/governance_dashboard.php): covers document program status, exception state, and control-framework breakdowns.
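To make the layout idea concrete, here is a rough sketch of what a dashboard layout entry in $ui_layout_config might look like. The array keys and widget names are illustrative guesses (they mirror the four default Risk Management widgets described above), not copied from SimpleRisk's source; the real structure lives in the SimpleRisk codebase and may be shaped differently.

```php
<?php
// Hypothetical sketch of a dashboard layout definition in the spirit of
// $ui_layout_config. Keys and widget names are illustrative, not SimpleRisk's
// actual structure.
$ui_layout_config = $ui_layout_config ?? [];
$ui_layout_config['overview'] = [
    'name'    => 'Risk Management Dashboard',
    'widgets' => [
        // Each widget is a named tile with a grid position so a per-user
        // arrangement can be saved and restored independently of the default.
        ['id' => 'open_vs_closed',      'row' => 0, 'col' => 0, 'width' => 6],
        ['id' => 'mitigation_coverage', 'row' => 0, 'col' => 6, 'width' => 6],
        ['id' => 'review_coverage',     'row' => 1, 'col' => 0, 'width' => 6],
        ['id' => 'submission_trend',    'row' => 1, 'col' => 6, 'width' => 6],
    ],
];
```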
Reports are single-question detail views, mostly built as standalone PHP pages under /reports/. Each report is registered in simplerisk/includes/reports_catalog.php with a label, description, path, kind (report), tags (which domain it belongs to), and the permissions required to view it. Permissions are enforced by session checks (check_riskmanagement, check_compliance, check_governance, check_asset); the Reports Hub filters the visible reports to those the user can view.
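The fields listed above suggest a registration entry shaped roughly like the following. This is a hedged sketch assuming an array-of-arrays catalog; the key names track the fields named in the text, but the exact structure, the report path, and the permission value are illustrative rather than verified against the real catalog.

```php
<?php
// Hedged sketch of a report registration in the spirit of
// simplerisk/includes/reports_catalog.php. Field names follow the text
// (label, description, path, kind, tags, permissions); the path and values
// here are illustrative only.
$reports_catalog[] = [
    'label'       => 'Control Gap Analysis',
    'description' => 'Controls mapped against a framework, highlighting untested or failing items.',
    'path'        => '/reports/control_gap_analysis.php', // illustrative path
    'kind'        => 'report',                            // as opposed to 'dashboard'
    'tags'        => ['compliance'],                      // domain the Reports Hub filters on
    'permissions' => ['check_compliance'],                // session check gating visibility
];
```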
The full report library is documented in The Built-In Reports — there are 30+ reports across the four domains.
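Building on that catalog sketch, the permission filtering the Reports Hub applies could look something like the pass below. Treating each permission name as a callable is an assumption made for the sketch; the real enforcement in SimpleRisk (the session checks such as check_riskmanagement and check_compliance) may be wired differently, but the effect is the same: only the tiles the current user's role allows get rendered.

```php
<?php
// Illustrative visibility pass in the spirit of the Reports Hub: keep only
// catalog entries whose permission checks the current user passes. The
// callable-based check is an assumption for this sketch, not SimpleRisk's
// actual mechanism.
$visible_reports = array_filter($reports_catalog, function (array $entry): bool {
    foreach ($entry['permissions'] as $check) {
        if (!is_callable($check) || !$check()) {
            return false; // hide the tile if any required check fails
        }
    }
    return true;
});
```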
The audience-to-artifact mapping that most working programs converge on:
- Board / executive leadership — the dashboards (especially the Risk Management Dashboard's headline metrics) plus a one-page exception-and-acceptance summary for the items that need formal sign-off.
- Audit / compliance team — the Compliance Dashboard for posture, plus the Audit Timeline, Dynamic Audit Report, and Control Gap Analysis reports for evidence trails.
- Security operations — the operational reports (All Open Risks Needing Review, High Risk Report, Mitigations By Date, Submitted Risks By Date) for the daily-and-weekly cadence work.
- Risk owners — All Open Risks Assigned to Me by Risk Level for the personal view.
- External auditors — depends on the framework, but usually the Compliance Dashboard plus drill-down into specific controls and their evidence.
A few load-bearing properties of SimpleRisk's reporting layer worth knowing:
- Permission-filtered visibility. Each report's permissions determine who can see it. The Reports Hub respects this — users see only the reports their role grants. This means audience targeting works partly through role assignment.
- No built-in scheduled distribution. Core SimpleRisk doesn't ship an "email this report weekly" surface. Programs that need recurring distribution either layer on the Notification Extra, build a custom cron job (a rough sketch follows this list), or accept that distribution is manual (someone screenshots or exports and forwards).
- No public-link sharing for reports. Reports require login. Unlike assessment questionnaires (which use tokenized public URLs), there's no equivalent for reports — sharing with a non-SimpleRisk reader means exporting the data first.
- Per-report exports are sparse. Only the Dynamic Risk Report has a built-in CSV download (and that's gated by the Import/Export Extra). Most other reports are view-only; for distribution, browser print-to-PDF or screenshot is the typical path. Article 04 covers the export options in detail.
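For programs that do build their own recurring distribution, the usual shape is a small CLI script on a cron schedule that pulls a few numbers straight from the database and mails a summary. The sketch below is entirely illustrative: the database, table, and column names are assumptions about the SimpleRisk schema, the recipient and URL are placeholders, and bare mail() stands in for whatever mailer the deployment actually uses.

```php
<?php
// weekly_risk_summary.php — a hypothetical cron-driven distribution script.
// Example cron entry:  0 7 * * MON  php /opt/simplerisk/scripts/weekly_risk_summary.php
// Table and column names below are assumptions, not confirmed SimpleRisk schema.

$pdo = new PDO('mysql:host=localhost;dbname=simplerisk', 'report_reader', getenv('DB_PASS') ?: '');

// Count open risks; treating anything not marked 'Closed' as open is an assumption.
$open = (int) $pdo->query("SELECT COUNT(*) FROM risks WHERE status != 'Closed'")->fetchColumn();

$asOf = date('Y-m-d H:i');
$body = "Risk summary as of {$asOf}\n"
      . "Open risks: {$open}\n"
      . "Full detail: https://simplerisk.example.com/reports/dashboards.php\n";

// mail() is the simplest transport; a production job would use a configured mailer.
mail('grc-leadership@example.com', "Weekly risk summary ({$asOf})", $body);
```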
Common pitfalls
A handful of patterns recur with GRC reporting.
- One report for every audience. The single biggest failure pattern. A program that produces a single 40-page register dump and sends it to the board, the audit team, and the security team is doing all three audiences a disservice; none of them will read it usefully. The discipline is audience-specific reporting: the same underlying data, presented at the level of detail the audience can act on.
- Dashboards as the reporting strategy. Dashboards are great for the daily check-in and the executive summary, but they don't replace detail reports. A program whose entire reporting story is "we have dashboards" can't answer "show me every risk past its review date" or "list all open exceptions with their expiration dates" without dropping into report-level detail. Use both; they answer different questions.
- Reporting on data nobody curates. A dashboard that pulls from a register where 60% of entries haven't been reviewed in six months is reporting on a snapshot the program has stopped maintaining. The dashboard looks current; the data isn't. Curate the underlying data on cadence (review reminders, the mitigation pipeline, the test cycle) before relying on the reports built on top.
- Reports without a defined audience. "Generate a report" without a named audience produces an artifact nobody owns and nobody reads. Every report should have a named audience (the board, the audit committee, the SOC manager, the control owner) and a named reader (the human who's expected to read each delivery). Without that, the report is busywork.
- Conflating compliance posture with risk posture. A compliance dashboard showing "100% of controls passing tests" doesn't mean the program has zero risk; the compliance posture and the risk posture answer different questions. A program reporting only on compliance posture leaves leadership thinking the program is comprehensively healthy when there may be open high-severity risks the controls don't address. Report on both, distinctly.
- No date stamping on the report. A report distributed to leadership without a clear "as of" date encourages readers to assume it's current as of when they're reading it, which may be weeks after the data was pulled. Stamp every distributed report with the data's "as of" timestamp; the small discipline saves the conversation about why the numbers don't match what someone saw yesterday.
- Treating exports as the report. A CSV pulled from the Dynamic Risk Report and emailed to a stakeholder is data, not a report. The reader has to interpret it; without the framing, the prioritization, and the recommended actions, the export is raw input. For executive distribution, package the export with at least a one-paragraph summary and a recommended decision; the export is the supporting detail.
- Skipping the audit trail on report distributions. When a report is shared externally (with an auditor, with a regulator, with a major customer doing due diligence), record the distribution somewhere. The shared artifact becomes a reference point in future conversations; "the report we sent in March" needs to be findable when March's auditor questions it in October. One lightweight way to keep that trail is sketched below.
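A small helper can cover both of the last two pitfalls at once: append one line per distribution to a log the program controls, and return the "as of" footer to stamp on the artifact itself. Everything here is an illustration of the convention, not a SimpleRisk feature; the function name, log path, and column layout are assumptions.

```php
<?php
// Hypothetical helper: stamp a distributed report and record where it went.
// Not a SimpleRisk feature; a local convention a program might adopt.

function record_distribution(string $report, string $recipient, string $asOf): string
{
    // One line per distribution so "the report we sent in March" stays findable.
    // CSV columns: sent_at, data_as_of, report, recipient.
    $line = sprintf("%s,%s,%s,%s\n", date('c'), $asOf, $report, $recipient);
    file_put_contents('/var/log/grc/report_distributions.csv', $line, FILE_APPEND | LOCK_EX);

    // Footer to stamp on the distributed artifact.
    return "Data as of {$asOf}. Distributed " . date('Y-m-d') . " to {$recipient}.";
}

// Usage example (values are placeholders):
echo record_distribution('Dynamic Risk Report (CSV export)', 'external-auditor@example.com', '2024-03-04 09:00');
```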