06.01 What is a GRC Assessment?
An assessment is a structured way to ask questions and capture defensible answers about controls, risks, vendors, or compliance. Here's how assessments differ from audits, what kinds the major frameworks expect, and how SimpleRisk's Assessments Extra implements them.
Why this matters
An assessment is the GRC discipline of asking questions, capturing the answers in a defensible form, and acting on what the answers reveal. The questions might be self-asked ("how is our backup process operating?"), addressed to a control owner ("describe the access-review process for the production database"), sent to a vendor ("what's your SOC 2 status?"), or required by a framework ("how does your program meet ISO 27001 A.5.15?"). The structured-questions-and-answers shape is what distinguishes an assessment from "asking around in Slack" — the assessment is repeatable, auditable, and produces an artifact the program can point to later.
The trap is conflating assessment with audit. They overlap but they're not the same. An audit is an external evaluation against a defined standard, conducted by an independent party with the authority to issue a formal opinion (a SOC 2 report, an ISO 27001 certification, a PCI compliance attestation). An assessment is the broader, lower-stakes activity of asking questions to understand current state. It can feed an audit (a self-assessment as the first pass before the auditor arrives), but it isn't itself the audit. A program that does assessments throughout the year and then survives an audit at year-end is doing both correctly; a program that does no assessments and runs only the audit produces a posture that's only true on the audit dates.
The other thing worth knowing: assessments scale through repetition, not through volume per assessment. A 200-question questionnaire produces a 12% response rate; three 30-question questionnaires get answered. Long assessments are theater that produces no usable data. The skill is in writing the right questions (short, answerable, and tied to a decision the program will make from the answer) and in repeating the assessment cycle so the data stays current.
One last practical note: SimpleRisk implements assessments through the Assessments Extra, which is dual-labeled in the product (it shows as "Risk Assessment Extra" on the admin extras list and as "Assessments Extra" once you're inside the management page). Both names refer to the same Extra. If your install doesn't show an Assessments sub-menu in the sidebar, the Extra isn't activated.
How frameworks describe this
Most major frameworks expect periodic assessments as part of an operating program. The shape and frequency vary, but the requirement to ask the questions is consistent.
- ISO/IEC 27001 clause 9.2 (Internal Audit) requires that the organization conduct internal audits at planned intervals to provide information on whether the ISMS conforms to requirements and is effectively implemented. Most ISO programs satisfy 9.2 in part through assessment activities (self-assessments by control owners, third-party assessments by external assessors who aren't the certification auditors, periodic management reviews). The standard itself prescribes only "planned intervals"; annual internal audits are the common convention, with assessments on a shorter cadence underneath (quarterly is common).
- NIST Cybersecurity Framework (CSF) v2.0 under GV.OV-01 calls for reviewing strategy, performance, and risk-management oversight against organizational outcomes; GV.SC-07 specifically covers risk assessment of suppliers, services, and products. The CSF treats assessment as a recurring governance activity, not a one-time campaign.
- SOC 2 doesn't prescribe specific assessment activities, but the Trust Services Criteria are evaluated against documented evidence of operation, and that evidence is most efficiently produced through assessments performed throughout the audit period rather than reconstructed at year-end.
- NIST SP 800-53 CA-2 (Control Assessments) is the explicit federal control: "develop a control assessment plan; assess the controls in accordance with the assessment plan; produce a control assessment report; provide the results to authorizing officials." The NIST language is prescriptive but the underlying activity (ask questions about controls, capture answers, act on findings) is universal.
- PCI DSS v4.0 Requirement 12.4 mandates an annual risk assessment process for organizations in scope; specific assessments are also required throughout the standard (e.g., 6.5 for software-development risk assessment, 3.7 for cryptographic-architecture assessment).
The pattern across all five: assessments are recurring, documented, and produce evidence the audit reads. Programs that do assessments well make audits easier; programs that don't end up doing the assessment work in the audit window itself, which is much more expensive.
How SimpleRisk implements this
SimpleRisk's Assessments Extra centers on the Questionnaire: a configurable set of questions sent to one or more recipients, with the responses captured against tokenized identifiers and reviewed by program staff. The Extra exposes its capabilities under the Assessments menu, which after activation contains: Assessment Contacts (people you send assessments to), Questionnaire Questions (the question library), Questionnaire Templates (reusable question sets), Questionnaires (instances built from templates and contacts), Questionnaire Results (responses awaiting review), Risk Analysis (aggregate stats per questionnaire), Import/Export, and Questionnaire Audit Trail.
Three distinct types of assessment fit naturally onto the Questionnaire shape:
Self-assessments — the program asks its own people questions about the controls they operate. The recipient is an internal SimpleRisk user; the questionnaire goes through the same tokenized-email mechanism as external assessments but with contact_type=user in the tracking record. The full walk-through is in Running a Self-Assessment.
Third-party and vendor assessments — the program asks vendors or external partners about their controls. The recipient is an Assessment Contact (an external email-and-name record stored in assessment_contacts, separate from SimpleRisk users); the questionnaire goes via the same mechanism with contact_type=assessment. SimpleRisk doesn't have a separate "vendor" entity — Assessment Contacts serve that role. The walk-through is in Third-Party and Vendor Risk Assessments.
Control assessments — questions are mapped to specific framework controls (via the questionnaire_question_to_control table); the resulting responses produce per-control assessment evidence the program reads alongside the standard control test cycle. The walk-through is in Control Assessments and Evidence Collection.
A single questionnaire can serve more than one of these roles. A vendor assessment that includes questions mapped to your ISO 27001 controls produces both a third-party-assessment artifact (the vendor's responses) and a control-assessment artifact (the responses against the mapped controls). The Extra's data model supports the overlap natively; you don't need separate questionnaires for separate purposes if the questions are shared.
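To make the overlap concrete, here is a minimal reporting sketch, not the Extra's own code: it reads one questionnaire's responses once as a vendor artifact and again regrouped by mapped control. The table name questionnaire_question_to_control comes from the text above; the questionnaire_responses table and every column name used here (question_id, control_id, answer, questionnaire_id) are assumptions about the schema, not documented interfaces, and must be checked against your install.

```python
# Hedged sketch: read one questionnaire's answers two ways --
# as a third-party-assessment artifact and as per-control evidence.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="simplerisk", password="...", database="simplerisk"
)
cur = conn.cursor(dictionary=True)

QUESTIONNAIRE_ID = 7  # hypothetical questionnaire instance

# Pass 1: the vendor's answers, keyed by question.
cur.execute(
    """SELECT question_id, answer           -- assumed column names
       FROM questionnaire_responses         -- assumed table name
       WHERE questionnaire_id = %s""",
    (QUESTIONNAIRE_ID,),
)
vendor_answers = {row["question_id"]: row["answer"] for row in cur.fetchall()}

# Pass 2: the same answers, regrouped by the framework controls
# each question is mapped to (column names assumed).
cur.execute("SELECT control_id, question_id FROM questionnaire_question_to_control")
per_control = {}
for row in cur.fetchall():
    if row["question_id"] in vendor_answers:
        per_control.setdefault(row["control_id"], []).append(
            vendor_answers[row["question_id"]]
        )

for control_id, answers in per_control.items():
    print(f"control {control_id}: {len(answers)} mapped answer(s) of evidence")
```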
The Questionnaire workflow has a few load-bearing properties:
- Tokenized recipient identification. Each recipient gets a unique 40-character token; the response form (/assessments/questionnaire.index.php?token=X) opens without a SimpleRisk login. External recipients answer through their email link; no account creation is required. (A sketch of the token mechanic follows this list.)
- Pending Risks queue. Responses can flag a question as a pending risk (a finding the program should track in the risk register). Pending risks land in questionnaire_pending_risks for human review before getting promoted to formal risks. The bypass option (Bypass 'Pending Risks' and create Risks immediately) exists for trusted questionnaires but defaults to off.
- Review and approval. Every completed response moves through an approval step: Approve finalizes, Reject sends back for revision. The approval is what makes the assessment evidence rather than just data.
- Recurrence. Questionnaires can be set to send on a recurring schedule (the Schedule and send this assessment every [N] days setting); responses pre-populate from the prior round so recipients only confirm or update. This is how the Extra supports annual recertification cycles without re-typing.
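As promised above, a minimal sketch of the token mechanic under stated assumptions: a SHA-1 hex digest of random bytes is exactly 40 characters, matching the token length described in the first bullet. How the Extra actually derives its tokens isn't documented here, and the base URL below is a placeholder; treat this as illustration only.

```python
# Hedged sketch of a 40-character, login-free assessment token.
import hashlib
import os

def new_assessment_token() -> str:
    """Return a 40-character hex token (SHA-1 digest of 32 random bytes)."""
    return hashlib.sha1(os.urandom(32)).hexdigest()

token = new_assessment_token()
# The token alone identifies the recipient; the form opens with no login.
link = f"https://simplerisk.example.com/assessments/questionnaire.index.php?token={token}"
print(len(token), link)  # prints: 40 <the tokenized link>
```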
The Extra explicitly does not create entries in the compliance module's framework_control_test_audits table when an assessment runs against mapped controls. Control assessments via the Extra live entirely in the Extra's tables; if you want them to also appear as control tests in the compliance dashboards, you do that work manually (or through custom scripting). The two surfaces are independent by design.
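If you do script that bridge, the shape is a straight copy from the Extra's tables into the compliance table. A minimal sketch, assuming a schema: the target table name framework_control_test_audits comes from the text above, but every column named in the INSERT (test_id, test_date, status, summary) is an assumption to verify against your install before running anything.

```python
# Hedged sketch of the "custom scripting" option: record one approved,
# control-mapped assessment result as a control test audit entry.
import mysql.connector
from datetime import date

conn = mysql.connector.connect(
    host="localhost", user="simplerisk", password="...", database="simplerisk"
)
cur = conn.cursor()

test_id = 42          # hypothetical: the compliance test this evidence supports
status = "Passed"     # assumed status vocabulary
summary = "Evidence: Q3 vendor questionnaire, response approved by reviewer."

cur.execute(
    """INSERT INTO framework_control_test_audits
           (test_id, test_date, status, summary)   -- assumed columns
       VALUES (%s, %s, %s, %s)""",
    (test_id, date.today(), status, summary),
)
conn.commit()
```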
Common pitfalls
A handful of patterns recur when programs use assessments.
- Assessment as audit substitute. A program that runs a thorough self-assessment and treats the result as if the auditor had blessed it is conflating internal and external evidence. Self-assessments are excellent first passes, but they aren't audits: the assessor and the assessed are the same party. Run self-assessments to find your own gaps; rely on independent assessment or formal audit for the external attestation.
- Long questionnaires nobody finishes. The 200-question questionnaire arrives, the recipient skims the first ten, and the response stalls. Long questionnaires produce low completion rates, and the responses you do get are lower quality than responses to shorter ones because attention drops. Aim for 20–40 questions per questionnaire; if you have more to ask, send multiple questionnaires on different cadences rather than one giant one.
- Free-text answers nobody reads. "Describe your backup process" produces a paragraph the reviewer skims and forgets. The answer can't be aggregated, can't be trended, and can't be compared across recipients. Use multiple-choice or yes/no for the questions you want to score; reserve free text for supplementary detail on the small number of questions where the answer genuinely needs prose.
- No close-out. A questionnaire that gets sent, gets responses, and never gets approved sits in the results page indefinitely. The approval step is what turns a response into evidence; without it, the assessment generated data without producing a decision. Schedule a recurring approval session in the program calendar.
- Treating Assessment Contacts as a CRM. The contacts feature stores name, email, company, phone, manager, and details. It isn't a vendor-management system: it doesn't track contracts, doesn't track tier classifications, and doesn't auto-renew due dates. If your program needs vendor-relationship management, the contacts feature is the assessment-recipient half of that picture; the rest lives elsewhere (a contract-management tool, a procurement system, or a custom field set added via the Customization Extra).
- Mapping questions to controls and expecting the compliance module to update. A question mapped to a framework control captures useful evidence in the Assessments Extra's tables, but it doesn't automatically create a control test audit in the compliance module. Programs that expect bidirectional integration between the two surfaces are surprised; the surfaces are independent. If the assessment's findings need to land as control tests, do that work explicitly.
- Sending the same assessment forever without revision. A questionnaire designed two years ago for a security posture that has since evolved produces responses to questions that no longer reflect what the program cares about. Revise the questions on each annual cycle; the recurring-send feature is convenient, but that doesn't mean the content should stay frozen.
- No reminder cadence. A single email arrives in a control owner's inbox under twelve other things and never gets opened. The questionnaire's reminder cadence (Notify assessment contacts every [N] days until completed) exists exactly for this case; setting it to off and then complaining about low completion rates is a self-inflicted wound. For high-priority assessments, also send a personal heads-up before the automated email so the recipient knows it's coming.