02.07 Control Tests and Evidence Collection
Define a test against a control, schedule its audit cadence, perform the test, attach evidence, and link any risks the test surfaces — all of it on the framework_control_test_audits substrate that drives compliance posture reporting.
Why this matters
Tests are how a compliance program verifies that the controls it claims to operate are actually operating. A control with no test attached is a control the program intends to operate; a control with a test attached and a recent passing result is a control the program has evidence of operating. The difference matters: an auditor reading the documentation reads it against the operations, and the evidence trail is what bridges the two.
Most working compliance programs run dozens to hundreds of tests on a rolling cadence — quarterly for most controls, monthly for high-criticality ones, annually for low-impact ones. The mechanics of "which tests are due this week, who owns each, what evidence does each need" is the daily work of compliance, and it scales worst when done by hand. SimpleRisk's test-and-evidence module is built to handle the bookkeeping at scale; the practitioner work is defining the tests once and then executing them on schedule.
The other thing worth knowing: tests link bidirectionally to risks. A failed control test surfaces a risk (the control isn't operating, the risk it was supposed to mitigate is now exposed); a successful retest can close the surfaced risk. SimpleRisk's data model captures the link via the test_result_to_risk table, which means the risk register and the compliance posture stay in sync as controls degrade and recover. The integration is what makes "compliance affects risk and risk affects compliance" tractable in operations rather than just in theory.
Before you start
Have these in hand before you open the test workflow:
- The right permissions for the part of the workflow you're doing. Defining tests requires Able to Define Tests. Initiating audits (manually scheduling a test execution) requires Able to Initiate Audits. Submitting a test result and attaching evidence requires Able to Modify Audits. The three permissions are separate so that a program can split test definition (second-line work) from test execution (first-line work). (See Permission Reference.)
- An installed framework with controls in it. The test workflow operates against controls; with no controls installed, there's nothing to test. See Installing a Framework.
- A clear sense of what passing looks like for the control under test. A test definition without a documented expected result produces audits where the tester has to invent the bar each time, which produces inconsistent results across audits. Spend time on the Expected Results field; the consistency it produces is what makes test-result trends meaningful.
- For tests with attached evidence: a clear sense of what evidence is required and what isn't. A test that attaches every available document to every audit produces an evidence repository nobody can navigate; a test that attaches no evidence produces audit results that can't be defended. The right answer is "the specific artifacts that prove this test passed" — usually a small number per audit, named for the audit they support.
Step-by-step
1. Define the test against the control
Tests are defined under the Compliance module's Define Tests tab. Sidebar: Compliance → Define Tests opens the page; click Add Test to open the test definition form.
Fill in the test definition fields:
- Test Name — a short descriptive name. "Quarterly access review for the production database" beats "Test 17."
- Framework Control — the control this test validates. Pick from the installed controls.
- Tester — single-user picker. The default tester for audits generated from this test definition. Editable per audit.
- Additional Stakeholders — multi-user picker. People notified about audits but not responsible for executing them.
- Teams — multi-team picker. Teams whose work the test covers.
- Test Frequency — number of days between scheduled tests. Common values: 30 (monthly) for high-criticality controls, 90 (quarterly) for most, 180 or 365 for low-impact. The frequency drives the auto-initiation cadence.
- Last Test Date — the date of the most recent execution. Used as the anchor for computing the next test date.
- Next Test Date — auto-calculated from Last Test Date + Test Frequency, but editable if you want a specific date for the next audit.
- Auto Initiate Audit — toggle. When on, SimpleRisk's cron creates a new audit instance automatically when the Next Test Date arrives. When off, audits are initiated manually.
- Audit Initiation Offset — number of days before Next Test Date to create the audit (so the tester has lead time). Default 0; a non-zero value gives the tester advance notice. The date arithmetic behind these scheduling fields is sketched at the end of this step.
- Objective — what the test is trying to verify. Free text.
- Test Steps — the procedure for performing the test. Free text. Be specific enough that someone other than the test author can execute it.
- Approximate Time — minutes per execution. Used in capacity planning reports; doesn't gate execution.
- Expected Results — what a passing test looks like. Free text. The most important field for consistency — vague expected results produce inconsistent audit results across testers.
- Tags — free-text labels for filtering and reporting.
Click Save Test. SimpleRisk writes the definition to framework_control_tests and starts the auto-initiation cycle (if enabled).
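How the scheduling fields interact is easiest to see as code. The following is a minimal illustrative sketch of the arithmetic described above, not SimpleRisk's actual implementation:

```python
from datetime import date, timedelta

def schedule_next_audit(last_test_date: date,
                        test_frequency_days: int,
                        audit_initiation_offset_days: int = 0):
    """Illustrative sketch of the scheduling arithmetic, not SimpleRisk code.

    Next Test Date defaults to Last Test Date + Test Frequency, and
    auto-initiation creates the audit Audit Initiation Offset days
    before the Next Test Date.
    """
    next_test_date = last_test_date + timedelta(days=test_frequency_days)
    audit_created_on = next_test_date - timedelta(days=audit_initiation_offset_days)
    return next_test_date, audit_created_on

# A quarterly (90-day) test last run on 2026-07-01 with 14 days of tester
# lead time: next test on 2026-09-29, audit auto-created on 2026-09-15.
print(schedule_next_audit(date(2026, 7, 1), 90, 14))
```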
2. Initiate an audit (or wait for auto-initiation)
When Auto Initiate Audit is enabled, SimpleRisk's async-job cron creates audits automatically as their Next Test Date arrives. The new audits appear in the Active Audits queue at /compliance/active_audits.php and the assigned tester gets a notification (if the email cron is configured).
For tests that aren't auto-initiated, manual initiation is the path. Open the test definition and click Initiate Audit, or use the Audit Initiation workflow at /compliance/audit_initiation.php to initiate audits in batch (multiple tests at once for the same audit period).
Either path creates rows in framework_control_test_audits linking the test definition, the framework control, the tester, the audit creation timestamp, and the audit's initial status.
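For a picture of the queue outside the UI, a read-only query against that table might look like the sketch below. The column names (name, tester, status, created_at) are assumptions based on the description above, so verify them against your install's schema before relying on this.

```python
# Sketch: list open audits straight from framework_control_test_audits.
# Column names are assumptions; pass the install's closed_audit_status id
# so that "open" means "not yet in the closed state".
OPEN_AUDITS_SQL = """
SELECT id, name, tester, created_at
FROM framework_control_test_audits
WHERE status <> %s
ORDER BY created_at
"""

def open_audits(conn, closed_status_id: int):
    # conn is any DB-API connection to the SimpleRisk database (e.g. pymysql)
    with conn.cursor() as cur:
        cur.execute(OPEN_AUDITS_SQL, (closed_status_id,))
        return cur.fetchall()
```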
3. Perform the audit and submit the result
The tester sees the audit in the Active Audits queue. Click the audit to open /compliance/view_test.php?id={audit_id}, which renders the full test execution form: the test definition (Objective, Test Steps, Expected Results visible at the top), the result-submission section, and the evidence-attachment section.
Fill in the result fields:
- Test Result — dropdown with three values: Pass (control operated as designed), Fail (control did not operate as designed), Inconclusive (test could not be performed or the result was ambiguous). Required.
- Tester — pre-populated with the test definition's tester; editable if a different person actually performed the test.
- Test Date — the date the test was performed. Defaults to today.
- Teams — multi-team picker, pre-populated from the test definition.
- Summary — free-text textarea describing what was tested, what was found, and the rationale for the result. The audit-trail content for this audit lives here; spend the time on it.
- Tags — free-text labels for filtering.
4. Attach evidence
Evidence files attach via the file upload section of the form. Pick the files (screenshots of the control configuration, logs from the test execution, exported reports, signed-off documents, whatever the control's nature requires) and upload. The files write to the compliance_files table with ref_type='test_audit' and ref_id={audit_id}, linking each file to the specific audit instance.
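If you need to enumerate the evidence behind a given audit outside the UI, a query against that linkage might look like this sketch. The ref_type and ref_id columns come from the description above; the name column is an assumption to verify against your install's schema.

```python
# Sketch: evidence files attached to one audit, per the compliance_files
# linkage described above. ref_type and ref_id are as described; the
# name column is an assumption.
EVIDENCE_SQL = """
SELECT id, name
FROM compliance_files
WHERE ref_type = 'test_audit' AND ref_id = %s
"""

def evidence_for_audit(conn, audit_id: int):
    with conn.cursor() as cur:
        cur.execute(EVIDENCE_SQL, (audit_id,))
        return cur.fetchall()
```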
Evidence conventions worth following:
- Name files for the audit they support. "AC-2-Q3-2026-access-review-export.pdf" is a name future-you will thank present-you for. "Document.pdf" is not. (One way to generate such names is sketched after this list.)
- Attach the minimum sufficient evidence. Five files describing the access review beat fifty files including every related document. The audit conversation reads the files; padding makes the conversation slower.
- Date the evidence. A screenshot dated to the audit period proves the control operated during that period. An undated screenshot doesn't anchor to a time.
- Anonymize where appropriate. Customer data in evidence files becomes data subject to the same retention and access controls that govern the customer data itself. If the control can be demonstrated with redacted or sample data, prefer that.
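If you want to enforce the naming convention mechanically before upload, a small helper along these lines works. It is entirely hypothetical, not part of SimpleRisk; it just encodes the control, the audit period, and a short description into the file name.

```python
import re

def evidence_filename(control_id: str, period: str, description: str, ext: str) -> str:
    """Hypothetical helper: build an evidence file name from its context."""
    # Collapse the description to a lowercase hyphenated slug.
    slug = re.sub(r"[^a-z0-9]+", "-", description.lower()).strip("-")
    return f"{control_id}-{period}-{slug}.{ext}"

print(evidence_filename("AC-2", "Q3-2026", "Access review export", "pdf"))
# -> AC-2-Q3-2026-access-review-export.pdf
```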
5. Link surfaced risks
A failed test usually surfaces a risk: the control isn't operating, so the risk it was supposed to mitigate is now exposed. The submission form offers an inline Submit Risk path: either link the audit to an existing open risk in the register, or submit a new risk on the spot; either way the risk is linked to the audit through the test_result_to_risk table.
The link goes both ways: the risk record shows the audits that surfaced it; the audit record shows the risks it produced. When a subsequent audit passes (the control is now operating again), the form prompts to remove the link from the now-resolved risk, optionally closing it via the standard close workflow (requires Able to Close Risks). The result is a register where compliance failures and recoveries flow naturally into the risk register without manual reconciliation.
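Seen from the database side, the bidirectional link is a join through test_result_to_risk. The sketch below assumes join columns named test_results_id and risk_id and a risks.subject column (which may be encrypted on some installs); treat all three as assumptions to verify. The test_audit_id linkage comes from the write-up in step 6 below.

```python
# Sketch: risks surfaced by one audit, joined through test_result_to_risk.
# The trr.* column names and risks.subject are assumptions; test_audit_id
# is described in step 6.
SURFACED_RISKS_SQL = """
SELECT r.id AS risk_id, r.subject
FROM test_result_to_risk trr
JOIN risks r ON r.id = trr.risk_id
JOIN framework_control_test_results res ON res.id = trr.test_results_id
WHERE res.test_audit_id = %s
"""

def risks_surfaced_by_audit(conn, audit_id: int):
    with conn.cursor() as cur:
        cur.execute(SURFACED_RISKS_SQL, (audit_id,))
        return cur.fetchall()
```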
6. Save the audit
Click Submit. SimpleRisk writes:
- The test result row in framework_control_test_results (linked to the audit via test_audit_id).
- The evidence file rows in compliance_files.
- Any risk-link rows in test_result_to_risk.
- Audit-log entries for the result submission, the evidence attachment, and any risk creation or linking.
- Updates to the test definition's Last Test Date (now today) and Next Test Date (today + Test Frequency).
The audit transitions to the closed state (which status counts as closed is set by the install's closed_audit_status configuration) and moves out of the Active Audits queue. Past audits remain visible at /compliance/past_audits.php.
7. Verify the compliance posture refresh
After the audit save, the compliance dashboard widgets should reflect the new result. Open the compliance dashboard and confirm:
- The control's most-recent-result indicator shows the new result (Pass / Fail / Inconclusive).
- The framework's posture rolls up correctly — a failed test on a critical control should drop the framework's posture indicator visibly.
- Linked risks appear on the risk dashboard with the audit reference visible in the audit trail.
The verification catches the cases where the audit save worked but the rollup didn't — usually a sign of a misconfigured test-to-control link or a missing framework assignment.
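When the dashboard and the data disagree, it helps to compute the most recent result per control straight from the tables and compare. The sketch below assumes a test_id column on the audits table, a framework_control_id column on the test definitions, and MySQL 8+ for the window function; all of those are assumptions to verify on your install.

```python
# Sketch: latest test result per control, as a cross-check on the
# dashboard rollup. Table relationships and column names (test_id,
# framework_control_id, test_result, test_date) are assumptions;
# requires MySQL 8+ for ROW_NUMBER().
LATEST_RESULT_SQL = """
SELECT framework_control_id, test_result, test_date
FROM (
    SELECT t.framework_control_id, res.test_result, res.test_date,
           ROW_NUMBER() OVER (PARTITION BY t.framework_control_id
                              ORDER BY res.test_date DESC) AS rn
    FROM framework_control_test_results res
    JOIN framework_control_test_audits a ON a.id = res.test_audit_id
    JOIN framework_control_tests t ON t.id = a.test_id
) latest
WHERE rn = 1
"""

def latest_result_per_control(conn):
    with conn.cursor() as cur:
        cur.execute(LATEST_RESULT_SQL)
        return cur.fetchall()
```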
8. Bulk and programmatic paths
For programs running large test cycles, two paths beyond the per-audit UI are available:
- Bulk audit initiation. The audit-initiation page at /compliance/audit_initiation.php initiates many audits in one operation, which is useful at the start of a quarterly audit cycle.
- v2 API. The endpoints POST /api/v2/compliance/define_tests (datatable for defining tests), POST /api/v2/compliance/active_audits (datatable for the audit queue), GET /api/v2/compliance/audits/{id} (fetch a single audit), PATCH /api/v2/compliance/audits/{id} (update audit status), and DELETE /api/v2/compliance/audits/{id} (delete an audit) cover programmatic workflows. The result-submission endpoint is implemented as part of the audit-update PATCH; the field set matches the form. Useful for integrations with external test-execution tools (a vulnerability scanner that automatically posts results, a configuration-management database that signals control state, etc.); see the sketch after this list.
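As one illustration of the programmatic path, a result submission through the audit-update PATCH might look like the sketch below. The payload field names are modeled on the form fields in step 3, and the X-API-KEY auth header is an assumption; check your install's API documentation before relying on either.

```python
import requests

SIMPLERISK_URL = "https://simplerisk.example.com"  # hypothetical host
API_KEY = "replace-with-your-key"                  # auth scheme assumed

def submit_audit_result(audit_id: int) -> None:
    """Sketch: submit a test result via PATCH /api/v2/compliance/audits/{id}.

    Field names (status, test_result, summary, test_date) are assumptions
    modeled on the form in step 3, not confirmed API parameters.
    """
    resp = requests.patch(
        f"{SIMPLERISK_URL}/api/v2/compliance/audits/{audit_id}",
        headers={"X-API-KEY": API_KEY},  # header name is an assumption
        json={
            "status": "Closed",
            "test_result": "Pass",
            "summary": "Quarterly access review completed; no exceptions.",
            "test_date": "2026-09-29",
        },
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    submit_audit_result(42)  # hypothetical audit id
```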
Common pitfalls
A handful of patterns recur when teams operate the test-and-evidence workflow.
- Evidence that doesn't reflect reality. A test result of Pass with attached evidence dated three months before the audit period doesn't prove the control operated during that period. Evidence has to be dated within the audit window for it to count. Re-run the evidence collection during the audit period rather than padding with old artifacts.
- Tester is the same person as the control owner. A control owner who tests their own control is doing first-line and third-line work simultaneously. The whole point of a separate tester is to catch the cases where the owner wouldn't notice the control isn't operating. Set the tester to someone other than the control owner; SimpleRisk's permission model supports the split.
- Skipping the Expected Results field. An audit form that opens with a blank or vague Expected Results section forces the tester to invent the bar each time. Different testers invent different bars, the test results drift, and trend analysis becomes meaningless. Spend time on the Expected Results when defining the test; the per-audit consistency it produces is the foundation of useful test-result reporting.
- Auto-initiation enabled with no one watching the queue. A test set to auto-initiate quarterly creates audits regardless of whether anyone is actually performing them. After two cycles of unanswered audits, the Active Audits queue is full of stale items and the program looks worse on dashboards than the operations actually warrant. Enable auto-initiation only when the program has the capacity to perform the tests on schedule.
- Attaching too much evidence. "Attach every document related to access control" produces a 50-file evidence trail per audit that nobody navigates. The auditor wants the minimum sufficient set: "here are the three artifacts that demonstrate the control operated this period." Prune the evidence to what's actually probative.
- Failing to link surfaced risks. A failed control test that doesn't produce a corresponding risk in the register leaves the program with a known control gap and no risk record showing it. Use the inline risk-submission path on the audit form to capture the gap; the risk register is what tracks the gap until the control comes back to passing.
- Treating Inconclusive as a soft pass. Inconclusive means "the test couldn't determine whether the control operated." It's not a soft pass; it's a flag that the test design or execution couldn't produce a verdict. Inconclusive results need a follow-up: improve the test design, perform a retest with better tooling, or escalate to a different reviewer. Letting Inconclusive results accumulate is letting unknowns accumulate.
- Re-using the same evidence files across audits. Re-attaching the same PDF to every quarterly audit is fast, but it's also exactly the pattern auditors look for as a tell that the control isn't actually being re-evaluated each cycle. If the evidence is genuinely unchanged across audits (a policy document that hasn't been updated, for example), reference it in the Summary rather than re-attaching it; the audit trail then makes the unchanged-content case clear.
- Letting Closed audits become wallpaper. The Past Audits view fills up over time, and programs sometimes stop reading it. The historical audit trail is what answers "how has this control performed over time," the question that drives the compliance posture trend reports. Keep an eye on past audits, especially when the same control is producing recurring Fail or Inconclusive results.