10.04 The AI Extra Overview
The Artificial Intelligence Extra integrates Claude (Anthropic), GPT (OpenAI), Gemini (Google), Mistral, Grok (xAI), and local Ollama models into SimpleRisk for FAIR risk analysis, control-to-document matching, document chunking, and policy assistance. Pick a provider, configure the API key, enable the per-feature toggles, and let the AI suggest content for operators to review.
Requires: AI Extra
AI integration, the per-provider API clients, and the AI-driven features all live in the AI Extra at `simplerisk/extras/artificial_intelligence/`. The Extra integrates with seven providers and powers several AI-assisted workflows in SimpleRisk.
Why this matters
GRC programs spend significant operator time on activities that benefit from language-model assistance: estimating impact and likelihood for FAIR analysis, mapping controls to evidence in policy documents, summarizing long policies into specific control statements, drafting initial policy text. None of these are tasks where AI is the authority (operators always review and accept or reject the AI's output), but as a suggestion layer, AI accelerates work that would otherwise be slow and tedious.
The AI Extra brings this assistance into SimpleRisk. The Extra integrates with the major commercial AI providers (Anthropic Claude, OpenAI GPT, Google Gemini, Mistral, xAI Grok) and with local-inference Ollama for fully-private deployments. Per-feature toggles control which AI assists with which workflows; per-provider configuration controls the API key, model, and budget.
The honest scope to know up front: AI is a suggestion tool, not an authority. Every AI output goes through operator review before becoming canonical SimpleRisk data. The Extra is built around an "AI suggests, operator approves/edits/rejects" pattern, not "AI runs the program." This is the right shape (risk and compliance decisions need human accountability), but it means AI doesn't reduce headcount; it speeds up the headcount you have.
The other thing worth knowing: data leaves your environment when commercial providers are configured. Calling Claude or GPT means sending the prompt (which contains the relevant SimpleRisk data: risk descriptions, document text, control names) to Anthropic's or OpenAI's API. The Extra doesn't anonymize or scrub the data before sending. For programs handling regulated content (PHI, PII, classified material), commercial providers may not be appropriate; the Ollama local-inference option keeps data fully within your environment.
The third thing: provider responses are processed asynchronously via the cron queue worker. AI calls can be slow (10–60 seconds is typical for substantive prompts); doing them synchronously would freeze the user's browser. The Extra queues AI jobs and processes them in the background; the UI shows "processing" status and updates when results arrive.
How frameworks describe this
AI in GRC is a recent development; framework guidance is still evolving. NIST AI RMF (NIST AI 100-1) is the most comprehensive guidance on managing AI risk, including data privacy considerations for AI-using applications. ISO/IEC 42001 (AI management systems) provides a management-system framework for AI deployment. For programs adopting AI features in SimpleRisk, both documents are worth reviewing — the AI Extra's privacy implications, model governance, and decision accountability all map to AI RMF concerns.
How SimpleRisk implements this
Supported providers
The Extra defines provider support in `$AI_PROVIDERS` (in `simplerisk/includes/artificial_intelligence.php`):
- Anthropic Claude (the default). Models: `claude-sonnet-4-20250514` (default) and other Claude variants. API: native Anthropic Messages API.
- OpenAI GPT. Models: `gpt-4o`, `gpt-4o-mini`, `o1`, `o3-mini`. API: OpenAI Chat Completions API.
- Google Gemini. Models: `gemini-2.0-flash`, `gemini-1.5-pro`, `gemini-1.5-flash`. API: Gemini API.
- Mistral. Mistral models via Mistral's API.
- xAI Grok. Grok models via xAI's API.
- Ollama. Local inference. Models: whatever you've loaded into your local Ollama instance (Llama, Mistral, etc.). API: local endpoint, no cloud calls.
- Custom. A user-supplied OpenAI-compatible endpoint (for organizations running their own LLM behind an OpenAI-API-compatible proxy).
The provider is selected per-install via the `ai_provider` setting; only one provider is active at a time.
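The single-active-provider design can be pictured as a registry lookup. The following is a hypothetical sketch in Python (the real Extra implements this in PHP via `$AI_PROVIDERS`); the slugs, field names, and API classifications here are illustrative, not SimpleRisk's actual values:

```python
# Illustrative provider registry: one catalog, one active slug at a time.
PROVIDERS = {
    "anthropic": {"api": "anthropic-messages", "default_model": "claude-sonnet-4-20250514"},
    "openai":    {"api": "openai-chat",        "default_model": "gpt-4o"},
    "gemini":    {"api": "gemini",             "default_model": "gemini-2.0-flash"},
    "mistral":   {"api": "openai-chat",        "default_model": None},
    "grok":      {"api": "openai-chat",        "default_model": None},
    "ollama":    {"api": "openai-chat",        "default_model": None},  # local endpoint
    "custom":    {"api": "openai-chat",        "default_model": None},  # user-supplied URL
}

def active_provider(settings: dict) -> dict:
    """Resolve the one active provider from the ai_provider setting."""
    slug = settings.get("ai_provider", "anthropic")  # Claude is the default
    if slug not in PROVIDERS:
        raise ValueError(f"unknown provider: {slug}")
    return PROVIDERS[slug]
```

Because only one slug is active, switching providers is a configuration change rather than a code change, and every AI feature routes through the same client layer.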
AI use cases
The Extra adds AI assistance for several SimpleRisk workflows:
- Risk FAIR Analysis (`extra_ai_risk_suggestions`): the AI estimates threat capability, vulnerability, threat event frequency, loss event frequency, and loss magnitude for a risk based on its description and the program's pre-configured FAIR context. Output populates the FAIR analysis fields on the risk; operators review and adjust before saving.
- Control-to-Document Matching (`extra_ai_control_suggestions`): given a control and a set of policy documents, the AI identifies which documents address the control and where (paragraph or section). Useful for compliance evidence collection.
- Document Enhancement (`extra_ai_document_suggestions`): the AI assists with policy writing: it suggests improvements, identifies gaps against a target framework, and drafts initial sections.
- Document-to-Control Chunking: given a policy document, the AI extracts the control statements within it and offers to map them to a target framework's controls.
Each use case has a per-feature toggle; programs can enable some and disable others.
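The quantities the FAIR use case asks the AI to estimate fit together in the standard FAIR relationship: loss event frequency is threat event frequency times vulnerability (the probability a threat event becomes a loss event), and annualized loss exposure is loss event frequency times loss magnitude. A minimal worked example, for orientation only (this is the general FAIR arithmetic, not SimpleRisk's internal calculation):

```python
def fair_annualized_loss(tef: float, vulnerability: float, loss_magnitude: float) -> float:
    """Standard FAIR arithmetic over the AI-estimated quantities:
    LEF = TEF x vulnerability; annualized loss = LEF x loss magnitude."""
    lef = tef * vulnerability          # loss events per year
    return lef * loss_magnitude        # expected annualized loss

# e.g. 12 threat events/year, 25% succeed, $50,000 average loss per event:
# 3 loss events/year and $150,000 annualized loss exposure.
```

This is why the operator review step matters: a bad AI estimate for any one input propagates multiplicatively into the loss figure.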
The AI context
For FAIR-style risk analysis, the AI needs the program's organizational context (typical loss magnitudes, threat assumptions, vulnerability levels). The Extra exposes a one-time questionnaire (`extra_ai_context_*` settings) that captures this context. Subsequent FAIR analyses use this context to ground their estimates in your program's reality.
The context is stored in the settings table. Update it via the Extra's questionnaire when the organization's risk profile shifts (mergers, business model changes, regulatory changes).
Job-based processing
AI requests run as queued jobs:
- `ai_risk_fair_analyze`: performs FAIR analysis for a single risk.
- `ai_control_to_document_process`: matches one control to documents.
- `ai_document_to_control_chunker`: extracts control statements from a document.
The cron worker (`cron_queue_worker.php`) picks up queued AI jobs and dispatches them to the configured provider. Job status flows from pending → in_progress → success/failed. The UI polls the job status and updates when complete.
For installs with high AI volume, the cron worker can backlog. Monitor queue depth; consider scaling worker instances or batching prompts.
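The worker's job lifecycle can be sketched as a simple loop. This is a hypothetical illustration of the pending → in_progress → success/failed flow described above, not the actual PHP worker (`cron_queue_worker.php`); the function names are invented for the sketch:

```python
def process_queue(fetch_pending, run_job, mark):
    """One worker pass: claim each pending job, run it against the
    configured provider, and record the terminal status."""
    for job in fetch_pending():            # pending jobs, oldest first
        mark(job, "in_progress")
        try:
            result = run_job(job)          # the actual provider API call
            mark(job, "success", result)
        except Exception as exc:           # provider error, timeout, etc.
            mark(job, "failed", str(exc))
```

The key property is that the slow provider call happens entirely inside this background pass, so the browser only ever polls a status field and never blocks on the model.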
Data privacy
When using a commercial provider:
- The prompt is sent to the provider's API. The prompt typically includes the relevant SimpleRisk data (risk subject, description, FAIR context; or control name, document text, etc.).
- The response is returned to SimpleRisk. SimpleRisk stores the AI's output for operator review; no storage at the provider beyond their standard API logs.
- The provider's privacy policy applies. Commercial providers (Anthropic, OpenAI, Google) have specific data-handling commitments for API calls — generally, prompts aren't used for training and aren't retained beyond a short period. Check the provider's API privacy policy for current commitments.
- Some commercial providers offer Zero Data Retention (ZDR) or HIPAA-eligible BAA tiers. If your program handles regulated data, configure the appropriate provider tier.
When using Ollama:
- The prompt and response stay within your environment. No external API calls.
- Inference performance depends on your local hardware. Larger models need GPUs; smaller models run on CPUs.
- Model quality varies. Open-source models (Llama, Mistral) are improving rapidly but typically lag commercial offerings on substantive tasks. Test the quality against your use case.
Cost considerations
Commercial AI providers charge per token (input + output). Substantive prompts (FAIR analysis with full context) are typically 1,000–10,000 tokens; cost ranges from fractions of a cent to a few cents per call. Budget calculator: assume $0.01–$0.10 per AI-assisted operation; multiply by expected volume.
For programs with high AI volume, the local-Ollama option eliminates per-call costs at the trade-off of upfront infrastructure investment.
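The budgeting rule of thumb above can be turned into a back-of-envelope range. A small sketch (the per-operation bounds are the section's own $0.01–$0.10 assumption, not measured SimpleRisk costs):

```python
def monthly_ai_budget(ops_per_month: int,
                      cost_per_op_low: float = 0.01,
                      cost_per_op_high: float = 0.10) -> tuple[float, float]:
    """Rough monthly cost range: operations x assumed per-operation cost."""
    return ops_per_month * cost_per_op_low, ops_per_month * cost_per_op_high

low, high = monthly_ai_budget(2_000)   # e.g. 2,000 AI-assisted operations/month
# roughly $20 to $200 per month at the assumed per-operation rates
```

Running the estimate for your expected volume before enabling the features makes the provider-side spending alert (see the pitfalls below) easy to set sensibly.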
What the AI Extra doesn't do
- Replace operator judgment. Every AI output requires operator review. The Extra doesn't auto-commit AI suggestions to the canonical record.
- Generate audit findings. AI can summarize control test results; it doesn't run the tests or determine pass/fail.
- Take administrative actions. The AI doesn't modify users, change settings, deactivate Extras, etc.
- Provide explainability beyond what the model offers. Why did the model suggest "High" likelihood? The model's reasoning is internal; SimpleRisk relays the output, not the model's chain of thought.
- Substitute for proper risk modeling. AI-suggested FAIR estimates are starting points; the actual estimates require human judgment grounded in real program data.
Common pitfalls
A handful of patterns recur with the AI Extra.
- Activating AI features without operator-review discipline. AI output going into the register without review compounds errors. Enforce review.
- Sending sensitive content to commercial providers without considering privacy. A risk description naming a specific customer, or a policy document with PHI, goes to the provider's API. For regulated content, consider Ollama or a provider tier with appropriate guarantees.
- Treating AI output as authoritative. It's a suggestion. Operators review and decide.
- Picking a model that's not appropriate for the task. Smaller, faster models are cheaper but less accurate for substantive analysis. Match the model to the use case.
- Not setting an AI budget alert. Per-token billing can produce surprising bills if a misbehaving integration spams the AI. Set provider-side spending alerts.
- Forgetting that the cron worker has to run. Without it, AI jobs queue and the UI shows "processing" forever.
- Activating all AI features at once. Some are useful for some programs; some aren't. Activate selectively.
- Not capturing the AI context (the FAIR questionnaire) before using FAIR analysis. Without context the AI's estimates default to generic; with context they're grounded in your program's reality. Capture the context first.
- Storing the AI provider's API key in source control. Treat it as a secret.
- Using AI to generate the audit response that would otherwise have required real analysis. The AI can draft; the operator must verify against actual evidence. Skipping the verification produces audit findings that don't match reality.
- Not periodically reviewing AI suggestions for systematic biases. Models can drift into consistent patterns (always suggesting High severity, always missing certain control types). Sample-review periodically; recalibrate the prompts or the operator-review focus accordingly.
Related
Reference
- Permission required: `check_admin` for activation, provider configuration, and per-feature toggles.
- API endpoint(s): the Extra exposes AI-related endpoints under `/api/v2/ai/...` (when active).
- Implementing files: `simplerisk/extras/artificial_intelligence/index.php` (`enable_artificial_intelligence_extra()`, `disable_artificial_intelligence_extra()`); `simplerisk/includes/artificial_intelligence.php` (`$AI_PROVIDERS` provider catalog, `AIClient` class, `callAnthropicNative()`, `callOpenAICompatible()`); `simplerisk/extras/artificial_intelligence/jobs/` (per-use-case job definitions); `simplerisk/cron/cron_queue_worker.php` (the worker that processes AI jobs).
- Database tables: none unique to AI beyond the standard job-queue tables (used by the cron worker).
- `config_settings` keys: `extra_artificial_intelligence` (Extra activation flag); `ai_provider` (provider slug); `ai_api_key` (per-provider API key); `ai_model` (model name); per-feature flags `extra_ai_risk_suggestions`, `extra_ai_control_suggestions`, `extra_ai_document_suggestions`; `ai_context_*` (FAIR context); `ai_context_last_saved`, `ai_context_last_updated` (context-management timestamps).
- Supported providers: Anthropic Claude (native); OpenAI GPT; Google Gemini; Mistral; xAI Grok; Ollama (local); Custom (OpenAI-compatible).
- External dependencies: A provider account (Anthropic, OpenAI, etc.) and an API key, OR a local Ollama installation; the cron queue worker; outbound network access from SimpleRisk to the provider's API.