
09.06 Working with SimpleRisk AI

The AI Extra adds a chat sidebar plus three opt-in AI assistance modes (control suggestions, risk analysis, document generation). It supports multiple providers: Anthropic Claude, or any OpenAI-compatible API. Here's how to use it and what to watch out for.

Requires: AI Extra

The AI chat sidebar and the per-feature AI assistance modes described here are added by the AI Extra (directory simplerisk/extras/artificial_intelligence/). Core SimpleRisk has no AI features; without the Extra activated and an AI provider configured, this article doesn't apply.

Why this matters

The AI Extra adds a generative-AI assistant to SimpleRisk. The most visible piece is the chat sidebar — a right-side panel accessible from a topbar icon that lets you ask questions and get answers from a configurable AI provider, with your accessible risks loaded as context for the conversation. The less visible pieces are three optional AI-assistance modes that can suggest content as you work: control suggestions (compare controls to documents), risk analysis (FAIR-style analysis on risks), and document generation (AI-assisted policy writing).

The trap with any AI assistant is treating its output as authoritative. SimpleRisk's AI assistant has no special knowledge of your environment beyond what it's told — the data it sees is whatever the chat context loads (your accessible risks, the specific risk you're viewing if any) plus its provider's general training. It can produce plausible-sounding answers that are wrong; it can suggest controls that don't apply to your specific framework; it can write policies that don't reflect your actual practice. The AI is a useful first-draft surface and a useful question-answering assistant for general GRC vocabulary; it isn't a substitute for the human judgment the program runs on.

The other thing worth knowing: the AI Extra supports multiple providers (Anthropic Claude is the primary, OpenAI-compatible providers are also supported via configuration). Programs can choose the provider that matches their procurement situation and their data-handling requirements. The configuration also supports custom API endpoints, which lets self-hosted or alternative-vendor models work alongside the standard providers.

The third thing: AI calls cost money. Every chat message, every AI suggestion, every AI-generated document goes to the configured provider's API and bills against the API key. Programs should configure usage limits at the provider level (Anthropic and OpenAI both expose dashboards for this) and treat the AI features as paid capabilities to be used deliberately, not as free background help.

Before you start

Have these in hand:

  • The AI Extra installed and activated by an admin, with a configured AI provider and a valid API key. Without these, the topbar AI chat icon doesn't appear and the AI suggestion features don't fire.
  • The relevant permission for the AI feature you're using:
      • riskmanagement for the AI chat (the chat endpoint is gated by this permission).
      • riskmanagement for AI risk analysis recommendations.
      • add_documentation for AI document creation.
  • A clear question or task for the AI. Open-ended "tell me about risk management" produces general-purpose answers; specific questions like "what controls would address SOC 2 CC6.1 in a SaaS context?" produce more useful responses. The AI is a tool; the question is the work.
  • An understanding of what context the AI has access to. The chat receives your accessible risks as background (so it can answer "how am I doing on risk X?"); it does not have access to your full database, your compliance posture, your incident history, or your governance documents unless you provide them in the conversation.
  • Awareness of your organization's policy on sending data to external AI providers. The chat conversations and the AI-generated content are sent to the provider's API for processing. For organizations with sensitivity about external data sharing, this is a procurement-and-policy conversation, not a technical one. If your install uses a self-hosted AI endpoint configured in ai_api_url, the data stays within your control; otherwise it goes to the provider.

Step-by-step

1. Activate the Extra (admin)

Activation is admin-only. The admin navigates to Configure → Extras and activates the AI Extra (or whatever it's labeled in the admin extras list — the Extra's directory is simplerisk/extras/artificial_intelligence/). Activation:

  • Sets the extra_artificial_intelligence configuration setting to true.
  • Adds the AI chat icon to the topbar (visible only when an API key is configured).
  • Exposes the AI configuration settings.

2. Configure the AI provider (admin)

Once activated, the admin configures the AI provider through the AI Extra's configuration surface (typically under Configure in the admin section). The relevant configuration settings:

  • ai_provider — the provider name. Default is anthropic; openai is also supported.
  • ai_model — the specific model to use (e.g., claude-sonnet-4-20250514 for Anthropic).
  • ai_api_key — the API key for the configured provider. Stored encrypted.
  • ai_api_url — optional custom API endpoint. Useful for self-hosted models or for routing through a corporate AI gateway.
  • extra_ai_control_suggestions — toggle for AI assistance on controls.
  • extra_ai_risk_suggestions — toggle for AI assistance on risks.
  • extra_ai_document_suggestions — toggle for AI assistance on documents.

The admin chooses which AI assistance modes to enable; each mode is independent and can be turned on or off without affecting the others.
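The settings above lend themselves to a pre-save sanity check. The following sketch is hypothetical: the setting names come from this article, but the validation rules (and the idea of validating in Python at all) are illustrative assumptions, not SimpleRisk's actual admin code.

```python
# Hypothetical sketch: validating an AI Extra configuration before saving.
# Setting names come from the article; the validation rules are assumptions.

REQUIRED = ("ai_provider", "ai_model", "ai_api_key")
SUPPORTED_PROVIDERS = ("anthropic", "openai")

def validate_ai_config(settings: dict) -> list:
    """Return a list of configuration problems (empty list means OK)."""
    problems = []
    for key in REQUIRED:
        if not settings.get(key):
            problems.append(f"missing required setting: {key}")
    provider = settings.get("ai_provider")
    if provider and provider not in SUPPORTED_PROVIDERS:
        problems.append(f"unsupported provider: {provider}")
    # ai_api_url is optional; if present it should look like an HTTP(S) URL.
    url = settings.get("ai_api_url")
    if url and not url.startswith(("http://", "https://")):
        problems.append(f"ai_api_url is not an HTTP(S) URL: {url}")
    return problems

config = {
    "ai_provider": "anthropic",
    "ai_model": "claude-sonnet-4-20250514",
    "ai_api_key": "sk-...",                             # stored encrypted by SimpleRisk
    "ai_api_url": "https://ai-gateway.example.com/v1",  # optional
}
print(validate_ai_config(config))  # → []
```

The same check is a reasonable thing to run after any config change: a wrong provider name or a malformed gateway URL fails silently at call time otherwise.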

3. Use the AI chat sidebar

Once the Extra is active and configured, the topbar shows an AI chat icon (typically on the right side, near the user account dropdown). Click the icon to open the chat panel — a right-side panel approximately 340 pixels wide that overlays the page content.

Inside the chat panel:

  • Conversation history appears at the top, showing prior messages in the current conversation.
  • Suggested prompts may appear (depending on your install's configuration) to help start a conversation.
  • Message input at the bottom is where you type questions.

The chat is contextually aware of your accessible risks. The conversation context loads:

  • Your full set of accessible risks (filtered by your role's permissions).
  • The specific risk's details, if you're currently viewing a risk's detail page.

This means questions like "summarize my open High and Very High risks" or "what's the residual risk on the risk I'm currently viewing?" can be answered with reference to actual data. Questions outside that scope (compliance posture, incident history, document library) are answered from the AI's general knowledge, not from your SimpleRisk data.

The chat is one-conversation-at-a-time within the session. Closing the panel and reopening preserves the conversation; navigating to a new page may refresh the context (the loaded risks update) but typically preserves the chat history within the session.
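The context-loading behavior described above can be sketched as follows. The field names and structure are illustrative assumptions, not SimpleRisk's actual payload format; the point is the shape of the logic, namely, permission-filtered risks plus the currently viewed risk if there is one.

```python
# Hypothetical sketch of the chat-context assembly described above.
# Field names and structure are illustrative, not SimpleRisk's real payload.

def build_chat_context(accessible_risks, current_risk_id=None):
    """Combine the user's accessible risks with the risk being viewed."""
    context = {
        "risks": accessible_risks,  # already filtered by the user's permissions
        "current_risk": None,
    }
    if current_risk_id is not None:
        context["current_risk"] = next(
            (r for r in accessible_risks if r["id"] == current_risk_id), None
        )
    return context

risks = [
    {"id": 101, "subject": "Unpatched VPN appliance", "severity": "Very High"},
    {"id": 102, "subject": "Stale service accounts", "severity": "Medium"},
]
ctx = build_chat_context(risks, current_risk_id=101)
print(ctx["current_risk"]["subject"])  # → Unpatched VPN appliance
```

Note what this shape implies: a risk outside your permissions never reaches the context, so the AI can't answer questions about it even if it exists in the database.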

4. Use AI control suggestions

When extra_ai_control_suggestions is enabled, the compliance module's control views may surface AI-generated suggestions for control improvements. The exact surface varies by SimpleRisk version; the suggestions are typically presented as a panel or modal on the control's edit view.

The suggestions compare the control's documented attributes against the documents linked to the control and propose:

  • Improvements to the control's description.
  • Additional documents that might justify the control.
  • Gaps where the control's stated implementation may not match its description.

Treat the suggestions as a first draft — useful for catching documentation gaps you might have missed, not authoritative for what the control actually does in your environment. Apply the suggestions only after reviewing them against the operational reality.

5. Use AI risk analysis

When extra_ai_risk_suggestions is enabled, the risk views may expose an AI-driven analysis option that produces a FAIR-style (Factor Analysis of Information Risk) breakdown of a risk. The endpoint is GET /api/v2/ai/recommendations/risk and produces:

  • A structured analysis of the risk's likelihood and impact factors.
  • Recommended mitigation directions.
  • Estimated loss exposure (dollar-denominated, FAIR-style).

The analysis runs against the risk's recorded fields (subject, assessment, scoring, linked assets) and the AI's general knowledge of the threat landscape. The output isn't a substitute for a real FAIR analysis (which requires environment-specific expertise the AI doesn't have), but it's a useful starting prompt for the conversation about whether the risk is correctly scored.

The results are cached in the ai_recommendations_risk table so repeated queries don't re-bill for the same analysis. Refresh the analysis when the risk's underlying details change meaningfully.

6. Use AI document creation

When extra_ai_document_suggestions is enabled, the document module exposes AI-assisted document drafting via GET /api/v2/ai/document/create. The workflow:

  1. Trigger AI document creation from the document add or edit view.
  2. Provide a prompt or topic ("Acceptable Use Policy for a SaaS company handling customer PII").
  3. The AI generates a draft document.
  4. You review, edit, and save the resulting document.

The generated content is a starting point, not a finished policy. Every AI-generated policy needs human review for organizational fit, regulatory accuracy, and operational reality. Don't publish AI-generated documents without that review; the audit conversation about a policy that turns out to be hallucinated content is one to avoid.

7. Manage AI usage and cost

The AI features call out to paid APIs. Cost-conscious management:

  • Set spending limits at the provider level. Both Anthropic and OpenAI dashboards expose monthly spend limits and per-key rate limits. Configure these before turning the Extra loose on a team.
  • Monitor the provider's usage dashboard. Periodic review of the API key's usage tells you which features are being used most heavily.
  • Disable AI assistance modes you're not using. The three feature toggles (extra_ai_control_suggestions, extra_ai_risk_suggestions, extra_ai_document_suggestions) are independent. Turn off the ones you don't actively need.
  • Consider rate-limiting at the API gateway. For organizations with corporate AI gateways, configuring the AI Extra to route through the gateway (ai_api_url) lets the gateway enforce organization-wide limits.
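Budgeting for the spend limits above is straightforward arithmetic. The sketch below is a back-of-envelope estimate: the per-token prices and usage numbers are made-up assumptions, so substitute your provider's current price sheet and your own usage data.

```python
# Back-of-envelope monthly spend estimate for budgeting. The per-token prices
# and the usage figures below are made-up assumptions, not real price data.

PRICE_PER_1M_INPUT = 3.00    # USD per million input tokens (assumed)
PRICE_PER_1M_OUTPUT = 15.00  # USD per million output tokens (assumed)

def monthly_cost(messages_per_day, input_tokens_per_msg, output_tokens_per_msg,
                 days=30):
    input_tokens = messages_per_day * input_tokens_per_msg * days
    output_tokens = messages_per_day * output_tokens_per_msg * days
    return (input_tokens / 1_000_000 * PRICE_PER_1M_INPUT
            + output_tokens / 1_000_000 * PRICE_PER_1M_OUTPUT)

# A ten-person team, each sending ~20 chat messages a day, with the risk
# register loaded as context (~4,000 input tokens) and ~500 output tokens:
print(round(monthly_cost(10 * 20, 4_000, 500), 2))  # → 117.0
```

Notice that input tokens dominate here because the risk register rides along as context on every message; that's the number to watch as the register grows.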

Common pitfalls

A handful of patterns recur with AI features.

  • Treating AI output as authoritative. The most common failure mode. AI-generated content (chat answers, risk analyses, draft documents) is plausible-sounding but not always correct. A control suggestion that mentions a regulation that doesn't apply to your organization, a risk analysis that uses an industry-standard threat model that's wrong for your context, a draft policy that references practices your team doesn't follow — all of these are realistic AI failures. Treat AI output as a draft to be reviewed; don't apply suggestions or publish documents without human verification.

  • Not configuring usage limits. Activation without provider-side spending limits produces a feature that can run up substantial costs unexpectedly. A bug in the calling code, a user accidentally generating many AI requests, or a deliberately abusive use can all produce large bills. Set the limits before activating.

  • Sharing sensitive data with the AI without thinking. The chat sends your messages to the configured provider. For Anthropic and OpenAI, that's the vendor's cloud; for self-hosted models, it stays in your control. If the chat conversation includes sensitive information (specific user names, customer data, security-sensitive details), the data has been transmitted to the provider. Match the AI's privacy posture to the sensitivity of the information you're sharing.

  • Forgetting that AI doesn't see all of SimpleRisk. The chat context is your accessible risks plus the current page's entity. It doesn't see your compliance posture, your incident history, your governance documents, or your assets unless you explicitly include them in the conversation. Questions about those entities that assume the AI has them as context will produce answers based on the AI's general knowledge, not your specific data.

  • Using AI suggestions without reviewing them. A document generated by AI and saved without human review is a document the program can't defend in audit. The same applies to AI control suggestions auto-applied without verification. The AI is fast; the verification is slow; skipping the verification negates the time savings the AI was supposed to provide.

  • Pinning a model name the provider later deprecates. If the ai_model setting points at a provider's model name and the provider retires that model, the AI calls start failing. Periodically check the provider's deprecation announcements and update the model name before the current one reaches retirement.

  • Treating AI cost as zero. Each AI call bills against the API key. A team that uses the chat heavily produces meaningful spend. Treat the AI features as paid capabilities; track the spend; consider whether the value justifies the cost periodically.

  • Hardcoding API keys in version control. The ai_api_key setting holds a credential. If the SimpleRisk database backups end up in version control or shared storage with too-broad access, the key is exposed. Treat the key as a secret; rotate periodically; ensure database backups are stored with appropriate access controls.

  • Letting AI suggestions accumulate without acting on them. A program that turns on AI control suggestions and then never reads them produces unused output and unnecessary cost. Disable features the team isn't actively using.

Related