
10.05 Configuring AI Providers

Pick a provider (Anthropic, OpenAI, Google Gemini, Mistral, xAI Grok, Ollama, or custom OpenAI-compatible), get an API key (or set up Ollama locally), set the provider and model in SimpleRisk's AI settings, configure the per-feature toggles, capture the FAIR context, and verify with a test call.

Requires: AI Extra

Provider configuration, API key storage, and per-provider call dispatch all live in the AI Extra at simplerisk/extras/artificial_intelligence/. See The AI Extra Overview for what the Extra does before reading this article.

Why this matters

Provider choice affects cost, performance, privacy, and quality. Anthropic's Claude tends to do well on substantive analysis; OpenAI's GPT-4o is fast and broadly capable; local Ollama keeps everything in your environment but with quality trade-offs. Once configured, the choice is mostly invisible to operators (the AI just produces output), but the procurement and privacy implications are real upfront.

This article walks through the configuration end-to-end: from "Extra activated" to "AI calls succeeding."

Before you start

Have these in hand:

  • Admin access to Configure → Extras → Artificial Intelligence Extra for the configuration UI.
  • The AI Extra activated. See Installing Extras.
  • A provider chosen. Decision factors:
      • Privacy: regulated content (PHI, PII, classified) → Ollama or a privacy-tier commercial provider.
      • Quality: substantive analysis → Claude Sonnet 4, GPT-4o, Gemini 2.0 Pro.
      • Cost: low-volume / budget-constrained → smaller / cheaper models (gpt-4o-mini, gemini-1.5-flash, claude-haiku tiers).
      • Latency: interactive workflows → faster models (gpt-4o-mini, gemini-1.5-flash, Ollama on capable hardware).
  • An API key for the chosen provider (or a working Ollama install).
  • The cron queue worker running — AI calls execute via the worker. Without it, AI jobs queue forever.
  • A test risk or test document for verification.
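The worker requirement above is usually satisfied by a standard cron entry. A minimal sketch, assuming SimpleRisk lives at /var/www/simplerisk and the worker is scheduled every minute (both are assumptions — match the path, PHP binary, and schedule to your install):

```shell
# Example crontab entry for the user that runs SimpleRisk
# (path and schedule are assumptions - adjust to your install):
#   * * * * * php /var/www/simplerisk/cron/cron_queue_worker.php >/dev/null 2>&1

# Quick check that some worker entry is scheduled for the current user;
# prints the matching line, or a "not found" note if it is missing:
crontab -l 2>/dev/null | grep cron_queue_worker \
  || echo "cron_queue_worker not found in this user's crontab"
```

If nothing is scheduled, queued AI jobs will sit in "processing" indefinitely, which is the failure mode to rule out first.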

Step-by-step

1. Pick the provider

Decide based on the factors above. Common starting points:

  • Anthropic Claude if you want strong substantive analysis and don't need cheapest-possible cost.
  • OpenAI GPT-4o if you want broadly capable and easy to integrate.
  • Google Gemini if you're already in Google Cloud and want consolidated billing.
  • Ollama if data sovereignty is non-negotiable and you have local infrastructure for inference.
  • Mistral / Grok for specific use cases or as alternatives to the above.

2. Get an API key (commercial providers)

The procurement varies by provider:

  • Anthropic: https://console.anthropic.com/settings/keys. Create an account; provision an API key.
  • OpenAI: https://platform.openai.com/api-keys. Create an account; provision an API key.
  • Google Gemini: https://aistudio.google.com/app/apikey or via Google Cloud Console for production use.
  • Mistral: https://console.mistral.ai/. Create an account; provision an API key.
  • xAI Grok: https://x.ai/api. Create an account; provision an API key.

For commercial providers, also configure billing and any spending alerts at the provider's console — runaway integrations can produce surprise bills.
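Before pasting a key into SimpleRisk, it is worth confirming it authenticates at all. A sketch using the providers' public model-listing endpoints (the URLs and headers below are the providers' own APIs, not SimpleRisk's; a 200 means the key works, a 401 means it doesn't):

```shell
#!/bin/sh
# Sanity-check a freshly provisioned API key with a lightweight
# "list models" call, before configuring it in SimpleRisk.

models_url() {
  # Map a provider slug to its public model-listing endpoint.
  case "$1" in
    openai)    echo "https://api.openai.com/v1/models" ;;
    anthropic) echo "https://api.anthropic.com/v1/models" ;;
    mistral)   echo "https://api.mistral.ai/v1/models" ;;
    *)         echo "unknown" ;;
  esac
}

# Calls only fire when a key is actually exported, so the script
# is safe to run as-is.
if [ -n "${OPENAI_API_KEY:-}" ]; then
  curl -s -o /dev/null -w "openai: %{http_code}\n" \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    "$(models_url openai)"
fi

if [ -n "${ANTHROPIC_API_KEY:-}" ]; then
  curl -s -o /dev/null -w "anthropic: %{http_code}\n" \
    -H "x-api-key: $ANTHROPIC_API_KEY" \
    -H "anthropic-version: 2023-06-01" \
    "$(models_url anthropic)"
fi
```

A key that fails here will also fail inside SimpleRisk, so this separates "bad key" from "bad SimpleRisk configuration" early.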

3. Set up Ollama (local provider)

For local-inference deployment:

  1. Install Ollama on a server with appropriate resources (https://ollama.com/download). For substantive AI tasks, a GPU is highly recommended.
  2. Pull the model(s) you'll use: ollama pull llama3.2, ollama pull mistral, etc.
  3. Run Ollama in serve mode: ollama serve. By default, it listens on http://localhost:11434.
  4. If SimpleRisk runs on a different server, configure Ollama to listen on a network-accessible interface and ensure the network path is open.
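Step 4 (network-accessible Ollama) is controlled by the OLLAMA_HOST environment variable. A minimal sketch, assuming a systemd-managed install (service name, drop-in path, and the placeholder host are assumptions — adjust to your environment):

```shell
# Foreground run, bound to all interfaces instead of localhost only
# (OLLAMA_HOST is Ollama's bind-address variable):
#   OLLAMA_HOST=0.0.0.0:11434 ollama serve
#
# For a systemd-managed install, a drop-in override such as
# /etc/systemd/system/ollama.service.d/override.conf:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0:11434"
# then: systemctl daemon-reload && systemctl restart ollama
#
# From the SimpleRisk host, confirm the network path is open
# (GET /api/tags lists the pulled models):
#   curl http://<ollama-host>:11434/api/tags
```

Exposing Ollama beyond localhost widens the attack surface; restrict the listening interface or firewall it to the SimpleRisk host only.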

4. Configure the provider in SimpleRisk

Sidebar: Configure → Extras → Artificial Intelligence Extra → Configure (or the equivalent label). The settings page exposes:

  • Provider (ai_provider) — pick from the dropdown: Anthropic, OpenAI, Google Gemini, Mistral, Grok, Ollama, Custom.
  • API Key (ai_api_key) — paste the API key. Stored encrypted in settings.
  • Model (ai_model) — pick the specific model. Defaults vary per provider:
      • Anthropic default: claude-sonnet-4-20250514.
      • OpenAI default: gpt-4o.
      • Gemini default: gemini-2.0-flash.
      • Ollama: whatever model you've pulled (e.g., llama3.2).
  • Endpoint URL (for Custom or Ollama) — the full URL of the provider's API or your local Ollama instance (e.g., http://localhost:11434/v1).

Save.

5. Configure the per-feature toggles

The Extra adds AI capabilities to several SimpleRisk workflows. Toggle each:

  • extra_ai_risk_suggestions — enables AI-assisted FAIR analysis on risks.
  • extra_ai_control_suggestions — enables AI-assisted control-to-document matching.
  • extra_ai_document_suggestions — enables AI-assisted document drafting and gap analysis.

Enable only the features you'll use. Disabled features don't make AI calls; you save on cost and reduce surface area.

6. Capture the AI context (for FAIR analysis)

If you're using extra_ai_risk_suggestions, the AI's FAIR estimates are much better when grounded in your program's context. The Extra exposes a one-time questionnaire:

  1. Sidebar: Configure → Extras → Artificial Intelligence Extra → Context (or equivalent).
  2. Walk through the questionnaire: typical loss magnitudes for your organization, threat assumptions, vulnerability levels, etc.
  3. Save.

The context is stored in the ai_context_* settings; subsequent FAIR analyses use it.

Update the context when the organization's risk profile materially changes (mergers, business model shifts, regulatory changes).

7. Verify with a test call

Verify the configuration end-to-end:

  1. Open a test risk in SimpleRisk.
  2. Click the AI assistance trigger (the icon or button added by the Extra; varies by version).
  3. The job queues; the UI shows "processing."
  4. Wait for the cron worker to pick up the job (typically seconds to a minute).
  5. The AI's output appears for operator review.

Verify:

  • The AI returned a sensible response. If the response is empty or nonsensical, check the model selection (a too-small model may struggle with substantive prompts) and the prompt configuration.
  • The response was attributed to the right provider. The audit log should record the provider that handled the call.
  • No error in the cron worker logs. Failed AI calls produce errors; check the SimpleRisk debug log.

If the test fails:

  • 401 / authentication error: API key is wrong or expired. Regenerate at the provider; update SimpleRisk.
  • 402 / payment required: provider billing is unfunded. Check provider console.
  • 429 / rate limit: hitting provider rate limits. Slow down or upgrade your provider tier.
  • Connection timeout: SimpleRisk can't reach the provider (firewall, network).
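The failure modes above can be folded into a small triage helper for reading worker and debug logs. A sketch — the code-to-advice mapping simply restates the list above and is not part of SimpleRisk itself:

```shell
#!/bin/sh
# Map an HTTP status from a failed AI call to the likely fix.
# Mirrors the troubleshooting list above; a log-reading aid only.
triage_ai_error() {
  case "$1" in
    401) echo "auth: API key wrong or expired - regenerate at the provider, update SimpleRisk" ;;
    402) echo "billing: provider account unfunded - check the provider console" ;;
    429) echo "rate limit: slow down or upgrade the provider tier" ;;
    000|timeout) echo "network: provider unreachable - check firewall and egress rules" ;;
    *)   echo "unhandled status $1 - check the SimpleRisk debug log" ;;
  esac
}

triage_ai_error 401
triage_ai_error 429
```

Anything outside these codes (malformed responses, empty output) points back at model selection or prompt configuration rather than connectivity.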

8. Set provider-side spending alerts (commercial)

For commercial providers, configure alerts:

  • Anthropic: monthly spend alerts in the console.
  • OpenAI: usage limits and email alerts.
  • Google Cloud: billing alerts on the Gemini-using project.

Set alert thresholds you can defend operationally to your finance team. Catching a 10× budget spike early matters more than the exact threshold value.

9. Plan for provider changes

The AI provider landscape evolves rapidly. Programs that picked Provider X two years ago may find Provider Y is now a better fit (cost, quality, privacy). Plan for occasional provider switches:

  • Configuration is per-install, so switching is straightforward (change the provider, key, and model; verify).
  • Existing AI-generated content is unaffected — the data is just records in SimpleRisk; the provider that generated them isn't recorded permanently.
  • Coordinate the switch with operations: the AI behavior may change subtly with different models; warn operators that AI suggestions may differ in shape.

10. Document the configuration

Capture in your operations runbook:

  • Which provider you're using.
  • Which model.
  • Where the API key is stored.
  • Who manages the provider account (for billing, key rotation).
  • Where the FAIR context document lives.
  • How to disable AI features in case of a provider outage.

Common pitfalls

A handful of patterns recur with AI provider configuration.

  • Configuring without testing. "It saved, so it must work" — until the first AI call fails 12 hours later. Test immediately after configuration.

  • Using a model that doesn't fit the task. A small/cheap model may handle short summarization fine but produce poor FAIR analyses. Match model capability to use case.

  • Picking commercial providers for regulated content without checking privacy posture. PHI to OpenAI without a BAA is a HIPAA violation. Verify the provider's data-handling commitments before configuring.

  • Hardcoding the API key in source. Use the Extra's encrypted setting; don't paste into config files or environment variables that bleed into logs.

  • Forgetting to fund the provider account. API calls fail with 402 once the trial credit runs out. Set up billing.

  • Not setting spending alerts. A misbehaving integration that calls AI in a tight loop produces a surprise bill. Alerts catch it.

  • Switching providers mid-program without warning operators. AI output changes; operators see different shapes; trust drops. Communicate the change.

  • Trying to use Ollama with a model too small for the task. A 3B-parameter model can't do substantive FAIR analysis well. Either scale up the model and the hardware or use a commercial provider.

  • Configuring Ollama at a network address SimpleRisk can't reach. Firewall rules or non-routable addresses cause connection failures. Verify network path.

  • Letting the AI Extra's API key expire silently. Keys can be revoked or expire; AI calls suddenly fail. Track key lifecycles.

  • Activating the Extra and the per-feature toggles without considering the user-facing surface. Operators see new AI buttons in the UI; they need to know what they do. Train before activating.

  • Treating AI output as canonical without operator review. The Extra's design assumes review. Don't bypass it.

Reference

  • Permission required: check_admin for provider configuration and per-feature toggles.
  • API endpoint(s): None for configuration; AI use cases expose endpoints under /api/v2/ai/....
  • Implementing files: simplerisk/extras/artificial_intelligence/index.php (Extra activation, configuration UI); simplerisk/includes/artificial_intelligence.php ($AI_PROVIDERS, AIClient, callAnthropicNative(), callOpenAICompatible()); simplerisk/cron/cron_queue_worker.php (the worker that processes AI jobs).
  • Database tables: Standard job-queue tables (used by the cron worker for AI jobs).
  • config_settings keys: extra_artificial_intelligence (Extra activation); ai_provider (provider slug); ai_api_key (encrypted); ai_model (model name); per-feature extra_ai_risk_suggestions, extra_ai_control_suggestions, extra_ai_document_suggestions; ai_context_* (FAIR context fields); ai_context_last_saved, ai_context_last_updated.
  • Supported providers: Anthropic, OpenAI, Google Gemini, Mistral, xAI Grok, Ollama, Custom (OpenAI-compatible).
  • External dependencies: A provider account and API key (commercial), or a working Ollama install (local); the cron queue worker; outbound network access to the provider (commercial).