01.04 Installing on AWS, Azure, and GCP

SimpleRisk's cloud deployment patterns — the AWS Marketplace AMI, the Docker-based deployments on managed Kubernetes, the bring-your-own-VM patterns on Azure and GCP, and the cloud-specific concerns (managed databases, load balancers, secrets management).

Why this matters

Most production SimpleRisk deployments today run in a public cloud, even when the organization started with on-premise plans. The reasons aren't SimpleRisk-specific — modern infrastructure operations gravitate toward managed databases, container orchestration, and the operational tooling cloud providers ship with their platforms. SimpleRisk doesn't fight this; it deploys cleanly into any of the major clouds via the same patterns that work for native Linux (Installing on Linux) or Docker (Installing via Docker). The cloud-specific concerns layer on top.

The honest scope to know: SimpleRisk doesn't ship cloud-native deployment artifacts in the application repository. There's no Terraform module, no CloudFormation template, no Helm chart bundled with SimpleRisk's source. What ships is the application code (this repository) and the Docker images (the simplerisk/docker repository); turning those into a cloud deployment is the operator's work. SimpleRisk does maintain published cloud-marketplace artifacts for some clouds (the AWS Marketplace AMI is the most prominent example); for those, follow the marketplace listing's instructions.

The other reason this matters: cloud deployments have recurring operational concerns the on-premise patterns don't have — managed-database backups need to be configured at the cloud provider level (not in SimpleRisk), secrets management needs to integrate with the cloud's secret store (AWS Secrets Manager, Azure Key Vault, GCP Secret Manager), TLS certificates may come from the cloud's certificate manager. These aren't SimpleRisk-specific decisions, but they affect how you configure SimpleRisk and how you operate it once deployed.

Before you start

Have these in hand:

  • A cloud account with appropriate permissions to provision compute, networking, database, and secrets resources.
  • A deployment shape decision: AWS Marketplace AMI, Docker on managed Kubernetes (EKS/AKS/GKE), Docker Compose on a single VM, or native Linux on a single VM. The choice affects everything downstream.
  • A read on the System Requirements — the prerequisites apply regardless of where the workload runs.
  • A read on either Installing on Linux or Installing via Docker depending on which deployment shape you've picked. The cloud article describes the cloud-specific concerns; the underlying install workflow comes from one of those other articles.
  • A naming and tagging strategy for the cloud resources — VPCs, subnets, security groups, IAM roles, etc. Cloud deployments accumulate resources fast; consistent naming and tagging from the start avoids "what does this resource even do" archaeology later.

Step-by-step

Pattern 1: AWS Marketplace AMI

The AWS Marketplace AMI is the simplest cloud deployment for organizations on AWS. SimpleRisk publishes a pre-built AMI that includes the application, the prerequisites, and a default configuration; launching an EC2 instance from the AMI gives you a working SimpleRisk install in minutes.

Steps:

  1. Find the AMI in the AWS Marketplace. Search for "SimpleRisk" in the Marketplace; the SimpleRisk-published listing has the official AMI.
  2. Subscribe to the AMI through the Marketplace UI. Pricing terms are on the listing.
  3. Launch an EC2 instance from the AMI. Pick instance size per System Requirements sizing guidance; t3.medium or m5.large are reasonable starting points for small-to-mid deployments.
  4. Configure security groups: inbound 80/443 from the appropriate sources (typically your corporate network's egress IPs or the load balancer in front), inbound 22 for SSH from administrators only.
  5. Wait for the instance to fully boot (a few minutes). The AMI's first-boot scripts initialize the database, configure default credentials, and start the web server.
  6. Open the instance's public DNS or IP in a browser. You'll see SimpleRisk's installer or, for some AMI versions, a partially-configured install with default admin credentials documented in the Marketplace listing.
  7. Complete the initial setup through the wizard if presented, or change the default credentials immediately if the install came pre-configured. Either way, change the admin password on first login.
  8. Configure HTTPS through an Application Load Balancer (ALB) with an ACM certificate. The ALB terminates TLS and forwards HTTP to the EC2 instance. See HTTPS and TLS Configuration.
  9. Set up backups of the EC2 instance's EBS volume (AWS Backup or scheduled snapshots) and, if using an external RDS database, RDS automated backups.
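
Steps 3 and 4 can be sketched with the AWS CLI. This is illustrative, not authoritative: the AMI ID, VPC/subnet IDs, key-pair name, and CIDR ranges below are placeholders — substitute the SimpleRisk AMI ID from your Marketplace subscription and your own network values.

```shell
# Security group: 80/443 from the corporate egress range, 22 from admins only.
SG_ID=$(aws ec2 create-security-group \
  --group-name simplerisk-web \
  --description "SimpleRisk web access" \
  --vpc-id vpc-0123456789abcdef0 \
  --query GroupId --output text)
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 443 --cidr 203.0.113.0/24
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 80 --cidr 203.0.113.0/24
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 22 --cidr 198.51.100.10/32

# Launch from the Marketplace AMI (t3.medium per the sizing guidance above).
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.medium \
  --key-name my-admin-key \
  --security-group-ids "$SG_ID" \
  --subnet-id subnet-0123456789abcdef0 \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=simplerisk}]'
```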

The AMI pattern is the lowest-effort path to a running SimpleRisk on AWS. The trade-off is coupling — you're running on SimpleRisk's curated AMI, which constrains your customization options compared to building your own AMI or running in containers. For most evaluation deployments and many production deployments, that's fine; for organizations with strict standardized-image policies, the AMI may not fit.

Pattern 2: Docker on managed Kubernetes (EKS / AKS / GKE)

For organizations running Kubernetes, SimpleRisk's Docker image deploys as standard pods. The deployment shape mirrors the Docker Compose pattern documented in Installing via Docker, translated to Kubernetes manifests.

Reference manifest shape:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: simplerisk
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simplerisk
  template:
    metadata:
      labels:
        app: simplerisk
    spec:
      containers:
      - name: simplerisk
        image: simplerisk/simplerisk:<tag>
        ports:
        - containerPort: 80
        env:
        - name: DB_HOST
          value: "<managed-database-endpoint>"
        - name: DB_NAME
          value: "simplerisk"
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: simplerisk-db
              key: user
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: simplerisk-db
              key: password
        volumeMounts:
        - name: config
          mountPath: /var/www/simplerisk/includes/config.php
          subPath: config.php
        - name: uploads
          mountPath: /var/www/simplerisk/uploads
        - name: logs
          mountPath: /var/www/simplerisk/logs
      volumes:
      - name: config
        configMap:
          name: simplerisk-config
      - name: uploads
        persistentVolumeClaim:
          claimName: simplerisk-uploads
      - name: logs
        emptyDir: {}

Plus a Service and an Ingress (or LoadBalancer-type Service) in front for traffic ingress.
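
A minimal sketch of that Service-plus-Ingress pair, assuming the cloud's ingress controller is already installed. The `ingressClassName` value and any TLS/certificate annotations are provider-specific (ALB controller on EKS, AGIC on AKS, GCE on GKE); the values below are placeholders, not definitive settings.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: simplerisk
spec:
  selector:
    app: simplerisk
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simplerisk
spec:
  ingressClassName: <provider-ingress-class>  # placeholder; provider-specific
  rules:
  - host: simplerisk.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: simplerisk
            port:
              number: 80
```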

Kubernetes-specific notes:

  • Database: use a managed database service (RDS for AWS EKS, Azure Database for AKS, Cloud SQL for GKE) rather than running MySQL in-cluster. Managed databases handle backups, failover, and patching at the cloud-provider level.
  • Secrets: use the cloud-provider's secret manager for database credentials and other sensitive values. Mount them into the pod via the standard Kubernetes Secret patterns or via the provider's CSI driver (AWS Secrets and Configuration Provider, Azure Key Vault Provider for Secrets Store CSI Driver, GCP Secret Manager CSI Provider).
  • Persistent storage: use the cloud-provider's PV/PVC implementation (EBS for EKS, Azure Disk for AKS, Persistent Disk for GKE). The uploads volume needs to be ReadWriteMany if you scale to multiple SimpleRisk pods; ReadWriteOnce works for single-pod deployments.
  • Cron jobs: use Kubernetes CronJob resources to invoke the SimpleRisk cron scripts on schedule, or designate one SimpleRisk pod as the cron runner.
  • Ingress and TLS: use the cloud-provider's load balancer (ALB for EKS, Application Gateway for AKS, GCP Load Balancer for GKE) with cloud-managed certificates. Configure HTTP-to-HTTPS redirect at the ingress level; SimpleRisk doesn't do it itself.
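
The CronJob option from the notes above can be sketched as follows. The schedule and the script path inside the image are assumptions — check the simplerisk/docker repository for the actual cron entry point and recommended frequency.

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: simplerisk-cron
spec:
  schedule: "*/5 * * * *"   # assumed frequency
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: cron
            image: simplerisk/simplerisk:<tag>
            # assumed entry point; verify against the image
            command: ["php", "/var/www/simplerisk/cron/cron.php"]
            # reuse the same DB_* env wiring as the Deployment above
```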

The full operational pattern for Kubernetes deployments is beyond the scope of this article — Kubernetes operations is its own discipline, and SimpleRisk on Kubernetes follows standard Kubernetes patterns rather than introducing SimpleRisk-specific deviations. The repository at simplerisk/docker is the source of truth for image specifics and any published Helm-chart-style manifests.

Pattern 3: Docker Compose on a single cloud VM

For organizations not running Kubernetes but wanting containerization, the simpler pattern is a single cloud VM running Docker Compose with the SimpleRisk and MySQL containers (or external managed database).

Steps:

  1. Provision a VM: EC2 instance, Azure VM, or GCE instance per the System Requirements sizing.
  2. Install Docker and Docker Compose on the VM (the cloud providers' stock Linux images usually offer package-manager installs for both).
  3. Follow Installing via Docker for the application install. The cloud-VM context is just "a Linux VM that happens to be in the cloud"; the Docker workflow is identical.
  4. Configure the cloud-provider's load balancer in front of the VM for TLS termination.
  5. Configure backups at the cloud-provider level (snapshots of the VM's disk, plus separate database backups if using managed RDS/Azure DB/Cloud SQL).
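
For step 3 with an external managed database, the compose file takes roughly this shape. The environment variable names follow the Kubernetes manifest earlier in this article; verify them against the simplerisk/docker repository, and pull the credential values from the secrets manager rather than committing them.

```yaml
# docker-compose.yml sketch -- single SimpleRisk container, external managed DB
services:
  simplerisk:
    image: simplerisk/simplerisk:<tag>
    ports:
      - "80:80"            # TLS terminates at the cloud load balancer in front
    environment:
      DB_HOST: "<managed-database-endpoint>"
      DB_NAME: "simplerisk"
      DB_USER: "${SIMPLERISK_DB_USER}"          # injected at runtime, not committed
      DB_PASSWORD: "${SIMPLERISK_DB_PASSWORD}"  # injected at runtime, not committed
    volumes:
      - simplerisk-uploads:/var/www/simplerisk/uploads
    restart: unless-stopped
volumes:
  simplerisk-uploads:
```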

This pattern is appropriate for organizations comfortable with VMs but wanting Docker's deployment artifact model. It's simpler than Kubernetes but less flexible at scale.

Pattern 4: Native Linux on a single cloud VM

The cloud equivalent of the native Linux install. Provision a VM, install SimpleRisk per Installing on Linux, put a load balancer in front. Operationally identical to on-premise native Linux except the VM lives in the cloud.

This pattern is appropriate for organizations with strong Linux operations practice and weaker Docker / Kubernetes practice. It exposes the most cloud-native operational concerns (managing the OS, applying patches, managing the VM lifecycle) but doesn't require learning containerization first.

Cloud-agnostic considerations

Regardless of which pattern you pick, several considerations apply:

Managed database: the major cloud providers all offer managed MySQL services (RDS for MySQL on AWS, Azure Database for MySQL, Cloud SQL for MySQL on GCP). Using the managed service for SimpleRisk's database adds operational benefits (automated backups, point-in-time recovery, automated patching) at the cost of some configuration complexity (network access from the SimpleRisk compute, secrets rotation handled at the database level). For production deployments at any meaningful scale, the managed database is usually worth the operational delta.
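
One concrete piece of that configuration complexity: SimpleRisk's installer rejects the strict SQL modes most managed MySQL services enable by default (see Common pitfalls below), and on a managed service the fix happens at the parameter level, not in my.cnf. A sketch for RDS with the AWS CLI; the parameter-group name and family are illustrative, and Azure Database / Cloud SQL expose the same `sql_mode` setting through their own configuration surfaces.

```shell
aws rds create-db-parameter-group \
  --db-parameter-group-name simplerisk-mysql \
  --db-parameter-group-family mysql8.0 \
  --description "SimpleRisk sql_mode overrides"
# One permissive baseline; adjust to your policy, then attach the group
# to the DB instance before running the SimpleRisk installer.
aws rds modify-db-parameter-group \
  --db-parameter-group-name simplerisk-mysql \
  --parameters "ParameterName=sql_mode,ParameterValue=NO_ENGINE_SUBSTITUTION,ApplyMethod=immediate"
# Confirm from a MySQL client:  SELECT @@GLOBAL.sql_mode;
```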

Secrets management: SimpleRisk's database credentials and (after install) configuration values like AI Extra API keys, scanner API credentials, etc. should live in the cloud-provider's secrets manager and be injected into the SimpleRisk environment at runtime. Avoid baking secrets into AMIs, container images, or environment files committed to version control.
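
A sketch of runtime injection with AWS Secrets Manager; the same shape works with `az keyvault secret show` on Azure or `gcloud secrets versions access` on GCP. The secret name is a placeholder.

```shell
DB_PASSWORD=$(aws secretsmanager get-secret-value \
  --secret-id simplerisk/db-password \
  --query SecretString --output text)
export DB_PASSWORD
# Start SimpleRisk (container or service) only after the export, so the
# value never lands in an AMI, an image layer, or a committed env file.
```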

TLS certificates: cloud-provider certificate managers (ACM on AWS, App Service Certificates on Azure, Google-managed SSL certificates on GCP) are the easiest TLS path for cloud deployments. They auto-renew, integrate with the cloud's load balancers, and don't require handling certificate files on the SimpleRisk hosts.

IAM and network isolation: place SimpleRisk in a private subnet (or equivalent network isolation) and reach it through a load balancer in a public subnet. The SimpleRisk hosts shouldn't have direct internet exposure beyond outbound for the install-time and runtime HTTPS calls. Configure security groups / NSGs / firewall rules to allow only the necessary inbound traffic.

Logging and monitoring: send SimpleRisk's logs (web server logs, PHP error logs, the SimpleRisk debug log) to the cloud-provider's logging service (CloudWatch on AWS, Azure Monitor, Cloud Logging on GCP) for centralized analysis and alerting. Set up monitoring for the SimpleRisk health endpoint and for the underlying VM/container metrics.

Backup and disaster recovery: cloud deployments need a documented backup strategy that includes the database (managed-service automated backups + manual snapshots before major changes), the SimpleRisk uploads volume, and the SimpleRisk configuration. Test the restore path periodically; an untested backup isn't a backup. See Database Backup and Restore.
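
The "manual snapshot before major changes" piece can be as simple as a dump shipped to object storage. A sketch, with the database endpoint and bucket as placeholders (the same idea applies with `az storage` or `gsutil` on the other clouds):

```shell
mysqldump --single-transaction \
  -h <managed-database-endpoint> -u simplerisk -p simplerisk \
  | gzip > simplerisk-$(date +%Y%m%d).sql.gz
aws s3 cp simplerisk-$(date +%Y%m%d).sql.gz s3://<backup-bucket>/simplerisk/
# And actually exercise the restore path periodically, e.g. into a staging DB:
#   gunzip -c simplerisk-YYYYMMDD.sql.gz | mysql -h <staging-db-endpoint> -u simplerisk -p simplerisk
```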

Common pitfalls

A handful of patterns recur with cloud SimpleRisk deployments.

  • Running SimpleRisk in a public subnet without a load balancer. The SimpleRisk container or VM exposed directly to the internet is the wrong shape for production. Always put a load balancer in front, even for "small" deployments — the load balancer provides TLS, HSTS, request-rate limiting, and a clear separation between the public network and the SimpleRisk compute.

  • Using the cloud-provider's default MySQL settings without checking SQL modes. Managed-database services often default to strict SQL modes that SimpleRisk's installer rejects. Disable STRICT_TRANS_TABLES, NO_ZERO_DATE, ONLY_FULL_GROUP_BY at the database parameter group / configuration level before running the installer.

  • Hard-coding database credentials in environment variables visible to the cloud console. Most cloud consoles let other admins see the environment variables passed to a service. For database credentials and other secrets, use the secrets-manager integration; the secret reference is what's in the visible environment, not the secret value itself.

  • Forgetting that the VM's IP changes when the VM restarts. AWS EC2 instances without an Elastic IP, Azure VMs without a static public IP, and GCP instances with ephemeral IPs all get new addresses after a stop/start. The DNS or load-balancer configuration needs to handle this: either use the cloud's persistent-IP option, or point DNS at the load balancer (which has a stable DNS name) rather than at the underlying VM.

  • Building custom AMIs without a maintenance plan. A custom AMI built once and never updated drifts as the base OS gets patches the AMI doesn't include. For custom AMIs, build a refresh pipeline that produces an updated AMI quarterly (or on each SimpleRisk release).

  • Treating the AWS Marketplace AMI as a black box. The Marketplace AMI is convenient but it's also opaque if you need to customize SimpleRisk beyond what the AMI's defaults support. For deployments that need substantial customization (custom auth, custom Extras, integration with internal infrastructure), starting from the Linux or Docker pattern gives more flexibility than starting from the Marketplace AMI.

  • Letting cloud bills surprise you. SimpleRisk's compute requirements are modest, but the surrounding cloud resources (load balancer hours, managed database, snapshot storage, network egress) add up. Tag SimpleRisk's resources consistently and review the cost-allocation reports monthly; surprise bills are usually misconfigured snapshot retention or accidental scale-up.

  • Not configuring outbound HTTPS through the cloud's egress controls. SimpleRisk needs outbound HTTPS to GitHub during install and to scanner/AI/notification endpoints during operation. Cloud deployments with strict outbound egress controls (NAT gateways with limited destinations, security-group egress rules, firewall policies) need to whitelist the relevant destinations. Without that, the install hangs at Step 6 and Extras silently fail to sync.

  • Neglecting the cron jobs. Cloud-deployed SimpleRisk has the same cron requirements as on-premise — the notification cron, the maintenance crons, the Extras' sync crons. In Kubernetes that's CronJob resources; in Docker Compose on a VM it's host crontab plus docker exec; in the AMI it's whatever the AMI's pre-configured cron setup uses. Verify cron is actually running after deployment; the silence-when-broken failure mode is real.
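
The "host crontab plus docker exec" shape from the last pitfall can be sketched as below. The container name and script path are assumptions — verify both against the simplerisk/docker repository.

```shell
# Add to the host's crontab (crontab -e):
#   */5 * * * * docker exec simplerisk php /var/www/simplerisk/cron/cron.php >> /var/log/simplerisk-cron.log 2>&1
# Then verify the job actually fires -- broken cron is silent:
grep CRON /var/log/syslog | tail
```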


Reference

  • Permission required: Cloud provider IAM appropriate to the deployment shape (EC2/EKS/RDS for AWS, equivalents for Azure/GCP). SimpleRisk-side: not applicable (pre-installation).
  • Implementing files: Application code from this repository; container images from the simplerisk/docker repository; cloud deployment artifacts (Marketplace AMI, etc.) maintained outside the application repository.
  • Database tables: Created during Step 6 of the initial wizard, regardless of cloud provider — same schema as Linux/Docker installs.
  • config_settings keys: Set during Step 6 (same keys as native install). Post-install, cloud-deployment-specific values may be overridden via environment variables or mounted config.
  • Cloud resources: Compute (EC2/VM/Pod), managed database (RDS/Azure DB/Cloud SQL — strongly recommended for production), load balancer (ALB/Application Gateway/Cloud LB), persistent storage (EBS/Azure Disk/Persistent Disk), secrets manager, certificate manager, logging service.
  • External dependencies: Cloud-provider services per the deployment shape; outbound HTTPS to raw.githubusercontent.com during install; outbound HTTPS for runtime Extras.