## Cloud Deployment (Scaleway)
This runbook uses OpenTofu to provision infrastructure and Helm to deploy the chart on Scaleway Kapsule (mutualized). All `just` commands should be run from the repository root. Steps that require manual commands specify their working directory explicitly.
### What OpenTofu provisions

OpenTofu creates the following resources on Scaleway (nl-ams region):

- A VPC and private network
- A Kapsule cluster (mutualized, k8s v1.32) with an autoscaling node pool (PLAY2-MICRO, 1-3 nodes)
- A security group allowing HTTP/HTTPS traffic
PostgreSQL runs in-cluster via the Bitnami Helm chart (not as a Scaleway managed database). The container registry (`srdp-registry`) must be created beforehand via the Scaleway Console.
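If you prefer the CLI to the Console, the registry namespace can also be created with the Scaleway CLI; a sketch, assuming `scw` is installed and configured (the Console steps remain the source of truth):

```bash
# One-time setup: create the container registry namespace in nl-ams
scw registry namespace create name=srdp-registry region=nl-ams is-public=false
```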
### 1) Prepare cloud credentials

- Copy `kubernetes/opentofu/secrets.sh.example` to `kubernetes/opentofu/secrets.sh` and fill in your Scaleway credentials (`SCW_ACCESS_KEY`, `SCW_SECRET_KEY`, `SCW_DEFAULT_PROJECT_ID`).
- Load them before running OpenTofu:

```bash
cd kubernetes/opentofu
source ./secrets.sh
```
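For reference, `secrets.sh` is just a set of environment-variable exports. A minimal sketch with placeholder values (the `.example` file in the repo is authoritative):

```bash
# kubernetes/opentofu/secrets.sh -- placeholder values, never commit this file
export SCW_ACCESS_KEY="SCWXXXXXXXXXXXXXXXXX"
export SCW_SECRET_KEY="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
export SCW_DEFAULT_PROJECT_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
```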
### 2) Build and push container images
Run the build script after sourcing credentials (it logs into the Scaleway registry):
```bash
cd kubernetes/opentofu
source ./secrets.sh
./build-and-push.sh
```
This builds and pushes the Marimo, Quarto, and srdp-etl (Dagster user code) images to `rg.nl-ams.scw.cloud/srdp-registry`.
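To confirm the images landed in the registry, one option (assuming the `scw` CLI) is:

```bash
# List the images pushed to the registry in nl-ams
scw registry image list region=nl-ams
```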
### 3) Provision infrastructure with OpenTofu
```bash
cd kubernetes/opentofu
tofu init -upgrade   # first run only, from kubernetes/opentofu/
cd ../..             # back to repo root
just prod-apply
```
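If you want to review changes before `just prod-apply`, you can preview the plan manually; a sketch (`just prod-apply` remains the supported path):

```bash
# Preview the resources OpenTofu intends to create/change
cd kubernetes/opentofu
source ./secrets.sh
tofu plan
```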
### 4) Export kubeconfig
```bash
just prod-use-kubeconfig   # from repo root
```
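To check that the kubeconfig works, point `kubectl` at it and list the nodes (the path matches the one used in the cleanup section):

```bash
export KUBECONFIG=kubernetes/opentofu/kubeconfig.yaml
kubectl get nodes   # expect 1-3 Ready PLAY2-MICRO nodes
```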
### 5) Prepare production Helm values
- Copy `kubernetes/srdp-chart/values-prod.example.yaml` to `kubernetes/srdp-chart/values-prod.yaml` if you are starting fresh.
- Fill in:
  - `global.domain` and the `oauth2-proxy` cookie/whitelist domains (use a real domain or `<lb-ip>.nip.io` once you know the load balancer IP).
  - Zitadel master key, admin/user DB passwords, Dagster DB password, and OAuth2 client credentials.
  - ACME email for Traefik (Let's Encrypt).
- Replace these placeholder values in `values-prod.yaml`:
  - `CHANGE_ME_POSTGRES_PASS`
  - `CHANGE_ME_ZITADEL_DB_PASS`
  - `CHANGE_ME_DAGSTER_DB_PASS`
  - `CHANGE_ME_ZITADEL_MASTERKEY_32CHARS`
  - `CHANGE_ME_ZITADEL_ADMIN_PASS`
  - `CHANGE_ME_OAUTH_COOKIE_SECRET_32`
  - `XXXXXXXXXXXXXXXXXX` and `XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX` for the OAuth2 client ID/secret
- Keep DB credentials aligned:
  - `CHANGE_ME_POSTGRES_PASS` must be used consistently for:
    - `zitadel-db.auth.postgresPassword`
    - `zitadel-db.primary.initdb.password`
    - `zitadel.zitadel.secretConfig.Database.Postgres.Admin.Password`
  - `CHANGE_ME_ZITADEL_DB_PASS` must be used consistently for:
    - `zitadel-db.auth.password`
    - `zitadel.zitadel.secretConfig.Database.Postgres.User.Password`
  - `CHANGE_ME_DAGSTER_DB_PASS` must be used consistently for:
    - the Dagster password inside `zitadel-db.primary.initdb.scripts`
    - `dagster.postgresql.postgresqlPassword`
- Master key format: ZITADEL expects a 32-character master key string. Generate one, for example, with `tr -dc 'A-Za-z0-9' </dev/urandom | head -c 32` (a sketch for generating all the secrets follows this list).
- Password complexity: Zitadel's first human/admin password must include uppercase, lowercase, digits, and at least one symbol. For example, use `SrdpTest123!` rather than `srdpTest123`.
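The same `/dev/urandom` recipe works for the other secrets. A sketch for generating candidates in one go (the lengths other than the 32-character master key are assumptions, not chart requirements):

```bash
# Generate candidate values for the CHANGE_ME_* placeholders
gen() { tr -dc 'A-Za-z0-9' </dev/urandom | head -c "$1"; echo; }
gen 32   # CHANGE_ME_ZITADEL_MASTERKEY_32CHARS (must be exactly 32 chars)
gen 32   # CHANGE_ME_OAUTH_COOKIE_SECRET_32
gen 24   # CHANGE_ME_POSTGRES_PASS (assumed length)
gen 24   # CHANGE_ME_ZITADEL_DB_PASS (assumed length)
gen 24   # CHANGE_ME_DAGSTER_DB_PASS (assumed length)
# Pick CHANGE_ME_ZITADEL_ADMIN_PASS by hand: it must include a symbol
# (alphanumeric-only output from gen() would fail Zitadel's complexity check)
```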
The production values template enables PostgreSQL replication (`architecture: replication`) with a read replica. Daily backups via a CronJob are already configured in the base `values.yaml`.
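Once the release is up (step 6), you can sanity-check that the read replica and the backup CronJob actually exist. Resource names depend on the release, so adjust as needed:

```bash
# Look for the replica StatefulSet and the daily backup CronJob
kubectl get statefulsets,cronjobs -n srdp
```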
### 6) Deploy with Helm (staged rollout)
#### A. Bring up Traefik only (to get the LB IP)

```bash
just prod-traefik-only
```
#### B. Update domains once the LB IP exists

```bash
just prod-get-values   # prints LOAD_BALANCER_IP
```
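Equivalently, the IP can be read straight from the Traefik service (the service name and namespace match the ones used in the cleanup section):

```bash
kubectl get svc srdp-traefik -n srdp \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```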
Replace every occurrence of the old LB IP in `values-prod.yaml` with `<LB_IP>.nip.io`; a `sed` one-liner is sketched after the list. The fields that contain it are:

- `global.domain`
- `zitadel.zitadel.configmapConfig.ExternalDomain`
- `zitadel.zitadel.configmapConfig.firstInstance.org.human.email.address` (the `zitadel-admin@auth.…` address)
- `zitadel.login.customConfigmapConfig`: the `CUSTOM_REQUEST_HEADERS` value (`Host: auth.…` and `X-Zitadel-Public-Host: auth.…`)
- `oauth2-proxy.extraArgs`: `cookie-domain`, `whitelist-domain`, `oidc-issuer-url`, and the `Host: auth.…` header
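A quick way to do the replacement in one pass, assuming the old IP appears nowhere it shouldn't be touched (`OLD_LB_IP`/`NEW_LB_IP` are placeholders; keep the `.bak` backup and review the diff before deploying):

```bash
# Replace the old LB IP everywhere in values-prod.yaml
sed -i.bak 's/OLD_LB_IP/NEW_LB_IP/g' kubernetes/srdp-chart/values-prod.yaml
diff kubernetes/srdp-chart/values-prod.yaml.bak kubernetes/srdp-chart/values-prod.yaml
```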
#### C. Enable Zitadel + OAuth2-Proxy

```bash
just prod-auth-only
```
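Zitadel needs a little while to initialize its database, so it is worth watching the auth stack come up before moving on (pod names depend on the release):

```bash
kubectl get pods -n srdp -w   # Ctrl-C once zitadel and oauth2-proxy are Running/Ready
```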
#### D. Configure Zitadel apps

- In Zitadel (`https://auth.<LB_IP>.nip.io/`), create OIDC apps for Marimo, Quarto, and Dagster with redirect URIs:
  - `https://marimo.<LB_IP>.nip.io/oauth2/callback`
  - `https://quarto.<LB_IP>.nip.io/oauth2/callback`
  - `https://dagster.<LB_IP>.nip.io/oauth2/callback`
- Copy the client ID/secret into `values-prod.yaml` (`oauth2-proxy` config section).
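If anything misbehaves in this step, first confirm Zitadel answers on its public domain; the standard OIDC discovery endpoint is a convenient probe:

```bash
# Expect a JSON document describing the issuer and its endpoints
curl -fsS "https://auth.<LB_IP>.nip.io/.well-known/openid-configuration" | head -c 300; echo
```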
#### E. Final deploy with all apps enabled

```bash
just prod-full
```
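A quick smoke test of the public endpoints; unauthenticated requests should come back as redirects into the OAuth2 flow rather than errors (`<LB_IP>` is a placeholder):

```bash
# Print the HTTP status for each app; expect a 3xx redirect to the auth flow
for app in marimo quarto dagster; do
  printf '%s: ' "$app"
  curl -sko /dev/null -w '%{http_code}\n' "https://$app.<LB_IP>.nip.io/"
done
```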
### Quick reference: full deployment flow
```bash
# 1. Prepare (first time only, from kubernetes/opentofu/)
cd kubernetes/opentofu && source ./secrets.sh && tofu init -upgrade

# 2. Build and push container images (still from kubernetes/opentofu/)
./build-and-push.sh

# 3. Provision infrastructure (from repo root)
cd ../..
just prod-apply
just prod-use-kubeconfig

# 4. Staged Helm rollout (from repo root)
just prod-traefik-only
just prod-get-values   # note the LOAD_BALANCER_IP
# → update values-prod.yaml with LB IP in domain fields + secrets
just prod-auth-only
# → configure Zitadel OIDC apps + copy client ID/secret into values-prod.yaml
just prod-full
```
### 7) Clean up
```bash
just prod-destroy
```
This runs `prod-uninstall` first (which deletes the Traefik LoadBalancer service to release the Scaleway-managed LB, uninstalls the Helm release, and only then removes leftover jobs/PVCs) before running `tofu destroy` to remove the cluster and network infrastructure.
If you prefer manual commands:
```bash
# Delete the LB service first (Scaleway LB must be released before the
# private network can be destroyed)
export KUBECONFIG=kubernetes/opentofu/kubeconfig.yaml
kubectl delete svc srdp-traefik -n srdp --ignore-not-found
sleep 30
helm uninstall srdp -n srdp
kubectl delete jobs --all -n srdp
kubectl delete pvc --all -n srdp
cd kubernetes/opentofu && source ./secrets.sh && tofu destroy -auto-approve
```
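To confirm nothing was left behind after `tofu destroy` (a sketch, assuming the `scw` CLI):

```bash
scw k8s cluster list region=nl-ams          # should no longer list the Kapsule cluster
scw vpc private-network list region=nl-ams  # should no longer list the private network
```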
- Quarto Static Site:
  - URL: `https://quarto.<your-public-domain>`
  - You should have access immediately without needing to log in again (Single Sign-On).