Production Deployment

Architecture

SchemaStack production runs on a Hetzner CAX21 ARM server (4 vCPU, 8GB RAM) with Docker, fronted by Cloudflare. Quarkus services compile to GraalVM native images; Spring Boot services run on JVM.

Cloudflare (all proxied — origin IP hidden)
├── schemastack.io (same-origin, zero CORS)
│   ├── /admin/*   → CF Worker → CF Pages (admin app)
│   ├── /api/*     → CF Worker → Hetzner (metadata-rest)
│   ├── /sse/*     → CF Worker → Hetzner (consumer-worker, streaming)
│   └── /*         → CF Worker → CF Pages (spread app)
├── data.schemastack.io     → Hetzner (workspace-api)
├── docs.schemastack.io     → CF Pages (public docs)
└── dev.schemastack.io      → CF Pages + CF Access (dev docs)

Hetzner CAX21 ARM (4 vCPU, 8GB RAM, Docker Compose)
├── Traefik         (reverse proxy, blue/green)
├── PostgreSQL 15   (metadata DB)
├── RabbitMQ 3.13
├── metadata-rest        (Quarkus native, 128MB)
├── consumer-worker      (Quarkus JVM, 384MB)
├── processor-service    (Spring Boot JVM, 576MB)
├── workspace-api        (Spring Boot JVM, 576MB)
└── Dozzle              (log viewer, 32MB)

Services

| Service | Port | Runtime | Role |
|---|---|---|---|
| metadata-rest | 8080 | Native | Admin API — entities, views, columns, orgs, auth. Runs Liquibase on startup. |
| consumer-worker | 8080 | JVM | RabbitMQ task completion consumer + SSE broadcaster + file upload/thumbnails |
| processor-service | 8082 | JVM | Schema migration processor — consumes from RabbitMQ, executes DDL |
| workspace-api | 8083 | JVM | Dynamic CRUD API — runtime Hibernate via ByteBuddy |

Memory Budget

| Service | Container Limit |
|---|---|
| PostgreSQL | 1 GB |
| RabbitMQ | 512 MB |
| Traefik | 128 MB |
| metadata-rest (native) | 128 MB |
| consumer-worker (JVM, -Xmx256m) | 384 MB |
| processor-service (JVM) | 576 MB |
| workspace-api (JVM) | 576 MB |
| Dozzle | 32 MB |
| Steady-state total | ~3.0 GB |

During blue/green deploy both sets run briefly: ~4.4 GB.

Initial Server Setup

Local Prerequisites

bash
brew install ansible

One-Time Server Setup

After creating a Hetzner CAX21 ARM server (Ubuntu 24.04) with your SSH key:

bash
# SSH in as root
ssh root@schemastack

# Create deploy user (no password — SSH key only)
adduser --disabled-password --gecos "" deploy
usermod -aG sudo deploy
echo "deploy ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/deploy

# Copy SSH keys to deploy user
mkdir -p /home/deploy/.ssh
cp ~/.ssh/authorized_keys /home/deploy/.ssh/
chown -R deploy:deploy /home/deploy/.ssh
chmod 700 /home/deploy/.ssh && chmod 600 /home/deploy/.ssh/authorized_keys
exit

From your Mac:

bash
# Generate deploy key + copy to server
ssh-keygen -t ed25519 -f ~/.ssh/schemastack_deploy
ssh-copy-id -i ~/.ssh/schemastack_deploy deploy@schemastack

Add the SSH config entry so you can use ssh schemastack everywhere:

# ~/.ssh/config
Host schemastack
  Hostname 46.225.228.56
  Port 22
  User deploy
  ProxyCommand none

Cloudflare Origin Certificate

  1. Cloudflare dashboard → select schemastack.io → SSL/TLS → Origin Server
  2. Click Create Certificate
  3. Key type: RSA (2048), Hostnames: *.schemastack.io, schemastack.io, Validity: 15 years
  4. Copy Origin Certificate → save as traefik/certs/origin.pem
  5. Copy Private Key → save as traefik/certs/origin.key

The private key is only shown once — save it immediately.
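Before Traefik serves the pair, it's worth verifying that the saved certificate and key actually match. A quick sanity check — demonstrated here on a throwaway self-signed pair, since in practice you'd point the modulus commands at traefik/certs/origin.pem and traefik/certs/origin.key:

```shell
# Generate a throwaway RSA key + cert purely to demonstrate the check;
# substitute traefik/certs/origin.pem and traefik/certs/origin.key
# for the real verification.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/origin.key -out /tmp/origin.pem \
  -days 1 -subj "/CN=origin.schemastack.io" 2>/dev/null

# An RSA certificate and private key match when their moduli are identical
cert_mod=$(openssl x509 -noout -modulus -in /tmp/origin.pem)
key_mod=$(openssl rsa -noout -modulus -in /tmp/origin.key)
[ "$cert_mod" = "$key_mod" ] && echo "cert and key match"
```

If the moduli differ, one of the two files was pasted from the wrong certificate creation attempt.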

Cloudflare Pages & Worker

One-time setup — create all four Pages projects:

bash
# Frontend apps (Worker proxies to these)
npx wrangler pages project create schemastack-admin --production-branch main
npx wrangler pages project create schemastack-spread --production-branch main

# Docs sites (accessed via CNAME subdomains)
npx wrangler pages project create schemastack-docs --production-branch main
npx wrangler pages project create schemastack-dev --production-branch main

Set custom domains so the docs sites are reachable at their subdomains:

Option A: CLI

bash
npx wrangler pages project edit schemastack-docs --domains docs.schemastack.io
npx wrangler pages project edit schemastack-dev --domains dev.schemastack.io

Option B: Dashboard

  1. Go to Workers & Pages → select the project (e.g. schemastack-docs)
  2. Custom domains tab → Set up a custom domain
  3. Enter docs.schemastack.io → Cloudflare automatically creates the CNAME DNS record
  4. Repeat for schemastack-dev with dev.schemastack.io

TIP

If you already created the CNAME DNS records manually, Cloudflare will detect them and activate the custom domain immediately. If not, the dashboard creates them for you.

One-time setup — deploy the Worker:

bash
cd schemastack-deployment/cloudflare/worker
npm install
npx wrangler deploy
npx wrangler secret put ORIGIN_SECRET   # paste same value as in .env.production.local

Deploying Pages — build + deploy via deploy-pages.sh:

bash
cd schemastack-deployment

./scripts/deploy-pages.sh              # Deploy all 4 sites
./scripts/deploy-pages.sh docs         # Public docs only (docs.schemastack.io)
./scripts/deploy-pages.sh dev          # Dev docs only (dev.schemastack.io)
./scripts/deploy-pages.sh admin        # Admin app only (schemastack.io/admin)
./scripts/deploy-pages.sh spread       # Spread app only (schemastack.io)

The script builds each site from source and deploys to Cloudflare Pages:

| Site | Source repo | Build | CF Pages project |
|---|---|---|---|
| Public docs | schemastack-docs | npm run build:public | schemastack-docs |
| Dev docs | schemastack-docs | npm run build:dev | schemastack-dev |
| Admin app | schemastack-fe | ng build admin | schemastack-admin |
| Spread app | schemastack-fe | ng build spread | schemastack-spread |

Secrets (Ansible Vault)

All production secrets are stored in .env.production.local, encrypted with Ansible Vault:

bash
# Create env file from template and fill in real values
cp .env.production .env.production.local
# Edit — replace all CHANGE_ME values

# Encrypt all secrets with Ansible Vault (same vault password for all)
ansible-vault encrypt .env.production.local
ansible-vault encrypt traefik/certs/origin.key

The vault password is the only secret you need to remember. Encrypted files are safe to commit.
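An encrypted file is easy to recognize: its first line is an $ANSIBLE_VAULT header. A small helper (hypothetical, not part of the repo) can guard against committing a plaintext copy by mistake; the demo files below stand in for .env.production.local and traefik/certs/origin.key:

```shell
# is_vault_encrypted: succeed only if the file starts with the
# $ANSIBLE_VAULT header that ansible-vault writes on encryption.
is_vault_encrypted() {
  head -c 14 "$1" 2>/dev/null | grep -q '^\$ANSIBLE_VAULT'
}

# Demo on two throwaway files:
printf '$ANSIBLE_VAULT;1.1;AES256\n61626364\n' > /tmp/enc.example
printf 'DB_PASSWORD=plaintext\n' > /tmp/plain.example

is_vault_encrypted /tmp/enc.example   && echo "enc.example: encrypted"
is_vault_encrypted /tmp/plain.example || echo "plain.example: NOT encrypted"
```

A check like this fits naturally into a pre-commit hook.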

To avoid typing the vault password every time, store it in a hidden file:

bash
echo 'your-vault-password' > .vault_pass
chmod 600 .vault_pass
# .vault_pass is already in .gitignore

Then pass --vault-password-file .vault_pass to any vault or playbook command:

bash
# Edit encrypted files
ansible-vault edit .env.production.local --vault-password-file .vault_pass

# View without editing
ansible-vault view .env.production.local --vault-password-file .vault_pass

# Decrypt to stdout (useful for piping/diffing)
ansible-vault decrypt .env.production.local --output - --vault-password-file .vault_pass

Without .vault_pass, use --ask-vault-pass instead and enter the password interactively.

Provision Server

bash
cd ansible && ansible-playbook -i inventory.yml playbook-provision.yml --vault-password-file ../.vault_pass

SSH Access to Services

PostgreSQL, RabbitMQ, and Dozzle are only accessible via SSH tunnel:

| Service | Tunnel Command | Local URL |
|---|---|---|
| Dozzle (logs) | ssh -N -L 8888:localhost:8888 schemastack | http://localhost:8888 |
| RabbitMQ Management | ssh -N -L 15672:localhost:15672 schemastack | http://localhost:15672 |
| PostgreSQL | ssh -N -L 5432:localhost:5432 schemastack | psql -h localhost |

Combine multiple tunnels in a single command:

bash
ssh -N -L 8888:localhost:8888 -L 15672:localhost:15672 -L 5432:localhost:5432 schemastack

Container Status & Logs

Checking Status

bash
ssh schemastack

# All containers with status
docker ps -a

# Resource usage (CPU, memory)
docker stats --no-stream

Viewing Logs in Browser (Dozzle)

Dozzle provides a real-time web UI for browsing all container logs. It binds to localhost only (127.0.0.1:8888) so it's not exposed to the internet.

Start an SSH tunnel:

bash
ssh -N -L 8888:localhost:8888 schemastack

Then open http://localhost:8888. Dozzle shows real-time streaming logs for all containers with search, filtering, and multi-container views.

CLI Alternatives

bash
# Logs for a specific container
docker logs <container-name> --tail 100 -f

# All containers from the compose project
cd /opt/schemastack
docker compose logs -f --tail 50

TIP

Docker log rotation is configured at 50MB per file with 3 files max per container, so logs won't fill the disk.
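Assuming the rotation is configured per service in the Compose file (rather than globally in /etc/docker/daemon.json), the relevant fragment would look something like the following; the service name is illustrative:

```yaml
services:
  metadata-rest:
    logging:
      driver: json-file
      options:
        max-size: "50m"   # rotate each log file at 50MB
        max-file: "3"     # keep at most 3 rotated files per container
```

Worst case that is 150MB of logs per container, well under the disk budget.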

Building

Native Image Build (Quarkus)

Uses the docker-native Maven profile with Mandrel builder:

bash
# Single service
./mvnw package -DskipTests -Ddocker-native -pl quarkus/metadata/metadata-rest -am

# All services (via deploy script)
cd schemastack-deployment && ./scripts/build-jars.sh

Full Build & Deploy

bash
# 1. Build all artifacts locally
cd schemastack-deployment
./scripts/build-jars.sh

# 2. Deploy via Ansible (blue/green, zero downtime)
cd ansible
ansible-playbook -i inventory.yml playbook-deploy.yml --vault-password-file ../.vault_pass

Manual Deploy (SSH)

bash
ssh deploy@<server>
/opt/schemastack/scripts/deploy.sh

Blue/Green Flow

  1. Detect active color (blue or green)
  2. Build Docker images from pre-built artifacts
  3. Start opposite color — metadata-rest first (Liquibase migrations)
  4. Wait for healthchecks (/q/health/ready for native, /actuator/health for JVM)
  5. Stop + remove old color — Traefik routes to remaining set
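The color-flip logic in steps 1 and 4 can be sketched roughly as follows (function names are illustrative, not the actual contents of deploy.sh):

```shell
# Step 1 (sketch): the color to deploy is whichever one is not active
other_color() {
  if [ "$1" = "blue" ]; then echo green; else echo blue; fi
}

# Step 4 (sketch): poll a healthcheck URL until it answers 200 (max ~60s)
wait_ready() {
  url=$1
  for _ in $(seq 1 30); do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url") || code=000
    [ "$code" = "200" ] && return 0
    sleep 2
  done
  return 1
}

# e.g. when blue is live, the next deploy targets green:
target=$(other_color blue)
echo "deploying $target"
```

Only after wait_ready succeeds for every new container does the old color get stopped, which is what makes the cutover zero-downtime.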

Routing

Cloudflare DNS

| Record | Type | Target | Proxy | Notes |
|---|---|---|---|---|
| @ | | | | No DNS record needed — CF Worker route handles schemastack.io/* |
| origin | A | Hetzner IP | DNS only (grey cloud) | Worker fetches from here; protected by shared secret header |
| data | A | Hetzner IP | Proxied (orange cloud) | workspace-api public API |
| docs | CNAME | CF Pages | Proxied | Public docs |
| dev | CNAME | CF Pages | Proxied | Dev docs (CF Access) |

Traefik (on Hetzner)

| Host | Path | Target |
|---|---|---|
| origin.schemastack.io | /api/* | metadata-rest:8080 |
| origin.schemastack.io | /sse/* | consumer-worker:8080 |
| data.schemastack.io | /* | workspace-api:8083 |

CF Worker (schemastack.io)

typescript
/admin/*  → CF Pages Service Binding (admin)
/api/*    → fetch(origin.schemastack.io/api/...)
/sse/*    → fetch(origin.schemastack.io/sse/...)   // streams, no buffering
/*        → CF Pages Service Binding (spread)

Security

  • Origin secret header: CF Worker sends X-Origin-Secret on every proxied request to origin.schemastack.io. Traefik router rules use HeadersRegexp to reject requests without a matching header (returns 404). This prevents direct access even though the IP is visible in DNS (grey cloud, required for Worker fetch).
    • Set on Worker: wrangler secret put ORIGIN_SECRET
    • Set on server: ORIGIN_SECRET in .env.production.local
  • data.schemastack.io proxied via Cloudflare (orange cloud) — origin IP hidden, DDoS protected
  • UFW on Hetzner: ports 22, 80, 443
  • Cloudflare Origin Certificate for Hetzner ↔ CF (Full Strict mode)
  • Dev docs (Cloudflare Access + OTP): dev.schemastack.io and all *.schemastack-dev.pages.dev aliases are protected with email-based one-time PIN authentication. Setup:
    1. Zero TrustAccessApplicationsAdd an applicationSelf-hosted
    2. Application name: Dev Docs
    3. Application domain: dev.schemastack.io
    4. Add second domain: *.schemastack-dev.pages.dev (prevents bypass via preview URLs)
    5. Create policy: name Team access, action Allow, include rule Emails → add team email addresses
    6. Authentication method: One-time PIN (default — sends OTP to the allowed email)
    7. Session duration: 24 hours (default, adjustable under application settings)
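The origin-secret check behaves like a tiny decision function — a toy model of the Traefik rule above, not actual Traefik configuration:

```shell
# route: return the status Traefik would give a request whose
# X-Origin-Secret header value is $2, when $1 is the expected secret.
route() {
  expected=$1
  header=${2:-}
  if [ "$header" = "$expected" ]; then
    echo 200   # matching secret: forwarded to the backend
  else
    echo 404   # missing or wrong secret: rejected as not found
  fi
}

route s3cret s3cret   # request proxied by the CF Worker
route s3cret ""       # direct hit on the origin IP
```

Returning 404 rather than 403 avoids confirming to a scanner that anything interesting lives behind the hostname.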

SchemaStack Internal Developer Documentation