# Production Deployment

## Architecture
SchemaStack production runs on a Hetzner CAX21 ARM server (4 vCPU, 8GB RAM) with Docker, fronted by Cloudflare. Quarkus services compile to GraalVM native images; Spring Boot services run on JVM.
```
Cloudflare (all proxied — origin IP hidden)
│
│ schemastack.io (same-origin, zero CORS):
├── /admin/* → CF Worker → CF Pages (admin app)
├── /api/*   → CF Worker → Hetzner (metadata-rest)
├── /sse/*   → CF Worker → Hetzner (consumer-worker, streaming)
├── /*       → CF Worker → CF Pages (spread app)
│
├── data.schemastack.io → Hetzner (workspace-api)
├── docs.schemastack.io → CF Pages (public docs)
└── dev.schemastack.io  → CF Pages + CF Access (dev docs)

Hetzner CAX21 ARM (4 vCPU, 8GB RAM, Docker Compose)
├── Traefik (reverse proxy, blue/green)
├── PostgreSQL 15 (metadata DB)
├── RabbitMQ 3.13
├── metadata-rest (Quarkus native, 128MB)
├── consumer-worker (Quarkus JVM, 384MB)
├── processor-service (Spring Boot JVM, 576MB)
├── workspace-api (Spring Boot JVM, 576MB)
└── Dozzle (log viewer, 32MB)
```

## Services
| Service | Port | Runtime | Role |
|---|---|---|---|
| metadata-rest | 8080 | Native | Admin API — entities, views, columns, orgs, auth. Runs Liquibase on startup. |
| consumer-worker | 8080 | JVM | RabbitMQ task completion consumer + SSE broadcaster + file upload/thumbnails |
| processor-service | 8082 | JVM | Schema migration processor — consumes from RabbitMQ, executes DDL |
| workspace-api | 8083 | JVM | Dynamic CRUD API — runtime Hibernate via ByteBuddy |
## Memory Budget
| Service | Container Limit |
|---|---|
| PostgreSQL | 1 GB |
| RabbitMQ | 512 MB |
| Traefik | 128 MB |
| metadata-rest (native) | 128 MB |
| consumer-worker (JVM, -Xmx256m) | 384 MB |
| processor-service (JVM) | 576 MB |
| workspace-api (JVM) | 576 MB |
| Dozzle | 32 MB |
| Steady-state total | ~3.0 GB |
During blue/green deploy both sets run briefly: ~4.4 GB.
## Initial Server Setup

### Local Prerequisites

```bash
brew install ansible
```

### One-Time Server Setup
After creating a Hetzner CAX21 ARM server (Ubuntu 24.04) with your SSH key:
```bash
# SSH in as root
ssh root@schemastack

# Create deploy user (no password — SSH key only)
adduser --disabled-password --gecos "" deploy
usermod -aG sudo deploy
echo "deploy ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/deploy

# Copy SSH keys to deploy user
mkdir -p /home/deploy/.ssh
cp ~/.ssh/authorized_keys /home/deploy/.ssh/
chown -R deploy:deploy /home/deploy/.ssh
chmod 700 /home/deploy/.ssh && chmod 600 /home/deploy/.ssh/authorized_keys
exit
```

From your Mac:
```bash
# Generate deploy key + copy to server
ssh-keygen -t ed25519 -f ~/.ssh/schemastack_deploy
ssh-copy-id -i ~/.ssh/schemastack_deploy deploy@schemastack
```

Add the SSH config entry so you can use `ssh schemastack` everywhere:
```
# ~/.ssh/config
Host schemastack
  Hostname 46.225.228.56
  Port 22
  User deploy
  ProxyCommand none
```

### Cloudflare Origin Certificate
- Cloudflare dashboard → select `schemastack.io` → SSL/TLS → Origin Server
- Click Create Certificate
- Key type: RSA (2048); Hostnames: `*.schemastack.io`, `schemastack.io`; Validity: 15 years
- Copy Origin Certificate → save as `traefik/certs/origin.pem`
- Copy Private Key → save as `traefik/certs/origin.key`
The private key is only shown once — save it immediately.
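Before wiring these files into Traefik, it can be worth confirming the two files actually belong together. The sketch below uses a throwaway self-signed pair so it runs anywhere; in practice, point the same two `openssl` commands at `traefik/certs/origin.pem` and `traefik/certs/origin.key`:

```shell
# Demo with a throwaway self-signed pair; substitute the real Cloudflare
# files (traefik/certs/origin.pem and origin.key) to check your setup.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
  -keyout "$tmp/origin.key" -out "$tmp/origin.pem" 2>/dev/null

# A certificate and key belong together iff their public keys are identical.
cert_pub=$(openssl x509 -in "$tmp/origin.pem" -noout -pubkey)
key_pub=$(openssl pkey -in "$tmp/origin.key" -pubout 2>/dev/null)
if [ "$cert_pub" = "$key_pub" ]; then
  echo "cert/key pair matches"
else
  echo "MISMATCH - wrong key saved?" >&2
fi
rm -rf "$tmp"
```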
## Cloudflare Pages & Worker
One-time setup — create all four Pages projects:
```bash
# Frontend apps (Worker proxies to these)
npx wrangler pages project create schemastack-admin --production-branch main
npx wrangler pages project create schemastack-spread --production-branch main

# Docs sites (accessed via CNAME subdomains)
npx wrangler pages project create schemastack-docs --production-branch main
npx wrangler pages project create schemastack-dev --production-branch main
```

Set custom domains so the docs sites are reachable at their subdomains:
Option A: CLI
```bash
npx wrangler pages project edit schemastack-docs --domains docs.schemastack.io
npx wrangler pages project edit schemastack-dev --domains dev.schemastack.io
```

Option B: Dashboard
- Go to Workers & Pages → select the project (e.g. `schemastack-docs`)
- Custom domains tab → Set up a custom domain
- Enter `docs.schemastack.io` → Cloudflare automatically creates the CNAME DNS record
- Repeat for `schemastack-dev` with `dev.schemastack.io`
TIP
If you already created the CNAME DNS records manually, Cloudflare will detect them and activate the custom domain immediately. If not, the dashboard creates them for you.
One-time setup — deploy the Worker:
```bash
cd schemastack-deployment/cloudflare/worker
npm install
npx wrangler deploy
npx wrangler secret put ORIGIN_SECRET   # paste same value as in .env.production.local
```

Deploying Pages — build + deploy via `deploy-pages.sh`:
```bash
cd schemastack-deployment
./scripts/deploy-pages.sh          # Deploy all 4 sites
./scripts/deploy-pages.sh docs     # Public docs only (docs.schemastack.io)
./scripts/deploy-pages.sh dev      # Dev docs only (dev.schemastack.io)
./scripts/deploy-pages.sh admin    # Admin app only (schemastack.io/admin)
./scripts/deploy-pages.sh spread   # Spread app only (schemastack.io)
```

The script builds each site from source and deploys to Cloudflare Pages:
| Site | Source repo | Build | CF Pages project |
|---|---|---|---|
| Public docs | schemastack-docs | npm run build:public | schemastack-docs |
| Dev docs | schemastack-docs | npm run build:dev | schemastack-dev |
| Admin app | schemastack-fe | ng build admin | schemastack-admin |
| Spread app | schemastack-fe | ng build spread | schemastack-spread |
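The table above maps naturally onto a site → (repo, build, project) dispatch. A hypothetical sketch of what `deploy-pages.sh` presumably does internally (function and variable names are illustrative, not the script's actual contents):

```shell
# Hypothetical sketch of the deploy-pages.sh dispatch; names are assumptions.
deploy_site() {
  case "$1" in
    docs)   repo=schemastack-docs; build="npm run build:public"; project=schemastack-docs ;;
    dev)    repo=schemastack-docs; build="npm run build:dev";    project=schemastack-dev ;;
    admin)  repo=schemastack-fe;   build="ng build admin";       project=schemastack-admin ;;
    spread) repo=schemastack-fe;   build="ng build spread";      project=schemastack-spread ;;
    *)      echo "unknown site: $1" >&2; return 1 ;;
  esac
  # The real script would run $build inside $repo, then publish the
  # build output with `npx wrangler pages deploy`.
  echo "$repo: $build -> $project"
}

sites="${1:-all}"
[ "$sites" = all ] && sites="docs dev admin spread"
for s in $sites; do deploy_site "$s"; done
```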
## Secrets (Ansible Vault)
All production secrets are stored in .env.production.local, encrypted with Ansible Vault:
```bash
# Create env file from template and fill in real values
cp .env.production .env.production.local
# Edit — replace all CHANGE_ME values

# Encrypt all secrets with Ansible Vault (same vault password for all)
ansible-vault encrypt .env.production.local
ansible-vault encrypt traefik/certs/origin.key
```

The vault password is the only secret you need to remember. Encrypted files are safe to commit.
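A quick guard before committing — every vault-encrypted file begins with the `$ANSIBLE_VAULT` header line, so a header check catches a file you forgot to encrypt (the helper name is just for illustration):

```shell
# A vault-encrypted file starts with a "$ANSIBLE_VAULT;..." header line.
is_vault_encrypted() { head -c 14 "$1" 2>/dev/null | grep -q '^\$ANSIBLE_VAULT'; }

for f in .env.production.local traefik/certs/origin.key; do
  if is_vault_encrypted "$f"; then
    echo "$f: encrypted"
  else
    echo "$f: PLAINTEXT - do not commit" >&2
  fi
done
```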
To avoid typing the vault password every time, store it in a hidden file:
```bash
echo 'your-vault-password' > .vault_pass
chmod 600 .vault_pass
# .vault_pass is already in .gitignore
```

Then pass `--vault-password-file .vault_pass` (or the short form `--vault-pass-file`) to any vault or playbook command:
```bash
# Edit encrypted files
ansible-vault edit .env.production.local --vault-password-file .vault_pass

# View without editing
ansible-vault view .env.production.local --vault-password-file .vault_pass

# Decrypt to stdout (useful for piping/diffing)
ansible-vault decrypt .env.production.local --output - --vault-password-file .vault_pass
```

Without `.vault_pass`, use `--ask-vault-pass` instead and enter the password interactively.
## Provision Server

```bash
cd ansible && ansible-playbook -i inventory.yml playbook-provision.yml --vault-password-file ../.vault_pass
```

## SSH Access to Services
PostgreSQL, RabbitMQ, and Dozzle are only accessible via SSH tunnel:
| Service | Tunnel Command | Local URL |
|---|---|---|
| Dozzle (logs) | ssh -N -L 8888:localhost:8888 schemastack | http://localhost:8888 |
| RabbitMQ Management | ssh -N -L 15672:localhost:15672 schemastack | http://localhost:15672 |
| PostgreSQL | ssh -N -L 5432:localhost:5432 schemastack | psql -h localhost |
Combine multiple tunnels in a single command:
```bash
ssh -N -L 8888:localhost:8888 -L 15672:localhost:15672 -L 5432:localhost:5432 schemastack
```

## Container Status & Logs
### Checking Status
```bash
ssh schemastack

# All containers with status
docker ps -a

# Resource usage (CPU, memory)
docker stats --no-stream
```

### Viewing Logs in Browser (Dozzle)
Dozzle provides a real-time web UI for browsing all container logs. It binds to localhost only (127.0.0.1:8888) so it's not exposed to the internet.
Start an SSH tunnel:
```bash
ssh -N -L 8888:localhost:8888 schemastack
```

Then open http://localhost:8888. Dozzle shows real-time streaming logs for all containers with search, filtering, and multi-container views.
### CLI Alternatives
```bash
# Logs for a specific container
docker logs <container-name> --tail 100 -f

# All containers from the compose project
cd /opt/schemastack
docker compose logs -f --tail 50
```

TIP
Docker log rotation is configured at 50MB per file with 3 files max per container, so logs won't fill the disk.
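The rotation described above corresponds to per-service logging options roughly like the following compose fragment (service name shown for illustration; the real compose file may set this globally via the Docker daemon or a YAML anchor instead):

```yaml
services:
  metadata-rest:
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "3"
```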
## Building

### Native Image Build (Quarkus)
Uses the docker-native Maven profile with Mandrel builder:
```bash
# Single service
./mvnw package -DskipTests -Ddocker-native -pl quarkus/metadata/metadata-rest -am

# All services (via deploy script)
cd schemastack-deployment && ./scripts/build-jars.sh
```

### Full Build & Deploy
```bash
# 1. Build all artifacts locally
cd schemastack-deployment
./scripts/build-jars.sh

# 2. Deploy via Ansible (blue/green, zero downtime)
cd ansible
ansible-playbook -i inventory.yml playbook-deploy.yml --vault-password-file ../.vault_pass
```

### Manual Deploy (SSH)
```bash
ssh deploy@<server>
/opt/schemastack/scripts/deploy.sh
```

### Blue/Green Flow
- Detect active color (blue or green)
- Build Docker images from pre-built artifacts
- Start opposite color — metadata-rest first (Liquibase migrations)
- Wait for healthchecks (`/q/health/ready` for native, `/actuator/health` for JVM)
- Stop + remove old color — Traefik routes to remaining set
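The color flip at the heart of this flow can be sketched as follows (the `ACTIVE_COLOR` file name and compose project names are assumptions, not necessarily what `deploy.sh` actually uses):

```shell
# Hypothetical sketch of the first steps of deploy.sh; names are assumptions.
opposite() { if [ "$1" = blue ]; then echo green; else echo blue; fi; }

active=$(cat /opt/schemastack/ACTIVE_COLOR 2>/dev/null || echo blue)
target=$(opposite "$active")
echo "active=$active, deploying $target"

# The real script would then, roughly:
#   docker compose -p "schemastack-$target" up -d   # metadata-rest first (Liquibase)
#   ...poll /q/health/ready and /actuator/health until ready...
#   docker compose -p "schemastack-$active" down    # Traefik keeps routing to $target
```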
## Routing

### Cloudflare DNS
| Record | Type | Target | Proxy | Notes |
|---|---|---|---|---|
| @ | — | — | — | No DNS record needed — CF Worker route handles schemastack.io/* |
| origin | A | Hetzner IP | DNS only (grey cloud) | Worker fetches from here; protected by shared secret header |
| data | A | Hetzner IP | Proxied (orange cloud) | workspace-api public API |
| docs | CNAME | CF Pages | Proxied | Public docs |
| dev | CNAME | CF Pages | Proxied | Dev docs (CF Access) |
### Traefik (on Hetzner)
| Host | Path | Target |
|---|---|---|
| origin.schemastack.io | /api/* | metadata-rest:8080 |
| origin.schemastack.io | /sse/* | consumer-worker:8080 |
| data.schemastack.io | /* | workspace-api:8083 |
### CF Worker (schemastack.io)
```typescript
/admin/* → CF Pages Service Binding (admin)
/api/*   → fetch(origin.schemastack.io/api/...)
/sse/*   → fetch(origin.schemastack.io/sse/...)  // streams, no buffering
/*       → CF Pages Service Binding (spread)
```

## Security
- Origin secret header: the CF Worker sends `X-Origin-Secret` on every proxied request to `origin.schemastack.io`. Traefik router rules use `HeadersRegexp` to reject requests without a matching header (returns 404). This prevents direct access even though the IP is visible in DNS (grey cloud, required for Worker fetch).
  - Set on Worker: `wrangler secret put ORIGIN_SECRET`
  - Set on server: `ORIGIN_SECRET` in `.env.production.local`
- `data.schemastack.io` is proxied via Cloudflare (orange cloud) — origin IP hidden, DDoS protected
- UFW on Hetzner: only ports 22, 80, 443 open
- Cloudflare Origin Certificate for Hetzner ↔ CF (Full Strict mode)
- Dev docs (Cloudflare Access + OTP): `dev.schemastack.io` and all `*.schemastack-dev.pages.dev` aliases are protected with email-based one-time PIN authentication. Setup:
  - Zero Trust → Access → Applications → Add an application → Self-hosted
  - Application name: `Dev Docs`
  - Application domain: `dev.schemastack.io`
  - Add second domain: `*.schemastack-dev.pages.dev` (prevents bypass via preview URLs)
  - Create policy: name `Team access`, action Allow, include rule Emails → add team email addresses
  - Authentication method: One-time PIN (the default — sends an OTP to the allowed email)
  - Session duration: 24 hours (default, adjustable under application settings)
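The `HeadersRegexp` check described under the origin-secret bullet would look roughly like this Traefik dynamic-config fragment. Router and service names plus the `SECRET_VALUE` placeholder are illustrative; the real setup injects the value from `ORIGIN_SECRET` in `.env.production.local`:

```yaml
http:
  routers:
    metadata-rest:
      rule: "Host(`origin.schemastack.io`) && PathPrefix(`/api`) && HeadersRegexp(`X-Origin-Secret`, `^SECRET_VALUE$`)"
      entryPoints: [websecure]
      service: metadata-rest
      tls: {}
```

Requests without the exact header never match the router, so Traefik answers 404 even though the origin IP is reachable.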