How to set up your HQ
- Hosted
- Self-hosted
The simplest way to use Open Politics is through our hosted platform:
- Register at open-politics.org
- Log in at open-politics.org/
- Begin using the platform with pre-loaded data, then continue to App Setup to configure your API keys and providers
Our preferred way to self-host on a single machine, or to develop locally, is Docker Compose. For multi-machine setups, we recommend Kubernetes. In both cases you can choose to outsource services to managed providers for a more lightweight (but less private) setup.
Required services (each can run locally or be outsourced to a managed provider):
| Service | Purpose | Example Provider | Notes |
|---|---|---|---|
| Postgres | Main database | Hetzner | |
| Redis | Cache and queue | Upstash | |
| MinIO / S3 | Object storage | Hetzner | |
| Nominatim | Geocoding | Nominatim | Uses a lot of disk space (170 GB+) |
| LLM API | Language model | OpenAI | High GPU and/or CPU usage |
| Tavily / SerpAPI | Web search | Tavily | |
- Docker Compose
- Kubernetes
Run the complete platform locally:
Clone repository
git clone https://github.com/open-politics/open-politics-hq.git
cd open-politics-hq
Configure environment
cp .env.example .env
# Edit .env with your configuration
# All values set to 'changethis' need to be changed before the app will start,
# for example with 'openssl rand -base64 14' in your terminal.
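Once .env is configured, start the stack. This is a minimal sketch assuming the repository ships a docker-compose.yml at the project root; check the repository if the compose file lives elsewhere.
# Start all services in the background
docker compose up -d
# Follow the logs while the stack initializes
docker compose logs -f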
Initial setup may take time depending on your system configuration.
If you are here, we assume you know what you are doing.
Base Helm Chart
We have a base Helm chart available here:
git clone https://github.com/open-politics/open-politics-hq
cd open-politics-hq/.deployments/end-to-end-hetzner-k3s-terraform-helm/open-politics-hq-deployment/hq-cluster-chart
cp values.example.yaml values.yaml
# Edit values.yaml with your configuration
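With values.yaml filled in, installing the chart typically looks like the sketch below; the release name and namespace are illustrative, and a kubeconfig pointing at your cluster is assumed.
# Install (or upgrade) the chart with your customized values; release name and namespace are examples
helm upgrade --install hq . -f values.yaml --namespace open-politics --create-namespace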
Hetzner Kubernetes Stack
Our easiest way to deploy HQ via Kubernetes is to use Hetzner. We provide an end-to-end script that provisions a master node and worker nodes via Terraform and installs K3s. We suggest using K9s alongside our scripts to manage and scale the cluster.
HQ on Hetzner (K3s) — End-to-End Deployment Guide
This repository delivers a batteries-included, reproducible way to provision a Hetzner K3s cluster and deploy your application stack with one command.
What This Gives You
- 3 Hetzner Cloud servers by default: 1 control plane, 2 workers (configurable)
- K3s with sensible flags for Hetzner CCM integration
- Traefik Ingress with automatic TLS via Let’s Encrypt (ACME)
- Persistent storage using Hetzner CSI
- Application stack via Helm: frontend, backend API, Celery workers, Redis, optional Postgres/MinIO templates
Quick Start
Configure Hetzner token
cd .deployments/end-to-end-hetzner-k3s-terraform-helm/open-politics-hq-deployment/
cp .tfvars.example .tfvars
cp hq-cluster-chart/values.example.yaml hq-cluster-chart/values.yaml
# Edit .tfvars and values.yaml with your configuration (see below)
Configuration Checklist
Before running ./deploy.sh, ensure you've completed:
- Hetzner API Token: Set hcloud_token in .tfvars
- Domain: Set domain in .tfvars (must match env.config.DOMAIN in values.yaml)
- Application Values: Copy and customize hq-cluster-chart/values.yaml
- Secrets: Replace all xxx placeholders with real values
- SSH Keys: Run ./scripts/manage-ssh-keys.sh generate
- DNS Ready: Have your domain ready for A record creation
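With the checklist complete, kick off the full deployment from the deployment directory you changed into during the Quick Start:
# Provisions the Hetzner infrastructure, then deploys the application stack
./deploy.sh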
Prerequisites
- Hetzner Cloud account and API token (Read/Write)
- Local tools: terraform, kubectl, helm, jq
- An SSH key uploaded to Hetzner; the scripts can rotate/manage keys automatically
Initial Configuration
Infrastructure Configuration (.tfvars)
cp .tfvars.example .tfvars
- hcloud_token: Hetzner API token (get from console)
- domain: your public domain for TLS/ingress
- worker_count: number of workers (default: 2)
- master_server_type / worker_server_type: server types (default: cx22)
- location: Hetzner location (default: fsn1)
- cluster_slug: resource naming prefix (default: hq)
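Put together, a filled-in .tfvars might look like the following sketch; the variable names come from the list above, while the concrete values are placeholders to replace (editing the copied .tfvars directly is equivalent):
# Illustrative .tfvars, written here via a heredoc; replace every value with your own
cat > .tfvars <<'EOF'
hcloud_token       = "your-hetzner-api-token"
domain             = "hq.example.org"
worker_count       = 2
master_server_type = "cx22"
worker_server_type = "cx22"
location           = "fsn1"
cluster_slug       = "hq"
EOF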
Application Configuration (values.yaml)
cp hq-cluster-chart/values.example.yaml hq-cluster-chart/values.yaml
- email: admin email for Let’s Encrypt
- env.config.DOMAIN: your domain (must match .tfvars)
- env.config.BACKEND_CORS_ORIGINS: update with your domain
- Docker image repositories for backend, frontend, and celery-worker
- Secrets (replace all xxx placeholders): SECRET_KEY, FIRST_SUPERUSER_PASSWORD, POSTGRES_PASSWORD, MINIO_ROOT_PASSWORD, MINIO_SECRET_KEY, SMTP_USER, SMTP_PASSWORD, EMAILS_FROM_EMAIL
- API keys: OPENAI_API_KEY, TAVILY_API_KEY, GOOGLE_API_KEY, MAPBOX_ACCESS_TOKEN
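One way to fill the secret placeholders is to generate random values, mirroring what the Docker Compose section suggests; which generated value goes into which key is up to you:
# Generate a strong random value for each xxx placeholder in values.yaml
openssl rand -base64 32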
SSH Key Management
./scripts/manage-ssh-keys.sh generate
This generates key pairs, uploads them to Hetzner, and updates the cloud-init templates. Manual alternative: upload your SSH key to the Hetzner Console and set ssh_key_name in .tfvars.
How It Works
The ./deploy.sh orchestrator performs two phases:
- Provision infrastructure: Terraform creates network, firewall, servers; cloud-init installs K3s
- Deploy stack: Configures Traefik, installs Hetzner CCM/CSI, deploys the Helm chart
| Command | Description |
|---|---|
| ./deploy.sh | Full deploy (infra → app) |
| ./scripts/connect.sh | Fetch kubeconfig and set kubectl context |
| ./scripts/status.sh | Summary of nodes, services, ingress, PVs |
| ./scripts/logs.sh | Tail app logs |
| ./scripts/monitor.sh | Open k9s |
| ./scripts/destroy.sh | Tear everything down |
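For example, after a deploy completes you would typically pull credentials and confirm the nodes joined; the kubectl call assumes connect.sh has set your current context:
# Fetch the kubeconfig and point kubectl at the new cluster
./scripts/connect.sh
# Check that the control plane and workers are Ready
kubectl get nodes -o wide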
DNS and TLS
Get Load Balancer IP
kubectl get svc traefik -n kube-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
Create DNS A records
Point your domain (and www) to the Load Balancer IP. Traefik will obtain Let’s Encrypt certificates automatically (typically 2-5 minutes after DNS propagation).
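To confirm the records have propagated before expecting a certificate, a quick check like this works; the hostnames are placeholders for your own domain:
# Should print the Load Balancer IP once DNS has propagated
dig +short hq.example.org
dig +short www.hq.example.org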
Troubleshooting
| Issue | Solution |
|---|---|
| Workers don’t schedule pods | CCM removes taints automatically; wait or remove manually |
| No Load Balancer IP | Confirm Traefik is LoadBalancer and CCM is Ready |
| Domain mismatch | Ensure .tfvars domain matches values.yaml |
| CORS errors | Verify BACKEND_CORS_ORIGINS includes your domain |
| ACME stuck | Verify DNS A records point to LB IP; run ./scripts/check-ssl.sh |
| SSH failures | Run ./scripts/manage-ssh-keys.sh generate |
Cost
Default: 3×cx22 + 1 Load Balancer ≈ €40–45/month. Reduce costs by lowering worker_count or using smaller server types.