From First App to Production Platform: Databases, HTTPS, and Autoscaling on ITSH
Your first container is running on Kubernetes. A Deployment, a Service, an HTTPRoute — done. But a production-ready application needs more: a database, HTTPS with automatic certificates, persistent storage, and the ability to scale under load.
On a self-managed cluster, that means days of configuration. On an ITSH namespace, it’s a few YAML files.
This article covers what the platform offers beyond basic container deployment — and why you don’t need a cluster admin for any of it.
Automatic HTTPS with Let’s Encrypt
On a self-managed cluster, the path to HTTPS looks like this: install cert-manager, configure a ClusterIssuer, adjust the ingress controller, debug annotations. On ITSH, cert-manager is already installed and preconfigured. You need one annotation:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  parentRefs:
    - name: default
      namespace: nginx-gateway
      sectionName: https-myapp-example-com
  hostnames:
    - "myapp.example.com"
  rules:
    - backendRefs:
        - name: myapp
          port: 80
The TLS certificate is provisioned automatically. No manual intervention, no renewal monitoring.
HTTP → HTTPS Redirect
To automatically redirect HTTP requests to HTTPS, add a second HTTPRoute:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: myapp-redirect
spec:
  parentRefs:
    - name: default
      namespace: nginx-gateway
      sectionName: http
  hostnames:
    - "myapp.example.com"
  rules:
    - filters:
        - type: RequestRedirect
          requestRedirect:
            scheme: https
            statusCode: 301
      backendRefs: []
DNS Setup
Point your domain to the cluster ingress:
- A record: 91.98.6.3
- AAAA record: 2a01:4f8:1c1f:7bfa::1
The gateway automatically sets X-Forwarded-For and X-Real-IP headers with the original client IP. Your application just needs to trust the private network CIDR 10.1.0.0/16.
Two YAML files, one DNS entry — and your app is reachable via HTTPS with automatic certificate renewal.
Managed Databases
PostgreSQL with CloudNativePG
CloudNativePG runs on the cluster and manages PostgreSQL instances as a Kubernetes resource. Creating a database:
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: myapp-db
spec:
  instances: 1
  storage:
    size: 10Gi
    storageClass: hcloud-volumes
  postgresql:
    parameters:
      max_connections: "100"
The operator automatically creates a secret called myapp-db-app containing the connection string. Reference it directly in your Deployment:
env:
  - name: DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: myapp-db-app
        key: uri
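The `uri` key holds a standard `postgresql://` connection URI, which most drivers accept as-is. If your application needs the individual parts, they can be split with the standard library (the host and database names below are hypothetical placeholders, not values the operator guarantees):

```python
from urllib.parse import urlsplit

def parse_pg_uri(uri: str) -> dict:
    """Split a postgresql:// connection URI into its components."""
    u = urlsplit(uri)
    return {
        "user": u.username,
        "password": u.password,
        "host": u.hostname,
        "port": u.port or 5432,
        "dbname": u.path.lstrip("/"),
    }

# Hypothetical value shaped like the generated secret's "uri" key:
cfg = parse_pg_uri("postgresql://app:s3cret@myapp-db-rw:5432/app")
```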
A dozen lines of YAML for a production-ready PostgreSQL instance with persistent storage. On your own cluster, you’d need to install the operator, configure storage classes, and define backup policies.
Redis-Compatible Cache with Dragonfly
For caching or session storage:
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
  name: myapp-cache
spec:
  replicas: 1
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 512Mi
Reachable at redis://myapp-cache:6379. Dragonfly is Redis-compatible — existing Redis clients work without modification.
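"Redis-compatible" means Dragonfly speaks the same RESP wire protocol that every Redis client library sends. To illustrate what travels over the connection to myapp-cache:6379, here is a minimal sketch of the RESP encoding (an illustration of the protocol, not a real client):

```python
def encode_resp(*parts: str) -> bytes:
    """Encode a command as a RESP array of bulk strings —
    the wire format Redis clients send to the server."""
    out = [f"*{len(parts)}\r\n".encode()]
    for p in parts:
        data = p.encode()
        out.append(f"${len(data)}\r\n".encode() + data + b"\r\n")
    return b"".join(out)

# What SET session:42 alice looks like on the wire
wire = encode_resp("SET", "session:42", "alice")
```

Because the protocol is identical, swapping a Redis URL for the Dragonfly service name is the only change an existing application needs.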
MariaDB is also available as a database option if your application requires MySQL compatibility.
Persistent Storage and Object Storage
Block Storage for Databases and Single-Pod Apps
For workloads that need fast, dedicated storage:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: hcloud-volumes
  resources:
    requests:
      storage: 20Gi
hcloud-volumes provides SSD-backed block storage from Hetzner Cloud. Ideal for databases and single-pod applications. For multi-pod scenarios with a shared filesystem, NFS storage (ReadWriteMany) is also available.
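To use the claim, mount it into your Deployment's pod spec via a volume and a volumeMount (the container name and mount path below are illustrative):

```yaml
# Excerpt from a Deployment pod template — mounts the claim above
spec:
  containers:
    - name: myapp
      volumeMounts:
        - name: data
          mountPath: /var/lib/myapp
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: myapp-data
```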
S3 Object Storage
For S3-compatible object storage, create a secret with your credentials and reference it in your deployment:
env:
  - name: S3_ENDPOINT
    valueFrom:
      secretKeyRef:
        name: s3-credentials
        key: endpoint
  - name: S3_BUCKET
    valueFrom:
      secretKeyRef:
        name: s3-credentials
        key: bucket
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: s3-credentials
        key: access_key
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: s3-credentials
        key: secret_key
Works with any S3-compatible SDK — AWS SDK, MinIO client, Boto3. Ideal for uploads, media files, and backups.
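The s3-credentials secret referenced above could look like this — the key names must match the secretKeyRef entries, while the endpoint and values are placeholders you replace with your own:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3-credentials
type: Opaque
stringData:
  endpoint: https://s3.example.com   # placeholder — use your provider's endpoint
  bucket: myapp-uploads
  access_key: <your-access-key>
  secret_key: <your-secret-key>
```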
Autoscaling
Horizontal Pod Autoscaler
Want your application to scale up automatically during traffic spikes and scale back down after? An HPA is all it takes:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
When average CPU utilization rises above 70%, Kubernetes adds replicas — up to a maximum of 10. When load drops, it scales back down, never below 2.
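The scaling decision follows the formula from the Kubernetes HPA documentation: desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization), clamped to the min/max bounds. A quick sketch using the values from the manifest above:

```python
import math

def desired_replicas(current: int, utilization: float,
                     target: float = 70.0,
                     min_r: int = 2, max_r: int = 10) -> int:
    """Replica count the HPA above would aim for, clamped to bounds."""
    want = math.ceil(current * utilization / target)
    return max(min_r, min(max_r, want))
```

Four replicas running at 105% average CPU would scale to six; two replicas idling at 30% stay at the minimum of two.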
For advanced scenarios — scaling based on queue length, HTTP requests, or Prometheus metrics — KEDA (Kubernetes Event-Driven Autoscaling) is available.
Resource Recommendations with Goldilocks
Not sure how much CPU and memory your app actually needs? Goldilocks analyzes real behavior and provides recommendations:
# Enable Goldilocks for your namespace
kubectl label ns myapp goldilocks.fairwinds.com/enabled=true
# After a few minutes: check recommendations
kubectl describe vpa -n myapp
For automatic adjustments without restarts, you can create a VPA with updateMode: InPlaceOrRecreate — pods are resized while running.
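Such a VPA might look like the following sketch (the name mirrors the Deployment; whether InPlaceOrRecreate is accepted depends on the VPA version installed on the cluster):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: myapp
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  updatePolicy:
    updateMode: InPlaceOrRecreate
```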
Putting It All Together: A Production-Ready Platform
Concrete scenario: you’re running a SaaS application with an API, background worker, and web frontend.
What runs in your namespace:
- API Deployment with HPA (2–10 replicas depending on load)
- Worker Deployment for async jobs
- PostgreSQL via CloudNativePG for the primary database
- Dragonfly as session and cache layer
- S3 bucket for user uploads and media
- HTTPRoute with automatic TLS certificate
What you don’t manage:
- Cluster updates and node patches
- cert-manager, gateway controller, monitoring stack
- Backup infrastructure and storage provisioning
- Network policies and security patches
Everything is deployed via GitOps through ArgoCD. A git push rolls out changes automatically. A git revert undoes them.
Conclusion
A Kubernetes namespace on ITSH is more than a place to run containers. Managed databases, automatic HTTPS, persistent storage, and autoscaling make it a full production platform — without you having to manage cluster infrastructure.
The work stays where it belongs: on your application. The platform handles the rest.