Container Deployment Without a DevOps Team
Kubernetes has a reputation problem. Mention the name and people think of hundreds of YAML files, multi-day cluster setups, and job postings requiring three years of Kubernetes experience.
That perception isn’t wrong — if you’re running a cluster from scratch. But it’s misleading when all you want is to get containers reliably into production.
This article shows how a managed Kubernetes namespace reduces ops overhead to near zero — and why that matters for teams without dedicated DevOps staff.
What a DevOps Team Actually Does
To understand what you can save, you need to know what a DevOps team handles in a Kubernetes context:
Cluster administration (completely eliminated with a managed namespace):
- Upgrading Kubernetes versions
- Maintaining and backing up the etcd cluster
- Managing node pools (scaling, OS updates, kernel patches)
- Configuring network policies and CNI plugins
- Operating ingress controllers / Gateway API
- Certificate management (cert-manager)
- Monitoring stack (Prometheus, Grafana, Alertmanager)
- Log aggregation (Loki, ELK, Fluentd)
- Implementing backup strategies
Application deployment (stays with you — but simplified):
- Writing Dockerfiles
- Maintaining Kubernetes manifests
- Configuring CI/CD pipelines
- Managing secrets
With a managed namespace, the entire first list disappears. ITSH operates the cluster, the nodes, monitoring, backups, and certificates. What remains is what already belongs to the development process: packaging and deploying your own application.
From Code to Deployment in 30 Minutes
Let’s say you have a Go API that you want to deploy as a container. Here’s the entire path from code to running application on an ITSH namespace:
1. Create a Dockerfile (5 minutes)
FROM golang:1.24-alpine AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -buildmode=pie -ldflags="-s -w" -trimpath -o /app ./cmd/server
FROM alpine:3.22
COPY --from=build /app /app
EXPOSE 8080
ENTRYPOINT ["/app"]
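Since COPY . . pulls the entire repository into the build context, a .dockerignore keeps builds fast and the context small. A minimal sketch; the entries are illustrative and depend on your repo layout:

```
# .dockerignore — keep the build context small (entries are examples)
.git
k8s/
*.md
.env
```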
2. Build and Push the Container Image (5 minutes)
# Build and push to a container registry (e.g. GitHub Container Registry)
docker build -t ghcr.io/my-org/api:1.0.0 .
docker push ghcr.io/my-org/api:1.0.0
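Pushing to ghcr.io requires a login first. A rough sketch, assuming a GitHub personal access token with the write:packages scope; USER and TOKEN are placeholders:

```
# <USER> / <TOKEN> are placeholders: your GitHub user and a PAT
# with the write:packages scope.
echo "<TOKEN>" | docker login ghcr.io -u <USER> --password-stdin
```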
3. Write a Kubernetes Manifest (10 minutes)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: ghcr.io/my-org/api:1.0.0
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: database-url
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
          readinessProbe:
            httpGet:
              path: /readyz
              port: 8080
            initialDelaySeconds: 3
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api
spec:
  parentRefs:
    - name: main-gateway
  hostnames:
    - "api.my-project.com"
  rules:
    - backendRefs:
        - name: api
          port: 80
4. Deploy (2 minutes)
kubectl apply -f k8s/ -n my-namespace
Or, if ArgoCD is set up: just push your code to Git. ArgoCD detects the change and rolls it out automatically.
5. Verify (2 minutes)
# Pods running?
kubectl get pods -n my-namespace
# Check logs
kubectl logs -n my-namespace deployment/api
# Deployment status
kubectl rollout status deployment/api -n my-namespace
That’s it. No Terraform. No Ansible. No node configuration. No ingress class setup. No cert-manager installation.
GitOps: Deployments via Git Push
Running kubectl apply manually works, but it’s not ideal for teams. ArgoCD — already integrated with ITSH — makes the deployment process declarative and auditable.
How it works:
- Kubernetes manifests live in a Git repository
- ArgoCD watches the repository
- On every change to the configured branch, ArgoCD syncs the cluster state
Benefits for teams without DevOps:
- Audit trail: Every deployment is a Git commit with author, timestamp, and diff.
- Rollback via Git: git revert restores the previous state.
- No cluster access needed: Developers push manifests, ArgoCD handles the rest.
- Drift detection: ArgoCD notices when cluster state diverges from Git.
# Update the image tag in the manifest
# (or automate via CI/CD pipeline)
sed -i 's|api:1.0.0|api:1.1.0|' k8s/deployment.yaml
git add k8s/deployment.yaml
git commit -m "chore: update api to 1.1.0"
git push
# ArgoCD rolls out automatically
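The watch loop is configured once per application with an ArgoCD Application resource. A minimal sketch; the repository URL, path, and target namespace are placeholders, and the metadata namespace depends on where your ArgoCD instance runs:

```yaml
# Registered once; afterwards every push to the watched path deploys.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/platform-config
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: my-namespace
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift in the cluster
```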
Common Concerns — and Why They Don’t Apply to Managed Namespaces
“Kubernetes is too complex for our team”
A full cluster: yes. A namespace: no. You need to understand three concepts — Deployment, Service, HTTPRoute. That’s an afternoon of learning.
“We have no one who can debug Kubernetes issues”
The most common problems with application deployments are:
- Image can’t be pulled -> Check registry credentials
- Pod won’t start -> kubectl logs and kubectl describe pod
- Application isn’t reachable -> Check service selector and port
These are application problems, not cluster problems. Cluster problems (node failures, network partitions, etcd corruption) are handled by the ITSH team.
“What about secrets and sensitive data?”
Kubernetes Secrets are isolated within the namespace. You create them once with kubectl:
kubectl create secret generic app-secrets \
--from-literal=database-url='postgres://user:pass@db:5432/mydb' \
-n my-namespace
For an additional layer, secrets can be managed via a Git repository using Sealed Secrets or SOPS.
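With Sealed Secrets, for example, a Secret is encrypted locally so that only the in-cluster controller can decrypt it, which makes the encrypted file safe to commit. A rough sketch, assuming the controller is installed and the kubeseal CLI is available; the file names are illustrative:

```
# Encrypt a plain Secret manifest; the output can live in Git.
kubeseal --format yaml < app-secrets.yaml > app-sealedsecret.yaml
git add app-sealedsecret.yaml
git commit -m "chore: add sealed secret"
```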
“Do we need a container registry?”
Any public or private registry works — GitHub Container Registry (ghcr.io), Docker Hub, or a self-hosted solution. For private images, just add an image pull secret to your namespace.
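Creating that pull secret is a single command; a sketch with placeholder credentials:

```
kubectl create secret docker-registry ghcr-pull \
  --docker-server=ghcr.io \
  --docker-username=<USER> \
  --docker-password=<TOKEN> \
  -n my-namespace
```

The Deployment then references it with an imagePullSecrets entry (name: ghcr-pull) under the pod spec.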
What Developers Can Do Instead of Ops Tasks
The time a team doesn’t spend on cluster maintenance goes into product development. A conservative estimate:
| Task | Hours/Month (Self-Hosted) | Hours/Month (Managed) |
|---|---|---|
| Kubernetes upgrades | 4–8 h | 0 h |
| Node maintenance | 2–4 h | 0 h |
| Monitoring setup | 4–8 h (initial), 2 h (ongoing) | 0 h |
| Certificate management | 1–2 h | 0 h |
| Backup verification | 2–4 h | 0 h |
| Incident response (cluster) | 2–8 h | 0 h |
| Total | 15–34 h | 0 h |
At an hourly rate of €80, that’s €1,200–2,720 per month in saved labor — on top of lower infrastructure costs.
A Realistic Setup for a Three-Person Team
Scenario: An agency runs a client platform with an API, frontend, and background jobs.
Infrastructure on ITSH:
- 1 namespace with PAYG billing
- 3 deployments (API, frontend, worker)
- 1 PostgreSQL database (as StatefulSet or external service)
- ArgoCD for automatic deployments
- Daily backups included
Workflow:
- Developer writes code and pushes to GitHub
- GitHub Actions builds the container image
- Image is pushed to the registry
- Manifest file is updated with the new tag (manually or automatically)
- ArgoCD deploys the change
- Monitoring alerts if something goes wrong
Not a single step in this workflow requires cluster administration knowledge.
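Steps 1 to 3 of this workflow can be sketched as a single GitHub Actions job. The repository layout, image name, and tag scheme are assumptions to adapt:

```yaml
# .github/workflows/deploy.yml — build & push on every push to main.
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write   # allows pushing to ghcr.io with GITHUB_TOKEN
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        run: |
          docker build -t ghcr.io/my-org/api:${{ github.sha }} .
          docker push ghcr.io/my-org/api:${{ github.sha }}
```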
Monthly costs:
- ITSH namespace: ~€30–50/month (depending on resource usage)
- GitHub Actions: Free tier is enough for most SMBs
- Total: under €50/month for a production-ready Kubernetes environment
Conclusion
Container deployments on Kubernetes don’t require a DevOps team — they require a platform that handles cluster operations. The actual work lies in application deployment: writing Dockerfiles, maintaining manifests, setting up CI/CD. These are tasks any developer can master.
A managed namespace on ITSH reduces the entry point to three steps:
- Package your application into a container
- Write a Kubernetes manifest
- Deploy via kubectl apply or GitOps
Everything else — cluster updates, monitoring, backups, TLS, scaling — runs in the background.