Add a Private Docker Registry to Your Kubernetes Cluster
Controlling container image distribution is a foundational aspect of securing production workloads. Hosting your own Docker registry—rather than relying exclusively on public registries—reduces exposure to both supply chain threats and external outages. It also allows tighter control over image provenance and access.
Why Not Just Use Docker Hub?
- Pull latency from public endpoints.
- Docker Hub pull rate limits (at the time of writing, 100 pulls per 6 hours per IP for anonymous users).
- Potential for unauthorized access to proprietary images.
- No guarantee of uptime or compliance.
A self-hosted Docker registry (often registry:2 as of this writing) eliminates these dependencies. Here’s what works, what breaks, and which configuration patterns are battle-tested for real clusters.
1. Deploy a Local or Private Docker Registry
For local development and early CI/CD pipelines, running Docker’s registry container directly works:
docker run -d -p 5000:5000 --restart=always --name registry registry:2.8.1
- Port 5000 is conventional for development; production registries should generally use a load balancer and HTTPS.
- Filesystem-backed storage is the default, but consider S3 or GCS backends for HA scenarios; a sample config follows the note below.
Note: The default deployment is HTTP only, which is acceptable for demo/test clusters but never for production.
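For the S3 case, a minimal config.yml sketch might look like the following; the bucket name and region are placeholders, credentials are assumed to come from an IAM role or environment variables, and the file would be mounted over /etc/docker/registry/config.yml in the registry container.

version: 0.1
log:
  level: info
storage:
  s3:
    region: us-east-1              # placeholder region
    bucket: my-registry-bucket     # placeholder bucket name
  delete:
    enabled: true                  # needed if garbage collection should ever reclaim space
http:
  addr: :5000

Run the container with -v $(pwd)/config.yml:/etc/docker/registry/config.yml to apply it.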
Registry behind a Reverse Proxy (nginx example)
For production, front the registry with a reverse proxy that terminates TLS:
      Client
        |
      HTTPS
        v
+---------+      HTTP      +-----------+
|  NGINX  |  <---------->  |  REGISTRY |
+---------+                +-----------+
Configuring TLS at this layer with Let’s Encrypt or an internal CA is standard practice.
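A minimal nginx server block for this layout might look like the sketch below; the hostname and certificate paths are placeholders, and the registry is assumed to listen on localhost:5000 as in step 1.

server {
    listen 443 ssl;
    server_name registry.example.internal;              # placeholder hostname

    ssl_certificate     /etc/nginx/certs/registry.crt;  # from Let’s Encrypt or an internal CA
    ssl_certificate_key /etc/nginx/certs/registry.key;

    # Image layers can be large; lift nginx's default request body limit
    client_max_body_size 0;

    location /v2/ {
        proxy_pass http://localhost:5000;
        proxy_set_header Host              $http_host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 900;
    }
}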
2. Build and Push an Image
Example: Push a standard image (nginx:1.25-alpine) to the local registry.
docker pull nginx:1.25-alpine
docker tag nginx:1.25-alpine localhost:5000/nginx:1.25-alpine
docker push localhost:5000/nginx:1.25-alpine
Verify:
curl http://localhost:5000/v2/_catalog
Expected output:
{"repositories":["nginx"]}
Side effect: deleted images and their layers are not automatically garbage-collected from the registry; clean up regularly in disk-constrained environments.
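A garbage-collection pass can be run inside the container started in step 1; the container name registry and the config path are the image defaults, and --dry-run previews what would be removed:

docker exec registry /bin/registry garbage-collect --dry-run /etc/docker/registry/config.yml
docker exec registry /bin/registry garbage-collect /etc/docker/registry/config.yml

Note that garbage collection only reclaims blobs whose manifests have been deleted, and the registry should be read-only (or stopped for writes) while it runs.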
3. Configure Kubernetes to Trust and Use the Private Registry
Several mechanisms exist, depending on authentication and networking setup.
a) Insecure Local Registry (Minikube, kind)
Kubernetes clusters won’t pull from plain-HTTP registries by default; the container runtime on each node must be explicitly configured to allow them.
For kind, configure a registry mirror in the cluster config. Example kind-config.yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
    endpoint = ["http://localhost:5000"]
Then:
kind create cluster --config kind-config.yaml
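To confirm the patch actually reached the node’s containerd config (assuming the default cluster name kind, so the control-plane container is named kind-control-plane):
docker exec kind-control-plane grep -A 1 'registry.mirrors' /etc/containerd/config.toml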
For Minikube:
minikube addons enable registry
# or configure --insecure-registry flag for minikube start
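A sketch of the second option; the host:port must match how the registry is reachable from inside the Minikube node, and the flag is only honored when the cluster is first created:
minikube start --insecure-registry="localhost:5000"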
Known issue: node restarts can discard custom registry configuration; keep a script handy to re-apply it.
b) Authenticated Registry – ImagePullSecrets
For anything beyond local development, configure access credentials.
Create a secret:
kubectl create secret docker-registry regcred \
--docker-server=PRIVATE_REGISTRY_URL \
--docker-username=ci-user \
--docker-password=REDACTED \
--docker-email=devops@company.com
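To sanity-check what was stored (the secret holds a base64-encoded .dockerconfigjson):
kubectl get secret regcred --output=jsonpath='{.data.\.dockerconfigjson}' | base64 --decode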
Reference this secret in your deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-private
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-private
  template:
    metadata:
      labels:
        app: nginx-private
    spec:
      containers:
      - name: nginx
        image: PRIVATE_REGISTRY_URL/nginx:1.25-alpine
      imagePullSecrets:
      - name: regcred
Non-obvious tip: instead of referencing imagePullSecrets in each deployment, patch the default ServiceAccount:
kubectl patch serviceaccount default \
-n staging \
-p '{"imagePullSecrets": [{"name": "regcred"}]}'
Now every pod that uses the default ServiceAccount in that namespace pulls with the secret automatically, with no per-Deployment configuration.
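To confirm the patch took effect in the staging namespace used above:
kubectl get serviceaccount default -n staging -o yaml
The output should now include an imagePullSecrets entry referencing regcred.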
4. Deploy and Troubleshoot
kubectl apply -f nginx-private.yaml
kubectl get pods -l app=nginx-private -o wide
kubectl describe pod <pod-name>
Key error to look for in the pod events:
Failed to pull image "localhost:5000/nginx:1.25-alpine": rpc error: code = Unknown desc = failed to resolve image ...
- This usually traces back to missing credentials, a mismatched registry host, or network policies; a quick way to tell these apart follows below.
- On cloud-managed clusters (EKS, AKS, GKE), firewall rules or CNI plugins may block registry traffic.
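One quick check is to hit the registry API directly from a node (or a debug pod) with the same credentials; PRIVATE_REGISTRY_URL and ci-user are the placeholders used earlier:
curl -u ci-user https://PRIVATE_REGISTRY_URL/v2/_catalog
An HTTP 401 points at credentials; a timeout or connection refused points at DNS, firewalls, or network policy.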
5. Production Checklist
| Requirement | Solution | Note |
|---|---|---|
| TLS/SSL | Proxy registry behind NGINX/Traefik | Automate via Let’s Encrypt or Vault |
| Credentials | Use least-privileged accounts | Integrate with GitOps secret workflows |
| Image Scanning | Trivy or Clair at CI stage | Avoid unscanned images in the pipeline |
| Storage Backend | S3 or GCS for persistence | Manage bucket lifecycle and retention |
Always test registry availability during simulated outages.
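For the scanning row, a typical CI step might look like the following sketch; the image reference reuses the PRIVATE_REGISTRY_URL placeholder and the severity threshold is just an example policy:

# Fail the build when HIGH or CRITICAL findings are present
trivy image --exit-code 1 --severity HIGH,CRITICAL PRIVATE_REGISTRY_URL/nginx:1.25-alpine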
Side Notes
- Not all Kubernetes installers (kubeadm, k3s, microk8s) handle insecure registries consistently — check containerd and/or dockerd configs per node.
- Docker Registry does not support per-repo ACLs natively; front it with an authenticating proxy if multi-tenant auth is required.
- For advanced setups, consider Harbor or JFrog Artifactory for more robust access control and auditability.
Summary
Private registries harden your supply chain and improve performance, but introduce operational tasks such as credential rotation, regular cleanup, and TLS maintenance. Simple local setups suffice for dev; production workloads demand proper secrets management and scanning integration.
For a fully compliant pipeline, wire registry pushes into your CI system so that vulnerability scans and signature verification run before any deployment is triggered. For most organizations, the added complexity is justified by the gains in traceability and uptime.
This approach is field-tested on Kubernetes 1.25+, registry:2.8.x, and both containerd and Docker runtimes. Got a specialized registry or curious about image signing with Cosign? That’s for another day.