Docker How To Start A Container

#Docker #Containers #DevOps #DockerRun #ContainerStartup #DockerTips

Mastering Docker Container Startup: Beyond docker run

Spin up a container with docker run and you’ll get something running—but not necessarily something robust, secure, or debuggable. Operational teams routinely see wasted resources, lost logs, or brittle services due to default container starts. Better to structure container startup as you’d develop a production workload—intentionally, and with a deep understanding of Docker’s control flags.


The Minimalist Approach: Only for Experimentation

docker run -it ubuntu:22.04 /bin/bash

This gives you a throwaway, randomly named container in interactive terminal mode. Fine for debugging or exploring a base image. For anything beyond that, it introduces issues: stopped containers piling up, ambiguous container identification, and unconstrained resource usage.
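
For genuinely throwaway sessions, adding --rm (a small sketch below, same image) removes the container automatically on exit instead of leaving a stopped container behind:

docker run --rm -it ubuntu:22.04 /bin/bash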


Set Container Name with --name

Scripted automation, log retrieval, or inter-container comms require deterministic references. Use:

docker run --name web-01 -it nginx:1.25-alpine /bin/sh

Now, docker logs web-01 or docker exec -it web-01 sh is trivial. Relying on randomly generated names (e.g., nifty_heisenberg) will break cluster scripts and confuse anyone investigating incidents.
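
Names must also be unique per host, so redeploy scripts typically clear any old container before starting a replacement; a minimal sketch:

docker rm -f web-01 2>/dev/null || true
docker run -d --name web-01 nginx:1.25-alpine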


Detached Mode: -d for Background Services

For API servers, workers, or anything managed by orchestration:

docker run -d --name api-v1 -p 9000:80 my-api:2.5
  • -d detaches from the TTY.
  • -p 9000:80 maps host to container port, but only expose what is actually needed.

Leaving containers attached often leads to accidental service interruption during terminal disconnects. Systemd, CI/CD pipelines, and even plain SSH sessions expect background processes.
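
Once detached, state checks go through the CLI rather than an open terminal; for example:

docker ps --filter name=api-v1
docker logs -f api-v1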

Selective Port Exposure: Avoid Security Headaches

docker run -d -e POSTGRES_PASSWORD=s3cret -p 5432:5432 postgres:15
  • Only expose necessary ports; excess exposure invites lateral movement in compromised hosts.
  • Gotcha: Port mapping conflicts generate:
Error response from daemon: driver failed programming external connectivity on endpoint...

Be explicit and check host port availability up front, especially if running multiple environments on a single host.
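
One way to reduce exposure and sidestep collisions is to bind the published port to a specific host interface and a known-free host port; a sketch (the pg-local name and host port 5433 are illustrative):

docker run -d --name pg-local -e POSTGRES_PASSWORD=s3cret -p 127.0.0.1:5433:5432 postgres:15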


Environment Variables: -e for Configurability

docker run -d -e POSTGRES_USER=app -e POSTGRES_PASSWORD=s3cret postgres:15

Environment configuration is standard for 12-factor apps. Avoid putting secrets in plaintext on the command line: they linger in shell history and remain visible through docker inspect. Consider --env-file for bulk variables or Docker Secrets for genuinely sensitive values.
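
A minimal sketch of the --env-file route, assuming a db.env file that contains POSTGRES_USER=app and POSTGRES_PASSWORD=s3cret, one pair per line:

docker run -d --name db-env --env-file db.env postgres:15

Note that the values still surface in docker inspect; this mainly keeps them out of shell history and scripts.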


Persistent Data: Volumes and Bind Mounts

docker run -d \
  --name redis-cache \
  -v redis-data:/data \
  redis:7-alpine
  • Named volumes (managed by Docker) persist data across the container lifecycle.
  • Bind mounts (e.g. -v /srv/postgres-data:/var/lib/postgresql/data) mount host paths directly into the container; useful when data must live on a specific host path or disk.

Known issue: a data directory written by one image version may not be readable by a newer one; host-mounted Postgres data directories breaking across major-version upgrades are a common example.
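
Named volumes can also be created and examined independently of any container, which helps when auditing where data actually lives on the host:

docker volume create redis-data
docker volume inspect redis-data
docker volume ls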


Restart Behavior: --restart Policy

docker run -d --restart unless-stopped nginx:1.25

Policies:

  • no (default): never restart.
  • on-failure[:N]: restart on non-zero exit, optionally capped at N attempts.
  • always: always restart, including after the Docker daemon restarts.
  • unless-stopped: restart unless the container was manually stopped.

Restart policies must be set explicitly: none is applied by default, which is a common cause of avoidable downtime after host reboots.
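
A sketch combining a bounded retry policy with a check of how many restarts have actually occurred (the worker-01 name is illustrative; my-worker:4.7 is the hypothetical image used later in this article):

docker run -d --name worker-01 --restart on-failure:5 my-worker:4.7
docker inspect --format '{{.RestartCount}}' worker-01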


Resource Constraints: Limiting CPU and RAM

Default containers can exhaust host resources, destabilizing workloads. Clamp usage:

docker run -d --cpus=2 --memory=1024m my-worker:4.7
  • --cpus=2 limits the container to the equivalent of two CPUs of compute time (it does not pin specific cores).
  • --memory=1024m hard-caps RAM at 1 GiB; exceeding it triggers the kernel OOM killer.

Trade-off: Too tight a limit can crash applications under load (Killed message in logs).
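
docker stats --no-stream shows live usage against the limits while the container runs; after a crash, the recorded state tells you whether the OOM killer was involved (worker-01 reuses the illustrative name from the restart example):

docker stats --no-stream worker-01
docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' worker-01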


Docker Networks: Isolated, Predictable Connectivity

Interdependent services (e.g., API and DB) benefit from isolated bridge networks:

docker network create app-net
docker run -d --network app-net --name db -e POSTGRES_PASSWORD=s3cret postgres:15
docker run -d --network app-net --name backend -e DB_HOST=db my-backend:1.8

DNS resolution by container name is automatic within the same custom network.
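
Resolution can be verified directly; for example, looking up backend from inside the db container (the official postgres image is Debian-based, so getent should be available):

docker exec db getent hosts backend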


Entry Point Overrides: --entrypoint

Override the default executable for debugging or wrapped init:

docker run --entrypoint /bin/sh -it alpine:3.19

Useful when troubleshooting inconsistencies in entry scripts or injecting diagnostics.
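
The same override is handy against application images whose entry script misbehaves; a sketch using the hypothetical my-api:2.5 image from earlier, assuming it ships a shell:

docker run --rm -it --entrypoint /bin/sh my-api:2.5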


Real-World Aggregation

Consolidated container run for a realistic Node.js app:

docker run -d \
  --name front-v3 \
  --network app-net \
  -p 8080:80 \
  -e NODE_ENV=production \
  -v node-data:/var/app/data \
  --restart=always \
  --cpus=1.5 \
  --memory=768m \
  my-node-app:20.5.1
  • Ensures naming, version pinning, environment configuration, volume persistence, resource limits, networking, and restart resilience.
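
After a consolidated run like this, a quick inspection confirms the flags actually took effect (Memory is reported in bytes and CPU as NanoCpus):

docker inspect --format '{{.HostConfig.RestartPolicy.Name}} {{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' front-v3
docker port front-v3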

Operational Pitfalls and Lessons Learned

  • Orphan data: containers with no volume mapping lose data on removal. Always attach a named volume or bind mount for stateful workloads.
  • Resource starvation: missing CPU/memory limits lead to runaway processes and host OOM kills. Unrestricted containers are rarely acceptable in production.
  • Inconsistent environments: Failing to pass or externalize environment configuration causes portability issues.
  • Port collisions: Attempting to map the same port to different containers yields Error starting userland proxy: listen tcp ...: address already in use.

Non-Obvious Production Tactic

For zero-downtime deployments, start new containers with the intended config, verify health, then hot-swap reverse proxy or load balancer targets. Never combine code rollout and infra change in the same run—rollback becomes complicated.
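
A minimal sketch of that sequence, assuming a shared app-net network and a reverse proxy you can repoint by container name (the front-v4 name and the image tag are illustrative):

docker run -d --name front-v4 --network app-net -e NODE_ENV=production my-node-app:20.6.0
docker inspect --format '{{.State.Status}}' front-v4
docker logs --tail 50 front-v4
# repoint the proxy at front-v4, then retire the old container
docker stop front-v3 && docker rm front-v3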


Summary

Docker container startup isn’t about memorizing defaults. It’s a deliberate act: configuring image, identity, resource isolation, restart strategy, data persistence, connectivity, and runtime configuration based on your workload’s reality. The command line grows with your requirements. For ephemeral tests, keep it short. For stateful production, every flag above matters.

Note: Don’t rely solely on Docker CLI for orchestration at scale. For more advanced scheduling and resilience, migrate your knowledge to Kubernetes or Nomad—but start by mastering individual container control.

