Streamlining Deployment: Deploying Docker Containers Directly on AWS EC2
Heavyweight orchestration isn’t always the answer. For batch workloads, single-service deployments, or tight cost control, orchestrators like ECS, EKS, or Kubernetes might be excessive—adding operational toil and increasing cognitive load. Sometimes, running a plain Docker container on a hardened EC2 host is the most direct route to uptime.
Direct Deployment: Why Bother?
Is your deployment expected to handle limited concurrency, run predictable workloads, or serve as a staging environment? If yes, the complexity of orchestration platforms results in unnecessary overhead: steeper learning curves, higher latency for iteration, and less transparent billing. Direct EC2 deployment strips those out:
- Lowest Moving Parts: Fewer abstractions, minimal failure domains.
- Resource Control: Kernel tuning, file descriptors, and networking—fully available.
- Transparent Costing: No hidden fees, just the EC2 compute price and egress.
Required Setup
- AWS account with EC2 permissions.
- SSH key pair ready.
- Docker CLI installed on your development machine (v20.10.x or later).
- Familiarity with *nix shell.
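A quick way to confirm the workstation side before provisioning anything (the AWS CLI is only needed later for the ECR path; the key file name is illustrative):
docker --version              # expect 20.10.x or later
aws --version                 # optional here; needed for the ECR steps in section 4
ls -l ~/.ssh/my-ec2-key.pem   # the key pair you plan to attach to the instance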
1. Provisioning the EC2 Host
- AMI Choice: For most workloads, an Ubuntu Server LTS image (e.g., 22.04 or newer) balances support and package freshness.
- Instance Type: t3.micro for prototyping (CPU credits add variance under sustained load). For consistent, planned throughput, consider t3.small or higher.
- Security Group Configuration:

| Port    | Purpose | Source              |
|---------|---------|---------------------|
| 22      | SSH     | Workstation IP only |
| 80/8080 | App     | As required         |

- Storage: Stick with GP3 or GP2 EBS, minimum 16 GiB. Persistent data? Attach and mount a dedicated volume.
- SSH Access: Assign the key pair at instance launch.
Note: For production, enforce restrictions with NACLs, use bastion hosts, and never open SSH to 0.0.0.0/0.
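If you prefer the CLI over the console, a minimal provisioning sketch looks roughly like the following. It assumes the default VPC; the AMI ID, key pair name, and workstation CIDR are placeholders you must substitute for your region and network:
# Security group: SSH from your workstation only, HTTP open
aws ec2 create-security-group --group-name docker-host-sg \
  --description "Direct Docker host"
aws ec2 authorize-security-group-ingress --group-name docker-host-sg \
  --protocol tcp --port 22 --cidr 203.0.113.10/32   # placeholder workstation IP
aws ec2 authorize-security-group-ingress --group-name docker-host-sg \
  --protocol tcp --port 80 --cidr 0.0.0.0/0

# Launch the host (look up the current Ubuntu 22.04 AMI ID for your region)
aws ec2 run-instances --image-id ami-xxxxxxxxxxxxxxxxx \
  --instance-type t3.micro --key-name my-ec2-key \
  --security-groups docker-host-sg \
  --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=16,VolumeType=gp3}'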
2. Host Prep: Installing Docker Engine
SSH into your instance (ubuntu@<instance-public-ip>):
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
sudo usermod -aG docker $USER
Log out and reconnect for the group membership to take effect.
Verify installation:
docker version
# If you see "Cannot connect to the Docker daemon", ensure your user is in group 'docker' and re-login.
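As a final sanity check that the daemon can pull and run images end to end:
docker run --rm hello-world   # prints "Hello from Docker!" if the engine works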
3. Image Preparation
For a minimal Node.js web API:
# Dockerfile
FROM node:16.20-alpine
WORKDIR /srv/app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
Build and test locally:
docker build -t my-node-app:2024-06 .
docker run --rm -p 3000:3000 my-node-app:2024-06
Does it log expected output? Any binding errors or crashes? Fix these now, not later.
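A quick smoke test from a second terminal while the container runs locally, plus a look at the image size (large images slow down every later pull to EC2):
curl -i http://localhost:3000/        # expect your API's response, not a connection error
docker images my-node-app:2024-06     # check the final image size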
4. Registry Push
Option 1: Docker Hub
docker tag my-node-app:2024-06 yourdockerhub/my-node-app:2024-06
docker push yourdockerhub/my-node-app:2024-06
Option 2: AWS ECR (preferred for private/prod images)
aws ecr create-repository --repository-name my-node-app
aws ecr get-login-password | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
docker tag my-node-app:2024-06 <ecr-repo-url>:2024-06
docker push <ecr-repo-url>:2024-06
AWS CLI v2 or later is required, with credentials configured (aws configure).
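The <ecr-repo-url> above has the form <account-id>.dkr.ecr.<region>.amazonaws.com/my-node-app; rather than assembling it by hand, you can resolve it with a query:
aws ecr describe-repositories --repository-names my-node-app \
  --query 'repositories[0].repositoryUri' --output text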
5. Deploy: Pull and Run on EC2
On your EC2 host:
docker pull yourdockerhub/my-node-app:2024-06
docker run -d --name mynodeapp -p 80:3000 --restart=always \
-e "NODE_ENV=production" \
yourdockerhub/my-node-app:2024-06
For ECR, substitute the repository path. Expose only the ports you need. If you use host networking (--network host), be aware of the security implications and possible port conflicts.
Quick health check
curl -I http://localhost/
# HTTP/1.1 200 OK
Check logs:
docker logs mynodeapp
If the process exits immediately:
Error: listen EADDRINUSE: address already in use 0.0.0.0:3000
This usually points to a bad port mapping or a stale process still holding the port.
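A few commands that narrow this down quickly (assuming the container name used above):
docker ps -a --filter name=mynodeapp   # exit code and status of the failed container
docker logs --tail 50 mynodeapp        # last lines before the crash
sudo ss -ltnp | grep -E ':80|:3000'    # what is already listening on the conflicting port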
6. Resilience & Automation
- Auto-start: Use --restart=always on docker run so the container comes back after host reboots.
- User Data: Bake the Docker install + pull + run into launch user data for disposable infra. Example:
#!/bin/bash
apt-get update
# ... (docker install code above)
docker run -d --restart=always -p 80:3000 --name mynodeapp yourdockerhub/my-node-app:2024-06
- Log forwarding: Consider the AWS CloudWatch Agent or rsyslog. For bursty workloads, set up local log rotation via logrotate or Docker's built-in json-file rotation; see the sketch below.
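One low-effort option is Docker's built-in json-file log rotation, configured host-wide in /etc/docker/daemon.json (the size and file count below are illustrative; the setting applies to containers created after the daemon restart):
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
EOF
sudo systemctl restart docker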
7. Hardening and Real-World Ops
- Never run the application process as root inside the container. Use USER node in the Dockerfile.
- Rotate SSH keys and audit instance logging.
- Use AWS Secrets Manager or SSM Parameter Store—never bake secrets into images or pass via ENV in production.
Note: EC2 hosts eventually need patching. Plan for AMI rebuilding and blue–green swaps if “pets” become “cattle” over time.
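As a sketch of the Parameter Store approach, assuming a hypothetical /my-node-app/db-password parameter: store the secret once, then fetch it at deploy time on the host instead of baking it into the image. How you hand it to the process (file mount, entrypoint fetch) is up to you; the instance role needs permission to read the parameter.
# One-time: store the secret (SecureString is encrypted with KMS)
aws ssm put-parameter --name /my-node-app/db-password \
  --type SecureString --value 'example-value'

# At deploy time on the EC2 host
DB_PASSWORD=$(aws ssm get-parameter --name /my-node-app/db-password \
  --with-decryption --query 'Parameter.Value' --output text)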
Non-Obvious Tip
If you expect multiple containers soon, use Docker Compose for local orchestration, but deploy with a single-container docker run initially. Upgrade later; avoid premature optimization now.
Direct-to-EC2 container deployment: not sexy, but fills a crucial gap between manual hand-rolled infra and automated fleets. Simple, surprisingly robust, and with fewer variables when diagnosing incidents.
For teams without heavy ops needs or those wanting to sidestep orchestrator maintenance, this route delivers minimal friction and clear accountability. There are better ways—when scale warrants. For all else, keep it minimal.
Known issue: if EC2 public IP changes (reboot or stop/start without elastic IP), DNS or client configs may break. Use Elastic IP or automate updates.
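Allocating and attaching an Elastic IP is a two-step operation (the allocation ID and instance ID below are placeholders):
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --allocation-id eipalloc-xxxxxxxx --instance-id i-0123456789abcdef0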
For automated CI/CD to EC2—including zero-downtime image swaps—see guides on GitHub Actions, SSM Run Command, or AWS CodeDeploy with Docker. Trade-offs abound.