Deploying Docker Containers on AWS EC2: Detailed Guidance for Direct Infrastructure Control
Developers often reach for ECS or EKS when containerizing workloads on AWS. Still, deploying Docker directly to EC2 instances remains relevant—especially when operational control or nuanced optimization outweighs managed service convenience. No abstractions, no orchestration layers—just you, the kernel, and a direct route to the container runtime.
Direct-to-EC2: When Does It Make Sense?
- Full-stack OS control: Pin exact kernel modules, system limits, or custom monitoring agents not permitted under ECS.
- Cost-overhead avoidance: For small clusters or non-elastic workloads, EC2 (especially spot instances) can cost significantly less than running the same containers as Fargate tasks.
- Legacy or tightly coupled workloads: Certain scenarios (verbose logging, non-standard volumes, privileged containers) are ill-suited to managed orchestrators.
EC2 Instance Launch: Selection and Gotchas
Performance, region, and AMI selection drive stability. Need a baseline? Start with a t3.micro on Amazon Linux 2, unless CPU-bound—then step to m5.large.
# Confirm the latest AL2 AMI ID for your region before launching
aws ec2 run-instances \
  --image-id ami-0c02fb55956c7d316 \
  --instance-type t3.micro \
  --key-name my-ssh-key \
  --security-group-ids sg-01234567 \
  --subnet-id subnet-01234567
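If you would rather not hard-code the AMI ID, one way to resolve the current AL2 AMI is AWS's published SSM public parameter (the region shown is just an example):
aws ssm get-parameters \
  --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 \
  --region us-east-1 \
  --query 'Parameters[0].Value' --output text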
Side note: Always double-check EBS root volume encryption policies—unencrypted defaults still appear occasionally in custom AMIs.
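One quick way to inspect an AMI's root-volume encryption flag (the AMI ID here is just the placeholder from above):
aws ec2 describe-images \
  --image-ids ami-0c02fb55956c7d316 \
  --query 'Images[0].BlockDeviceMappings[].Ebs.Encrypted'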
Installing Docker: Quick, but Mind the User Group
SSH into the instance:
ssh -i my-ssh-key.pem ec2-user@<ec2-public-ip>
Install Docker from the Amazon Linux 2 extras repository:
sudo yum update -y
sudo amazon-linux-extras install docker -y
sudo systemctl enable --now docker
sudo usermod -aG docker ec2-user
Note: The Docker daemon runs as root; adding ec2-user to the docker group lets you run docker commands without sudo. Log out and reconnect so the new group membership takes effect.
Verify the installation. The absence of a "Cannot connect to the Docker daemon" error means the group update succeeded:
docker version
Running Your First Container: NGINX Minimal Example
Expose port 80. Security group inbound rule must explicitly allow 80/tcp.
docker run -d -p 80:80 --name nginx-server nginx:1.25-alpine
If the page hangs, confirm the container is running and check its logs:
docker ps
docker logs nginx-server
Common error:
docker: Error response from daemon: driver failed programming external connectivity on endpoint nginx-server...
Usually a port clash or missing iptables rule.
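To check what is already bound to port 80 on the host (the usual culprit for that error):
sudo ss -tlnp | grep ':80 '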
Packaging Custom Applications: Efficient Dockerfile, Registry Integration
Example: Minimal Node.js 18 Alpine Dockerfile
FROM node:18.20-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
Build and tag locally:
docker build -t myapp-node18:202406 .
Push to AWS ECR. Pre-req: repository must exist.
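If the repository doesn't exist yet, it can be created with the CLI; the name and region mirror the commands below:
aws ecr create-repository --repository-name myapp-node18 --region us-east-1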
aws ecr get-login-password --region us-east-1 \
| docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com
docker tag myapp-node18:202406 <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/myapp-node18:202406
docker push <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/myapp-node18:202406
Pro tip: Forgetting the tag (e.g., pushing with the default ‘latest’) creates ambiguity under automated pipelines—be explicit.
Pulling and Running Your Image on EC2
Assume the AWS CLI is available (the Amazon Linux 2 AMI ships with AWS CLI v1 preinstalled, and recent v1 releases support ecr get-login-password). The instance also needs an IAM instance profile with ECR pull permissions; see the hardening notes below.
aws ecr get-login-password --region us-east-1 \
| docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com
docker pull <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/myapp-node18:202406
docker run -d -p 3000:3000 <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/myapp-node18:202406
Access the service directly via http://<ec2-public-ip>:3000.
Gotcha: EC2 default security groups do not expose port 3000. Add a rule or adjust the port mapping.
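A rule can be added from the CLI; the security group ID is the earlier placeholder, and 0.0.0.0/0 should be narrowed for anything beyond a quick test:
aws ec2 authorize-security-group-ingress \
  --group-id sg-01234567 \
  --protocol tcp --port 3000 \
  --cidr 0.0.0.0/0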
Scaling Manually: Automation Without Orchestration
Scaling here isn’t elastic by default; you script it, or rely on IaC/ASG patterns.
Recommended process:
- Bake a reusable AMI (Docker & dependencies pre-installed).
- Use EC2 Auto Scaling Groups with appropriate launch templates or User Data scripts.
Example User Data snippet launching your container:
#!/bin/bash
yum update -y
amazon-linux-extras install docker -y
systemctl enable --now docker
usermod -aG docker ec2-user
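# Assumes the instance profile grants ECR pull permissions; the login below fails without them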
aws ecr get-login-password --region us-east-1 \
| docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com
docker pull <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/myapp-node18:202406
docker run -d -p 3000:3000 <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/myapp-node18:202406
Known issue: User Data scripts sometimes hit race conditions while networking is still coming up. Add a sleep 10 (or better, a retry loop, as sketched below) before the ECR login if authentication fails intermittently.
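A minimal retry sketch for the login step, using the same placeholders as the snippet above:
# Retry the ECR login a few times in case networking is not ready yet
for attempt in 1 2 3 4 5; do
  aws ecr get-login-password --region us-east-1 \
    | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com && break
  sleep 10
done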
Pair with an Application Load Balancer:
┌────────┐   HTTP(S)   ┌─────┐      ┌──────────────┐
│ Client │────────────▶│ ALB │─────▶│ EC2 + Docker │
└────────┘             └─────┘      └──────────────┘
Production Hardening Recommendations
- IAM: Assign an instance profile with fine-grained ECR and CloudWatch permissions; never use embedded static keys.
- Security Groups: Whitelist only necessary ports; restrict SSH to management IPs.
- Data: Containers are ephemeral; for persistence, mount EFS or attach EBS volumes. Data written to a container's writable layer is lost when the container is removed.
- Logs & Monitoring: Ship container logs to CloudWatch using the awslogs driver, or enable the CloudWatch Agent for host metrics. Prometheus Node Exporter is another option.
- Updates: Periodically update base images and Docker itself. Many CVEs target old docker-engine or core OS packages.
- Zero-downtime Deploy: Stagger container replacements manually, or sequence EC2 instance refreshes within the ASG.
- System Limits: For high-throughput apps, tune sysctl settings (e.g., vm.max_map_count, open file limits) in the AMI itself.
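As referenced in the logging item above, a sketch of running the app container with the awslogs driver. The log group name is a placeholder; awslogs-create-group=true spares pre-creating the group, provided the instance profile can write to CloudWatch Logs (including logs:CreateLogGroup):
docker run -d -p 3000:3000 \
  --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=/ec2/myapp-node18 \
  --log-opt awslogs-create-group=true \
  <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/myapp-node18:202406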
Practical Example: Multi-instance Manual Blue/Green
- Launch two EC2 instances, each running the container.
- Register both with an ALB target group.
- During deploy, launch a third instance with the updated image, verify its health, then deregister and terminate one of the old instances.
No external orchestrator required, but be vigilant with health checks and old-image cleanup.
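The register/deregister steps can be scripted; the target group ARN and instance IDs below are placeholders:
# Bring the new instance into rotation, then drain an old one
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:<aws_account_id>:targetgroup/myapp/0123456789abcdef \
  --targets Id=i-0newinstance
aws elbv2 deregister-targets \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:<aws_account_id>:targetgroup/myapp/0123456789abcdef \
  --targets Id=i-0oldinstance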
Summary Table: Managed vs. Unmanaged Container Deployments
| Criterion | EC2 + Docker | ECS/EKS |
|---|---|---|
| Control | Full (root) | Partial (configurable tasks) |
| Scaling | Manual/AutoScale | Native, elastic |
| OS Customization | Full | Restricted |
| Logging | Manual | Integrated |
| Overhead | Low | Medium-High |
| Operational Burden | High | Low |
Final Note
Running Docker containers directly on EC2 is not an academic exercise—the method powers many production workloads where managed services don’t fit, or compliance and tuning demand deeper access. Not everything is slick or effortless; automation and maintenance are entirely your responsibility. But the trade-offs are often worth it.
Questions about tighter automation, blue/green, or integrating with CI/CD pipelines? Leave a comment or ping for a deeper dive.
Real-world DevOps means clarity around every moving part. Complexity is a choice; so is control.