Deploy Docker Containers to AWS EC2

#Cloud #DevOps #Containers #Docker #AWS #EC2

Streamlining Deployment: How to Deploy Docker Containers on AWS EC2 for Maximum Efficiency

Forget the hype around heavyweight orchestration platforms—sometimes, the fastest path to production is a lean Docker container on a well-configured EC2 instance. Let's cut through the noise and get straight to practical steps that save time and headache.


Why Deploy Docker Containers Directly on AWS EC2?

While container orchestration tools like Kubernetes and ECS offer incredible scalability and automation, they also introduce complexity that is overkill for many projects. If your app needs fast deployment and predictable costs, and you want to maintain control without a steep learning curve, deploying Docker containers directly on an AWS EC2 instance is often the most efficient approach.

By mastering this method, you enable:

  • Simplicity: Avoid complex orchestration layers.
  • Flexibility: Customize your environment fully.
  • Speed: Faster deploy-test cycles.
  • Cost Control: Pay for what you use without overhead.

Prerequisites

Before you begin:

  • An AWS account with access to EC2.
  • Basic familiarity with Linux command-line.
  • Docker installed locally (to build/test your container).
  • SSH access setup (key pair) for your EC2 instance.

Step 1: Launch an EC2 Instance

  1. Go to the AWS Management Console → EC2 → Instances → Launch Instance

    Choose an Amazon Machine Image (AMI):

    • For simplicity, select the latest Ubuntu Server LTS (e.g., Ubuntu 22.04 LTS).
  2. Instance Type

    Pick something lightweight like t3.micro or t3.small based on your app’s resource needs.

  3. Configure Security Group

    • Open port 22 for SSH (ideally restricted to your own IP).
    • Open your application port (e.g., 80 or 8080).
  4. Key Pair

    • Create or use an existing key pair to connect via SSH. (If you prefer the command line, an equivalent AWS CLI sketch follows these steps.)
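
If you already have the AWS CLI configured, the same launch can be scripted. This is only a rough sketch of the equivalent steps; the AMI ID, key pair name, security group name, and CIDR are placeholders you would substitute:

# Security group allowing SSH (ideally from your IP only) and the app port.
aws ec2 create-security-group --group-name docker-host-sg --description "Docker host: SSH + HTTP"
aws ec2 authorize-security-group-ingress --group-name docker-host-sg --protocol tcp --port 22 --cidr YOUR_IP/32
aws ec2 authorize-security-group-ingress --group-name docker-host-sg --protocol tcp --port 80 --cidr 0.0.0.0/0

# Launch a small Ubuntu instance with your key pair.
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t3.micro --key-name my-key --security-groups docker-host-sg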

Step 2: Connect to Your EC2 Instance

Using your terminal or an SSH client:

ssh -i /path/to/your-key.pem ubuntu@ec2-public-ip

Replace /path/to/your-key.pem with your key file path and ec2-public-ip with your instance's public IP from AWS console.
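
If SSH refuses the key with a "permissions are too open" warning (a common first-time hiccup), tighten the key file's permissions and retry:

chmod 400 /path/to/your-key.pem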


Step 3: Install Docker on the EC2 Instance

Run these commands on the EC2 terminal:

sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io

# Add your user to docker group for convenience:
sudo usermod -aG docker ${USER}

# You will need to log out and back in for group changes to apply.
exit

Reconnect via SSH after this step.
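
Once you're back in, a quick sanity check confirms Docker is installed and that your user can talk to the daemon without sudo:

docker --version
docker run --rm hello-world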


Step 4: Prepare Your Docker Image Locally

Create a simple example app if you don’t have one:

Dockerfile

FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]

Build and test locally:

docker build -t my-node-app .
docker run -p 3000:3000 my-node-app

Test in browser at http://localhost:3000.


Step 5: Push Your Docker Image to a Registry

You have two main options:

A) Use Docker Hub (public or private repository)

  1. Log in:

docker login

Enter credentials.

  2. Tag image:

docker tag my-node-app yourdockerhubusername/my-node-app:latest

  3. Push image:

docker push yourdockerhubusername/my-node-app:latest

B) Use AWS Elastic Container Registry (ECR)

ECR is private, scalable, and integrates tightly with the rest of AWS, but it requires creating a repository and authenticating Docker with a temporary token. For quick setups, Docker Hub suffices.
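
If you do go the ECR route, the flow looks roughly like the sketch below; the region, account ID, and repository name are placeholders, and the EC2 instance will need the same login (or an instance IAM role with ECR pull permissions) before it can pull the image:

# Create a repository and authenticate Docker against it (placeholders throughout).
aws ecr create-repository --repository-name my-node-app --region us-east-1

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag and push, just like with Docker Hub but using the ECR registry URL.
docker tag my-node-app 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-node-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-node-app:latest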


Step 6: Pull and Run Your Container on AWS EC2

Back in your EC2 instance terminal (if your repository is private, run docker login on the instance first):

docker pull yourdockerhubusername/my-node-app:latest
docker run -d -p 80:3000 --name mynodeapp yourdockerhubusername/my-node-app:latest

This maps the container's internal port 3000 to port 80 on the instance, so browsing to http://ec2-public-ip shows the app.
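
A quick check from your own machine confirms the app is reachable from outside (substitute the public IP from Step 2):

curl http://ec2-public-ip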


Step 7: Verify & Automate Restart

Check running containers:

docker ps

Confirm the container shows an Up status and the expected port mapping.

For production-like resilience, enable container restart on failure/reboot:

docker update --restart=always mynodeapp

Or pass the restart flag when you first run the container:

docker run -d --restart=always -p 80:3000 --name mynodeapp yourdockerhubusername/my-node-app:latest
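
To confirm the policy actually took, inspect the container (using the container name from above):

docker inspect --format '{{ .HostConfig.RestartPolicy.Name }}' mynodeapp
# Should print: always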

Bonus Tips for Maximum Efficiency

  • Automated Deployment Scripts: Write shell scripts or use CI/CD tools like GitHub Actions to automate image builds, pushes, SSH connections, and deployment commands (a minimal script is sketched after this list).

  • Use a User Data Script: When launching EC2 instances, add a user data script that installs Docker and pulls your image automatically, making new instance spin-ups near-instant (see the second sketch after this list).

  • Log Management: Forward container logs using tools like CloudWatch agent or simple log rotation setups for troubleshooting.

  • Security Best Practices: Keep security group inbound rules as tight as possible; consider a VPN or bastion host for SSH access; and store secrets securely (AWS Secrets Manager rather than baking them into images).
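
For the first tip, a minimal deploy script might look like the sketch below. The image name, host, key path, and container name are the placeholders used earlier, and it assumes the image lives in a repository the instance can pull from:

#!/usr/bin/env bash
# Hypothetical one-command deploy: build and push locally, then restart the container on EC2.
set -euo pipefail

IMAGE="yourdockerhubusername/my-node-app:latest"
HOST="ubuntu@ec2-public-ip"
KEY="/path/to/your-key.pem"

docker build -t "$IMAGE" .
docker push "$IMAGE"

ssh -i "$KEY" "$HOST" bash -s <<EOF
set -e
docker pull $IMAGE
docker rm -f mynodeapp >/dev/null 2>&1 || true
docker run -d --restart=always -p 80:3000 --name mynodeapp $IMAGE
EOF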
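
And for the user-data tip, something along these lines installs Docker from Ubuntu's own packages and starts the container on first boot. It assumes a publicly pullable image; a private registry would additionally need credentials or an instance role:

#!/bin/bash
# Hypothetical EC2 user-data sketch for an Ubuntu AMI (runs as root at first boot).
apt-get update -y
apt-get install -y docker.io
systemctl enable --now docker
docker run -d --restart=always -p 80:3000 --name mynodeapp yourdockerhubusername/my-node-app:latest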


Final Thoughts

Deploying Docker containers directly on AWS EC2 instances cuts out unnecessary infrastructure layers that can slow down deployment and add operational hassle. It’s ideal for small-to-medium apps requiring rapid iteration with clear control over environment tuning.

With just a handful of commands, you can spin up performant containers reachable globally — all while keeping costs predictable and complexity low.

Get hands-on today by launching an EC2 instance and deploying that first container! The faster path to production is often the simplest one.


If this guide helped you streamline your deployment, or you'd like me to cover setting up CI/CD pipelines for continuous Docker deployments on EC2 next, drop a comment below!