Deploy Docker App To Azure

#Cloud #DevOps #Containers #Azure #Docker #ACI

Seamless Docker App Deployment on Azure: Real-World Workflow for Container Instances

Conventional Kubernetes deployments introduce unnecessary overhead for ephemeral services, batch jobs, and prototypes. Azure Container Instances (ACI) provides an alternative: containerized applications run in isolation on demand, with no VM orchestration or OS patching to manage.

Below is a practical approach to getting a Dockerized application from local development onto Azure infrastructure, with explicit resource allocation and minimal cloud configuration.


The Case for ACI

Not every scenario requires the weight of Azure Kubernetes Service (AKS) or VM-scale sets. For stateless services, CI/CD preview environments, or event-driven data processing, ACI offers:

  • Granular pay-for-use: billing stops when the container exits
  • No OS patching or guest maintenance
  • Integrated Azure Active Directory and VNet support (if needed)
  • Faster startup times than most managed VM clusters

Common trade-off: container scaling and orchestration features are intentionally limited compared to AKS.


Quick Inventory: Application, Container Registry, and Azure CLI

Assumptions:

  • Source: a Node.js API, containerized with Docker (Dockerfile excerpt below)
  • Docker: v24.0 or higher
  • Azure CLI: version 2.53.0+ (az --version)

Dockerfile excerpt:

FROM node:18-alpine AS base
WORKDIR /app
# npm ci requires the lockfile, so copy it alongside package.json
COPY package.json package-lock.json ./
RUN npm ci --omit=dev   # --omit=dev replaces the deprecated --production flag
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]

Gotcha: Prefer npm ci over npm install in CI pipelines; it installs exactly what package-lock.json specifies, recreating node_modules deterministically, and fails fast if the lockfile is missing or out of sync.
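
Before touching Azure, a local smoke test catches most issues. A minimal sketch (the image tag my-node-api:local and the container name api-smoke are placeholders):

# Build and run the image locally, mapping the container port to the host
docker build -t my-node-api:local .
docker run --rm -d -p 3000:3000 --name api-smoke my-node-api:local

# Any 2xx response means the container starts and listens as expected
curl -fsS http://localhost:3000/ && echo "OK"

# Clean up the test container
docker stop api-smoke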


Step 1: Build and Publish the Container Image

Containers must be reachable by ACI through a registry endpoint. Public Docker Hub images work for demos; for private deployments, consider Azure Container Registry (ACR).

docker build -t mydockerhubuser/my-node-api:2024-06 .
docker push mydockerhubuser/my-node-api:2024-06

Replace mydockerhubuser appropriately.

Known issue:

Docker Hub throttling can cause toomanyrequests: Rate limit exceeded errors on pulls. For production, shift to Azure Container Registry or another authenticated registry.
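
If you adopt ACR, the flow looks roughly like this; the registry name demoaciregistry is a placeholder and must be globally unique (this assumes the resource group from Step 2 already exists):

# Create a Basic-tier registry and log the local Docker daemon into it
az acr create --resource-group demo-aci-group --name demoaciregistry --sku Basic
az acr login --name demoaciregistry

# Retag the image for the registry's login server, then push
docker tag mydockerhubuser/my-node-api:2024-06 demoaciregistry.azurecr.io/my-node-api:2024-06
docker push demoaciregistry.azurecr.io/my-node-api:2024-06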


Step 2: Azure Resource Group Setup

Decouple application deployments from platform resources: a dedicated resource group keeps the containers, networking, and diagnostics together and makes cleanup a single delete.

az login  # Use device login if behind corporate proxy

az group create --name demo-aci-group --location eastus2

Note: A resource group's location only anchors its metadata; the resources inside it can live in any region, but each ACI deployment is bound to the region you specify.
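
A quick sanity check that the group exists, plus one way to list the regions where ACI container groups are available (the JMESPath filter is one approach, not the only one):

# Confirm the resource group
az group show --name demo-aci-group -o table

# Regions supported by the Microsoft.ContainerInstance provider
az provider show --namespace Microsoft.ContainerInstance \
  --query "resourceTypes[?resourceType=='containerGroups'].locations | [0]" -o tsv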


Step 3: Provision the Container Instance

Best practice: parameterize allocation, DNS labels, and limits for replicability.

export IMAGE_NAME=mydockerhubuser/my-node-api:2024-06
export CONTAINER_NAME=node-api-aci
export RG=demo-aci-group
export PORT=3000
export LABEL=nodeapi$RANDOM

az container create \
  --resource-group $RG \
  --name $CONTAINER_NAME \
  --image $IMAGE_NAME \
  --cpu 1 \
  --memory 1.5 \
  --ports $PORT \
  --dns-name-label $LABEL \
  --restart-policy Never \
  --query ipAddress.fqdn -o tsv

  • --restart-policy Never disables automatic restart, which suits one-off jobs. Set it to Always for long-running microservices.
  • Diagnostic: If deployment fails, inspect container events:
    az container show --resource-group $RG --name $CONTAINER_NAME --query instanceView.events
    
    Example output:
    [
      {
        "count": 1,
        "firstTimestamp": "2024-06-12T21:00:07+00:00",
        "message": "Failed to pull image \"mydockerhubuser/my-node-api:2024-06\": unauthorized: access to the requested resource is not authorized",
        ...
      }
    ]
    
    Misconfigured registry credentials trigger unauthorized errors here.
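
For a private image, pass registry credentials explicitly at create time. A sketch using the hypothetical ACR registry from Step 1 (the service principal values are placeholders):

az container create \
  --resource-group $RG \
  --name $CONTAINER_NAME \
  --image demoaciregistry.azurecr.io/my-node-api:2024-06 \
  --registry-login-server demoaciregistry.azurecr.io \
  --registry-username <sp-app-id> \
  --registry-password <sp-password> \
  --cpu 1 --memory 1.5 --ports $PORT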

Step 4: Validate Deployed Service

Test container health and connectivity:

curl http://$LABEL.eastus2.azurecontainer.io:$PORT/
# Expected response: API welcome or healthcheck message

If you hit a firewall or port error, verify that no NSG rules on the subnet block the port (relevant when the instance is attached to a VNet).
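
Rather than hardcoding the regional suffix, capture the FQDN from the deployment itself; containers can take a few seconds to begin accepting traffic, so a short retry loop helps (bash sketch):

# Resolve the FQDN Azure assigned to the container group
FQDN=$(az container show --resource-group $RG --name $CONTAINER_NAME \
  --query ipAddress.fqdn -o tsv)

# Retry briefly while the container finishes starting
for i in $(seq 1 10); do
  curl -fsS "http://$FQDN:$PORT/" && break
  sleep 3
done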


Step 5: Operational Lifecycle

Typical post-deployment management:

  • View logs: az container logs --resource-group $RG --name $CONTAINER_NAME
  • Stream logs: az container attach --resource-group $RG --name $CONTAINER_NAME
  • Delete instance: az container delete --resource-group $RG --name $CONTAINER_NAME
  • List running containers: az container list --resource-group $RG -o table

Note: Containers are not guaranteed to survive host reboots. ACI offers no high-availability or persistent-execution guarantees; use AKS where SLAs matter.
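
Stopping a container group halts compute billing while keeping its definition, which suits dev and demo environments (standard az container subcommands):

# Deallocate compute; CPU/memory billing stops
az container stop --resource-group $RG --name $CONTAINER_NAME

# Bring it back with the same configuration
az container start --resource-group $RG --name $CONTAINER_NAME

# Or restart the group in place
az container restart --resource-group $RG --name $CONTAINER_NAME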


Advanced Options

  • Virtual Networking: Attach to a VNet for secure backend services. Subnet delegation is mandatory. Subnet cannot have an NSG that blocks outbound traffic or needed inbound ports.
  • Persistent Storage: For stateful workloads, mount Azure Files shares (a provisioning sketch follows this list):
    az container create ... --azure-file-volume-share-name <share> --azure-file-volume-account-name <acct> --azure-file-volume-account-key <key> --azure-file-volume-mount-path /data
    
  • Monitoring: Enable Azure Monitor or pipe logs to Log Analytics.
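
Provisioning the share for the Azure Files mount above takes a few extra commands; a sketch with placeholder names (mystorageacct, myshare):

# Storage account names must be globally unique, lowercase, 3-24 characters
az storage account create --resource-group $RG --name mystorageacct --sku Standard_LRS

# A key is needed both for share creation and for --azure-file-volume-account-key
KEY=$(az storage account keys list --resource-group $RG \
  --account-name mystorageacct --query "[0].value" -o tsv)

# Create the share the container will mount at /data
az storage share create --name myshare --account-name mystorageacct --account-key $KEY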

Quick Reference

ACI or AKS?
Use ACI for: ad hoc compute, ephemeral apps, scheduled job runners.
Use AKS for: sustained microservice architectures, high availability, and managed scaling.

Cost

  • Price per second of run time, prorated on CPU/memory (see ACI pricing).
  • No charge when containers are stopped or deleted.

Final Thoughts

ACI accelerates cloud container adoption where speed and isolation matter more than orchestration depth. Limitations exist (no advanced ingress controls, pod-level affinity, or DaemonSets), but for straightforward, production-bound Docker images it often suffices.

Tip: Automate everything above in a CI/CD pipeline for repeatability. For confidential workloads, use managed identities and avoid public registries.


Sometimes, the essential engineering choice is the simplest possible deployment. ACI may not be perfect, but for many real applications, it’s the shortest path to “cloud-native” without overengineering.