How To Install Nginx On Amazon Linux 2

#Cloud #Linux #WebServer #NGINX #AmazonLinux2 #AWS

Step-by-Step Guide: NGINX on Amazon Linux 2, Production-Ready

A basic NGINX install is fine for staging, but unchecked defaults are a recurring cause of failures (memory spikes, stale connections, slow restarts) once real traffic hits. Here's a production baseline of the kind used for hardened deployments on Amazon EC2 instances running Amazon Linux 2.


Prerequisites

  • EC2 instance, Amazon Linux 2 (tested kernel: 4.14.330-204.539.amzn2.x86_64)
  • SSH access as a non-root user (ec2-user or equivalent) with sudo
  • Security Group with inbound HTTP/HTTPS allowed

Connect to EC2 via SSH

ssh -i $KEY_FILE ec2-user@$PUBLIC_IP

Substitute your private key path and the instance's public IP. Always verify the host key fingerprint on first connect; MITM attacks happen in real environments.
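
One way to do that, assuming ssh-keyscan and ssh-keygen are available on your workstation: fingerprint the host key the instance presents and compare it against the fingerprints printed in the instance's console output (EC2 console, "Get system log") on first boot.

# Fingerprint the presented host key; drop -t to list every key type offered
ssh-keyscan -t ed25519 $PUBLIC_IP 2>/dev/null | ssh-keygen -lf -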


System Update

Update the base packages and clear the yum cache. Skipping this step risks deploying on an old OpenSSL or glibc, which regularly triggers vulnerability-scanner alerts.

sudo yum update -y
sudo yum clean all
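
To confirm the libraries that scanners care about actually moved, a quick check (package names as shipped on Amazon Linux 2):

rpm -q openssl glibc kernel
# A kernel update only takes effect after the instance reboots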

Install NGINX (amazon-linux-extras method)

Amazon Linux 2 ships NGINX through amazon-linux-extras. As of January 2024, the only AWS-maintained topic is nginx1 (1.22.x).

sudo amazon-linux-extras enable nginx1
sudo yum clean metadata
sudo yum install nginx -y
nginx -v
# Output: nginx version: nginx/1.22.1

Note: the official nginx.org repo gives you a newer mainline (1.25.x), but for production, stick to the distro repo unless you're actively tracking upstream changelogs.
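
If you do decide to track mainline, the usual route is the nginx.org repo. A sketch only, assuming nginx.org still publishes Amazon Linux 2 packages; copy the exact baseurl and signing key from nginx.org/en/linux_packages.html before trusting it:

sudo tee /etc/yum.repos.d/nginx.repo > /dev/null <<'EOF'
[nginx-mainline]
name=nginx mainline repo
# Assumed path; verify against nginx.org/en/linux_packages.html
baseurl=http://nginx.org/packages/mainline/amzn2/2/$basearch/nginx
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
EOF
sudo yum install nginx -y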


Enable and Start NGINX

sudo systemctl enable --now nginx
sudo systemctl status nginx

If NGINX fails to start, check:

sudo tail -40 /var/log/nginx/error.log
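
The systemd journal and a quick port check usually explain a failed start faster than the error log alone:

sudo journalctl -u nginx --no-pager -n 50   # unit-level errors (bad paths, permissions)
sudo ss -tlnp | grep ':80 '                 # something else already bound to port 80?
sudo nginx -t                               # explicit config syntax check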

Security Groups and Local Firewall

AWS Security Groups: open TCP/80 and TCP/443. For a public site, a source CIDR of 0.0.0.0/0 is fine; restrict it where you can. A CLI sketch follows the note below.

  • Never blindly open SSH/22 to 0.0.0.0/0; lock it down by source IP.
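
If you manage Security Groups from the CLI, opening 80/443 looks like this (sg-0123456789abcdef0 is a placeholder group ID):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 0.0.0.0/0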

If you’ve enabled firewalld or iptables locally (rare), permit these ports:

sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload

Verify Default Installation

curl -I http://localhost

Look for:

HTTP/1.1 200 OK
Server: nginx/1.22.1

If accessing from a browser, hit http://$PUBLIC_IP; you should see the standard NGINX welcome page.


Minimal Server Block Configuration

Don't edit /etc/nginx/nginx.conf blindly. Use /etc/nginx/conf.d/*.conf drop-ins so each vhost lives in its own clearly named file.

Example for example.com:

sudo mkdir -p /usr/share/nginx/html/example.com
echo '<h1>example.com ready</h1>' | sudo tee /usr/share/nginx/html/example.com/index.html

sudo tee /etc/nginx/conf.d/example.com.conf > /dev/null <<EOF
server {
    listen 80;
    server_name example.com www.example.com;

    root /usr/share/nginx/html/example.com;
    index index.html;

    access_log /var/log/nginx/example.com.access.log;
    error_log /var/log/nginx/example.com.error.log;

    location / {
        try_files \$uri \$uri/ =404;
    }
}
EOF

sudo nginx -t
sudo systemctl reload nginx

Heads-up: DNS for example.com must resolve to your public IP for the server_name match (and the Let's Encrypt step later) to actually work.
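
A quick check from your workstation or the instance (dig comes from the bind-utils package on Amazon Linux 2):

dig +short example.com
dig +short www.example.com
# Both should print your public IP (or the Elastic IP attached to the instance)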


Production-Grade Optimizations

Process and Connection Handling

Open /etc/nginx/nginx.conf:

worker_processes auto;
worker_rlimit_nofile 4096;    # Avoid fd exhaustion under load

events {
    worker_connections 2048;  # Increase if you expect thousands of concurrent clients
    multi_accept on;
    use epoll;
}
  • worker_processes auto;: one worker per vCPU, which is the right default for throughput.
  • worker_rlimit_nofile: raises the per-worker open-file limit; running out of descriptors shows up as "too many open files" errors and 502/504s under heavy concurrency (see the systemd sketch after this list).
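
worker_rlimit_nofile only helps if the process is allowed that many descriptors in the first place. Under systemd, a drop-in raises the unit's limit; a sketch, with 8192 as an example value sized to worker_connections:

sudo mkdir -p /etc/systemd/system/nginx.service.d
sudo tee /etc/systemd/system/nginx.service.d/limits.conf > /dev/null <<'EOF'
[Service]
# Per-process open-file limit for the NGINX master and workers
LimitNOFILE=8192
EOF
sudo systemctl daemon-reload
sudo systemctl restart nginx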

Test config and reload:

sudo nginx -t
sudo systemctl reload nginx

Static Compression

Add to your http block or a dedicated /etc/nginx/conf.d/gzip.conf:

gzip on;
gzip_proxied any;
gzip_types text/plain text/css application/javascript application/json text/xml application/xml;
gzip_comp_level 5;
gzip_min_length 256;

Gotcha: Overly aggressive gzip_comp_level (>6) can spike CPU usage. Tune based on instance size.
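
To confirm compression is actually applied, request something larger than gzip_min_length with an Accept-Encoding header (the tiny example index.html above is under 256 bytes and won't be compressed; the default welcome page is large enough):

curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://localhost/ | grep -i content-encoding
# Expected: Content-Encoding: gzip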


Timeouts to Prevent Resource Hogs

Add (inside http {}):

client_body_timeout   12s;
client_header_timeout 12s;
keepalive_timeout     50s;
send_timeout          15s;

Shorter keepalives reduce the risk of idle connections tying up workers on overloaded instances.


Hide Server Tokens

Add:

server_tokens off;

Strips the version from the Server header (responses show Server: nginx instead of Server: nginx/1.22.1), which gives version-fingerprinting scanners less to work with.
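
After a reload, verify the header:

sudo nginx -t && sudo systemctl reload nginx
curl -sI http://localhost | grep -i '^server:'
# Server: nginx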


Optional: SELinux Mode

Amazon Linux 2 typically ships with SELinux disabled or permissive; confirm the current mode:

sestatus

If it's enforcing, you may need to label the custom web root with the right context:

sudo semanage fcontext -a -t httpd_sys_content_t "/usr/share/nginx/html/example.com(/.*)?"
sudo restorecon -Rv /usr/share/nginx/html/example.com
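
semanage is not installed by default; on EL7-family systems it comes from policycoreutils-python, which is assumed to hold for Amazon Linux 2 as well:

sudo yum install policycoreutils-python -y   # provides semanage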

HTTPS Termination With Let’s Encrypt

Install Certbot from EPEL (package names current as of early 2024):

sudo amazon-linux-extras install epel -y
sudo yum install certbot python3-certbot-nginx -y
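
EPEL has shipped the NGINX plugin under slightly different names over time; if yum can't find python3-certbot-nginx, list what your channel actually provides:

yum search certbot | grep -i nginx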

Request and install certificate:

sudo certbot --nginx
  • Certbot detects domains from the server_name directives in your /etc/nginx/conf.d/*.conf files
  • Follow the prompts, and make sure port 80 is open for the HTTP-01 challenge (a non-interactive sketch follows this list)
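
For scripted or repeatable runs, the same thing non-interactively; the domains and email are placeholders, and --redirect rewrites your HTTP server block to force HTTPS:

sudo certbot --nginx --non-interactive --agree-tos \
  -m admin@example.com \
  -d example.com -d www.example.com \
  --redirect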

The EPEL package ships a systemd renewal timer; confirm it's actually active:

systemctl list-timers | grep certbot

Practical tip: Always test renewal with:

sudo certbot renew --dry-run

before trusting expiry automation.
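
If no certbot timer shows up (renewal packaging has varied), a root cron entry is a reasonable fallback; the schedule and file path here are choices, not defaults:

echo '17 3 * * * root certbot renew --quiet --deploy-hook "systemctl reload nginx"' \
  | sudo tee /etc/cron.d/certbot-renew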


Maintenance and Monitoring

  • Logrotate: /etc/logrotate.d/nginx covers /var/log/nginx/*log. Check that file for the packaged rotation schedule and retention count, and edit it if you need different retention (a sketch follows this list).
  • Service recovery: have systemd restart NGINX automatically if the master process dies (for example, OOM-killed) by adding to /etc/systemd/system/nginx.service.d/override.conf:
    [Service]
    Restart=always
    
    Then reload daemon:
    sudo systemctl daemon-reload
    
  • Metrics: Install CloudWatch agent or use node_exporter for Prometheus if integrating EC2 into a wider stack.
  • Backup: track /etc/nginx/ (minus private keys and .pem files) in git or S3. Don't trust a single EBS volume.
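
A sketch of custom retention in /etc/logrotate.d/nginx (14 days of compressed daily logs; the USR1 signal tells NGINX to reopen its log files):

/var/log/nginx/*log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        /bin/kill -USR1 $(cat /var/run/nginx.pid 2>/dev/null) 2>/dev/null || true
    endscript
}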

Known Issues and Trade-Offs

  • The stock AWS build includes the HTTP/2 module (you still enable it per listener), but no Brotli compression without third-party modules.
  • Instance memory under 1 GiB? Keep worker_connections modest to limit memory use, but expect "worker_connections are not enough" in the error log if traffic outgrows the setting.
  • Running in containers or dual-stack? Be explicit about listen 0.0.0.0:80; versus listen [::]:80;. NGINX binds only the addresses you declare, so list both families if you need them (see the snippet after this list).
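
A dual-stack server block declares both listeners explicitly; drop the IPv6 line if the subnet has no IPv6:

server {
    listen 80;         # IPv4
    listen [::]:80;    # IPv6; remove for IPv4-only binds
    server_name example.com;
    root /usr/share/nginx/html/example.com;
}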

In sum: installation is not the bottleneck. Longevity and troubleshooting under load start with knowing your configuration files, your log locations, and how NGINX worker processes interact with kernel file-descriptor limits.

If you skipped straight to here: test every step, automate what you can, and always check logs post-reboot.

For actual scale, pair this with a load balancer; NGINX alone doesn’t magically handle multi-AZ failover or DDoS at scale.