# Mastering Secure SSH Access to AWS EC2: Beyond Basic Connection
Locking down SSH to AWS EC2 isn't just about running a single command; it's an ongoing process involving layered controls, detailed auditing, and regular key hygiene. Relying on a basic `ssh -i my-key.pem ec2-user@...` quickly becomes risky at scale, especially as your fleet, compliance requirements, or attack surface grows.
## The Weakest Link: Default SSH Practices
Attack vectors stem less from cryptography and more from operational practices. Two examples:

- A shared private key emailed around.
- `0.0.0.0/0` left open on port 22 after rapid troubleshooting.

More subtle: silent agent forwarding through untrusted intermediaries, or overlooked `sshd` updates that leave known CVEs unpatched for weeks.
## Key Management: Strong, Rotated, Isolated

### Key Generation: Don't Cut Corners
Ed25519 keys are preferable for both speed and security (OpenSSH ≥ 6.5):

```shell
ssh-keygen -t ed25519 -a 100 -f ~/.ssh/ec2_ed25519 -C "prod ops"
chmod 600 ~/.ssh/ec2_ed25519
```

- The `-a 100` option raises the number of key-derivation-function rounds, slowing offline attacks against the passphrase.
- For PCI-DSS/regulated environments, always store private keys on encrypted volumes.
Side note: AWS EC2 does not natively support hardware tokens (YubiKey/FIDO2) for SSH. That limitation may inform your overall policy.
### Rotation and Scope
- Each human operator gets their own key.
- Never reuse keys between bastion and production workloads.
- Revocation? Remove compromised public keys from `~/.ssh/authorized_keys` and invalidate cached credentials.
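A minimal revocation sketch, assuming each operator's public key carries a distinguishing comment (the `-C` value set at generation time); the `revoke_key` helper and the example identities are illustrative, not standard tooling:

```shell
# Remove one operator's public key from an authorized_keys file,
# matching on the key's comment field; keeps a .bak backup.
revoke_key() {
  comment="$1"; file="$2"
  sed -i.bak "/${comment}/d" "$file"
}

# In practice, run the same edit over SSH on each affected host, e.g.:
#   ssh ec2-user@bastion.example.net "sed -i.bak '/alice@laptop/d' ~/.ssh/authorized_keys"
```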
## Security Groups: Restrict and Monitor
Lock down Security Groups to specific management sources—ideally your corporate NAT or VPN.
| Rule | From | To | Note |
|---|---|---|---|
| Port 22, TCP | 203.0.113.0/29 (VPN) | EC2 | never 0.0.0.0/0 |
| Port 22, TCP | Bastion VPC CIDR | EC2 | for private “backend” instances |
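The VPN-only rule above can be applied from the CLI; the security-group ID here is a placeholder:

```shell
# Allow SSH only from the VPN block (group ID is a placeholder).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --cidr 203.0.113.0/29
```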
The management plane typically sits inside a dedicated subnet (e.g., `10.42.10.0/24`) with no routing to the public Internet except via an egress proxy.
Gotcha: the AWS console sometimes suggests overly generous defaults. Always audit rules after launch.
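One way to make that audit routine, using the standard `describe-security-groups` filters (a sketch; run it against each region you operate in):

```shell
# Find any security group that still allows SSH from the whole Internet.
aws ec2 describe-security-groups \
  --filters Name=ip-permission.from-port,Values=22 \
            Name=ip-permission.cidr,Values=0.0.0.0/0 \
  --query 'SecurityGroups[].{ID:GroupId,Name:GroupName}' \
  --output table
```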
## Bastion Hosts: The Secure Entry
Deploying a hardened bastion host (jump box) between the public Internet and your private EC2 endpoints is mandatory for most production networks. Minimal extra cost, substantial risk reduction.
Sample flow with OpenSSH `ProxyJump`:

```
Host bastion
    HostName bastion.example.net
    User ec2-user
    IdentityFile ~/.ssh/bastion_ed25519

Host db-internal
    HostName 10.42.10.25
    User ec2-user
    IdentityFile ~/.ssh/prod_app_ed25519
    ProxyJump bastion
```
- SSH agent forwarding (`ForwardAgent yes`) should be enabled only for strongly trusted bastions; consider omitting it entirely, since `ProxyJump` needs no agent on the intermediate host.
Known issue: if you see `channel 0: open failed: connect failed: Connection refused`, double-check Security Group ingress from the bastion's subnet, not just the bastion's public IP.
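One quick way to localize that failure is to probe the backend port from the bastion itself (host aliases from the config above; assumes `nc` is installed on the bastion):

```shell
# If this succeeds but ProxyJump fails, suspect the client side;
# if it also fails, the backend's Security Group or sshd is the problem.
ssh bastion 'nc -zv -w 3 10.42.10.25 22'
```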
## No Open SSH Port: SSM Session Manager
AWS Systems Manager (SSM) Session Manager can entirely eliminate the need for inbound port 22, which is transformative for production-grade environments.
- Requires the SSM agent (Amazon Linux 2: preinstalled; Ubuntu: `sudo snap install amazon-ssm-agent --classic`).
- The EC2 instance role needs the `AmazonSSMManagedInstanceCore` managed policy attached.
No local keys needed, no public IP required:

```shell
aws ssm start-session --target i-0aa2b1c2d3e4f5g6h --region us-east-1
```
- Session activity is logged to CloudTrail; full session content can optionally be streamed to CloudWatch Logs or S3 in real time.
- Some network tunnel limitations: port forwarding via SSM is available, but expect lower throughput and higher initial latency than direct SSH.
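Port forwarding through SSM uses the AWS-provided session document; the instance ID and port numbers here are illustrative:

```shell
# Forward local port 15432 to port 5432 on the instance, with no inbound rules at all.
aws ssm start-session \
  --target i-0aa2b1c2d3e4f5g6h \
  --document-name AWS-StartPortForwardingSession \
  --parameters '{"portNumber":["5432"],"localPortNumber":["15432"]}'
```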
## Auditing, Monitoring, and Compliance
Minimum: Enable auditd or script-based session recording on all jump hosts. For privileged workloads, dump logs to a remote SIEM or CloudWatch Logs stream.
Example session logging config:

```shell
yum install -y audit
```

```
# /etc/audit/rules.d/ssh.rules
-w /var/log/secure -p wa
```
- Regularly verify the presence and integrity of logs.
- For critical hosts: combine CloudTrail + SSM session logs for federated access review.
## Patch Early, Patch Often
OpenSSH 8.2 (as shipped on Amazon Linux 2, early 2024) and later patch many lingering security bugs.
Update cycles:

```shell
sudo yum update -y 'openssh*'                          # Amazon Linux / RHEL
sudo apt-get upgrade -y openssh-client openssh-server  # Debian / Ubuntu
```
- Test updated client and server versions before rolling to all fleet nodes.
- Keep client configuration files (`~/.ssh/config`) version-controlled (e.g., in a secure internal Git repo).
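Before and after an update, confirm the versions actually in play (the host address is a placeholder; the server announces its version in its protocol banner):

```shell
ssh -V                                       # local client version (written to stderr)
nc -w 3 10.42.1.20 22 </dev/null | head -1   # server banner, e.g. SSH-2.0-OpenSSH_...
```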
## Example: End-to-End Hardened SSH Access
Scenario: a private web server sits at `10.42.1.20` (no public IP). The bastion is at `bastion.prod.myco.com` (public IP `18.217.x.x`).
Steps:

1. Generate unique keys:

   ```shell
   ssh-keygen -t ed25519 -a 100 -f ~/.ssh/prod_bastion -C "bastion access"
   ssh-keygen -t ed25519 -a 100 -f ~/.ssh/prod_private -C "private access"
   ```

2. Provision:
   - Install the respective public keys to the right EC2 user accounts.
   - Harden SSH on both nodes (`PasswordAuthentication no`, `PermitRootLogin no` in `/etc/ssh/sshd_config`).

3. Security Groups:
   - Bastion inbound port 22: only from the office VPN (e.g., 198.51.100.17/32).
   - Private server inbound port 22: only from the bastion's private subnet (10.42.10.0/24).

4. Configure SSH (`~/.ssh/config`):

   ```
   Host bastion-prod
       HostName bastion.prod.myco.com
       User ec2-user
       IdentityFile ~/.ssh/prod_bastion

   Host web-prod
       HostName 10.42.1.20
       User ec2-user
       IdentityFile ~/.ssh/prod_private
       ProxyJump bastion-prod
   ```

5. Connect:

   ```shell
   ssh web-prod
   ```
Bonus tip: for passwordless yet audited sessions, enforce command logging with a forced command in `authorized_keys`, or use SSM as a second channel.
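A minimal sketch of the forced-command approach. The `authorized_keys` options are standard OpenSSH; the wrapper script path and log location are assumptions, not a fixed convention:

```shell
#!/bin/sh
# Hypothetical /usr/local/bin/log-and-run.sh. Reference it from
# ~/.ssh/authorized_keys by prefixing the key entry (all on one line):
#   command="/usr/local/bin/log-and-run.sh",no-port-forwarding,no-agent-forwarding ssh-ed25519 AAAA... ops@corp
# sshd exposes the client's requested command in SSH_ORIGINAL_COMMAND.
printf '%s %s %s\n' "$(date -u +%FT%TZ)" "$USER" "${SSH_ORIGINAL_COMMAND:-interactive}" \
  >> /var/log/ssh-forced.log
# Run what was requested, or fall back to an interactive shell.
exec sh -c "${SSH_ORIGINAL_COMMAND:-$SHELL}"
```

Because the forced command runs regardless of what the client asks for, every session leaves a log line even when the operator authenticates without a password.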
## Final Notes
SSH to EC2 is never just about one command. It's about designing a system where stolen keys alone are not enough, and where mistakes get caught rather than propagated silently. SSM access is evolving rapidly; consider it for all non-legacy use cases. And don't neglect key removal on role change or exit.
Non-obvious tip: For ephemeral workloads (CI runners, blue/green deploy targets), consider entirely short-lived instance profiles and ephemeral keypairs generated/disposed per pipeline run.
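A per-run keypair can be sketched in a few lines; `CI_JOB_ID` and the disposal step are illustrative (`shred` is GNU coreutils; substitute `rm` where unavailable):

```shell
# Generate a throwaway Ed25519 keypair for one CI run, then destroy it.
KEYDIR="$(mktemp -d)"
KEY="$KEYDIR/ci_ed25519"
ssh-keygen -t ed25519 -a 100 -N '' -f "$KEY" -C "ci-run-${CI_JOB_ID:-local}" >/dev/null
# ...push "$KEY.pub" to the target instance and run the job over SSH...
shred -u "$KEY" "$KEY.pub"   # dispose of both halves when the run ends
rmdir "$KEYDIR"
```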
No approach here is perfect—trade-offs between convenience, auditability, and risk tolerance persist. Accept these, document them, and revisit as both AWS and SSH ecosystems advance.