How to Back Up a MySQL Database on Linux

#Database #Linux #Backup #MySQL #Cron

Reliable MySQL Backups on Linux: Steps, Scripts, Trade-offs

Unexpected disk failures, deployment mistakes, or a botched schema migration: if you manage any production MySQL workload, these are not hypothetical risks. Reliable backups are non-negotiable, yet manual processes quickly fall short. This guide cuts through the clutter: how to back up MySQL on Linux using mysqldump, compress the output, automate the job with cron, and what to watch out for in real environments.


Why Back Up? A Brief Reminder

Critical events aren’t always preceded by warnings. Disk corruption, accidental DROP TABLE, or a misconfigured migration script can cause significant data loss. Backups aren’t just about compliance; they’re about minimizing RTO (Recovery Time Objective) and keeping downtime measured in minutes, not days.


Prerequisites

  • Linux host (Debian 10/11, Ubuntu 20.04+, RHEL/CentOS 7+, or similar)
  • MySQL Server (5.7/8.0; works for MariaDB with caveats)
  • Shell access with appropriate database privileges (SELECT, LOCK TABLES, SHOW VIEW, and TRIGGER for typical logical dumps; RELOAD if a global read lock is needed; a sketch for creating a dedicated backup user follows this list)
  • MySQL credentials (never hardcode passwords in scripts; see below)
  • Sufficient disk space (rule of thumb: allow for at least 2x the largest DB size per backup rotation window)
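
Where possible, dump with a dedicated low-privilege account instead of root. A minimal sketch, assuming a local user named backup and a single database mydb (names and password are placeholders; adjust to your environment):

mysql -u root -p <<'SQL'
-- hypothetical dedicated backup account
CREATE USER 'backup'@'localhost' IDENTIFIED BY 'STRONG_PASSWORD';
GRANT SELECT, LOCK TABLES, SHOW VIEW, TRIGGER, EVENT ON mydb.* TO 'backup'@'localhost';
SQL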

Dumping a Database with mysqldump

mysqldump remains the standard logical backup tool for most single-node workloads and smaller datasets (<50GB). Larger or heavily loaded databases may warrant physical snapshot backups (e.g., Percona XtraBackup), but mysqldump is often the simpler choice for schema migrations or day-to-day business continuity.

Typical usage:

mysqldump -u dbuser -p --single-transaction --routines --triggers mydb > mydb_backup_$(date +%F).sql

Key flags:

  • --single-transaction for transactional consistency (avoids locking full tables on InnoDB)
  • --routines --triggers so stored procedures and triggers aren’t omitted (--triggers is on by default; --routines is not)
  • No --events? Add it if using event scheduler objects.

Gotcha: Active write-heavy workloads may still result in non-point-in-time backups or potential inconsistency, especially with MyISAM tables (use --lock-all-tables if you must include those, but expect stalls).
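
If the schema mixes in MyISAM tables, a hedged variant that takes a global read lock for the duration of the dump (expect writes to stall):

mysqldump -u dbuser -p --lock-all-tables --routines --triggers mydb > mydb_backup_$(date +%F).sql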


Reduce Footprint: Compression

Text dumps grow quickly; 10GB+ is not rare even for moderate apps. Compressing reduces storage and speeds up transfer, but complicates restores slightly (the dump must be decompressed, or streamed through gunzip, at import time).

mysqldump -u dbuser -p mydb \
  | gzip > /var/backups/mysql/mydb_$(date +%F).sql.gz

A sample directory after a week of rotation:

/var/backups/mysql/
  mydb_2024-06-08.sql.gz
  mydb_2024-06-09.sql.gz
  ...
  mydb_2024-06-15.sql.gz
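
Compressed dumps can be silently truncated by a full disk or an interrupted pipe, so verify an archive before trusting it:

# exit status 0 means the archive is intact
gzip -t /var/backups/mysql/mydb_2024-06-15.sql.gz

# eyeball the header to confirm it is genuine mysqldump output
zcat /var/backups/mysql/mydb_2024-06-15.sql.gz | head -n 5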

Automation: Cron and a Realistic Shell Script

Manual backups are a guarantee you’ll forget them. Automate with cron. Avoid hardcoding passwords: use a .my.cnf client config owned by root (permissions 0600).

/root/.my.cnf

[client]
user=dbuser
password=STRONG_PASSWORD   # Not perfect, but safer than env variables or script literals

/usr/local/bin/mysql_backup.sh

#!/bin/bash
# Nightly dump-and-rotate for a single MySQL database.
set -euo pipefail

BACKUP_DIR="/var/backups/mysql"
DB_NAME="mydb"
DATE=$(date +%F)

mkdir -p "$BACKUP_DIR"

# --defaults-file must be the first option; use an absolute path, since the
# shell does not expand ~ inside --defaults-file=... and cron's environment is minimal.
mysqldump --defaults-file=/root/.my.cnf \
  --single-transaction --routines --triggers "$DB_NAME" \
  | gzip > "$BACKUP_DIR/${DB_NAME}_backup_${DATE}.sql.gz"

# Prune backups older than 7 days
find "$BACKUP_DIR" -name "*.sql.gz" -type f -mtime +7 -delete

chmod +x /usr/local/bin/mysql_backup.sh

Cron entry (in root's crontab, or that of a user with access to the credentials file and backup directory):

0 2 * * * /usr/local/bin/mysql_backup.sh

The result: a 02:00 nightly backup, older files rotated out, and a week’s snapshot history on disk.
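
Cron discards output unless you redirect it, so failures can go unnoticed. A variant that appends stdout and stderr to a log file (the log path is an assumption; adapt to your logging setup):

0 2 * * * /usr/local/bin/mysql_backup.sh >> /var/log/mysql_backup.log 2>&1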


Restoring a Compressed Dump

To restore:

gunzip < mydb_backup_2024-06-12.sql.gz | mysql -u dbuser -p mydb

or, for an uncompressed dump:

mysql -u dbuser -p mydb < mydb_backup_2024-06-12.sql

Note: The target database must already exist (create it first, or take the original dump with --databases mydb so the restore recreates it).
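
A minimal end-to-end restore sketch for a dump taken without --databases, assuming the database was dropped entirely (the character set here is an assumption; match whatever the original used):

mysql -u dbuser -p -e "CREATE DATABASE mydb CHARACTER SET utf8mb4"
gunzip < mydb_backup_2024-06-12.sql.gz | mysql -u dbuser -p mydb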


Side Notes and Caveats

  • Credentials: .my.cnf files are safer than passing passwords on the command line, but they’re still a risk. Apply access controls and audit them. Advanced shops integrate with secrets managers (Vault, SOPS); not covered here.
  • Binary logs: For true point-in-time recovery, archive MySQL binary logs alongside logical dumps. Not included in this script, but important for financial or high-volume transactional workloads; see the sketch after this list.
  • Large databases: For databases over 50GB, or downtime-sensitive production, look into Percona XtraBackup. Logical dumps are slow and require rebuilding indexes on restore.
  • Multiple databases: Add --databases db1 db2 to mysqldump if required.
  • Character set: Mismatches can cause invisible corruption. Always note character set in your documentation.
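
A rough sketch of the binary-log half of point-in-time recovery, assuming log-bin is enabled and binlogs live under /var/lib/mysql (file names and timestamps are illustrative; they vary by configuration):

# archive binlogs alongside the nightly dumps
rsync -a /var/lib/mysql/binlog.* /var/backups/mysql/binlogs/

# after restoring the last full dump, replay changes up to just before the incident
mysqlbinlog --stop-datetime="2024-06-12 09:59:00" \
  /var/backups/mysql/binlogs/binlog.000042 | mysql -u root -p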

Example Output and Error Handling

A common error when the dump touches views or routines whose definer account no longer exists:

mysqldump: Got error: 1449: "The user specified as a definer ('xyz'@'%') does not exist" when using LOCK TABLES

Remediate by correcting definers or granting required privileges.
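
To locate the offending objects, query the DEFINER columns in information_schema; a sketch for views (routines and triggers have analogous tables):

mysql -u root -p -e "SELECT table_name, definer FROM information_schema.views WHERE table_schema = 'mydb';"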


Final Thoughts

MySQL backup strategies aren’t glamorous, but a reliable, automated process prevents sleepless nights. Even robust cloud-managed services fail: local dumps and retention policies still matter.

For teams with strict compliance, add offsite copy (e.g., S3 sync), encrypt at rest, and regularly run test restores—no backup is real until you’ve restored from it.
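
A hedged offsite-copy step using the AWS CLI (the bucket name is an assumption; run it after the nightly dump or append it to the backup script):

aws s3 sync /var/backups/mysql s3://example-backups/mysql/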

If consistency across replicas, minimal write downtime, or point-in-time recovery are critical, consider alternatives that exploit filesystem snapshots or streaming backup tools. For many small- to medium-sized workloads, mysqldump with automation is a pragmatic, proven baseline.