mysqldump allows you to create logical backups of your databases and restore them.
A logical backup uses SQL statements to recreate the contents of the database -- tables, views, triggers -- and inserts the data using INSERT statements.
For example:
DROP TABLE IF EXISTS `users`;
CREATE TABLE `users` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`email` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`email_verified_at` timestamp NULL DEFAULT NULL,
`password` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
`two_factor_secret` text COLLATE utf8mb4_unicode_ci,
`two_factor_recovery_codes` text COLLATE utf8mb4_unicode_ci,
`remember_token` varchar(100) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`current_team_id` bigint unsigned DEFAULT NULL,
`profile_photo_path` varchar(2048) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`is_admin` tinyint(1) DEFAULT NULL,
`created_at` timestamp NULL DEFAULT NULL,
`updated_at` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `users_email_unique` (`email`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
INSERT INTO `users` (`id`, `name`, `email`, `email_verified_at`, `password`, `two_factor_secret`, `two_factor_recovery_codes`, `remember_token`, `current_team_id`, `profile_photo_path`, `is_admin`, `created_at`, `updated_at`) VALUES
(1, 'Prof. Norwood Moen', 'moen@example.com', '2021-08-28 20:39:31', '$2y$10$92IXUNpkjO0rOQ5byMi.Ye4oKoEa3Ro9llC/.og/at2.uheWG/igi', NULL, NULL, 'Ji0Eqw70n1', NULL, NULL, 1, '2021-08-28 20:39:31', '2021-08-28 20:39:31');
This post covers how to back up MySQL databases efficiently using mysqldump.
Basics of mysqldump
You can back up your database with a single command:
mysqldump your_database > database.sql
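mysqldump can also dump a subset of a database: list table names after the database name, or add flags such as --routines to include stored procedures and functions, which are skipped by default. A sketch with placeholder names:

```shell
# Dump only the users and posts tables (table names are placeholders)
mysqldump your_database users posts > tables.sql

# Include stored procedures and functions, which mysqldump skips by default
mysqldump --routines your_database > database.sql
```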
To reduce the backup file size, compress it with gzip:
mysqldump your_database | gzip > database.sql.gz
gzip works, but it does not take advantage of multi-core systems.
pigz
A better alternative is pigz, which stands for parallel implementation of gzip.
pigz exploits multiple processors and multiple cores when compressing data:
sudo apt-get install -y pigz
mysqldump your_database | pigz > database.sql.gz
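pigz accepts gzip-style options, plus a -p flag to cap the number of threads. Both tweaks below are optional, not requirements:

```shell
# Cap pigz at 4 threads (-p 4) and use maximum compression (-9);
# by default pigz uses all online CPUs and compression level 6, like gzip
mysqldump your_database | pigz -p 4 -9 > database.sql.gz
```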
Data Inconsistency
Consider a database with users and posts tables. If the users table is dumped first, and a new user then signs up and creates a post before the posts table is dumped, the backup ends up inconsistent -- it contains a post referencing a user that does not exist in the backup.
The fix is to run the export within a transaction using the --single-transaction flag:
mysqldump -u your_user -p --single-transaction \
--default-character-set=utf8mb4 \
your_database | pigz > database.sql.gz
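Putting the pieces together, the dump can be wrapped in a small script with a date-stamped filename. A minimal sketch -- the database name and user are placeholders:

```shell
#!/bin/sh
# Minimal backup script sketch -- database name and user are placeholders
DB="your_database"
STAMP=$(date +%Y-%m-%d)
OUTFILE="$DB-$STAMP.sql.gz"   # e.g. your_database-2021-08-28.sql.gz

mysqldump -u your_user -p --single-transaction \
    --default-character-set=utf8mb4 \
    "$DB" | pigz > "$OUTFILE" || echo "backup failed" >&2
```

In practice you would keep the password in an option file such as ~/.my.cnf rather than typing it interactively, and write the file to a dedicated backup directory.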
Restore database
To restore the database:
gunzip < database_file.sql.gz | mysql -u your_user -p your_database
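Since the backup was compressed with pigz, pigz can also decompress it; and if the target database does not exist yet, create it first. A sketch with placeholder names:

```shell
# Create the target database if it does not exist yet
mysql -u your_user -p -e "CREATE DATABASE IF NOT EXISTS your_database"

# -d decompresses, -c writes to stdout (gunzip works here too)
pigz -dc database_file.sql.gz | mysql -u your_user -p your_database
```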
Summary
- pigz over gzip -- use pigz for parallel compression that actually leverages multi-core systems.
- --single-transaction -- always use this flag to ensure data consistency by running the dump inside a transaction.
- Pipe to compression -- pipe mysqldump output directly to pigz to compress on the fly without writing an uncompressed intermediate file.
- Restore with gunzip -- decompress and pipe directly into mysql for a straightforward restore process.