The importance of taking periodic backups is a given in database life. There are different flavors: binary backups (Percona XtraBackup), binlog backups, disk snapshots (LVM, EBS, etc.), and the classic ones: logical backups, the kind you can take with tools like mysqldump, mydumper, or mysqlpump. Each of them has a specific purpose, MTTR, retention policy, etc.

Another given is that taking backups can become a very slow task as soon as your datadir grows: more data stored means more data to read and back up. But it is not only the data that grows; the number of MySQL instances in your environment (usually) increases as well. So, why not take advantage of those extra MySQL instances to take logical backups and make this operation faster?

Distributed Backups (or Using all the Slaves Available)

The idea is simple: instead of taking the whole backup from a single server, use all the servers available. This Proof of Concept focuses only on using the replicas in a Master/Slave(s) topology. One could use the Master too, but in this case I’ve decided to leave it alone to avoid adding backup overhead to it.

Tests!

On a Master/3-Slaves topology:

Graph from the Orchestrator GUI

With a small datadir of around 64GB of data (not counting index size) and 300 tables (schema “sb”):

Using the 3 replicas, the distributed logical backup with mysqldump took 6 minutes, 13 seconds.

The same backup type on a single replica took 11 minutes, 59 seconds.

In other words:

The distributed backup took 48% less time!

And this is a fairly small dataset. Definitely worth a shot. So, how does it work?

Concepts

The logic is simple and can be divided into two stages.

Stage 1: Preparation

  • Find out how many replicas are available
  • Find out the number of tables in the schema you want to back up
  • Divide the tables among all the available replicas. The resulting chunks are the tables each replica will back up (a rough sketch of this stage follows below).
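
As an illustration, here is a minimal, hypothetical shell sketch of this preparation stage. It is not taken from dist_backup.sh; it assumes credentials in ~/.my.cnf, report_host set on every replica, and a target schema named "sb":

    #!/bin/bash
    # Hypothetical sketch of Stage 1 (preparation), not the actual dist_backup.sh code
    SCHEMA="sb"

    # 1) Discover the replicas (requires report_host to be set on each of them)
    REPLICAS=( $(mysql -NBe "SHOW SLAVE HOSTS" | awk '{print $2}') )
    NUM_REPLICAS=${#REPLICAS[@]}

    # 2) List the tables of the schema to back up
    TABLES=( $(mysql -NBe "SELECT table_name FROM information_schema.tables \
      WHERE table_schema='${SCHEMA}' AND table_type='BASE TABLE'") )

    # 3) Divide the tables among the replicas (round-robin); each chunk is what
    #    one replica will dump
    for i in "${!TABLES[@]}"; do
      idx=$(( i % NUM_REPLICAS ))
      CHUNK[$idx]="${CHUNK[$idx]} ${TABLES[$i]}"
    done

    for i in "${!REPLICAS[@]}"; do
      echo "Replica ${REPLICAS[$i]} will dump:${CHUNK[$i]}"
    done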

Stage 2: Guarantee Consistency

  • Prevent the Master from executing operations that change the binlog position. Typically this is done with FLUSH TABLES WITH READ LOCK, but this PoC uses the LOCK BINLOG FOR BACKUP feature available in Percona Server for MySQL, which is far less disruptive.
  • Find the most up-to-date replica
  • Make all the other replicas catch up to the most up-to-date one with START SLAVE UNTIL
  • Fire up one mysqldump per replica with its corresponding chunk of tables, using --lock-for-backup (another Percona Server feature). A simplified sketch of this stage follows below.
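
Again, purely as a hypothetical sketch (the real logic, including error handling, lives in dist_backup.sh), the consistency stage could look roughly like this, reusing the REPLICAS and CHUNK arrays from the previous sketch and assuming Percona Server on every node:

    # 1) Prevent binlog position changes on the Master (much lighter than FTWRL)
    mysql -e "LOCK BINLOG FOR BACKUP"

    # 2) Find the most up-to-date replica by comparing executed master coordinates
    BEST_FILE=""; BEST_POS=0; BEST_HOST=""
    for HOST in "${REPLICAS[@]}"; do
      read FILE POS < <(mysql -h "$HOST" -e "SHOW SLAVE STATUS\G" | \
        awk '/Relay_Master_Log_File:/{f=$2} /Exec_Master_Log_Pos:/{p=$2} END{print f, p}')
      if [[ "$FILE" > "$BEST_FILE" || ( "$FILE" == "$BEST_FILE" && "$POS" -gt "$BEST_POS" ) ]]; then
        BEST_FILE=$FILE; BEST_POS=$POS; BEST_HOST=$HOST
      fi
    done

    # 3) Make every other replica catch up to those coordinates and stop there
    for HOST in "${REPLICAS[@]}"; do
      [[ "$HOST" == "$BEST_HOST" ]] && continue
      mysql -h "$HOST" -e "STOP SLAVE SQL_THREAD; START SLAVE SQL_THREAD \
        UNTIL MASTER_LOG_FILE='${BEST_FILE}', MASTER_LOG_POS=${BEST_POS}"
    done

    # 4) One mysqldump per replica, in parallel, each with its chunk of tables
    for i in "${!REPLICAS[@]}"; do
      mysqldump -h "${REPLICAS[$i]}" --lock-for-backup \
        "$SCHEMA" ${CHUNK[$i]} > "backup_${REPLICAS[$i]}.sql" &
    done
    wait

    # 5) Release the binlog lock on the Master once all dumps have finished
    mysql -e "UNLOCK BINLOG"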

The full script can be found here:

https://github.com/nethalo/parallel-mysql-backup/blob/master/dist_backup.sh

It is worth noting that the script writes its own log describing every step.

Requirements

Some basic requirements:

  • Since the tool uses the command SHOW SLAVE HOSTS, it is mandatory to set the report_host variable; if you are using Orchestrator, you most likely have it set already.
  • The host set in the report_host variable should be one that is actually reachable, for example an IP address or a hostname that can be resolved (via DNS or an entry in the /etc/hosts file).
  • No replication filters on any of the replicas involved. This is to guarantee data consistency.
  • The script currently should be run locally on the Master server.
  • It only works with Percona Server, due to the usage of Backup Locks.
  • MySQL user credentials are expected to be available in the .my.cnf file in the home directory. A minimal example of these settings is shown below.
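
As a minimal illustration of the report_host and credentials requirements above (hostnames, user, and password are placeholders):

    # my.cnf on each replica: announce an address the Master can resolve
    [mysqld]
    report_host = replica1.example.com

    # ~/.my.cnf on the Master (where the script runs): credentials picked up
    # by the mysql/mysqldump calls
    [client]
    user     = backup_user
    password = S3cretPassword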

We Would Like Your Feedback!

Interesting or not?

  • Is this something that would come in handy for your backup operations?
  • Is there something else you would like to see from the script?
  • Is there something missing?

Since this is a Proof of Concept, it lacks features that will eventually arrive (if this becomes a more mature tool), like:

  • Adding weights to the slaves so the distribution can be adjusted
  • An option to use the Master as one of the backup servers, if desired
  • Using FLUSH TABLES WITH READ LOCK (FTWRL) when the server is not Percona Server
  • Using mydumper/mysqlpump with multiple threads instead of mysqldump
  • Etc.

Let us know in the comments section!

6 Comments

Andy Moore

Outside-the-box thinking from our friends at Percona.

Brad Mickel

I really like this idea, but why do you wait until the backup is complete to unlock the binlog on the master? Couldn’t that be done after issuing the “START SLAVE UNTIL” commands? This way it allows the master to commit again, but since the slaves are stopped their backups would still be consistent.

Vaibhav

Simple and great idea.

Just to add a few things as suggestions:

– events and routines: dump events, procedures, and functions as well

Also, it would be great to have the hostname as part of each backup.

And last, it would be great if there were a check for replication filters before the backup executes.

Eric

Hi,
Interesting as an approach to split the load and operational impact of a “logical” backup over multiple instances.

My point of view:
Interesting
– when you don’t have advanced storage capabilities and want to limit the operational impact on your production
– because it uses standard tools and architecture

But:
– doesn’t it make things more complex?
– what about the implementation?
– if you need to restore your master (user issue -> deleted data -> replicated to the other slaves), you will need to restore your master and rebuild your replicas (DR more complex, RTO impacted)
– what is the cost of running multiple hosts and the additional storage for (in this scenario) 3 replicas?
– as you mention, there are disk/storage solutions; modern ones certainly have advanced features that can provide another magnitude of service in this context:
  – snapshot: for TBs, it takes seconds (only the delta)
  – clone: for TBs, it takes seconds to have a clone of the running (real-time) master MySQL instance
  – time efficiency: very fast
  – space efficiency: reads access the original blocks of the Master databases (shared resource). With a database of e.g. 1 TB you could have a clone of only some MB/GB. You consume the same storage resources (CPU, disk, …), and to limit this you could take the clone from an older snapshot/backup.

Buchan

What you are really doing here is comparing a single serialized dump of all tables to a parallel (n=3) dump of tables.

Could you test with mydumper ( https://github.com/maxbube/mydumper ), in order to differentiate between scaling via multiple connections dumping vs multiple servers?

Also, in my experience, parallelism in the dump part is less than half the problem, as restore from mysqldump typically scales much worse than creating the mysqldump.

Finally, the other problem that can occur is that, even though you have 100 tables, inevitably there will be one (or a few) table(s) that make up a large percentage of the entire database size (e.g. an audit trail table or similar). mydumper seems to have features to help in this case, e.g. by chunking the tables into parts.