MySQL Active-Active Replication Dilemma

Multi-writer replication had been a challenge in the MySQL ecosystem for years before truly dedicated solutions were introduced – first Galera (and thus Percona XtraDB Cluster (PXC)) replication (around 2011), and then Group Replication (first GA in 2016).

Now, with both multi-writer technologies available, do we still need traditional asynchronous replication set up in an active-active topology? Apparently yes, there are still valid use cases. And you may need it not only when Galera/PXC or GR are for some reason not suitable, but also alongside them. Of course, the most typical case is to have a second cluster in a different geographic location, as Disaster Recovery. If you still wonder why you would need it, just recall the news from a few weeks ago about the OVH incident, showing how a whole data center can disappear.

So, a DR site needs to replicate online from the primary cluster and be able to take over the workload very fast if needed. But it is also expected to be able to switch back effortlessly, hence very often the async replication channels are set up in both directions.

A very good writeup about this can be found here: How Not to do MySQL High Availability: Geographic Node Distribution with Galera-Based Replication Misuse.

Now, after seeing active-active setups failing over and over for years, I thought there are still too few warnings out there about the risks, so I decided to add one more little stone to the stack.

Before I continue, I have to mention this great webinar made last year by my colleague Sveta. You should definitely watch it if you are interested in the subject: How Safe is Asynchronous Master-Master Setup in MySQL?.


Failure Test

So, let me demonstrate a simple test case, which may be an eye-opener to some.

First, let’s use the great dbdeployer tool to launch two MySQL instances with active-active replication in just one command:
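As a sketch (assuming dbdeployer is installed and a MySQL 8.0 binary tarball has been unpacked for it; the exact version string below is just an example), the deployment could look like:

```shell
# Deploy two sandbox instances replicating to each other:
# the "all-masters" topology with two nodes gives an active-active pair.
# 8.0.23 is an example version – pick one listed by `dbdeployer versions`.
dbdeployer deploy replication --topology=all-masters --nodes=2 8.0.23
```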

Now, create a very simple table with one example data row:
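For example (the schema, table, and column names here are illustrative, not necessarily those from the original test), on node1:

```sql
-- Executed on node1; replicates to node2.
CREATE DATABASE IF NOT EXISTS test;
CREATE TABLE test.t1 (id INT PRIMARY KEY, a VARCHAR(20));
INSERT INTO test.t1 VALUES (1, 'initial');
```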

So, at this point both nodes have the same data:
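Continuing the illustrative table above, a quick check on both nodes should return the same row:

```sql
-- Run on node1 and on node2; both should return the identical row.
SELECT * FROM test.t1;
```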

In the next step, let’s simulate some little replication lag by introducing a delay of one second, as it will allow reproducing the problem at will:
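One way to do this (using the pre-8.0.22 syntax; on newer releases the equivalents are STOP REPLICA SQL_THREAD, CHANGE REPLICATION SOURCE TO SOURCE_DELAY = 1, and START REPLICA SQL_THREAD) is to run on each node:

```sql
-- Delay applying relayed transactions by one second.
-- MASTER_DELAY can be changed with only the SQL thread stopped.
STOP SLAVE SQL_THREAD;
CHANGE MASTER TO MASTER_DELAY = 1;
START SLAVE SQL_THREAD;
```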

OK, so what if we send an update, very close in time, to both nodes, where the same row gets a different value:
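Continuing the example, within the one-second delay window:

```sql
-- On node1:
UPDATE test.t1 SET a = 'value-node1' WHERE id = 1;

-- Nearly simultaneously, on node2:
UPDATE test.t1 SET a = 'value-node2' WHERE id = 1;
```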

As a result, both nodes have different column values!
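Each node applied its local update first and then, a second later, the delayed update from the other node overwrote it, so with the illustrative values above the nodes end up with swapped data:

```sql
-- On node1:
SELECT a FROM test.t1 WHERE id = 1;  -- 'value-node2'

-- On node2:
SELECT a FROM test.t1 WHERE id = 1;  -- 'value-node1'
```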

Is Replication Broken?

You may think replication is now broken and that some error will alert you to the situation? Not at all!

Or, did you hope enabling the GTID feature would prevent inconsistency from happening? Well, see the Executed_Gtid_Set on both nodes – it is identical, yet the nodes have different data.
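Continuing the illustrative sandbox, the check could look like this:

```sql
-- On each node – the GTID sets match even though the data differs:
SHOW SLAVE STATUS\G                   -- compare Executed_Gtid_Set
SELECT @@GLOBAL.gtid_executed;        -- identical on both nodes
SELECT a FROM test.t1 WHERE id = 1;   -- yet this differs between nodes
```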

What happens next? Well, it depends. Replication may eventually fail with an error someday if the same row is modified again, but it is also possible that the inconsistency will spread further without you even noticing it!

Lesson learned?

I hope this simple example emphasized the need for extra care when dealing with multi-primary replication topologies. The remedy, though, is usually quite simple:

1 – Make sure to allow only one node at a time for writes (super_read_only is here to help you).

2 – Check the data consistency regularly with appropriate tools (MySQL replication primer with pt-table-checksum and pt-table-sync).
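As a sketch of remedy 1 (a single statement; no assumptions beyond a live server):

```sql
-- On every node that must NOT receive writes:
SET GLOBal super_read_only = ON;  -- also implies read_only = ON
```

And for remedy 2, an illustrative pt-table-checksum invocation against one of the sandbox nodes (the port is an assumption; u=msandbox/p=msandbox are dbdeployer's default sandbox credentials):

```shell
# Checksums tables on the node written to and compares them on its replica.
pt-table-checksum --databases=test h=127.0.0.1,P=23223,u=msandbox,p=msandbox
```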

Percona Distribution for MySQL is the most complete, stable, scalable, and secure open-source MySQL solution available, delivering enterprise-grade database environments for your most critical business applications… and it’s free to use!

Download Percona Distribution for MySQL Today

7 Comments
Eero Teerikorpi

Przemyslaw, you may be a bit too young to remember that the first two synchronous MySQL multi-primary solutions were m/cluster (2004, a predecessor of Galera) and uni/cluster (2007) by Continuent. We abandoned those solutions due to the limitations of synchronous replication for geo-distributed needs, both active/passive and active/active (multi-primary). The best available (complete, proven, fully-tested) MySQL clustering solution for these geo-distributed needs is Tungsten Cluster (https://www.continuent.com/products/tungsten-clustering/topologies). Please take a look!

Przemysław, I understand and appreciate your position. Tungsten solutions are well adopted by some of the largest geo-distributed web properties (for example, powering Riot Games’ League of Legends, geo-distributed across five continents). Also, it is honorable to push open-source-only solutions, but our position is to enable open source databases (MySQL, MariaDB, and Percona Server) to serve business-critical apps that are otherwise not really feasible with open source tools alone. We have the same cause, after all: helping the overall adoption of MySQL. You offer this via paid services, we offer it with paid software.

Matthew Lang

You’ve correctly stated that asynchronous replication is necessary for any deployment at geo-scale or simply replication over a WAN. Marco has presented the physics of this several times in Percona Live! presentations. As for the remedies (Lesson learned), you CAN have active/active clusters at geo-scale. While using a tool to check data consistency is helpful, it’s not a remedy when using Active/Active deployments because it’s too late to identify data drift and decide which version of the data is correct. Data drift needs to be prevented by having intelligence built into the application about the active/active topology, such as having site affinity.

Also mixing of technologies (one tool for local clustering, another to connect sites via asynchronous replication, and yet another for MySQL proxying) adds complexity to the overall deployment without solving orchestration issues like site failover and failback. I’ve done a comparison between this kind of deployment and Tungsten Clustering (a complete geo-scale clustering solution for MySQL) here:

https://www.continuent.com/resources/blog/galera-high-noon-tungsten-clustering-mysql-mariadb-perconaxtradb

Eric Rasmussen

Another great danger of active-active replication is schema changes, which can either break replication or, worse, store data into the WRONG COLUMNS. See https://bugs.mysql.com/bug.php?id=88595. To the best of my knowledge, this affects all versions of MySQL using row-based replication, and would require a change to the protocol to fix it.

Maciek Brzeziński

I was recently involved in a project where there was a need for an active-active setup, and eventually we used binlog-based replication. With a correct setup of auto-increments on both machines, the setup works reliably under heavy load (we performance-tested the solution for a write load exceeding several times the load we observed on production before introducing the active-active architecture).

It’s by no means an “internet-scale” architecture, but in our situation it reliably does what it’s supposed to and (according to our tests) will perform correctly for years to come (in our project, however, there is no chance that the load will increase 1000 times).