Recover Percona XtraDB Cluster 5.7 Node Without SST

The Problem

State Snapshot Transfer (SST) can be a very long and expensive process, depending on the size of your Percona XtraDB Cluster (PXC)/Galera cluster, as well as on network and disk bandwidth. There are situations where it is needed, though, such as when a node was separated long enough that the gcache on the other members no longer holds all the transactions it is missing.

Let’s see how we can avoid SST, yet recover quickly, without even needing a full backup from another node.

Below, I will present a simple scenario where one of the cluster nodes had a broken network connection for long enough that Incremental State Transfer (IST) was no longer possible.

For this solution to work, I am assuming that the cluster has binary logs with GTID mode enabled, and that the logs containing the missing transactions have not been purged yet. It would still be possible without GTID, just slightly more complex.

My example PXC member, node3, gets separated from the cluster due to a network outage. Its last applied transaction status is:
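For illustration, one way to read that position on node3 is via the wsrep status variables and the executed GTID set:

node3 > SHOW GLOBAL STATUS LIKE 'wsrep_last_committed';
node3 > SELECT @@global.gtid_executed;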

However, the other active nodes in the cluster have already rotated their gcache further:
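The lowest seqno still held in the gcache of an active node can be checked, for example, with:

node1 > SHOW GLOBAL STATUS LIKE 'wsrep_local_cached_downto';

If this value is already higher than node3's last committed seqno, IST is no longer possible.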

Hence, after the network is restored, node3 fails to re-join the cluster because IST cannot be served:

DONOR error log:

JOINER error log:

And node3 shuts down its service as a result.

The Solution

To avoid a full backup transfer from the donor, let’s try asynchronous replication here, to let the failed node catch up with the others so that IST becomes possible afterwards.

To achieve that, let’s first modify the configuration file on the separated node and add these options to avoid accidental writes during the operation:
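For illustration, a minimal set of such options (assuming that blocking regular client writes is enough here) could be:

[mysqld]
# block writes from regular clients and from users with SUPER privilege
read_only = ON
super_read_only = ON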

and, to disable PXC mode for the time being, comment out the wsrep provider:
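For example (the provider path below is the default location for PXC 5.7 on many Linux distributions and may differ on your system):

# wsrep_provider=/usr/lib/galera3/libgalera_smm.so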

Now, after a restart, node3 becomes a standalone MySQL node, without Galera replication enabled. So, let’s configure an async replication channel (the repl user was already created on all nodes):
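A sketch of the channel setup, assuming node1 is used as the source, GTID auto-positioning is enabled, and the host name and credentials below are placeholders:

node3 > CHANGE MASTER TO MASTER_HOST='node1', MASTER_USER='repl', MASTER_PASSWORD='replpass', MASTER_AUTO_POSITION=1;
node3 > START SLAVE;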

Then wait for it to catch up with the source node. Once this replica is fully up to date, let’s stop it, remove the async channel configuration, and note its new GTID position:
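For example:

node3 > STOP SLAVE;
node3 > RESET SLAVE ALL;
node3 > SELECT @@global.gtid_executed;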

Now, we have to find the corresponding cluster wsrep sequence number in the source’s binary log, like this:
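One way to do it, with a placeholder binary log file name and GTID, is to locate that transaction with mysqlbinlog and read the Xid recorded at its commit; in PXC, the Xid written to the binary log corresponds to the cluster’s wsrep seqno:

# find the commit (Xid) of the transaction with the GTID noted above
$ mysqlbinlog mysql-bin.000005 | grep -A 200 "GTID_NEXT= 'f7b9b4b1-xxxx-xxxx-xxxx-xxxxxxxxxxxx:1000'" | grep -m 1 'Xid ='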

With this position, the grastate.dat file on the failed node has to be updated, as follows:
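With placeholder values, the updated file could look like this; the uuid is the cluster state UUID (wsrep_cluster_state_uuid on any active node) and the seqno is the value found in the previous step:

# GALERA saved state
version: 2.1
uuid:    f7b9b4b1-xxxx-xxxx-xxxx-xxxxxxxxxxxx
seqno:   5678
safe_to_bootstrap: 0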

The previous configuration file modifications must now be reverted, and the service restarted again.
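That is, remove the read-only settings, un-comment the wsrep_provider line, and restart MySQL, for example (the service name may differ per distribution):

$ sudo systemctl restart mysql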

This time, IST was finally possible:

And node3 re-joins the cluster properly:
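This can be confirmed on node3, for example by checking that the node reports itself as Synced and sees the full cluster:

node3 > SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';
node3 > SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';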

Summary

With the help of traditional asynchronous replication, we were able to bring the failed node back into the cluster faster and without all the overhead of the full backup that SST would perform.

The only requirement for this method to work is an enabled binary log with a long enough retention period.
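A sketch of the relevant settings on the cluster nodes (the retention value below is only an example):

[mysqld]
# binary logging with GTIDs, also for transactions replicated via Galera
log_bin = mysql-bin
log_slave_updates = ON
gtid_mode = ON
enforce_gtid_consistency = ON
# keep binary logs long enough to cover expected node separation
expire_logs_days = 7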

I have tested this on version:

Unfortunately, a similar solution does not work with Percona XtraDB Cluster 8.0.x, due to the changed way wsrep positions are kept in the storage engine; hence, the trick of updating grastate.dat does not work as expected there.

I would also like to point out that if a node is expected to stay separated from the cluster for too long, there is a way to preserve a longer Galera cache history for it. In that case, the solution presented here may not even be needed; check the related article: Want IST Not SST for Node Rejoins? We Have a Solution!

Comments
La Cancellera Yoann

Great post, thanks! If GTID is not enabled, how would you do it? I guess parsing binary logs, grep Xid= to get the next correct position?

peterzaitsev

Does it only work for 5.7, or would a similar procedure work for Percona XtraDB Cluster 8.0 too?