One new feature in recent Percona XtraDB Cluster (PXC) releases is the ability for an existing cluster to auto-bootstrap after an all-node-down event. Suppose you lose power on all nodes simultaneously, or something similar happens to your cluster. Traditionally this meant manually re-bootstrapping the cluster, but not anymore.

How it works

Given the above all-down situation, if all nodes are able to restart and see each other, agree on what the last state was, and confirm that every node has returned, then they will decide it is safe to recover the PRIMARY state as a whole.

This requires:

  • All nodes went down hard (that is, a kill -9, kernel panic, server power failure, or similar event).
  • All nodes from the last PRIMARY component are restarted and are able to see each other again.

Demonstration

Suppose I have a 3-node cluster in a stable state. I then kill all nodes simultaneously (simulating a power failure or similar event):
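A hard kill like this can be simulated with killall on each node. Note the -9, which gives mysqld no chance to shut down cleanly; killing mysqld_safe as well keeps the wrapper from simply restarting mysqld (exact process names may vary by platform):

    # on each of the three nodes: simulate a crash / power loss
    killall -9 mysqld_safe mysqld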

I can see that each node maintained a state file in its datadir called ‘gvwstate.dat’. This contains the last known view of the cluster:
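The file lives in the datadir (assumed here to be /var/lib/mysql) and is human-readable. On each node it looks roughly like the following; the UUIDs are placeholders, and my_uuid is the local node's own identity:

    cat /var/lib/mysql/gvwstate.dat
    my_uuid: c5d5d990-7c27-11e4-a449-000000000001
    #vwbeg
    view_id: 3 0264513a-7c27-11e4-8ad6-000000000002 5
    bootstrap: 0
    member: 0264513a-7c27-11e4-8ad6-000000000002 0
    member: 5e38f8f3-7c27-11e4-a49d-000000000003 0
    member: c5d5d990-7c27-11e4-a449-000000000001 0
    #vwend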

This file will not exist on a node that was shut down cleanly; it is only present if mysqld was terminated uncleanly. The file should exist, and describe the same view, on all the nodes for the auto-recovery to work.
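A quick sanity check is to compare the view portion of the file across nodes. The my_uuid line naturally differs per node, but the view_id and member lines should agree (hostnames and the datadir path here are assumptions for illustration):

    for h in node1 node2 node3; do
      echo "== $h =="
      ssh $h "grep -E '^(view_id|member):' /var/lib/mysql/gvwstate.dat"
    done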

I can now restart all 3 nodes more or less at the same time. Note that none of these nodes are bootstrapping and all of the nodes have the wsrep_cluster_address set to a proper list of the nodes in the cluster:
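For reference, each node's my.cnf carries a normal, full cluster address list rather than an empty gcomm:// bootstrap address, and each node is started the ordinary way (node1-3 are example hostnames):

    # /etc/my.cnf on every node
    wsrep_cluster_address = gcomm://node1,node2,node3

    # then, on each node, a normal (non-bootstrap) start:
    service mysql start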

I can indeed see that they all start successfully and enter the primary state:
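One way to confirm this is to check the wsrep status variables on each node; after a successful auto-recovery, wsrep_cluster_status should report Primary and wsrep_cluster_size should be back to 3:

    mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status'"
    mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'"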

Checking the logs, I can see this indication that the feature is working:
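The line to look for is the promotion to the primary component (the same message Peter quotes in the comments below); the error log path here is an assumption, so adjust it for your setup:

    grep "promote to primary component" /var/log/mysqld.log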

Changing this behavior

This feature is enabled by default, but you can toggle it off with the pc.recovery setting in wsrep_provider_options.
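For example, disabling it looks roughly like this in my.cnf (if you already pass other provider options, append pc.recovery=false to the same semicolon-separated string):

    # /etc/my.cnf, [mysqld] section: turn off automatic PRIMARY recovery
    wsrep_provider_options = "pc.recovery=false"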

This feature helps cover an edge case where manual bootstrapping was necessary in the past to recover properly. It was added in Percona XtraDB Cluster 5.6.19, but was broken due to this bug. It was fixed in PXC 5.6.21.

Comments
Antonio Kang

Hi Jay,

I was testing this feature out on my VMs and I am having trouble starting mysql on all 3 of the nodes consistently.

Sometimes I was able to start mysql on all 3 nodes after using the killall command listed in the tutorial, but other times I was not able to start mysql on the nodes.

Also, I was wondering in what scenarios you recommend using this feature?

Peter Zaitsev

Jay,

I wonder how we find which node was actually discovered to be the latest, became PRIMARY, and (hopefully) gave IST to the others.

I am testing this and I see:

NODE1:
2014-12-05 17:03:25 2241 [Note] WSREP: promote to primary component

NODE2:
2014-12-05 17:03:25 2194 [Note] WSREP: promote to primary component

What I’m doing is shutting boxes off with a 5-second delay, and I want to ensure the last box down is actually picked so we indeed have the latest state.

Morgan Jones

Jay,

You say that the cluster will recover if all nodes are started and they are able to recover the PRIMARY component. What will happen if they cannot recover the PRIMARY component for some reason? Will the nodes be left running, but not replicating? Will the user be able to access the database?

Thanks,

Brian Kruger

Been playing around with this. Adding some additional help for people if they come across this.

If you do lose your whole cluster, then for this to work, all of the nodes listed in gvwstate.dat need to come back online. If a machine doesn’t come back after a power outage, for instance, you can edit gvwstate.dat, remove the dead host’s uuid, and restart.

The big question is: after going through this effort, is it easier to just re-bootstrap at that point?

sanjay

Thanks Jay

I have followed the steps, but when I try to restart it:
service mysql start
ERROR! MySQL (Percona XtraDB Cluster) is not running, but PID file exists

So, I have to manually remove the PID file in order to start it.