ProxySQL in its versions up to 1.x did not natively support Percona XtraDB Cluster (PXC). Instead, it relied on the flexibility offered by the scheduler. This approach allowed users to implement their own preferred way to manage the ProxySQL behaviour in relation to the Galera events.

From version 2.0 we can use native ProxySQL support for PXC. The mechanism to activate native support is very similar to the one already in place for group replication.

In brief, it is based on the table [runtime_]mysql_galera_hostgroups, and the information needed is mostly the same:

  • writer_hostgroup: the hostgroup ID that refers to the WRITER
  • backup_writer_hostgroup: the hostgroup ID referring to the hostgroup that will contain the candidate servers
  • reader_hostgroup: the reader hostgroup ID, containing the list of servers that need to be taken into consideration
  • offline_hostgroup: the hostgroup ID that will eventually contain the writer that is put OFFLINE
  • active: True[1]/False[0], whether this configuration needs to be used or not
  • max_writers: the MAX number of writers you want to have at the same time. In a sane setup this should always be 1, but if you want to have multiple writers you can define it up to the number of nodes.
  • writer_is_also_reader: if true [1] the writer will NOT be removed from the reader HG
  • max_transactions_behind: the value of wsrep_local_recv_queue after which the node will be set OFFLINE. This must be carefully set, observing the node behaviour.
  • comment: I suggest putting some meaningful notes to identify what is what.

Given the above, let us see what we need to do in order to have a working Galera native solution.
I will have three servers:
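  • node 1: 192.168.1.205
  • node 2: 192.168.1.206 (this address is an assumption used for the examples below)
  • node 3: 192.168.1.231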

As a set of hostgroups, I will have:
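  • writer_hostgroup: 100
  • backup_writer_hostgroup: 102
  • reader_hostgroup: 101
  • offline_hostgroup: 9101

(these hostgroup IDs are the working assumption for the examples below; any set of distinct IDs will do)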

To set it up:

Servers first:
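A minimal sketch of the server definitions, using the addresses and the reader hostgroup ID assumed above (only the reader hostgroup is filled, with equal weights, for the reasons explained further down):

INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight) VALUES (101,'192.168.1.205',3306,100); -- node 1
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight) VALUES (101,'192.168.1.206',3306,100); -- node 2 (address assumed)
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight) VALUES (101,'192.168.1.231',3306,100); -- node 3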

Then the galera settings:
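Something along these lines, with the hostgroup IDs assumed above (max_transactions_behind=16 and the comment are just placeholder values; note active=0, which matters for the observation further down):

INSERT INTO mysql_galera_hostgroups
  (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup, offline_hostgroup,
   active, max_writers, writer_is_also_reader, max_transactions_behind, comment)
VALUES (100, 102, 101, 9101, 0, 1, 1, 16, 'galera cluster test');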

As usual, if we want to have R/W split, we need to define the rules for it:
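A typical pair of rules, assuming the hostgroup IDs above (adjust the patterns to your workload):

INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (4000, 1, '^SELECT.*FOR UPDATE', 100, 1);
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (4001, 1, '^SELECT.*', 101, 1);

Anything not matching the rules will be sent to the default_hostgroup of the application user, which in this sketch should point to the writer hostgroup (100).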

Then another important variable: the server version. Please do yourself a good service and NEVER use the default.
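For instance, assuming the backends run PXC 5.7 (use the real version string of your nodes):

UPDATE global_variables SET variable_value='5.7.25' WHERE variable_name='mysql-server_version';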

Finally activate the whole thing:
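The usual load/save sequence (the Galera hostgroup definitions are loaded together with the servers):

LOAD MYSQL SERVERS TO RUNTIME;
LOAD MYSQL QUERY RULES TO RUNTIME;
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
SAVE MYSQL QUERY RULES TO DISK;
SAVE MYSQL VARIABLES TO DISK;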

One thing to note before we go ahead. In the list of servers I had:

  1. Filled only the READER HG
  2. Used the same weight

This is because of the election mechanism ProxySQL will use to identify the writer, and the (many) problems that may be attached to it.

For now let us go ahead and see what happens when I load this information to runtime.

Before running the above commands:
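One way to compare the two states (not necessarily the exact query used for the original output) is to look at the runtime table:

SELECT hostgroup_id, hostname, port, status, weight FROM runtime_mysql_servers ORDER BY hostgroup_id, hostname;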

After:

As we can see, ProxySQL has taken the nodes from my READER group and distributed them, adding one node to the writer hostgroup and two to the backup_writer hostgroup.

But – there is a but – wasn’t my rule set with Active=0? Indeed it was, and I assume this is a bug (Issue #1902).

The other thing we should note is that ProxySQL elected node 3 (192.168.1.231) as the writer.
As I said before, what should we do IF we want to have a specific node as the preferred writer?

We need to modify its weight. So, say we want to have node 1 (192.168.1.205) as the writer, we will need something like this:
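A sketch, assuming the servers are still defined only in the reader hostgroup and that the configured weight drives the writer election (10000 is arbitrary, just clearly higher than the others):

UPDATE mysql_servers SET weight=10000 WHERE hostname='192.168.1.205';
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;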

Doing that will give us:

If you noticed, given we set the WEIGHT of node 1 higher, this node will also become the most utilized for reads.
We probably do not want that, so let us modify the reader weight:
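One way to do it, assuming we first persist the runtime distribution so that node 1 has its own entry in the reader hostgroup (101) that can be tuned independently:

SAVE MYSQL SERVERS FROM RUNTIME; -- copy the runtime layout into the configuration table
UPDATE mysql_servers SET weight=100 WHERE hostname='192.168.1.205' AND hostgroup_id=101;
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;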

At this point, if we trigger the failover with set global wsrep_reject_queries=all; on node 1, ProxySQL will take action and will elect another node as writer:

Node 3 (192.168.1.231) is the new writer and node 1 is in the special group for OFFLINE.
Let’s see now what will happen if we put node 1 back.

Ooops! The READER has come back with the HIGHEST weight and, as such, it will once more be the most used node. To fix it, we need to re-run the update as before.

But is there a way to avoid this? In short, the answer is NO!
This, in my opinion, is BAD and is worth a feature request, because it can really bring a node to its knees.

Now, this is not the only problem. There is another point that is probably worth discussing: the fact that ProxySQL is currently doing FAILOVER/FAILBACK.

Failover is obviously something we want to have, but failback is another discussion. The point is that once the failover is complete and the cluster has redistributed the incoming requests, doing a failback is an impacting operation that can be a disruptive one too.

If all nodes are treated as equal, there is no real way to prevent it, while if YOU set a node to be the main writer, something can be done. Let us see what and how.
Say we have:

Let us put the node down:
set global wsrep_reject_queries=all;

And check:
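Again against the runtime table, for example:

SELECT hostgroup_id, hostname, status FROM runtime_mysql_servers ORDER BY hostgroup_id, hostname;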

We can now manipulate the weight in the special OFFLINE group and see what happens:
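A sketch of the idea, assuming the runtime state is saved back first so the entry now sitting in the offline hostgroup (9101 in this example) is visible in the configuration table:

SAVE MYSQL SERVERS FROM RUNTIME;
UPDATE mysql_servers SET weight=100 WHERE hostname='192.168.1.205' AND hostgroup_id=9101;
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;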

Then I put the node up again:
set global wsrep_reject_queries=none;

That’s it, the node is back but with no service interruption.

At this point we can decide whether to make this node a reader like the others, or wait and plan a proper time of day when we can put it back as the writer, while in the meantime it takes a bit of load to warm up its buffer pool.

The other point – and important information – is: what is ProxySQL currently checking on Galera? From reading the code, ProxySQL will trap the following:

  • read_only
  • wsrep_local_recv_queue
  • wsrep_desync
  • wsrep_reject_queries
  • wsrep_sst_donor_rejects_queries
  • primary_partition

Plus the standard sanity checks on the node.

Finally, to monitor the whole situation, we can use this:
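The monitor schema keeps a dedicated log table for the Galera checks, for example:

SELECT * FROM monitor.mysql_server_galera_log ORDER BY time_start_us DESC LIMIT 10;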

As you can see above, the log table keeps track of what has changed. In this case, it reports that node 1 has wsrep_reject_queries activated, and it will continue to log this until we set wsrep_reject_queries=none.

Conclusions

ProxySQL’s native Galera integration is a useful feature to manage any Galera implementation, no matter whether it’s Percona XtraDB Cluster (PXC), MariaDB Cluster or MySQL/Galera.

The generic approach is obviously a good thing; still, it may miss some specific extensions like the one we have in PXC with the performance_schema pxc_cluster_view table.

I’ve already objected about the failover/failback, and I am here again to remind you: whenever you do a controlled failover, REMEMBER to change the weight to prevent an immediate failback.

This is obviously not possible in the case of a real failover, and, for instance, a simple temporary eviction will cause two downtimes instead of only one. Some environments are fine with that, others not so much.

Personally I think there should be a FLAG in the configuration, such that we can decide if failback should be executed or not.

 

Comments
René Cannaò

Hi Marco, thank you for the blog post!
I confirm bug 1902, we are working on it.

About 192.168.1.205 coming back with the highest weight in the reader hostgroup: this behavior is already configurable, in fact it is enough to set mysql_galera_hostgroups.writer_is_also_reader=0 , and the node will come back only as a writer and not as a reader. You can then manually add it as a reader with a different weight.

About the failover and failback, this is indeed a tricky point.
The reason why ProxySQL will perform the failback (with the given configuration) is that the algorithm to determine the writer must be deterministic.
A ProxySQL node witnessing a failover, a ProxySQL node that is network partitioned for some time, and a ProxySQL node that is just started, they all must converge to the same stable configuration.
Otherwise, you will end with different ProxySQL nodes each believing that the writer role should be assigned to different Galera nodes: this will lead to conflicting writes.
Somehow, the flag you are asking for already exists, and it is read_only=1 in my.cnf on the Galera/MySQL node.
If you set read_only=1 in my.cnf (which, imho, should always be the case), a restarted/recovered Galera node won’t come back online automatically as a writer.

dbennett455

From what I am reading, it sounds like there is still a strong use case for the scriptable scheduler in ProxySQL vs. the non-scriptable internal failover/failback. Is this correct?

I can think of other possible add-on scenarios, such as creating PMM annotations on failover/failback, that would require scripting the galera_checker to implement. This could not be easily done using the internal support.

dseirastackscale

Is there any roadmap to release ProxySQL v2 in the Percona release repository? Currently it only goes up to v1.4.14.