Comments on: ProxySQL Native Support for Percona XtraDB Cluster (PXC)
https://www.percona.com/blog/proxysql-native-support-for-percona-xtradb-cluster-pxc/

By: dseirastackscale — Wed, 27 Mar 2019 12:42:31 +0000
https://www.percona.com/blog/proxysql-native-support-for-percona-xtradb-cluster-pxc/#comment-10970512

Is there any roadmap for releasing ProxySQL v2 in the Percona release repository? Currently it only goes up to v1.4.14.

By: dbennett455 — Fri, 01 Mar 2019 04:51:08 +0000
https://www.percona.com/blog/proxysql-native-support-for-percona-xtradb-cluster-pxc/#comment-10970381

From what I am reading, it sounds like there is still a strong use case for the scriptable scheduler in ProxySQL versus the non-scriptable internal failover/failback. Is this correct?

I can think of other possible add-on scenarios, such as creating PMM annotations on failover/failback, that would require scripting galera_checker to implement; they could not easily be implemented using the internal support alone.

By: Marco Tusa — Thu, 21 Feb 2019 11:02:24 +0000
https://www.percona.com/blog/proxysql-native-support-for-percona-xtradb-cluster-pxc/#comment-10970334

Hi Rene,
Thanks for the comment, and indeed you are right: using read_only=1 as a best practice will help mitigate the failback.
About writer_is_also_reader, this means we will eventually need to add the server back as a reader manually, which can be fine, or write a scheduler action to deal with it (a rough sketch follows).
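For illustration, registering such a script in ProxySQL's scheduler could look roughly like this from the admin interface (the script path and the interval are placeholders; the script itself is whatever logic you want to run):

    -- run a custom check/annotation script every 2 seconds (path is hypothetical)
    INSERT INTO scheduler (id, active, interval_ms, filename)
    VALUES (1, 1, 2000, '/usr/bin/my_galera_check.sh');

    LOAD SCHEDULER TO RUNTIME;
    SAVE SCHEDULER TO DISK;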
Thanks for reviewing and clarifying the above!

By: René Cannaò — Thu, 21 Feb 2019 10:25:41 +0000
https://www.percona.com/blog/proxysql-native-support-for-percona-xtradb-cluster-pxc/#comment-10970333

Hi Marco, thank you for the blog post!
I confirm bug #1902; we are working on it.

About 192.168.1.205 coming back with the highest weight in the reader hostgroup: this behavior is already configurable. It is enough to set mysql_galera_hostgroups.writer_is_also_reader=0, and the node will come back only as a writer and not as a reader; you can then manually add it as a reader with a different weight, as sketched below.
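From the ProxySQL admin interface that could look like the following sketch (the reader hostgroup ID and the weight are only placeholders, adjust them to your own setup):

    -- stop the recovered writer from being re-added to the reader hostgroup automatically
    UPDATE mysql_galera_hostgroups SET writer_is_also_reader=0;

    -- manually add the node back as a reader with the weight you prefer
    INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight)
    VALUES (101, '192.168.1.205', 3306, 100);

    LOAD MYSQL SERVERS TO RUNTIME;
    SAVE MYSQL SERVERS TO DISK;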

About the failover and failback, this is indeed a tricky point.
The reason why ProxySQL will perform the failback (with the given configuration) is that the algorithm to determine the writer must be deterministic.
A ProxySQL node witnessing a failover, a ProxySQL node that is network partitioned for some time, and a ProxySQL node that has just started must all converge to the same stable configuration.
Otherwise, you will end up with different ProxySQL nodes each assigning the writer role to a different Galera node, which will lead to conflicting writes.
In a way, the flag you are asking for already exists: it is read_only=1 in my.cnf on the Galera/MySQL node.
If you set read_only=1 in my.cnf (which, imho, should always be the case), a restarted/recovered Galera node won't come back online automatically as a writer.
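Concretely, that is just this in the [mysqld] section of my.cnf on every Galera node (and, if you do not want to restart, SET GLOBAL read_only=1 applies it at runtime):

    [mysqld]
    read_only = 1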
