Manage ProxySQL Cluster

In this post, we will manage a ProxySQL Cluster with “core” and “satellite” nodes. What does that mean? In a classic cluster of hundreds of nodes, a single change on any one of those nodes replicates to the entire cluster.

Any mistake or change will replicate to all nodes, and this can make it difficult to find the most recently updated node, or the source of truth.

Before continuing, you need to install and configure a ProxySQL Cluster. You can check my previous blog posts for more information.

The idea of using “core” and “satellite” nodes is to limit the masters (aka core nodes) to just a few nodes and make the rest of the nodes slaves (aka satellite nodes). Any change on a “core” node will be replicated to all core/satellite nodes, but a change on a “satellite” node will not be replicated. This is useful for managing a large number of nodes because it minimizes manual errors and false positives, and spares us the difficult task of finding the problematic node among all the nodes in the cluster.

This works in ProxySQL versions 1.4 and 2.0.

How does it work?

When you configure a classic ProxySQL Cluster, every node listens to every other node. With this feature, all nodes listen only to the few nodes you want to use as “core” nodes.

Any change made on a node not listed in the “proxysql_servers” table will not be replicated, because no node connects to its admin port waiting for changes.

Each node opens one thread per server listed in the proxysql_servers table and connects to that IP on the admin port (6032 by default), waiting for any change in four tables: mysql_servers, mysql_users, proxysql_servers, and mysql_query_rules. The only relationship between a core and a satellite node is that the satellite node connects to the core node and waits for changes.
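A handy way to see which peers a node is tracking is to query the admin interface. This is a minimal sketch using the standard cluster tables:

-- Which nodes this instance listens to (its "core" list)
SELECT hostname, port, comment FROM proxysql_servers;
-- Per-peer, per-table checksum status this node has observed
SELECT hostname, port, name, version, epoch FROM stats_proxysql_servers_checksums;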

How to Configure

It’s easy: we will configure only the IPs of the core nodes into the proxysql_servers table on all cluster nodes, including both “core” and “satellite” nodes. If you read my previous posts and configured a ProxySQL Cluster, we will clean the previous config from the following tables to test from scratch:
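Something like this, run from the admin interface on every node (a sketch; only clean the tables you previously configured):

DELETE FROM proxysql_servers;
LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;
-- Repeat the same pattern for mysql_servers, mysql_users, and
-- mysql_query_rules if you want a completely clean slate.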

Suppose, for example, we have 100 ProxySQL nodes; here is the list of hostnames and IPs of our instances:
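(The IPs below are placeholders I am using for illustration; substitute your own.)

proxysql_node1    10.0.0.1
proxysql_node2    10.0.0.2
proxysql_node3    10.0.0.3
...
proxysql_node100  10.0.0.100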

And we want to configure and use only 3 core nodes, so we select the first 3 nodes from the cluster:
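proxysql_node1    10.0.0.1
proxysql_node2    10.0.0.2
proxysql_node3    10.0.0.3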

And the rest of the nodes will be the satellite nodes:
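proxysql_node4    10.0.0.4
proxysql_node5    10.0.0.5
...
proxysql_node100  10.0.0.100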

We will use the above IPs to configure the proxysql_servers table, with only those 3 IPs, on all nodes. So all ProxySQL nodes (from proxysql_node1 to proxysql_node100) will listen for changes only on those 3 nodes.
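A sketch of that configuration, run from the admin interface on every node (placeholder IPs as above):

DELETE FROM proxysql_servers;
INSERT INTO proxysql_servers (hostname, port, comment) VALUES ('10.0.0.1', 6032, 'proxysql_node1');
INSERT INTO proxysql_servers (hostname, port, comment) VALUES ('10.0.0.2', 6032, 'proxysql_node2');
INSERT INTO proxysql_servers (hostname, port, comment) VALUES ('10.0.0.3', 6032, 'proxysql_node3');
LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;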

Now all nodes from proxysql_node4 to proxysql_node100 are the satellite nodes.

We can see something like this in the proxysql.log file:
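Roughly like this (paraphrased from a ProxySQL 2.x log; exact wording, version numbers, epochs, and checksums will differ in your setup):

Cluster: detected a new checksum for proxysql_servers from peer 10.0.0.1:6032, version ..., epoch ..., checksum 0x... . Not syncing yet ...
Cluster: checksum for proxysql_servers from peer 10.0.0.1:6032 matches with local checksum 0x..., we won't sync.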

How to Test

I’ll create a new entry in the mysql_users table on a core node to test whether the replication from core to satellite is working fine.

Connect to proxysql_node1 and run the following queries:
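A sketch, using a hypothetical username (test_user) and hostgroup:

-- On proxysql_node1 (core node)
INSERT INTO mysql_users (username, password, default_hostgroup) VALUES ('test_user', 'test_pass', 10);
LOAD MYSQL USERS TO RUNTIME;   -- generates a new checksum that peers detect
SAVE MYSQL USERS TO DISK;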

Now, from any satellite node, for example proxysql_node4, check the ProxySQL log file to see if there are updates. If this is working fine, we see something like this:
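(Approximate lines; timestamps, versions, and checksum values are omitted and will vary by version.)

Cluster: detected a new checksum for mysql_users from peer 10.0.0.1:6032, version ..., epoch ..., checksum 0x... . Not syncing yet ...
Cluster: Fetching MySQL Users from peer 10.0.0.1:6032 started
Cluster: Fetching MySQL Users from peer 10.0.0.1:6032 completed
Cluster: Loading to runtime MySQL Users from peer 10.0.0.1:6032
Cluster: Saving to disk MySQL Users from peer 10.0.0.1:6032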

Then check whether the previous update exists in the mysql_users table on proxysql_node4 or any other satellite node. These updates should exist in both the mysql_users and runtime_mysql_users tables.
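For example, from the admin interface on proxysql_node4 (same hypothetical username as above):

SELECT username, default_hostgroup FROM mysql_users WHERE username = 'test_user';
SELECT username, default_hostgroup FROM runtime_mysql_users WHERE username = 'test_user';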

Now the final test is to create a new MySQL user on a satellite node. Connect to proxysql_node4 and run the following queries to create a new user:
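Again a sketch with a hypothetical username:

-- On proxysql_node4 (satellite node)
INSERT INTO mysql_users (username, password, default_hostgroup) VALUES ('test_user2', 'test_pass2', 10);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;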

From the ProxySQL log on proxysql_node4, we see the following output:
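Roughly the following, showing only the local change being applied (paraphrased; note there are no “Cluster: Fetching …” lines, because nobody syncs from a satellite):

Computed checksum for 'LOAD MYSQL USERS TO RUNTIME' was '0x...', with epoch '...'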

The last thing to check is the ProxySQL log file on the core node, to see if there are updates to the mysql_users table. Below is the output from proxysql_node1:
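The log shows no new “Cluster:” entries for mysql_users. You can also confirm it directly from the admin interface on proxysql_node1 (hypothetical username from above):

SELECT username FROM mysql_users WHERE username = 'test_user2';
-- Empty set: the satellite's change never reached the core node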

As you can see, there are no updates, because the core nodes are not listening for changes from satellite nodes. Core nodes only listen for changes from other core nodes.

Finally, this feature is really useful when you have many servers to manage. I hope you can test it!

Comments
mike

Is the “Native ProxySQL Clustering” function still experimental and not suitable for production usage?

preethi

How do you configure satellite nodes in a cluster? How do the core nodes (listed in proxysql_servers) know about the satellite nodes, which are not configured?