In this post, we’ll address how MaxScale monitors servers.

We saw in the previous post how we could deal with high availability (HA) and read-write split using MaxScale.

If you remember from the previous post, we used this section to monitor replication:
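A minimal sketch of that section (credentials and interval are placeholders; percona1–percona3 are the servers used throughout this post):

```
[Replication Monitor]
type=monitor
module=mysqlmon
servers=percona1,percona2,percona3
user=maxscale              # monitoring user (placeholder)
passwd=maxscalepwd         # placeholder password
monitor_interval=10000     # check interval, in milliseconds
```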

But what are we monitoring? We are monitoring the assignment of master and slave roles inside MaxScale according to the actual replication tree in the cluster, using the default check from the mysqlmon monitoring module.

There are other monitoring modules available with MaxScale:
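In the MaxScale release used here, those include, for example:

- galeramon – monitors the nodes of a Galera cluster
- ndbclustermon – monitors MySQL Cluster (NDB) nodes
- mmmon – monitors multi-master replication setups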

So back to our setup. MaxScale monitors the roles of our servers involved in replication. We can see the status of every server like this:
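For example, with maxadmin (the addresses below are illustrative placeholders; -pmariadb is the default admin password):

```
# maxadmin -pmariadb list servers
Servers.
-------------------+-----------------+-------+-------------+--------------------
Server             | Address         | Port  | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
percona1           | 192.168.90.2    |  3306 |           0 | Master, Running
percona2           | 192.168.90.3    |  3306 |           0 | Slave, Running
percona3           | 192.168.90.4    |  3306 |           0 | Slave, Running
-------------------+-----------------+-------+-------------+--------------------
```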

Now if we stop the slave, we can see:
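With mysqlmon, a server whose replication threads are stopped loses its Slave flag, so its status drops to just “Running”; illustratively:

```
percona2           | 192.168.90.3    |  3306 |           0 | Running
```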

and in the MaxScale logs:

Now, if the slave is merely lagging, nothing happens, and we keep sending reads to a slave that is not up to date 🙁

To avoid that situation, we can add to the “[Replication Monitor]” section the following parameter:
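That parameter is detect_replication_lag, sketched here as an addition to the monitor section shown above:

```
[Replication Monitor]
# ... existing monitor options ...
detect_replication_lag=true
```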

If we do so, MaxScale (if it has enough privileges) will create a schema maxscale_schema with a table replication_heartbeat. This table will be used to verify the replication lag, like pt-heartbeat does.
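A sketch of the privileges the monitoring user needs for this, assuming it connects as maxscale@'%' (a placeholder account):

```
-- lets MaxScale create and maintain the heartbeat schema and table
GRANT CREATE, SELECT, INSERT, UPDATE
  ON maxscale_schema.* TO 'maxscale'@'%';
```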

When enabled, after we restart MaxScale, we can see the slave lag:
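For example, in maxadmin’s per-server view (output trimmed and illustrative; the delay value is whatever the heartbeat measured):

```
# maxadmin -pmariadb show server percona2
	Server:        percona2
	Status:        Slave, Running
	Slave delay:   67
```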

Does this mean that now the node won’t be reached (no queries will be routed to it)?

Let’s check:
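Again with maxadmin (illustrative output, trimmed to the lagging slave):

```
# maxadmin -pmariadb list servers
Server             | Address         | Port  | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
percona2           | 192.168.90.3    |  3306 |           1 | Slave, Running
```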

That doesn’t sound good…

We can see that there is 1 current connection.

How come? The monitoring actually works as expected, but we didn’t configure our Splitter Service not to use that lagging slave.

We need to configure it like this:
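The key addition is max_slave_replication_lag (in seconds); a sketch of the service section, with the other values carried over as placeholders from the previous post:

```
[Splitter Service]
type=service
router=readwritesplit
servers=percona1,percona2,percona3
user=maxscale                    # placeholder credentials
passwd=maxscalepwd
max_slave_replication_lag=30     # skip slaves lagging 30s or more
```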

And now, if the slave lags for 30 seconds or more, it won’t be used.

But what happens if, for any reason, we need to stop all the slaves (or if replication breaks)?

To find out, I performed a STOP SLAVE; on percona2 and percona3. This is what we see in the logs:

By default, mysqlmon only considers a server a master if it has at least one replicating slave. So if there are no more slaves, the master is not a master anymore, and the routing doesn’t work. The service is unavailable!

As soon as we start a slave, the service is back:

Can we avoid this situation when all slaves are stopped?

Yes, we can, but we need to add the following line to the monitoring section:
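That line is detect_stale_master, sketched here against the monitor section used above:

```
[Replication Monitor]
# ... existing monitor options ...
detect_stale_master=true
```

With this option, a master that loses all of its slaves keeps a “stale” master status instead of losing the master role outright.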

If we stop the two slaves again, in MaxScale’s log we can now read:

And we can still connect to our service and use the single master.

Next time we will see how the read-write split works.
