ProxySQL with AWS Aurora

In this post, we’ll look at how to implement ProxySQL with AWS Aurora.

Recently, there have been a few discussions and customer requests that focused on AWS Aurora and how to make the various architectures and solutions more flexible.

Flexible how, you may ask? Well, there are the usual expectations:

  • How do you improve resource utilization?
  • How can I filter (or block) things?
  • Can I shard with Aurora?
  • What is the best way to implement query caching?
  • … and more.

The inclusion of ProxySQL solves many of the points above. We in Consulting design solutions for our customers by applying its different functionalities to better match customers’ needs. Whenever we dealt with Aurora, however, we had to exclude ProxySQL because of some limitations in the software.

Now, however, ProxySQL 2.0 supports Aurora, and it does it amazingly well.

This article shows you how to implement ProxySQL with AWS Aurora. The next article, Leveraging ProxySQL with AWS Aurora to Improve Performance, will show you WHY.

The Problem

ProxySQL has two different ways to deal with backend servers. One is to use the replication mechanisms, such as standard asynchronous replication and Group Replication. The other is to use the scheduler, as in the case of Percona XtraDB Cluster, MariaDB Cluster, etc.

While we could use the scheduler as a solution for Aurora, it is not as immediate and well-integrated as the embedded support for replication, which is why we normally opted not to use it in this specific case (Aurora).

But what WAS the problem with Aurora? An Aurora cluster bases its definition of Writer vs. Readers on the innodb_read_only variable. So, where is the problem? Well, no problem at all, except that ProxySQL up to version 2 only supported the generic variable read_only for replication. As such, it was not able to correctly identify the Writer/Readers set.

The Solution

In October 2017, this issue was opened (https://github.com/sysown/proxysql/issues/1195) and the result was, as usual, a quite simple and flexible solution.

Brainstorming, a possible solution was to add another column to mysql_replication_hostgroups to specify what needs to be checked: read_only, innodb_read_only, or even super_read_only.

This led to the ProxySQL team delivering (commit fe2f16d6df15252f0107a6a224dad7b1efdb13f6):

Added support for innodb_read_only and super_read_only  

Which, in short, means they added a new column to the mysql_replication_hostgroups table. ProxySQL continues to behave exactly the same and manages the servers and the replication groups as usual. No need for scripts or other crazy stuff.
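In ProxySQL 2.0 the updated table definition looks roughly like this (a sketch; check `SHOW CREATE TABLE mysql_replication_hostgroups` in your own build for the exact definition):

```sql
CREATE TABLE mysql_replication_hostgroups (
    writer_hostgroup INT CHECK (writer_hostgroup>=0) NOT NULL PRIMARY KEY,
    reader_hostgroup INT NOT NULL CHECK (reader_hostgroup<>writer_hostgroup AND reader_hostgroup>=0),
    -- the new column: which variable ProxySQL checks to identify the Writer
    check_type VARCHAR CHECK (LOWER(check_type) IN ('read_only','innodb_read_only','super_read_only'))
        NOT NULL DEFAULT 'read_only',
    comment VARCHAR NOT NULL DEFAULT '',
    UNIQUE (reader_hostgroup)
)
```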

Implement ProxySQL with AWS Aurora

Here we are, the HOW TO part. The first thing to keep in mind is that when you implement a new Aurora cluster, you should always consider having at least two instances in the same AZ and another instance in a remote AZ.

To implement ProxySQL, you should refer directly to the instances, NOT to the cluster entry-point. To be clear, you must take note of the endpoint of each instance:
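For example, with three instances the list would look something like this (hypothetical endpoints, following the two-instances-in-one-AZ-plus-one-remote layout described above):

```
aurora-test-1.cxxxxxxxxxxx.us-east-1.rds.amazonaws.com:3306
aurora-test-2.cxxxxxxxxxxx.us-east-1.rds.amazonaws.com:3306
aurora-test-3.cxxxxxxxxxxx.us-east-1.rds.amazonaws.com:3306
```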

The information is available in the web-admin interface, under the instance or using the command:
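Assuming the AWS CLI is installed and configured for your account, something like:

```shell
aws rds describe-db-instances
```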

And filter the result for:
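For instance, keeping only the endpoint information (the `--query` JMESPath expression below is one way to do it):

```shell
aws rds describe-db-instances \
  --query 'DBInstances[*].[DBInstanceIdentifier,Endpoint.Address,Endpoint.Port]' \
  --output text
```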

To run ProxySQL with RDS in general, you need to install it on an intermediate server or on the application box.

Once you decide which one fits your setup better, you must download or git clone ProxySQL v2.0+.
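If you build from source, the steps are along these lines (a sketch; packages for your distribution may also be available):

```shell
# Clone the ProxySQL repository and build from source
git clone https://github.com/sysown/proxysql.git
cd proxysql
make
sudo make install
```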

DO NOT use v1.4.x, as it does not contain these new features and will not work as expected.

Once you have all the Aurora instances up, it is time to configure ProxySQL. Below is an example of all the commands used during the installation:
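A sketch of the configuration, run from the ProxySQL admin interface (e.g. `mysql -u admin -padmin -h 127.0.0.1 -P 6032`). The hostnames, the hostgroup IDs 70/71, and the password are hypothetical placeholders; adapt them to your cluster:

```sql
-- Register all Aurora instances in the writer hostgroup (70). ProxySQL will
-- move the readers to hostgroup 71 on its own, based on the check below.
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES
  (70, 'aurora-test-1.cxxxxxxxxxxx.us-east-1.rds.amazonaws.com', 3306),
  (70, 'aurora-test-2.cxxxxxxxxxxx.us-east-1.rds.amazonaws.com', 3306),
  (70, 'aurora-test-3.cxxxxxxxxxxx.us-east-1.rds.amazonaws.com', 3306);

-- The key line: use innodb_read_only (not read_only) to identify the Writer
INSERT INTO mysql_replication_hostgroups (writer_hostgroup, reader_hostgroup, check_type, comment)
  VALUES (70, 71, 'innodb_read_only', 'aurora');

-- Application user, defaulting to the writer hostgroup
INSERT INTO mysql_users (username, password, active, default_hostgroup)
  VALUES ('m8_test', 'test', 1, 70);

-- Basic W/R split: SELECT ... FOR UPDATE stays on the writer, plain SELECTs go to the readers
INSERT INTO mysql_query_rules (rule_id, active, username, match_digest, destination_hostgroup, apply) VALUES
  (1, 1, 'm8_test', '^SELECT.*FOR UPDATE', 70, 1),
  (2, 1, 'm8_test', '^SELECT',             71, 1);

LOAD MYSQL SERVERS TO RUNTIME;     SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL USERS TO RUNTIME;       SAVE MYSQL USERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;
```

You will also need a monitor account on the Aurora side matching ProxySQL’s mysql-monitor_username/mysql-monitor_password variables, so that ProxySQL can actually run the innodb_read_only checks against each instance.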

The above will give you a ready-to-go ProxySQL setup that supports an Aurora cluster, performing all the usual operations ProxySQL does, including proper W/R split and more, for a user named ‘m8_test’.

The key is in passing the value ‘innodb_read_only’ for the column check_type in the table mysql_replication_hostgroups.  

To check the status of your ProxySQL, you can use this command (which gives you a snapshot of what is going to happen):
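For instance, from the admin interface (the exact output depends on your setup):

```sql
SELECT hostgroup, srv_host, status, ConnUsed, ConnFree, Queries
  FROM stats_mysql_connection_pool
 ORDER BY hostgroup, srv_host;
```

With the configuration sketched above, you would expect the Writer to appear in hostgroup 70 and the Readers in hostgroup 71, all with status ONLINE.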

At this point, you can connect your application and see how ProxySQL allows you to perform much better than the native cluster entry point.

This will be expanded in the next article: Leveraging ProxySQL with AWS Aurora to Improve Performance.

Conclusions

I had my first issue with the native Aurora connector a long time ago, but I had nothing to replace it. ProxySQL is a very good alternative to standard cluster access, with more options/controls and it also allows us to perform close-to-application caching, which is much more efficient than the remote MySQL one (http://www.proxysql.com/blog/scaling-with-proxysql-query-cache).

In the next article I will illustrate how, in a simple setup, ProxySQL can help in achieving better results than using the default Aurora cluster endpoint.


1 Comment
Chris K

Is there a stable 2.x version? Looking through github page and seeing v2 topic branches but not clear on which is the main v2 branch or why there are no v2 tags or releases…

Will look through distro repository, but would be surprised to see it in there if there’s not a release on GH for it.

Thanks for this post; very interested in proxysql for our aurora cluster!