There are plenty of ways to run ProxySQL in Kubernetes (K8S). For example, we can deploy sidecar containers on the application pods, or run a dedicated ProxySQL service with its own pods.
We are going to discuss the latter approach, which is a better fit when dealing with a large number of application pods. Remember that each ProxySQL instance runs a number of checks against the database backends, monitoring things like server status and replication lag. Running too many proxies multiplies this monitoring traffic and can cause significant overhead.
Creating a Cluster
For the purpose of this example, I am going to deploy a test cluster in GKE. We need to follow these steps:
1. Create a cluster
```shell
gcloud container clusters create ivan-cluster --preemptible --project my-project --zone us-central1-c --machine-type n2-standard-4 --num-nodes=3
```
2. Configure command-line access
```shell
gcloud container clusters get-credentials ivan-cluster --zone us-central1-c --project my-project
```
3. Create a Namespace
```shell
kubectl create namespace ivantest-ns
```
4. Set the context to use our new Namespace
```shell
kubectl config set-context $(kubectl config current-context) --namespace=ivantest-ns
```
Dedicated Service Using a StatefulSet
One way to implement this approach is to have ProxySQL pods use persistent volumes to store the configuration. We can rely on ProxySQL Cluster mode to make sure the configuration is kept in sync.
For simplicity, we are going to use a ConfigMap with the initial config for bootstrapping the ProxySQL service for the first time.
Exposing the passwords in the ConfigMap is far from ideal, and the K8S community hasn't yet settled on a way to reference Secrets from a ConfigMap.
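As a partial workaround, the credentials could live in a Kubernetes Secret mounted alongside the ConfigMap. The sketch below is a minimal example of such a Secret; the name and file layout are assumptions, not part of the setup above, and ProxySQL does not merge multiple config files natively, so an init script would still need to stitch this fragment into /etc/proxysql.cnf at startup.

```yaml
# Hypothetical Secret holding the sensitive part of the ProxySQL config.
apiVersion: v1
kind: Secret
metadata:
  name: proxysql-credentials
type: Opaque
stringData:
  credentials.cnf: |
    admin_variables=
    {
        admin_credentials="admin:admin;cluster:secret"
    }
```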
1. Prepare a file for the ConfigMap
```shell
tee proxysql.cnf <<EOF
datadir="/var/lib/proxysql"

admin_variables=
{
    admin_credentials="admin:admin;cluster:secret"
    mysql_ifaces="0.0.0.0:6032"
    refresh_interval=2000
    cluster_username="cluster"
    cluster_password="secret"
}

mysql_variables=
{
    threads=4
    max_connections=2048
    default_query_delay=0
    default_query_timeout=36000000
    have_compress=true
    poll_timeout=2000
    interfaces="0.0.0.0:6033;/tmp/proxysql.sock"
    default_schema="information_schema"
    stacksize=1048576
    server_version="8.0.23"
    connect_timeout_server=3000
    monitor_username="monitor"
    monitor_password="monitor"
    monitor_history=600000
    monitor_connect_interval=60000
    monitor_ping_interval=10000
    monitor_read_only_interval=1500
    monitor_read_only_timeout=500
    ping_interval_server_msec=120000
    ping_timeout_server=500
    commands_stats=true
    sessions_sort=true
    connect_retries_on_failure=10
}

mysql_servers =
(
    { address="mysql1" , port=3306 , hostgroup=10, max_connections=100 },
    { address="mysql2" , port=3306 , hostgroup=20, max_connections=100 }
)

mysql_users =
(
    { username = "myuser", password = "password", default_hostgroup = 10, active = 1 }
)

proxysql_servers =
(
    { hostname = "proxysql-0.proxysqlcluster", port = 6032, weight = 1 },
    { hostname = "proxysql-1.proxysqlcluster", port = 6032, weight = 1 },
    { hostname = "proxysql-2.proxysqlcluster", port = 6032, weight = 1 }
)
EOF
```
2. Create the ConfigMap
```shell
kubectl create configmap proxysql-configmap --from-file=proxysql.cnf
```
3. Prepare a file with the StatefulSet
```shell
tee proxysql-ss-svc.yml <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: proxysql
  labels:
    app: proxysql
spec:
  replicas: 3
  serviceName: proxysqlcluster
  selector:
    matchLabels:
      app: proxysql
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: proxysql
    spec:
      restartPolicy: Always
      containers:
      - image: proxysql/proxysql:2.3.1
        name: proxysql
        volumeMounts:
        - name: proxysql-config
          mountPath: /etc/proxysql.cnf
          subPath: proxysql.cnf
        - name: proxysql-data
          mountPath: /var/lib/proxysql
          subPath: data
        ports:
        - containerPort: 6033
          name: proxysql-mysql
        - containerPort: 6032
          name: proxysql-admin
      volumes:
      - name: proxysql-config
        configMap:
          name: proxysql-configmap
  volumeClaimTemplates:
  - metadata:
      name: proxysql-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: proxysql
  name: proxysql
spec:
  ports:
  - name: proxysql-mysql
    nodePort: 30033
    port: 6033
    protocol: TCP
    targetPort: 6033
  - name: proxysql-admin
    nodePort: 30032
    port: 6032
    protocol: TCP
    targetPort: 6032
  selector:
    app: proxysql
  type: NodePort
EOF
```
4. Create the StatefulSet
```shell
kubectl create -f proxysql-ss-svc.yml
```
5. Prepare the definition of the headless Service (more on this later)
```shell
tee proxysql-headless-svc.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: proxysqlcluster
  labels:
    app: proxysql
spec:
  clusterIP: None
  ports:
  - port: 6032
    name: proxysql-admin
  selector:
    app: proxysql
EOF
```
6. Create the headless Service
```shell
kubectl create -f proxysql-headless-svc.yml
```
7. Verify the Services
```shell
kubectl get svc

NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
proxysql          NodePort    10.3.249.158   <none>        6033:30033/TCP,6032:30032/TCP   12m
proxysqlcluster   ClusterIP   None           <none>        6032/TCP                        8m53s
```
Pod Name Resolution
By default, each pod gets an associated DNS name of the form pod-ip-address.my-namespace.pod.cluster-domain.example.
The headless Service causes K8S to auto-create a DNS record with each pod's FQDN as well. As a result, we will have the following entries available:
proxysql-0.proxysqlcluster
proxysql-1.proxysqlcluster
proxysql-2.proxysqlcluster
We can then use these to set up the ProxySQL cluster (the proxysql_servers part of the configuration file).
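Once the pods are running, we can confirm from the admin interface of any pod that the cluster peers were loaded from the config file. This is a quick sanity check rather than part of the original walkthrough, and it assumes the admin credentials from the ConfigMap above:

```sql
-- Connect to the admin port of any pod first, e.g.:
--   mysql -uadmin -padmin -h127.0.0.1 -P6032
-- then list the configured cluster peers:
SELECT hostname, port, weight FROM proxysql_servers;
```

All three proxysql-N.proxysqlcluster entries should be listed on every pod.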
Connecting to the Service
To test the service, we can run a container that includes a MySQL client and connect its console output to our terminal. For example, use the following command (which also removes the container/pod after we exit the shell):
```shell
kubectl run -i --rm --tty percona-client --image=percona/percona-server:latest --restart=Never -- bash -il
```
Connections from other pods should be sent to the Cluster-IP on port 6033 and will be load balanced across the ProxySQL pods. We can also use the auto-created DNS name proxysql.ivantest-ns.svc.cluster.local.
```shell
mysql -umyuser -ppassword -h10.3.249.158 -P6033
```
If the client is connecting from outside the cluster, use the external IP of any worker node and NodePort 30033 instead (the Cluster-IP is not reachable externally):

```shell
mysql -umyuser -ppassword -h<node-external-ip> -P30033
```
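To verify that configuration changes made on one pod are actually propagating through the ProxySQL Cluster, we can query the checksum stats on the admin interface. The table below is part of ProxySQL's stats schema; the exact set of columns may vary slightly between ProxySQL versions:

```sql
-- Run against port 6032 on any pod; each peer should report matching
-- checksums and a recent updated_at timestamp once it has synced.
SELECT hostname, name, checksum, updated_at
FROM stats_proxysql_servers_checksums
ORDER BY hostname, name;
```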
Cleanup Steps
In order to remove all the resources we created, run the following steps. Note that deleting the StatefulSet does not automatically remove the PVCs created by its volumeClaimTemplates, nor the ConfigMap:

```shell
kubectl delete statefulset proxysql
kubectl delete service proxysql
kubectl delete service proxysqlcluster
kubectl delete configmap proxysql-configmap
kubectl delete pvc proxysql-data-proxysql-0 proxysql-data-proxysql-1 proxysql-data-proxysql-2
```
Final Words
We have seen one of the possible ways to deploy ProxySQL in Kubernetes. The approach presented here has a few shortcomings but is good enough for illustrative purposes. For a production setup, consider looking at the Percona Kubernetes Operators instead.