Our recent releases of Percona Operator for MySQL based on Percona XtraDB Cluster 1.8 and Percona Operator for MongoDB 1.7 come with an interesting new feature: support for custom sidecars.
Why is this special? There is often a need to customize the default installation. One popular request is to support a monitoring system other than Percona Monitoring and Management (the default Prometheus monitoring for Kubernetes being one of them). Another is to run utilities or debugging tools alongside the running mysqld or mongod process.
In this blog, I will show how to deploy a sidecar using my own image, which is based on standard Ubuntu and includes the sysbench tool. This lets us make sure that sysbench is running on the same Kubernetes node as the Percona XtraDB Cluster (PXC) node.
To deploy our standard cluster in Kubernetes, I will use the example cr.yaml file from https://github.com/percona/percona-xtradb-cluster-operator/blob/main/deploy/cr.yaml, modified to deploy the sidecar.
The relevant part in the PXC section is this:
```yaml
sidecars:
- image: vadimtk/ubu-mysql-sysbench
  command: ["/bin/sh"]
  args: ["-c", "while true; do sleep 300; done;"]
  name: my-sidecar-1
```
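With this snippet in place, the cluster is deployed in the usual way. A minimal sketch, assuming you are working from a checkout of the operator repository and saved the modified file as deploy/cr.yaml:

```bash
# Deploy the operator itself first (if it is not already running)
kubectl apply -f deploy/bundle.yaml

# Apply the modified Custom Resource; the operator creates the PXC
# pods with the my-sidecar-1 container next to the pxc container
kubectl apply -f deploy/cr.yaml
```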
After deployment, let's check that the sidecar is running:
```bash
kubectl describe pod cluster4-pxc-0
```
```
my-sidecar-1:
  Container ID:   docker://236c7be9ed6bff210a5d302e9ca0bebf3a5ef99aea11162ac76236b83ed46d67
  Image:          vadimtk/ubu-mysql-sysbench
  Image ID:       docker-pullable://vadimtk/ubu-mysql-sysbench@sha256:800bfccfb06a292b9ffd518fd671276ac8ccdf400cbeaf4daa234116d017197d
  Port:           <none>
  Host Port:      <none>
  Command:
    /bin/sh
  Args:
    -c
    while true; do sleep 300; done;
  State:          Running
    Started:      Tue, 18 May 2021 08:10:51 -0400
  Ready:          True
  Restart Count:  0
  Environment:    <none>
  Mounts:
    /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jrskv (ro)
```
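Alternatively, a quick way to list just the container names in the pod is a jsonpath query (a one-liner sketch, using the same pod name as above):

```bash
# Print the names of all containers in the pod;
# the output should include pxc and my-sidecar-1
kubectl get pod cluster4-pxc-0 -o jsonpath='{.spec.containers[*].name}'
```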
Now we can grab the root password with:
```bash
kubectl get secret internal-cluster4 -o jsonpath='{.data.root}' | base64 -d
b0ou9ffLlb3NiVEM
```
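To avoid copying the password around by hand, it can also be captured in a shell variable (a small convenience sketch with the same secret name):

```bash
# Decode the root password once and keep it for later commands
ROOT_PASSWORD=$(kubectl get secret internal-cluster4 \
  -o jsonpath='{.data.root}' | base64 -d)
echo "$ROOT_PASSWORD"
```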
And log in to our sidecar container:
```bash
kubectl exec -it cluster4-pxc-0 -c my-sidecar-1 -- bash
```
And prepare the sysbench workload:
```
root@cluster4-pxc-0:/# mysql -hcluster4-pxc-0 -uroot -pb0ou9ffLlb3NiVEM -e "CREATE DATABASE sbtest"
mysql: [Warning] Using a password on the command line interface can be insecure.
root@cluster4-pxc-0:/# sysbench oltp_read_only --mysql-host=cluster4-pxc-0 --mysql-user=root --mysql-password=b0ou9ffLlb3NiVEM prepare
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)

Creating table 'sbtest1'...
Inserting 10000 records into 'sbtest1'
Creating a secondary index on 'sbtest1'...
```
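The prepare step above uses the sysbench defaults (a single sbtest1 table with 10,000 rows). For a larger dataset, the table count and size can be passed explicitly; the values below are purely illustrative:

```bash
# Prepare 10 tables of 1,000,000 rows each instead of the defaults
sysbench oltp_read_only --mysql-host=cluster4-pxc-0 --mysql-user=root \
  --mysql-password=b0ou9ffLlb3NiVEM --tables=10 --table-size=1000000 prepare
```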
Now, while running the sysbench workload in one terminal, we can even get vmstat output from another (vmstat is also available in my ubu-mysql-sysbench image).
```
root@cluster4-pxc-0:/# sysbench oltp_read_only --mysql-host=cluster4-pxc-0 --mysql-user=root --mysql-password=b0ou9ffLlb3NiVEM --time=600 --threads=50 run
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 50
Initializing random number generator from current time

Initializing worker threads...

Threads started!
```
```
root@cluster4-pxc-0:/# vmstat 5
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b swpd      free   buff   cache   si   so    bi    bo    in      cs us sy id wa st
57  0    0 172307392 455912 4503796    0    0     0    10 41296 1373500 38 15 46  0  0
56  0    0 172293664 455912 4503840    0    0     0    12 30399 1728549 48 19 34  0  0
56  0    0 172289248 455912 4503840    0    0     0    19 31383 1725409 47 19 34  0  0
57  0    0 172293040 455916 4503844    0    0     0   246 29899 1720786 48 19 34  0  0
57  0    0 172303056 455924 4503844    0    0     0    21 25381 1725959 48 18 34  0  0
```
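The vmstat output above comes from a second shell into the same sidecar container; for example, from another terminal (same pod and container names as before):

```bash
# Open a second session in the sidecar and watch system stats
# while sysbench keeps the cluster busy from the first terminal
kubectl exec -it cluster4-pxc-0 -c my-sidecar-1 -- vmstat 5
```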
Conclusion
Using sidecars provides extra customization and flexibility for our Operator deployments, so give it a try! Just make sure you are running verified images and do not introduce intrusive operations into database workloads.
The full cr.yaml for reference:
```yaml
apiVersion: pxc.percona.com/v1-8-0
kind: PerconaXtraDBCluster
metadata:
  name: cluster4
  finalizers:
spec:
  crVersion: 1.8.0
  secretsName: my-cluster-secrets
  vaultSecretName: keyring-secret-vault
  sslSecretName: my-cluster-ssl
  sslInternalSecretName: my-cluster-ssl-internal
  logCollectorSecretName: my-log-collector-secrets
  allowUnsafeConfigurations: false
  updateStrategy: SmartUpdate
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com
    apply: 8.0-recommended
    schedule: "0 4 * * *"
  pxc:
    size: 3
    image: percona/percona-xtradb-cluster:8.0.22-13.1
    autoRecovery: true
    sidecars:
    - image: vadimtk/ubu-mysql-sysbench
      command: ["/bin/sh"]
      args: ["-c", "while true; do sleep 300; done;"]
      name: my-sidecar-1
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    podDisruptionBudget:
      maxUnavailable: 1
    volumeSpec:
      persistentVolumeClaim:
        # storageClassName: standard
        # accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 6G
    gracePeriod: 600
  haproxy:
    enabled: true
    size: 3
    image: percona/percona-xtradb-cluster-operator:1.8.0-haproxy
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    podDisruptionBudget:
      maxUnavailable: 1
    gracePeriod: 30
  proxysql:
    enabled: false
    size: 3
    image: percona/percona-xtradb-cluster-operator:1.8.0-proxysql
    resources:
      requests:
        memory: 1G
        cpu: 600m
    affinity:
      antiAffinityTopologyKey: "kubernetes.io/hostname"
    volumeSpec:
      persistentVolumeClaim:
        resources:
          requests:
            storage: 2G
    podDisruptionBudget:
      maxUnavailable: 1
    gracePeriod: 30
  logcollector:
    enabled: true
    image: percona/percona-xtradb-cluster-operator:1.8.0-logcollector
  pmm:
    enabled: false
    image: percona/pmm-client:2.12.0
    serverHost: monitoring-service
    serverUser: admin
  backup:
    image: percona/percona-xtradb-cluster-operator:1.8.0-pxc8.0-backup
    pitr:
      enabled: false
      storageName: STORAGE-NAME-HERE
      timeBetweenUploads: 60
    storages:
      s3-us-west:
        type: s3
        s3:
          bucket: S3-BACKUP-BUCKET-NAME-HERE
          credentialsSecret: my-cluster-name-backup-s3
          region: us-west-2
      fs-pvc:
        type: filesystem
        volume:
          persistentVolumeClaim:
            accessModes: [ "ReadWriteOnce" ]
            resources:
              requests:
                storage: 6G
    schedule:
    - name: "sat-night-backup"
      schedule: "0 0 * * 6"
      keep: 3
      storageName: s3-us-west
    - name: "daily-backup"
      schedule: "0 0 * * *"
      keep: 5
      storageName: fs-pvc
```
The Percona Operators automate the creation, alteration, or deletion of members in your Percona Distribution for MySQL, MongoDB, or PostgreSQL environment.