Nowadays, more and more companies are thinking about migrating their infrastructure to Kubernetes, and databases are no exception. Many k8s operators have been created to simplify different types of deployments and to perform routine day-to-day tasks like making backups, renewing certificates, and so on. If a few years ago nobody even wanted to hear about running databases in Kubernetes, everything has changed now.
At Percona, we have created several feature-rich k8s operators for Percona Server for MongoDB, PostgreSQL, and MySQL databases. Today we will talk about using cross-site replication, a new feature that was added to the latest release of Percona Distribution for MySQL Operator. This feature is based on MySQL's asynchronous connection failover mechanism.
Cross-site replication involves configuring one Percona XtraDB Cluster, or a single MySQL server or several of them, as the Source, and another Percona XtraDB Cluster (PXC) as a replica, to allow asynchronous replication between them. If the operator has several sources listed in the custom resource (CR), it will automatically handle a connection failure of the source DB.
This cross-site replication feature is supported only for MySQL 8.0.23 and newer, but you can read about migrating earlier versions of MySQL in this blog post.
The Goal
Migrate a MySQL database deployed on-premises or in the cloud to Percona Distribution for MySQL Operator using asynchronous replication. This approach helps you reduce both downtime and data loss for your application.
So, we have the following setup, built from these components:
1. MySQL 8.0.23 database (in my case, Percona Server for MySQL) deployed in DO (as the Source), with Percona XtraBackup for the backup. In my test deployment, I use only one server as the Source to keep things simple. Depending on your DB deployment topology, you can use several servers so that the operator can use the asynchronous connection failover mechanism on its end.
2. Google Kubernetes Engine (GKE) cluster where Percona Distribution for MySQL Operator is deployed with a PXC cluster (as the Target).
3. An AWS S3 bucket, used to store the backup of the MySQL DB and later to restore it into the PXC cluster in k8s.
The following steps should be done to perform the migration procedure:
1. Make the MySQL database backup using Percona XtraBackup and upload it to the S3 bucket using xbcloud.
2. Perform the restore of the MySQL database from the S3 bucket into the PXC cluster which is deployed in k8s.
3. Configure asynchronous replication between the MySQL server and the PXC cluster managed by the k8s operator.
As a result, we have asynchronous replication from the MySQL server to the PXC cluster in k8s, which runs in read-only mode.
Migration
Configure the target PXC cluster managed by k8s operator:
1. Deploy Percona Distribution for MySQL Operator on Kubernetes (I have used GKE 1.20).
```shell
# clone the git repository
git clone -b v1.9.0 https://github.com/percona/percona-xtradb-cluster-operator
cd percona-xtradb-cluster-operator

# deploy the operator
kubectl apply -f deploy/bundle.yaml
```
2. Create the PXC cluster using the default custom resource (CR) manifest.
```shell
# create the my-cluster-secrets secret (do not use default passwords for production systems)
kubectl apply -f deploy/secrets.yaml

# create the cluster; by default it will be PXC 8.0.23
kubectl apply -f deploy/cr.yaml
```
3. Create the secret with the AWS S3 bucket credentials, which will be used to access the bucket during the restoration procedure.
```yaml
# create the S3-secret.yaml file with the following content,
# and use your own base64-encoded credentials instead of XXXXXX
apiVersion: v1
kind: Secret
metadata:
  name: aws-s3-secret
type: Opaque
data:
  AWS_ACCESS_KEY_ID: XXXXXX
  AWS_SECRET_ACCESS_KEY: XXXXXX
```
```shell
# create the secret
kubectl apply -f S3-secret.yaml
```
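Note that the values under `data:` in a Kubernetes Secret must be base64-encoded, not plain text. A quick sketch of preparing them (the key values below are made-up placeholders, not real credentials):

```shell
# Encode placeholder AWS credentials for the Secret's data: section.
# The -n flag matters: a trailing newline would corrupt the credential.
echo -n "AKIAEXAMPLEKEY" | base64
echo -n "exampleSecretAccessKey" | base64
```

Alternatively, `kubectl create secret generic aws-s3-secret --from-literal=...` performs the encoding for you.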
Configure the Source MySQL Server
1. Install Percona Server for MySQL 8.0.23 and Percona XtraBackup for the backup. Refer to the Installing Percona Server for MySQL and Installing Percona XtraBackup chapters in the documentation for installation instructions.
NOTE:
You need to add the following options to my.cnf to enable GTID support; otherwise, replication will not work, because GTID-based replication is used by the PXC cluster by default.
```ini
[mysqld]
enforce_gtid_consistency=ON
gtid_mode=ON
```
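If the source server is already running with `gtid_mode=OFF` and you cannot restart it right away, GTID mode can also be enabled online, but only one step at a time. A minimal sketch of MySQL's documented procedure (still add the options to my.cnf so they survive a restart):

```sql
-- Verify first that the workload is GTID-compatible (check the error log for warnings)
SET @@GLOBAL.enforce_gtid_consistency = WARN;
SET @@GLOBAL.enforce_gtid_consistency = ON;

-- gtid_mode only accepts one-step transitions: OFF -> OFF_PERMISSIVE -> ON_PERMISSIVE -> ON
SET @@GLOBAL.gtid_mode = OFF_PERMISSIVE;
SET @@GLOBAL.gtid_mode = ON_PERMISSIVE;

-- Before the final step, wait until this counter reaches 0
SHOW STATUS LIKE 'Ongoing_anonymous_transaction_count';
SET @@GLOBAL.gtid_mode = ON;
```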
2. Create all the users that will be used by the k8s operator. The passwords, including the one for the root user, should be the same as in the deploy/secrets.yaml file backing the k8s secret. In my case, I used our default passwords from the deploy/secrets.yaml file.
```sql
CREATE USER 'monitor'@'%' IDENTIFIED BY 'monitory' WITH MAX_USER_CONNECTIONS 100;
GRANT SELECT, PROCESS, SUPER, REPLICATION CLIENT, RELOAD ON *.* TO 'monitor'@'%';
GRANT SERVICE_CONNECTION_ADMIN ON *.* TO 'monitor'@'%';

CREATE USER 'operator'@'%' IDENTIFIED BY 'operatoradmin';
GRANT ALL ON *.* TO 'operator'@'%' WITH GRANT OPTION;

CREATE USER 'xtrabackup'@'%' IDENTIFIED BY 'backup_password';
GRANT ALL ON *.* TO 'xtrabackup'@'%';

CREATE USER 'replication'@'%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'%';
FLUSH PRIVILEGES;
```
3. Make the backup of the MySQL database using the XtraBackup tool and upload it to the S3 bucket.
```shell
# export AWS credentials
export AWS_ACCESS_KEY_ID=XXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXX

# make the backup and stream it to the S3 bucket
xtrabackup --backup --stream=xbstream --target-dir=/tmp/backups/ --extra-lsndir=/tmp/backups/ --password=root_password \
  | xbcloud put --storage=s3 --parallel=10 --md5 --s3-bucket="mysql-testing-bucket" "db-test-1"
```
Now, everything is ready to perform the restore of the backup on the target database. So, let’s get back to our k8s cluster.
Configure the Asynchronous Replication to the Target PXC Cluster
If you have a completely clean source database (without any data), you can skip the steps related to the backup and restoration of the database. Otherwise, do the following:
1. Restore the backup from the S3 bucket using the following manifest:
```yaml
# create the restore.yml file with the following content
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterRestore
metadata:
  name: restore1
spec:
  pxcCluster: cluster1
  backupSource:
    destination: s3://mysql-testing-bucket/db-test-1
    s3:
      credentialsSecret: aws-s3-secret
      region: us-east-1
```
```shell
# trigger the restoration procedure
kubectl apply -f restore.yml
```
As a result, you will have a PXC cluster with data from the source DB. Now everything is ready to configure the replication.
2. Edit the custom resource manifest deploy/cr.yaml to configure the spec.pxc.replicationChannels section.
```yaml
spec:
  ...
  pxc:
    ...
    replicationChannels:
    - name: ps_to_pxc1
      isSource: false
      sourcesList:
      - host: <source_ip>
        port: 3306
        weight: 100
```
```shell
# apply the CR
kubectl apply -f deploy/cr.yaml
```
Verify the Replication
In order to check the replication, connect to a PXC node and run the following command:
```shell
kubectl exec -it cluster1-pxc-0 -c pxc -- mysql -uroot -proot_password -e "show replica status \G"
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: <ip-of-source-db>
                  Master_User: replication
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: binlog.000004
          Read_Master_Log_Pos: 529
               Relay_Log_File: cluster1-pxc-0-relay-bin-ps_to_pxc1.000002
                Relay_Log_Pos: 738
        Relay_Master_Log_File: binlog.000004
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 529
              Relay_Log_Space: 969
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: 9741945e-148d-11ec-89e9-5ee1a3cf433f
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 3
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set: 9741945e-148d-11ec-89e9-5ee1a3cf433f:1-2
            Executed_Gtid_Set: 93f1e7bf-1495-11ec-80b2-06e6016a7c3d:1,
9647dc03-1495-11ec-a385-7e3b2511dacb:1-7,
9741945e-148d-11ec-89e9-5ee1a3cf433f:1-2
                Auto_Position: 1
         Replicate_Rewrite_DB:
                 Channel_Name: ps_to_pxc1
           Master_TLS_Version:
       Master_public_key_path:
        Get_master_public_key: 0
            Network_Namespace:
```
Also, you can verify the replication by checking that the data is changing.
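For a quick end-to-end check, you can write a row on the source and confirm it appears on the replica. A sketch of this idea; the `migration_check` schema and `ping` table are made up purely for the test:

```sql
-- On the source server:
CREATE DATABASE IF NOT EXISTS migration_check;
CREATE TABLE migration_check.ping (
  id INT PRIMARY KEY AUTO_INCREMENT,
  ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO migration_check.ping (id) VALUES (NULL);

-- On the PXC replica (e.g., via kubectl exec into cluster1-pxc-0),
-- the row should appear within a few seconds:
SELECT * FROM migration_check.ping;
```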
Promote the PXC Cluster as a Primary
As soon as you are ready to stop the replication and promote the PXC cluster in k8s to primary (i.e., your application has been reconfigured and is ready to work with the new DB), you need to perform the following simple actions:
1. Edit deploy/cr.yaml and comment out the replicationChannels section:
```yaml
spec:
  ...
  pxc:
    ...
    #replicationChannels:
    #- name: ps_to_pxc1
    #  isSource: false
    #  sourcesList:
    #  - host: <source_ip>
    #    port: 3306
    #    weight: 100
```
2. Stop the mysqld service on the source server to be sure that no new data is written.
```shell
systemctl stop mysqld
```
3. Apply the new CR to the k8s operator.
```shell
# apply the CR
kubectl apply -f deploy/cr.yaml
```
As a result, replication is stopped and read-only mode is disabled for the PXC cluster.
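You can confirm the promotion on any PXC node: with the replication channel removed, the cluster should no longer report itself as read-only, and no replica status should remain for the old channel.

```sql
-- Both should return 0 (OFF) after the channel is removed
SELECT @@global.read_only, @@global.super_read_only;

-- Should return an empty result once replication is fully torn down
SHOW REPLICA STATUS\G
```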
Conclusion
Technologies change so fast that a migration to a k8s cluster, which seems very complex at first sight, turns out to be neither especially difficult nor time-consuming. But you need to keep in mind that significant changes are involved: first, you migrate the database to a PXC cluster, which has some peculiarities of its own, and then there is Kubernetes itself. If you are ready, you can start the journey to Kubernetes right now.
The Percona team is ready to guide you during this journey. If you have any questions, please raise the topic in the community forum.
The Percona Kubernetes Operators automate the creation, alteration, or deletion of members in your Percona Distribution for MySQL, MongoDB, or PostgreSQL environment.
Learn More About Percona Kubernetes Operators