In our last post, we looked into the lifecycle of applications in Kubernetes. We saw that Kubernetes doesn’t handle database backups itself.
This is where Kubernetes Operators come into play. They extend Kubernetes with additional functionality, enabling it to set up, configure, and manage complex applications like databases within a Kubernetes environment on behalf of the user.
In this blog post, we will focus on the Percona Operator for MySQL. This operator is built on the Percona XtraDB Cluster and shows how operators can make certain tasks easier for us. We will go through the steps of backing up our database to Amazon S3 and restoring it together.
Prerequisites:
- To manage and deploy applications on Kubernetes, you will need the kubectl tool. If it’s not installed, follow the official installation instructions.
- You will need a Kubernetes environment. For testing, you can set it up on Minikube or choose any cloud provider. See our list of officially supported platforms. I have a Google Kubernetes Engine (GKE) cluster with three nodes for this demo.
- You need to install and deploy our Percona XtraDB Cluster Operator. You can use our Quick Install.
- Don’t forget to clone our Percona Operator for MySQL based on Percona XtraDB Cluster from GitHub.
```shell
git clone https://github.com/percona/percona-xtradb-cluster-operator.git
cd percona-xtradb-cluster-operator
```
We will cover four steps for this demo:
- Connect to the MySQL instance in the Percona XtraDB Cluster.
- Add sample data to the database.
- Set up and carry out a backup to Amazon S3.
- Restore the database.
1. Connect to the MySQL instance in Percona XtraDB Cluster
To connect to the Percona XtraDB Cluster, we need the password from the root user. This password is kept in the Secrets object. Let’s list the Secrets objects with kubectl:
```shell
kubectl get secrets
NAME               TYPE     DATA   AGE
cluster1-secrets   Opaque          69h
```
Now, let’s get the password from the root user by using the following commands:
```shell
kubectl get secret cluster1-secrets -o jsonpath='{.data.root}' | base64 --decode && echo
^J$X0n[(IbQT$Q7*   # <<=== This output is the password of the user "root"
```
Now, we start a container with the MySQL tool and connect its console output to your terminal. Use this command to do it, and name the new Pod percona-client:
```shell
kubectl run -i --rm --tty percona-client --image=percona:8.0 --restart=Never -- bash -il
```
Once we are inside the pod, run the mysql client from the ‘percona-client’ command shell to connect to the Percona XtraDB Cluster. Use your cluster name and the password you got from the Secret.
```shell
[mysql@percona-client /]$ mysql -h cluster1-haproxy -uroot -p'^J$X0n[(IbQT$Q7*'
mysql>
```
2. Add sample data to the database
Now that we have Percona XtraDB Cluster up and running, let’s create a new database and insert some data for our experiments:
```sql
mysql> CREATE DATABASE mydb;
Query OK, 1 row affected (0.02 sec)

mysql> USE mydb;
Database changed

mysql> CREATE TABLE extraordinary_gentlemen (
    ->   id int NOT NULL AUTO_INCREMENT,
    ->   name varchar(255) NOT NULL,
    ->   occupation varchar(255),
    ->   PRIMARY KEY (id)
    -> );
Query OK, 0 rows affected (0.03 sec)
```
```sql
mysql> INSERT INTO extraordinary_gentlemen (name, occupation)
    -> VALUES
    -> ("Allan Quartermain", "hunter"),
    -> ("Nemo", "fish"),
    -> ("Dorian Gray", NULL),
    -> ("Tom Sawyer", "secret service agent");
Query OK, 4 rows affected (0.00 sec)
Records: 4  Duplicates: 0  Warnings: 0
```
Let’s see what we have in our extraordinary_gentlemen table:
```sql
mysql> SELECT * FROM extraordinary_gentlemen;
+----+-------------------+----------------------+
| id | name              | occupation           |
+----+-------------------+----------------------+
|  1 | Allan Quartermain | hunter               |
|  4 | Nemo              | fish                 |
|  7 | Dorian Gray       | NULL                 |
| 10 | Tom Sawyer        | secret service agent |
+----+-------------------+----------------------+
4 rows in set (0.00 sec)
```
3. Set up and carry out a backup
For this demo, ensure you have a bucket in Amazon S3. Additionally, your AWS user needs permission to manage the bucket; the ‘AmazonS3FullAccess’ managed policy is the simplest option.
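If you prefer a narrower grant than ‘AmazonS3FullAccess’, a bucket-scoped IAM policy along these lines is usually enough for backups. This is a sketch, not an official Percona recommendation; the bucket name is the one from this demo, so adjust it to yours:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "OperatorBackupAccess",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::community-bucket-demo",
        "arn:aws:s3:::community-bucket-demo/*"
      ]
    }
  ]
}
```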
This is what my S3 configuration looks like on Amazon.
In the repository we cloned earlier, there is a folder named ‘deploy’. Inside this, we will find a file called cr.yaml. Now, we will use this file to set up the backup.
Find the ‘storages’ section in the file:
```yaml
storages:
  s3-demo:                 # <<=== Name of your preference for your storage
    type: s3               # <<=== Type for AWS S3 backups
    verifyTLS: true
    s3:
      bucket: community-bucket-demo              # <<=== Name of your S3 bucket
      credentialsSecret: my-cluster-name-backup-s3   # <<=== Name of your credentials Secret
      region: us-east-1    # <<=== Region where your bucket is located
```
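The same ‘storages’ entry can also drive scheduled backups. As a sketch (the schedule name and cron expression below are my own choices, and the exact field layout may vary between operator versions, so check the sample cr.yaml shipped with your release), a daily backup that keeps the last five copies would look roughly like this in the ‘backup’ section:

```yaml
backup:
  schedule:
    - name: daily-backup      # <<=== name of your choice
      schedule: "0 0 * * *"   # <<=== cron expression: every day at midnight
      keep: 5                 # <<=== number of backups to retain
      storageName: s3-demo    # <<=== must match the storage defined above
```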
After setting up the cr.yaml, let’s move on to configuring the backup files.
Navigate to the deploy/backup directory in the same repository we cloned. Here, we find the necessary files for backing up and restoring.
Begin with the backup-secret-s3.yaml file, which holds your AWS access key and secret access key. Here’s an example of what it should look like:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-name-backup-s3   # <<=== This name is referenced in cr.yaml
type: Opaque
data:
  AWS_ACCESS_KEY_ID: QUtJVEJNBLAHDSJHFBSJDH=
  AWS_SECRET_ACCESS_KEY: K3dlksdjaBLALLUHUYSHSeFQ4RVl6T2EydTVHLw==
```
Replace the values in backup-secret-s3.yaml with your own access key and secret access key. Before adding them to this file, make sure to base64-encode them using these commands:
```shell
# Use this on macOS
echo -n 'YOUR_AWS_ACCESS_KEY_ID' | base64
echo -n 'YOUR_AWS_SECRET_ACCESS_KEY' | base64

# Use this on Linux
echo -n 'YOUR_AWS_ACCESS_KEY_ID' | base64 --wrap=0
echo -n 'YOUR_AWS_SECRET_ACCESS_KEY' | base64 --wrap=0
```
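A quick way to sanity-check the encoding before pasting it into the Secret is to decode it back and compare. A minimal sketch (the key below is a made-up example, not a real credential):

```shell
KEY='AKIAMADEUPEXAMPLEKEY'                    # hypothetical access key, not a real one
ENC=$(printf '%s' "$KEY" | base64)            # this is what goes into backup-secret-s3.yaml
DEC=$(printf '%s' "$ENC" | base64 --decode)   # decode it back

# The round trip must return the original value
[ "$DEC" = "$KEY" ] && echo "round-trip OK"   # prints: round-trip OK
```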
Now, let’s proceed with the backup process itself. In the same directory, find the file named backup.yaml.
We can modify the fields name, pxcCluster, and storageName in the backup.yaml file. Make sure ‘storageName’ matches the storage name you previously entered in the cr.yaml file.
```yaml
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterBackup
metadata:
  name: demo-backup-1     # <<=== This is the name of your backup
spec:
  pxcCluster: cluster1    # <<=== keep this as the name of your cluster
  storageName: s3-demo    # <<=== this must match the storage name in cr.yaml
```
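One optional detail worth knowing: depending on your operator version, you may be able to add a finalizer to the backup object so that deleting the PerconaXtraDBClusterBackup resource also removes the backup data from S3. Check the documentation for your release before relying on this; the finalizer name below is the one used by older versions:

```yaml
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterBackup
metadata:
  name: demo-backup-1
  finalizers:
    - delete-s3-backup    # <<=== remove backup data from S3 when this resource is deleted
spec:
  pxcCluster: cluster1
  storageName: s3-demo
```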
After configuring these three files, let’s apply them using the kubectl command in the following order.
```shell
kubectl apply -f backup-secret-s3.yaml
kubectl apply -f cr.yaml
kubectl apply -f backup.yaml
```
To confirm that our backup was successful, we can list the backups:
```shell
kubectl get pxc-backup
NAME            CLUSTER    STORAGE   DESTINATION                                                    STATUS      COMPLETED   AGE
demo-backup-1   cluster1   s3-demo   s3://community-bucket-demo/cluster1-2024-01-28-17:06:41-full   Succeeded   118s        2m39s
```
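If you need just the S3 destination for scripting, you can pull it out of that listing with awk. This sketch parses the sample row shown above; in real use you would pipe `kubectl get pxc-backup --no-headers` into the same filter:

```shell
# Sample row from `kubectl get pxc-backup --no-headers` (taken from the output above)
ROW='demo-backup-1   cluster1   s3-demo   s3://community-bucket-demo/cluster1-2024-01-28-17:06:41-full   Succeeded   118s   2m39s'

# Column 4 is the DESTINATION; filter by backup name in column 1
DEST=$(echo "$ROW" | awk '$1 == "demo-backup-1" { print $4 }')
echo "$DEST"   # prints: s3://community-bucket-demo/cluster1-2024-01-28-17:06:41-full
```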
If the status shows Succeeded, our backup was successful! Now, let’s take a look at our bucket on Amazon S3. Woohoo! We have a backup ready in S3!
4. Restore the database
Restoring is quite straightforward. In the same directory, we can find a file named restore.yaml. Open this file and update the backupName field. Our backup was named demo-backup-1, so let’s change it to that.
```yaml
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterRestore
metadata:
  name: restore1
spec:
  pxcCluster: cluster1       # <<=== keep this as the name of your cluster
  backupName: demo-backup-1  # <<=== Name of our backup
```
For testing purposes, you can delete the database we created earlier by running DROP DATABASE mydb;.
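From the same percona-client session we opened earlier, dropping the database and confirming it is gone looks like this:

```sql
mysql> DROP DATABASE mydb;
Query OK, 1 row affected (0.03 sec)

mysql> SHOW DATABASES LIKE 'mydb';
Empty set (0.00 sec)
```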
After your file is ready, let’s apply the changes:
1 kubectl apply -f restore.yaml
This will take the backup from S3 and restore our data to its previous state.
To verify if the restore was successful, let’s use this command:
```shell
kubectl get pxc-restore
NAME       CLUSTER    STATUS      COMPLETED   AGE
restore1   cluster1   Succeeded   22s         4m37s
```
If we list our databases again, we should see mydb back, along with all the data we initially inserted. Our restore worked perfectly!
```sql
mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)
```
Conclusion
Kubernetes operators are incredible for automating database operations, tasks that are often complex and not directly handled by Kubernetes itself. Operators make it easy for us; they take care of essential tasks like backups and restores, which are crucial in database management.
Did this demo work well for you? If you have any issues, please reach out on our community forum at Forums.percona.com. We’re always here to help.
If you are new to Kubernetes Operators, read how Cluster Status works in Percona Kubernetes Operators.
And if you are looking for a version with a graphical interface, we have Percona Everest, our cloud-native database platform, currently in Alpha stage.
See you in our next blog post!
The Percona Kubernetes Operators let you easily create and manage highly available, enterprise-ready MySQL, PostgreSQL, and MongoDB clusters on Kubernetes. Experience hassle-free database management and provisioning without the need for manual maintenance or custom in-house scripts.
Learn More About Percona Kubernetes Operators