Using Percona Kubernetes Operators With K3s

Recently Peter provided an extensive tutorial on how to use Percona Kubernetes Operators on Minikube: Exploring MySQL on Kubernetes with Minikube. Minikube is a great option for local deployment and for getting familiar with Kubernetes on a single developer's box.

But what if we want to get experience with setups that are closer to production Kubernetes and use multiple servers for deployments?

I think there is an alternative that is also easy to set up and get started with: K3s. In fact, it is so easy that this first part will be very short; there is just one requirement we need to resolve, and that one is easy as well.

So let’s assume we have four servers we want to use for our Kubernetes deployments: one master and three worker nodes. In my case:

beast-node7-ubuntu
beast-node8-ubuntu (master)
beast-node5-ubuntu
beast-node6-ubuntu

For step one, on the master node we execute the K3s server installation.
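A typical invocation, assuming the standard K3s install script from get.k3s.io (the exact options may vary with your environment):

curl -sfL https://get.k3s.io | sh -

This installs K3s and starts it in server mode on the master.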

For step two, we need to prepare a script that takes two parameters: the IP address of the master and the master's token. Finding the token is probably the most complicated part of this setup; it is stored in the file /var/lib/rancher/k3s/server/node-token.
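For example, the token can be read on the master with:

sudo cat /var/lib/rancher/k3s/server/node-token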

With these parameters in hand, we can write the script for the other nodes.
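A minimal sketch of such a script, assuming the master is reachable on the default API port 6443 (the IP address and token below are placeholders to replace with your own values):

#!/bin/bash
# MASTER_IP: the IP address of the master node (placeholder value)
# NODE_TOKEN: the contents of /var/lib/rancher/k3s/server/node-token (placeholder value)
MASTER_IP=192.168.1.10
NODE_TOKEN=REPLACE_WITH_TOKEN

# Install K3s in agent mode and join it to the master
curl -sfL https://get.k3s.io | K3S_URL=https://${MASTER_IP}:6443 K3S_TOKEN=${NODE_TOKEN} sh -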

After executing this script on the other nodes, we will have our Kubernetes cluster running.
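One way to confirm this is to list the nodes from the master (node names and status will depend on your setup):

sudo k3s kubectl get nodes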

This is sufficient for a basic Kubernetes setup, but for our Kubernetes Operators, we need an extra step: Dynamic Volume Provisioning, because our Operators request volumes to store data.

Actually, after further research, it turns out that the Operators will use K3s's default local-path provisioner, which satisfies their storage requirements.
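One way to check this, assuming a standard K3s installation:

sudo k3s kubectl get storageclass
# expect a storage class named local-path marked as (default)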

After this, the K3s cluster should be ready to deploy our Operators, which we will review in the next part of this series. Stay tuned!
