Avoiding vendor lock-in, providing a private Database-as-a-Service for internal teams, quickly deploying, testing, and destroying databases in CI/CD pipelines – these are some of the most common use cases for running databases on Kubernetes with operators. Percona Operator for PostgreSQL enables users to do exactly that and more.
Pulumi is an infrastructure-as-code tool that enables developers to write code in their favorite language (Python, Golang, JavaScript, etc.) to easily deploy infrastructure and applications to public clouds and platforms such as Kubernetes.
This blog post is a step-by-step guide on how to deploy a highly available PostgreSQL cluster on Kubernetes with our Percona Operator and Pulumi.
Desired State
We are going to provision the following resources with Pulumi:
- Google Kubernetes Engine cluster with three nodes. It can be any Kubernetes flavor.
- Percona Operator for PostgreSQL
- Highly available PostgreSQL cluster with one primary and two hot standby nodes
- Highly available pgBouncer deployment with the Load Balancer in front of it
- pgBackRest for local backups
Pulumi code can be found in this git repository.
Prepare
I will use an Ubuntu box to run Pulumi, but almost the same steps work on macOS.
Pre-install Packages
gcloud and kubectl
```shell
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt-get update
sudo apt-get install -y google-cloud-sdk docker.io kubectl jq unzip
```
python3
Pulumi allows developers to use the language of their choice to describe infrastructure and applications. I’m going to use Python. We will also need pip (the Python package manager) and venv (the virtual environment module).
```shell
sudo apt-get install python3 python3-pip python3-venv
```
Pulumi
Install Pulumi:
```shell
curl -sSL https://get.pulumi.com | sh
```
On macOS, this can be installed via Homebrew with brew install pulumi.
You will need to add .pulumi/bin to the $PATH:
```shell
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/percona/.pulumi/bin
```
Authentication
gcloud
You will need to provide access to Google Cloud to provision Google Kubernetes Engine.
```shell
gcloud config set project your-project
gcloud auth application-default login
gcloud auth login
```
Pulumi
Generate a Pulumi access token at app.pulumi.com. You will need it later to initialize the Pulumi stack.
Action
This repo has the following files:
- Pulumi.yaml – identifies that it is a folder with Pulumi project
- __main__.py – python code used by Pulumi to provision everything we need
- requirements.txt – to install required python packages
Clone the repo and go to the pg-k8s-pulumi folder:
```shell
git clone https://github.com/spron-in/blog-data
cd blog-data/pg-k8s-pulumi
```
Init the stack with:
```shell
pulumi stack init pg
```
You will be prompted here for the token you generated earlier on app.pulumi.com.
__main__.py
The Python code that Pulumi is going to process is in the __main__.py file.
Lines 1-6: importing python packages
Lines 8-31: configuration parameters for this Pulumi stack. It consists of two parts:
- Kubernetes cluster configuration. For example, the number of nodes.
- Operator and PostgreSQL cluster configuration – namespace to be deployed to, service type to expose pgBouncer, etc.
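To illustrate how these parameters fit together, here is a minimal, standalone sketch of the kind of values the stack expects. Note this is not the Pulumi API (the real __main__.py reads these through pulumi.Config()); the key names match the pulumi config set commands used later, while the defaults and validation rules here are my own assumptions:

```python
# Illustrative sketch of the stack's configuration parameters.
# In the real code these come from pulumi.Config(); a plain dict
# is used here so the sketch runs standalone.

DEFAULTS = {
    "node_count": 3,                  # nodes in the GKE cluster
    "master_version": "1.21",         # Kubernetes version
    "namespace": "percona-pg",        # where the Operator and cluster live
    "pg_cluster_name": "pulumi-pg",   # name of the PostgreSQL cluster
    "service_type": "LoadBalancer",   # how pgBouncer is exposed
}

ALLOWED_SERVICE_TYPES = {"ClusterIP", "NodePort", "LoadBalancer"}

def resolve_config(overrides=None):
    """Merge user overrides over defaults and sanity-check them."""
    cfg = {**DEFAULTS, **(overrides or {})}
    if int(cfg["node_count"]) < 1:
        raise ValueError("node_count must be >= 1")
    if cfg["service_type"] not in ALLOWED_SERVICE_TYPES:
        raise ValueError(f"unsupported service_type: {cfg['service_type']}")
    return cfg
```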
Lines 33-80: deploy GKE cluster and export its configuration
Lines 82-88: create the namespace for Operator and PostgreSQL cluster
Lines 91-426: deploy the Operator. In reality, it just mirrors the operator.yaml from our Operator.
Lines 429-444: create the secret object that allows you to set the password for pguser to connect to the database
Lines 445-557: deploy PostgreSQL cluster. It is a JSON version of cr.yaml from our Operator repository
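Translating cr.yaml into Pulumi code boils down to expressing the manifest as nested Python dicts and lists, which apiextensions.CustomResource accepts directly. A heavily stripped-down sketch (the field names are abbreviated from the Operator's cr.yaml and may not be exact; the real spec in __main__.py is much larger):

```python
import json

# Stripped-down sketch of the PerconaPGCluster custom resource expressed
# as a Python structure instead of YAML.
pg_cluster = {
    "apiVersion": "pg.percona.com/v1",
    "kind": "PerconaPGCluster",
    "metadata": {"name": "pulumi-pg", "namespace": "percona-pg"},
    "spec": {
        # one primary plus two hot standby replicas
        "pgReplicas": {"hotStandby": {"size": 2}},
        # pgBouncer deployment in front of the cluster
        "pgBouncer": {"size": 3},
    },
}

# The structure is plain JSON-serializable data.
print(json.dumps(pg_cluster, indent=2))
```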
Line 560: exports Kubernetes configuration so that it can be reused later
Deploy
At first, we will set the configuration for this stack. Execute the following commands:
```shell
pulumi config set gcp:project YOUR_PROJECT
pulumi config set gcp:zone us-central1-a
pulumi config set node_count 3
pulumi config set master_version 1.21
pulumi config set namespace percona-pg
pulumi config set pg_cluster_name pulumi-pg
pulumi config set service_type LoadBalancer
pulumi config set pg_user_password mySuperPass
```
These commands set the following:
- GCP project where GKE is going to be deployed
- GCP zone
- Number of nodes in a GKE cluster
- Kubernetes version
- Namespace to run PostgreSQL cluster
- The name of the cluster
- Expose pgBouncer with a LoadBalancer object
- The password for the pguser user
Deploy with the following command:
```shell
$ pulumi up
Previewing update (pg)

View Live: https://app.pulumi.com/spron-in/percona-pg-k8s/pg/previews/d335d117-b2ce-463b-867d-ad34cf456cb3

     Type                                                            Name                                Plan       Info
 +   pulumi:pulumi:Stack                                             percona-pg-k8s-pg                   create     1 message
 +   ├─ random:index:RandomPassword                                  pguser_password                     create
 +   ├─ random:index:RandomPassword                                  password                            create
 +   ├─ gcp:container:Cluster                                        gke-cluster                         create
 +   ├─ pulumi:providers:kubernetes                                  gke_k8s                             create
 +   ├─ kubernetes:core/v1:ServiceAccount                            pgoPgo_deployer_saServiceAccount    create
 +   ├─ kubernetes:core/v1:Namespace                                 pgNamespace                         create
 +   ├─ kubernetes:batch/v1:Job                                      pgoPgo_deployJob                    create
 +   ├─ kubernetes:core/v1:ConfigMap                                 pgoPgo_deployer_cmConfigMap         create
 +   ├─ kubernetes:core/v1:Secret                                    percona_pguser_secretSecret         create
 +   ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding   pgo_deployer_crbClusterRoleBinding  create
 +   ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole          pgo_deployer_crClusterRole          create
 +   └─ kubernetes:pg.percona.com/v1:PerconaPGCluster                my_cluster_name                     create

Diagnostics:
  pulumi:pulumi:Stack (percona-pg-k8s-pg):
    E0225 14:19:49.739366105   53802 fork_posix.cc:70] Fork support is only compatible with the epoll1 and poll polling strategies

Do you want to perform this update? yes
Updating (pg)

View Live: https://app.pulumi.com/spron-in/percona-pg-k8s/pg/updates/5

     Type                                                            Name                                Status     Info
 +   pulumi:pulumi:Stack                                             percona-pg-k8s-pg                   created    1 message
 +   ├─ random:index:RandomPassword                                  pguser_password                     created
 +   ├─ random:index:RandomPassword                                  password                            created
 +   ├─ gcp:container:Cluster                                        gke-cluster                         created
 +   ├─ pulumi:providers:kubernetes                                  gke_k8s                             created
 +   ├─ kubernetes:core/v1:ServiceAccount                            pgoPgo_deployer_saServiceAccount    created
 +   ├─ kubernetes:core/v1:Namespace                                 pgNamespace                         created
 +   ├─ kubernetes:core/v1:ConfigMap                                 pgoPgo_deployer_cmConfigMap         created
 +   ├─ kubernetes:batch/v1:Job                                      pgoPgo_deployJob                    created
 +   ├─ kubernetes:core/v1:Secret                                    percona_pguser_secretSecret         created
 +   ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRole          pgo_deployer_crClusterRole          created
 +   ├─ kubernetes:rbac.authorization.k8s.io/v1:ClusterRoleBinding   pgo_deployer_crbClusterRoleBinding  created
 +   └─ kubernetes:pg.percona.com/v1:PerconaPGCluster                my_cluster_name                     created

Diagnostics:
  pulumi:pulumi:Stack (percona-pg-k8s-pg):
    E0225 14:20:00.211695433   53839 fork_posix.cc:70] Fork support is only compatible with the epoll1 and poll polling strategies

Outputs:
    kubeconfig: "[secret]"

Resources:
    + 13 created

Duration: 5m30s
```
Verify
Get kubeconfig first:
```shell
pulumi stack output kubeconfig --show-secrets > ~/.kube/config
```
Check that the Pods of your PostgreSQL cluster are up and running:
```shell
$ kubectl -n percona-pg get pods
NAME                                             READY   STATUS      RESTARTS   AGE
backrest-backup-pulumi-pg-dbgsp                  0/1     Completed   0          64s
pgo-deploy-8h86n                                 0/1     Completed   0          4m9s
postgres-operator-5966f884d4-zknbx               4/4     Running     1          3m27s
pulumi-pg-787fdbd8d9-d4nvv                       1/1     Running     0          2m12s
pulumi-pg-backrest-shared-repo-f58bc7657-2swvn   1/1     Running     0          2m38s
pulumi-pg-pgbouncer-6b6dc4564b-bh56z             1/1     Running     0          81s
pulumi-pg-pgbouncer-6b6dc4564b-vpppx             1/1     Running     0          81s
pulumi-pg-pgbouncer-6b6dc4564b-zkdwj             1/1     Running     0          81s
pulumi-pg-repl1-58d578cf49-czm54                 0/1     Running     0          46s
pulumi-pg-repl2-7888fbfd47-h98f4                 0/1     Running     0          46s
pulumi-pg-repl3-cdd958bd9-tf87k                  1/1     Running     0          46s
```
Get the IP address of the pgBouncer LoadBalancer:
```shell
$ kubectl -n percona-pg get services
NAME                  TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)          AGE
…
pulumi-pg-pgbouncer   LoadBalancer   10.20.33.122   35.188.81.20   5432:32042/TCP   3m17s
```
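If you prefer to script this step, the external IP can be extracted from the JSON form of the Service object (kubectl ... -o json). A sketch against a hard-coded, heavily trimmed sample of that JSON; in practice you would read it from kubectl's stdout:

```python
import json

# Trimmed sample of what `kubectl -n percona-pg get service
# pulumi-pg-pgbouncer -o json` returns.
service_json = """
{
  "status": {
    "loadBalancer": {
      "ingress": [{"ip": "35.188.81.20"}]
    }
  }
}
"""

svc = json.loads(service_json)
external_ip = svc["status"]["loadBalancer"]["ingress"][0]["ip"]
print(external_ip)  # 35.188.81.20
```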
You can connect to your PostgreSQL cluster through this IP address. Use the pguser password that was set earlier with pulumi config set pg_user_password:
```shell
psql -h 35.188.81.20 -p 5432 -U pguser pgdb
```
Clean up
To delete everything, run the following commands:
```shell
pulumi destroy
pulumi stack rm
```
Tricks and Quirks
Pulumi Converter
kube2pulumi is a huge help if you already have YAML manifests. You don’t need to rewrite everything from scratch; just convert your YAMLs to Pulumi code. This is what I did for operator.yaml.
apiextensions.CustomResource
There are two ways for Custom Resource management in Pulumi:
- apiextensions.CustomResource
- crd2pulumi
crd2pulumi generates libraries/classes out of Custom Resource Definitions and lets you create custom resources with them later. I found it a bit complicated, and it also lacks documentation.
apiextensions.CustomResource on the other hand allows you to create Custom Resources by specifying them as JSON. It is much easier and requires less manipulation. See lines 446-557 in my __main__.py.
True/False in JSON
I have the following in my Custom Resource definition in Pulumi code:
```python
perconapg = kubernetes.apiextensions.CustomResource(
    …
    spec= {
        …
        "disableAutofail": False,
        "tlsOnly": False,
        "standby": False,
        "pause": False,
        "keepData": True,
```
Be sure to use the boolean type of the language of your choice, not the “true”/“false” strings. In my case, using the strings caused a failure, as the Operator expects booleans.
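The difference is easy to see with the standard json module: native Python booleans serialize to JSON true/false, while the strings stay quoted, which is exactly what the Operator rejects:

```python
import json

# Native Python booleans become JSON booleans...
native = json.dumps({"disableAutofail": False})
print(native)   # {"disableAutofail": false}

# ...but the string "false" stays a quoted JSON string, which fails
# validation where a boolean is expected.
stringy = json.dumps({"disableAutofail": "false"})
print(stringy)  # {"disableAutofail": "false"}
```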
Depends On…
Pulumi makes its own decisions on the ordering of provisioning resources. You can enforce the order by specifying dependencies explicitly.
For example, I’m ensuring that Operator and Secret are created before the Custom Resource:
```python
}, opts=ResourceOptions(
    provider=k8s_provider,
    depends_on=[pgo_pgo_deploy_job, percona_pg_cluster1_pguser_secret_secret],
)
```