This blog post discusses installing Percona Monitoring and Management on Google Container Engine.
I am working with a client that is on Google Cloud Platform (GCP) and wants to use Percona Monitoring and Management (PMM). They liked the idea of using Google Container Engine (GKE) to manage the Docker container that pmm-server runs in.
The regular install instructions are here: https://www.percona.com/doc/percona-monitoring-and-management/install.html
Since Google Container Engine runs on Kubernetes, we had to make some interesting changes to the server installation instructions.
First, you will want to open the Google Cloud Shell. This is done by clicking the Cloud Shell button at the top right of the console when logged into your GCP project.
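If you would rather work from your own machine than from Cloud Shell, the same commands should also work from a local terminal with the Google Cloud SDK installed. A rough sketch (the exact install steps vary by platform):

gcloud init                          # authenticate and select the project
gcloud components install kubectl    # install kubectl, used later to manage the cluster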
Once you are in the shell, you just need to run some commands to get up and running.
Let’s set our default compute zone:
manjot_singh@googleproject:~$ gcloud config set compute/zone asia-east1-c
Updated property [compute/zone].
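The zone is all this walkthrough needs, but if you also want a default region set, the matching command is shown below (asia-east1 is the region that contains the zone above):

gcloud config set compute/region asia-east1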
Then let’s set up our auth:
manjot_singh@googleproject:~$ gcloud auth application-default login
...
These credentials will be used by any library that requests Application Default Credentials.
Now we are ready to go.
Normally, we create a persistent container called pmm-data to hold the data the server collects, so that it survives container deletions and upgrades. On GCP, we will instead create persistent disks, using the minimum size Google recommends for each.
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-prom-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-prom-data-pv].
NAME              ZONE          SIZE_GB  TYPE         STATUS
pmm-prom-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-consul-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-consul-data-pv].
NAME                ZONE          SIZE_GB  TYPE         STATUS
pmm-consul-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-mysql-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-mysql-data-pv].
NAME               ZONE          SIZE_GB  TYPE         STATUS
pmm-mysql-data-pv  asia-east1-c  200      pd-standard  READY
manjot_singh@googleproject:~$ gcloud compute disks create --size=200GB --zone=asia-east1-c pmm-grafana-data-pv
Created [https://www.googleapis.com/compute/v1/projects/googleproject/zones/asia-east1-c/disks/pmm-grafana-data-pv].
NAME                 ZONE          SIZE_GB  TYPE         STATUS
pmm-grafana-data-pv  asia-east1-c  200      pd-standard  READY
You can ignore the messages about the new disks needing to be formatted; Kubernetes formats blank GCE persistent disks the first time it mounts them. We are now ready to create our Kubernetes cluster:
manjot_singh@googleproject:~$ gcloud container clusters create pmm-server --num-nodes 1 --machine-type n1-standard-2
Creating cluster pmm-server...done.
Created [https://container.googleapis.com/v1/projects/googleproject/zones/asia-east1-c/clusters/pmm-server].
kubeconfig entry generated for pmm-server.
NAME        ZONE          MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
pmm-server  asia-east1-c  1.4.6           999.911.999.91  n1-standard-2  1.4.6         1          RUNNING
You should now see something like:
manjot_singh@googleproject:~$ gcloud compute instances list
NAME                                       ZONE          MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
gke-pmm-server-default-pool-73b3f656-20t0  asia-east1-c  n1-standard-2               10.14.10.14  911.119.999.11  RUNNING
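If you end up running kubectl from a different shell session or machine than the one that created the cluster, you may first need to pull down credentials for it, along the lines of:

gcloud container clusters get-credentials pmm-server --zone asia-east1-c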
Now that our container manager is up, we need to create two configs for the “pod” we are creating to run our container. The first is used only to initialize the server and move the container's data directories onto the persistent disks; the second is the actual running server. The only difference between them is the mountPath values: the init config mounts the disks at temporary paths (/opt/prometheus/d, /opt/c, /var/lib/m, /var/lib/g) alongside the container's original data directories so the data can be copied over, while the final config mounts them at the real paths.
manjot_singh@googleproject:~$ vi pmm-server-init.json

{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "pmm-server",
        "labels": {
            "name": "pmm-server"
        }
    },
    "spec": {
        "containers": [{
            "name": "pmm-server",
            "image": "percona/pmm-server:1.0.6",
            "env": [{
                "name": "SERVER_USER",
                "value": "http_user"
            },{
                "name": "SERVER_PASSWORD",
                "value": "http_password"
            },{
                "name": "ORCHESTRATOR_USER",
                "value": "orchestrator"
            },{
                "name": "ORCHESTRATOR_PASSWORD",
                "value": "orch_pass"
            }],
            "ports": [{
                "containerPort": 80
            }],
            "volumeMounts": [{
                "mountPath": "/opt/prometheus/d",
                "name": "pmm-prom-data"
            },{
                "mountPath": "/opt/c",
                "name": "pmm-consul-data"
            },{
                "mountPath": "/var/lib/m",
                "name": "pmm-mysql-data"
            },{
                "mountPath": "/var/lib/g",
                "name": "pmm-grafana-data"
            }]
        }],
        "restartPolicy": "Always",
        "volumes": [{
            "name": "pmm-prom-data",
            "gcePersistentDisk": {
                "pdName": "pmm-prom-data-pv",
                "fsType": "ext4"
            }
        },{
            "name": "pmm-consul-data",
            "gcePersistentDisk": {
                "pdName": "pmm-consul-data-pv",
                "fsType": "ext4"
            }
        },{
            "name": "pmm-mysql-data",
            "gcePersistentDisk": {
                "pdName": "pmm-mysql-data-pv",
                "fsType": "ext4"
            }
        },{
            "name": "pmm-grafana-data",
            "gcePersistentDisk": {
                "pdName": "pmm-grafana-data-pv",
                "fsType": "ext4"
            }
        }]
    }
}
manjot_singh@googleproject:~$ vi pmm-server.json

{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "pmm-server",
        "labels": {
            "name": "pmm-server"
        }
    },
    "spec": {
        "containers": [{
            "name": "pmm-server",
            "image": "percona/pmm-server:1.0.6",
            "env": [{
                "name": "SERVER_USER",
                "value": "http_user"
            },{
                "name": "SERVER_PASSWORD",
                "value": "http_password"
            },{
                "name": "ORCHESTRATOR_USER",
                "value": "orchestrator"
            },{
                "name": "ORCHESTRATOR_PASSWORD",
                "value": "orch_pass"
            }],
            "ports": [{
                "containerPort": 80
            }],
            "volumeMounts": [{
                "mountPath": "/opt/prometheus/data",
                "name": "pmm-prom-data"
            },{
                "mountPath": "/opt/consul-data",
                "name": "pmm-consul-data"
            },{
                "mountPath": "/var/lib/mysql",
                "name": "pmm-mysql-data"
            },{
                "mountPath": "/var/lib/grafana",
                "name": "pmm-grafana-data"
            }]
        }],
        "restartPolicy": "Always",
        "volumes": [{
            "name": "pmm-prom-data",
            "gcePersistentDisk": {
                "pdName": "pmm-prom-data-pv",
                "fsType": "ext4"
            }
        },{
            "name": "pmm-consul-data",
            "gcePersistentDisk": {
                "pdName": "pmm-consul-data-pv",
                "fsType": "ext4"
            }
        },{
            "name": "pmm-mysql-data",
            "gcePersistentDisk": {
                "pdName": "pmm-mysql-data-pv",
                "fsType": "ext4"
            }
        },{
            "name": "pmm-grafana-data",
            "gcePersistentDisk": {
                "pdName": "pmm-grafana-data-pv",
                "fsType": "ext4"
            }
        }]
    }
}
Then create the init pod:
manjot_singh@googleproject:~$ kubectl create -f pmm-server-init.json
pod "pmm-server" created
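Before exec-ing into the container in the next step, it is worth waiting until the pod reports a Running status (and the persistent disks have attached), which you can check with:

kubectl get pods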
Now we need to move data to persistent disks:
manjot_singh@googleproject:~$ kubectl exec -it pmm-server bash
root@pmm-server:/opt# supervisorctl stop grafana
grafana: stopped
root@pmm-server:/opt# supervisorctl stop prometheus
prometheus: stopped
root@pmm-server:/opt# supervisorctl stop consul
consul: stopped
root@pmm-server:/opt# supervisorctl stop mysql
mysql: stopped
root@pmm-server:/opt# mv consul-data/* c/
root@pmm-server:/opt# chown pmm.pmm c
root@pmm-server:/opt# cd prometheus/
root@pmm-server:/opt/prometheus# mv data/* d/
root@pmm-server:/opt/prometheus# chown pmm.pmm d
root@pmm-server:/opt/prometheus# cd /var/lib
root@pmm-server:/var/lib# mv mysql/* m/
root@pmm-server:/var/lib# chown mysql.mysql m
root@pmm-server:/var/lib# mv grafana/* g/
root@pmm-server:/var/lib# chown grafana.grafana g
root@pmm-server:/var/lib# exit
manjot_singh@googleproject:~$ kubectl delete pods pmm-server
pod "pmm-server" deleted
Now recreate the pmm-server container with the actual configuration:
manjot_singh@googleproject:~$ kubectl create -f pmm-server.json
pod "pmm-server" created
It’s up!
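If you want to double-check that all of the PMM services came back up on the persistent disks, you can query supervisord from outside the container (this assumes the same supervisord layout we used in the init steps above):

kubectl exec -it pmm-server -- supervisorctl status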
Now let’s get access to it by exposing it to the internet:
manjot_singh@googleproject:~$ kubectl expose deployment pmm-server --type=LoadBalancer
service "pmm-server" exposed
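One note here: this walkthrough created a bare pod rather than a deployment, so if the expose deployment form complains that no deployment named pmm-server exists, exposing the pod directly should achieve the same result:

kubectl expose pod pmm-server --port=80 --type=LoadBalancer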
You can get more information on this by running:
manjot_singh@googleproject:~$ kubectl describe services pmm-server
Name:                   pmm-server
Namespace:              default
Labels:                 run=pmm-server
Selector:               run=pmm-server
Type:                   LoadBalancer
IP:                     10.3.10.3
Port:                   <unset>  80/TCP
NodePort:               <unset>  31757/TCP
Endpoints:              10.0.0.8:80
Session Affinity:       None
Events:
  FirstSeen  LastSeen  Count  From                   SubobjectPath  Type    Reason                Message
  ---------  --------  -----  ----                   -------------  ----    ------                -------
  22s        22s       1      {service-controller }                 Normal  CreatingLoadBalancer  Creating load balancer
To find the public IP of your PMM server, look under “EXTERNAL-IP”:
manjot_singh@googleproject:~$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP      PORT(S)   AGE
kubernetes   10.3.10.3    <none>           443/TCP   7m
pmm-server   10.3.10.99   999.911.991.91   80/TCP    1m
That’s it, just visit the external IP in your browser and you should see the PMM landing page!
One of the things we didn't resolve was accessing the pmm-server container from inside the VPC. The client had to go out over the open internet and reach PMM via the public IP. I hope to work on this some more and resolve it in the future; one idea is sketched below.
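One possible approach, sketched here but not tested as part of this setup, is to create a second service that uses GKE's internal load balancer annotation so the service only gets an address inside the VPC. The cloud.google.com/load-balancer-type annotation comes from Google's documentation for newer GKE versions and may not have been available on the older cluster version used in this walkthrough, so treat this purely as a starting point:

{
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "name": "pmm-server-internal",
        "annotations": {
            "cloud.google.com/load-balancer-type": "Internal"
        }
    },
    "spec": {
        "type": "LoadBalancer",
        "selector": {
            "name": "pmm-server"
        },
        "ports": [{
            "port": 80,
            "targetPort": 80
        }]
    }
}

Saved as pmm-server-internal.json, it would be created the same way as the pod definitions above, with kubectl create -f pmm-server-internal.json, and the selector matches the name=pmm-server label on the pod.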
I have also talked to our team about making persistent-disk mounts easier, so that we can get by with fewer mounts and simplify the configuration and setup.
Hi – if you’re deploying on Google with Kubernetes, did you consider using Sysdig for monitoring? That would provide visibility into the entire infrastructure, containers, and network, as well as application-specific metrics. https://www.sysdig.com
Also – it looks like your first paragraph got cut off.
Manjot – It would be awesome if you could update this setup to use kubernetes/helm + StatefulSets. That should be able to make it a one-command install.
+1
When I run:
kubectl create -f pmm-server-init.json
I get:
error: json: line 1: invalid character ‘Â’ looking for beginning of object key string
Seems like an issue with copy and paste on your platform. Did you try retyping it?
Same here… I tried saving the file locally and uploading but same error as @Elisha Kramer
How did you resolve this issue?