It’s a common scenario to have a Percona Monitoring and Management (PMM) server running on Kubernetes and also desire to monitor databases that are running outside the Kubernetes cluster. The Ingress NGINX Controller is one of the most popular choices for managing the inbound traffic to K8s. It acts as a reverse proxy and load balancer and is well-known for its performance and scalability. Since PMM uses gRPC traffic for communication between the client and server, we need to make sure that it’s allowed; otherwise, we will get connection issues:
```
Mar 27 01:35:24 pmm-client pmm-agent: time="2024-03-27T01:35:24.600+00:00" level=error msg="Failed to establish two-way communication channel: context canceled." component=client
Mar 27 01:35:24 pmm-client pmm-agent: time="2024-03-27T01:35:24.600+00:00" level=error msg="Client error: failed to receive message: rpc error: code = Canceled desc = context canceled"
```
For more information on ports and protocols used by PMM (both the server and the client), check the online documentation manual:
https://docs.percona.com/percona-monitoring-and-management/setting-up/server/network.html
Installing PMM server
Installing a PMM server on K8s is as easy as executing the following commands. First, we need to create the secret:
```
$ cat <<EOF | kubectl create -f -
> apiVersion: v1
> kind: Secret
> metadata:
>   name: pmm-secret
>   labels:
>     app.kubernetes.io/name: pmm
> type: Opaque
> data:
>   # base64 encoded password
>   # encode some password: `echo -n "admin" | base64`
>   PMM_ADMIN_PASSWORD: YWRtaW4=
> EOF
secret/pmm-secret created
```
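To double-check the value stored in the secret, you can encode and decode it locally. This sketch uses the example "admin" password from the secret above; substitute your real password:

```shell
# Encode a password for the Secret's data field
echo -n "admin" | base64
# YWRtaW4=

# Decode the stored value to verify it round-trips
echo -n "YWRtaW4=" | base64 --decode
# admin
```

Note the `-n` flag: without it, `echo` appends a newline that gets encoded into the secret and makes the admin password fail at login.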
Then add the Percona Helm repo:
```
$ helm repo add percona https://percona.github.io/percona-helm-charts/
"percona" has been added to your repositories
```
Finally, issue the Helm install command:
```
$ helm install pmm \
> --set secret.create=false \
> --set secret.name=pmm-secret \
> percona/pmm
NAME: pmm
LAST DEPLOYED: Tue Mar 26 15:59:57 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Percona Monitoring and Management (PMM)

An open source database monitoring, observability and management tool

Check more info here: https://docs.percona.com/percona-monitoring-and-management/index.html

Get the application URL:
  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services monitoring-service)
  export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
  echo https://$NODE_IP:$NODE_PORT

Get password for the "admin" user:
  export ADMIN_PASS=$(kubectl get secret pmm-secret --namespace default -o jsonpath='{.data.PMM_ADMIN_PASSWORD}' | base64 --decode)
  echo $ADMIN_PASS
```
The Helm list command should show us our PMM server correctly deployed:
```
$ helm list
NAME  NAMESPACE  REVISION  UPDATED         STATUS    CHART       APP VERSION
pmm   default    1         2024-03-26 ...  deployed  pmm-1.3.13  2.41.2
```
For a complete installation guide, refer to the online documentation.
Routing traffic to Kubernetes
A typical use case is routing traffic to a specific K8s backend service based on the hostname. The following link shows an example of this use case:
https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/
To install the ingress-nginx controller either using Helm or a YAML manifest, we can follow the below quick start guide:
https://kubernetes.github.io/ingress-nginx/deploy/#quick-start
Below, we create the ingress-nginx controller by applying the corresponding YAML manifest:
```
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.0/deploy/static/provider/cloud/deploy.yaml
namespace/ingress-nginx configured
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured
```
As a result, the ingress-nginx controller on K8s will have a load balancer service listening on ports 80 and 443 with an assigned external IP:
```
$ kubectl get services ingress-nginx-controller -n ingress-nginx
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.0.204.245   34.133.89.211   80:30548/TCP,443:30689/TCP   35s
```
The PMM server will have its corresponding service:
```
$ kubectl get services monitoring-service
NAME                 TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
monitoring-service   NodePort   10.0.196.85   <none>        443:31616/TCP,80:31636/TCP   15m
```
A DNS "A" record pointing to the NGINX external IP should be added so that NGINX can route traffic to the PMM service based on the hostname.
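Once the record has propagated, you can confirm it resolves to the controller's external IP. This is a quick sanity check using the example hostname and IP from this post; substitute your own:

```shell
# Query the A record for the PMM hostname (example hostname; replace with yours)
dig +short pmm-dev.hopto.org
# Should print the ingress-nginx EXTERNAL-IP, e.g. 34.133.89.211
```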
We’re now ready to create the Ingress to route external traffic to the PMM Server:
```
$ cat << EOF > pmm-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pmm-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPCS"
spec:
  ingressClassName: nginx
  rules:
    - host: pmm-dev.hopto.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: monitoring-service
                port:
                  number: 443
EOF
```
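The heredoc above only writes the manifest to pmm-ingress.yaml; it still needs to be applied to the cluster. A minimal sketch (assuming the file name and ingress name from the manifest above):

```shell
# Create the Ingress object in the cluster
kubectl apply -f pmm-ingress.yaml

# Verify it was admitted and picked up the controller's external address
kubectl get ingress pmm-ingress
```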
NGINX will automatically discover any ingress that carries the kubernetes.io/ingress.class: "nginx" annotation or that sets ingressClassName: nginx. The Ingress object must be created in the same namespace as the backend resource. This ingress routes external traffic to the PMM service, which is monitoring-service.
Additionally, the PMM ingress needs the nginx.ingress.kubernetes.io/backend-protocol: "GRPCS" annotation so that gRPC over HTTP/2 with TLS encryption is allowed. This ensures packets are correctly routed between the PMM client and the PMM server through the ingress-nginx controller.
Configuring the PMM client
Finally, we can configure our PMM client that is external to the K8s cluster to be monitored by the K8s PMM server:
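Registration is done on the client host with pmm-admin config. The exact flags depend on your environment; a typical invocation against the ingress hostname might look like the following (the hostname and credentials here are examples from this post — use your own, and drop --server-insecure-tls once the server presents a certificate the client trusts):

```shell
# Register this node with the PMM server exposed through the ingress
# (example hostname and admin password; adjust to your environment)
pmm-admin config \
  --server-insecure-tls \
  --server-url=https://admin:admin@pmm-dev.hopto.org:443
```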
```
Checking local pmm-agent status...
pmm-agent is running.
Registering pmm-agent on PMM Server...
Registered.
Configuration file /usr/local/percona/pmm2/config/pmm-agent.yaml updated.
Reloading pmm-agent configuration...
Configuration reloaded.
Checking local pmm-agent status...
pmm-agent is running.
```
At this point, the OS metrics exporter will be running, and we’ll start seeing new incoming metrics from this node in the PMM dashboards.
Conclusion
Since communication between the PMM server and client requires the gRPC framework, we need to make sure gRPC traffic is enabled when adding the NGINX ingress controller to our Kubernetes setup. We can easily do so by applying the changes shown in the pmm-ingress.yaml file above.
Percona Monitoring and Management is a best-of-breed open source database monitoring solution for MySQL, PostgreSQL, MongoDB, and the servers on which they run. Monitor, manage, and improve the performance of your databases no matter where they are located or deployed.
Download Percona Monitoring and Management Today