This blog post shows how to get detailed information about PostgreSQL disk usage in Percona Monitoring and Management (PMM2). We take a look at the pg_class system catalog, which catalogs tables and almost everything else that has columns or is otherwise similar to a table. This includes indexes, sequences, views, materialized views, composite types, and TOAST tables.
We will use custom queries to fetch this information, so we need to create a file with a custom query. The PostgreSQL exporter processes such files by default, so you should place your file into one of the paths below, according to the desired scraping interval, and restart the exporter.
/usr/local/percona/pmm2/collectors/custom-queries/postgresql/low-resolution
/usr/local/percona/pmm2/collectors/custom-queries/postgresql/medium-resolution
/usr/local/percona/pmm2/collectors/custom-queries/postgresql/high-resolution
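For reference, a custom query file for the PostgreSQL exporter is a YAML document that maps a metric prefix to a query and a description of its columns. Here is a minimal, hypothetical example; the `pg_database_size` metric name and its query are illustrative, not part of PMM's shipped files:

```yaml
# Hypothetical example of a custom query file; metric name and query
# are illustrative only.
pg_database_size:
  query: "SELECT datname, pg_database_size(datname) AS size_bytes FROM pg_database"
  metrics:
    - datname:
        usage: "LABEL"
        description: "Name of the database"
    - size_bytes:
        usage: "GAUGE"
        description: "Database size in bytes"
```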
```
# /etc/init.d/pmm-agent restart
Stopping pmm-agent:                                        [  OK  ]
Starting pmm-agent:                                        [  OK  ]
```
FYI, the metrics scraping interval can be set on the “PMM Settings” tab:
Setting up Query Monitoring in PMM for Postgres
First of all, we have to enable the extension dblink. It’s a PostgreSQL contrib extension that allows users to connect to other databases and run arbitrary queries in them.
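Once the extension is created, dblink can run a query in another local database and return the result as a row set. A quick illustration (the `sbtest` database name here is just an example):

```sql
-- Run a query in another database on the same server via dblink;
-- 'dbname=sbtest' is an example connection string.
SELECT *
FROM dblink('dbname=sbtest',
            'SELECT relname, reltuples FROM pg_class LIMIT 3')
     AS t(relname name, reltuples real);
```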
```
postgres=# CREATE EXTENSION dblink;
CREATE EXTENSION
postgres=# \dx
                                     List of installed extensions
        Name        | Version |   Schema   |                          Description
--------------------+---------+------------+--------------------------------------------------------------
 dblink             | 1.2     | public     | connect to other PostgreSQL databases from within a database
 pg_stat_statements | 1.6     | public     | track execution statistics of all SQL statements executed
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
(3 rows)
```
The next step is to create a query for collecting data from tables. We propose using our bash script, which generates the query and fills in the other fields of a custom query file. You have to specify the name of the database (or databases) to be monitored; the pg_class catalog is maintained separately in each database, so it is not possible to create one general query. You can store the script's output in a separate file or append it to an existing file with queries.
```
$ ./postgres_query_pg_class_generator.sh sbtest sbtest2 >> queries-postgres-2.yml
```
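Conceptually, the generated custom query wraps a pg_class lookup in dblink for each listed database. A simplified sketch of the idea follows; the actual SQL produced by the script may differ:

```sql
-- Simplified sketch of a per-database pg_class query via dblink;
-- the script's real output may differ.
SELECT datname, relname, table_rows, table_bytes, index_bytes, toast_bytes
FROM dblink('dbname=sbtest',
  'SELECT current_database(), relname,
          reltuples::bigint,
          pg_table_size(oid),
          pg_indexes_size(oid),
          CASE WHEN reltoastrelid <> 0
               THEN pg_total_relation_size(reltoastrelid)
               ELSE 0 END
   FROM pg_class WHERE relkind = ''r''')
AS t(datname name, relname name, table_rows bigint,
     table_bytes bigint, index_bytes bigint, toast_bytes bigint);
```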
Now it’s time to restart the exporter and check the newly collected metrics.
- pg_class_table_rows
- pg_class_disk_usage_table_bytes
- pg_class_disk_usage_index_bytes
- pg_class_disk_usage_toast_bytes
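Once these metrics are being scraped, you can query them with PromQL from the Grafana Explore view. For example, assuming the metric carries a `relname` label, the five largest tables by on-disk size could be listed with:

```promql
# Top five tables by table size; assumes a relname label on the metric.
topk(5, pg_class_disk_usage_table_bytes)
```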
You can also import our dashboard to inspect these metrics through the Grafana interface. It includes all the standard PMM2 elements, such as variables, menus, and annotations.
Here is a screenshot from our test installation.
@Vadim Tkachenko, when using https://github.com/wrouesnel/postgres_exporter, I was able to gather table size stats per DB with a setting like this:
```yaml
# pg_table_size_size_bytes
pg_table_size:
  query: 'SELECT current_database() datname, nspname AS "schemaname", relname, pg_relation_size(C.oid) AS "size_bytes" FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) WHERE nspname NOT IN (''pg_catalog'', ''information_schema'', ''pg_toast'') ORDER BY pg_relation_size(C.oid)'
  metrics:
    - datname:
        usage: "LABEL"
        description: "Name of the database"
    - schemaname:
        usage: "LABEL"
        description: "Name of the schema that this table is in"
    - relname:
        usage: "LABEL"
        description: "Name of this table"
    - size_bytes:
        usage: "GAUGE"
        description: "Disk space used by the table"
```
Database autodiscovery needs to be turned on. This was possible without a special bash script or dblink; hope it will help you as well.
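For the standalone wrouesnel/postgres_exporter, autodiscovery and custom query files are enabled via command-line flags. A hedged sketch of the invocation (flag names as of that project; verify against your exporter version):

```shell
# Flags from wrouesnel/postgres_exporter; check your version's --help.
DATA_SOURCE_NAME="postgresql://postgres@localhost:5432/postgres?sslmode=disable" \
  ./postgres_exporter --auto-discover-databases \
                      --extend.query-path=/path/to/queries.yml
```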