The latest and greatest version of PostgreSQL, version 13, has been available since last summer and continues the tradition of delivering gains in performance, administration, application development, and security.
Of the many new features and enhancements that have been made, some of the more notable ones include:
- Space savings and performance gains from de-duplication of B-tree index entries
- Improved performance for queries that use aggregates or partitioned tables
- Better query planning when using extended statistics
- Parallelized vacuuming of indexes
- Incremental sorting
TIP: More detailed information can be found in the PostgreSQL 13 Release Notes.
I learned long ago that it can be something of a challenge keeping up to date with all this new stuff. Therefore, I'd like to cover a small subset of these new advances by demonstrating simple use cases that I hope you will find instructive, covering pgbench, logical replication, streaming replication failover, and re-provisioning a deprecated PRIMARY as a STANDBY using pg_rewind.
About pgbench
As you may recall, pgbench is a simple program for running benchmark tests on PostgreSQL: it executes the same sequence of SQL commands over and over in multiple concurrent database sessions. As the name implies, pgbench is most commonly used as a benchmarking tool for testing hardware and configuration runtime parameters for the purposes of performance tuning.
The latest iteration of pgbench has a number of new capabilities, including:
- New initialization-step options, 'g' and 'G', which choose whether data is generated on the client or on the server side. Whereas previous versions generated data on the pgbench client and then sent it to the server, one now has the ability to test data generation exclusively on the server itself, unaffected by network latency.
- The ability to create a partitioned “accounts” table using either range or hash partitioning although the default is range partitioning.
- A new option, '--show-script', which echoes the actual code of any built-in script name on stderr.
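As a quick sketch of these new options (the database name db01 is just an example, and pgbench must be on the PATH), server-side data generation and script inspection look like this:

```shell
# Initialization steps: d=drop, t=create tables, G=server-side generate,
# v=vacuum, p=primary keys -- note the capital G for server-side generation
pgbench -i -I dtGvp db01

# Echo the code of a built-in script on stderr without running a benchmark
pgbench --show-script=tpcb-like
```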
Example 1: Hash and Range Partitioning
The first step is, of course, to initialize the benchmarking. In this case, we start out using hash partitioning:
```shell
# hash partitioning table public.pgbench_accounts
pgbench -i --partition-method=hash --partitions=5
```
```
...
creating tables...
creating 5 partitions...
generating data (client-side)...
100000 of 100000 tuples (100%) done (elapsed 0.04 s, remaining 0.00 s)
vacuuming...
creating primary keys...
done in 0.79 s (drop tables 0.02 s, create tables 0.17 s, client-side generate 0.13 s, vacuum 0.26 s, primary keys 0.22 s).
```
Here’s what it should look like:
```
 Schema |        Name        |       Type        |  Owner   | Persistence |  Size
--------+--------------------+-------------------+----------+-------------+---------
 public | pgbench_accounts   | partitioned table | postgres | permanent   | 0 bytes
 public | pgbench_accounts_1 | table             | postgres | permanent   | 2656 kB
 public | pgbench_accounts_2 | table             | postgres | permanent   | 2656 kB
 public | pgbench_accounts_3 | table             | postgres | permanent   | 2656 kB
 public | pgbench_accounts_4 | table             | postgres | permanent   | 2656 kB
 public | pgbench_accounts_5 | table             | postgres | permanent   | 2656 kB
 public | pgbench_branches   | table             | postgres | permanent   | 40 kB
 public | pgbench_history    | table             | postgres | permanent   | 0 bytes
 public | pgbench_tellers    | table             | postgres | permanent   | 40 kB
```
And here’s the partitioned pgbench_accounts table definition:
```
                            Partitioned table "public.pgbench_accounts"
  Column  |     Type      | Collation | Nullable | Default | Storage  | Stats target | Description
----------+---------------+-----------+----------+---------+----------+--------------+-------------
 aid      | integer       |           | not null |         | plain    |              |
 bid      | integer       |           |          |         | plain    |              |
 abalance | integer       |           |          |         | plain    |              |
 filler   | character(84) |           |          |         | extended |              |
Partition key: HASH (aid)
Indexes:
    "pgbench_accounts_pkey" PRIMARY KEY, btree (aid)
Foreign-key constraints:
    "pgbench_accounts_bid_fkey" FOREIGN KEY (bid) REFERENCES pgbench_branches(bid)
Referenced by:
    TABLE "pgbench_history" CONSTRAINT "pgbench_history_aid_fkey" FOREIGN KEY (aid) REFERENCES pgbench_accounts(aid)
Partitions: pgbench_accounts_1 FOR VALUES WITH (modulus 5, remainder 0),
            pgbench_accounts_2 FOR VALUES WITH (modulus 5, remainder 1),
            pgbench_accounts_3 FOR VALUES WITH (modulus 5, remainder 2),
            pgbench_accounts_4 FOR VALUES WITH (modulus 5, remainder 3),
            pgbench_accounts_5 FOR VALUES WITH (modulus 5, remainder 4)
```
Alternatively, we can just as easily partition using range partitioning:
```shell
# range partitioning public.pgbench_accounts
pgbench -i --partition-method=range --partitions=5
```
```
                            Partitioned table "public.pgbench_accounts"
  Column  |     Type      | Collation | Nullable | Default | Storage  | Stats target | Description
----------+---------------+-----------+----------+---------+----------+--------------+-------------
 aid      | integer       |           | not null |         | plain    |              |
 bid      | integer       |           |          |         | plain    |              |
 abalance | integer       |           |          |         | plain    |              |
 filler   | character(84) |           |          |         | extended |              |
Partition key: RANGE (aid)
Indexes:
    "pgbench_accounts_pkey" PRIMARY KEY, btree (aid)
Partitions: pgbench_accounts_1 FOR VALUES FROM (MINVALUE) TO (20001),
            pgbench_accounts_2 FOR VALUES FROM (20001) TO (40001),
            pgbench_accounts_3 FOR VALUES FROM (40001) TO (60001),
            pgbench_accounts_4 FOR VALUES FROM (60001) TO (80001),
            pgbench_accounts_5 FOR VALUES FROM (80001) TO (MAXVALUE)
```
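To see where a given account id lands under the range scheme above, the boundaries can be mirrored with a little arithmetic. This is just a sketch assuming the default scale factor (100,000 rows split across five partitions of 20,000 aids each); the helper function here is mine, not part of pgbench:

```shell
# Hypothetical helper (not part of pgbench): given an account id (aid),
# compute which range partition it lands in, mirroring the boundaries
# above -- partition 1 holds aids 1-20000, partition 2 holds 20001-40000, etc.
aid_to_partition() {
  aid=$1
  span=20000                       # 100000 rows / 5 partitions
  echo $(( (aid - 1) / span + 1 ))
}

aid_to_partition 1        # lands in pgbench_accounts_1
aid_to_partition 20001    # lands in pgbench_accounts_2
aid_to_partition 100000   # lands in pgbench_accounts_5
```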
For those curious to see the actual commands creating the partitions, just update the log_statement runtime parameter and look in your postgres log:
```sql
postgres=# alter system set log_statement = 'ddl';
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf
----------------
 t

postgres=# show log_statement;
 log_statement
---------------
 ddl
```
```
LOG:  statement: drop table if exists pgbench_accounts, pgbench_branches, pgbench_history, pgbench_tellers
LOG:  statement: create table pgbench_history(tid int,bid int,aid int,delta int,mtime timestamp,filler char(22))
LOG:  statement: create table pgbench_tellers(tid int not null,bid int,tbalance int,filler char(84)) with (fillfactor=100)
LOG:  statement: create table pgbench_accounts(aid int not null,bid int,abalance int,filler char(84)) partition by range (aid)
LOG:  statement: create table pgbench_branches(bid int not null,bbalance int,filler char(88)) with (fillfactor=100)
LOG:  statement: create table pgbench_accounts_1 partition of pgbench_accounts for values from (minvalue) to (20001) with (fillfactor=100)
LOG:  statement: create table pgbench_accounts_2 partition of pgbench_accounts for values from (20001) to (40001) with (fillfactor=100)
LOG:  statement: create table pgbench_accounts_3 partition of pgbench_accounts for values from (40001) to (60001) with (fillfactor=100)
LOG:  statement: create table pgbench_accounts_4 partition of pgbench_accounts for values from (60001) to (80001) with (fillfactor=100)
LOG:  statement: create table pgbench_accounts_5 partition of pgbench_accounts for values from (80001) to (maxvalue) with (fillfactor=100)
LOG:  statement: alter table pgbench_branches add primary key (bid)
LOG:  statement: alter table pgbench_tellers add primary key (tid)
LOG:  statement: alter table pgbench_accounts add primary key (aid)
```
Example 2: Benchmarking
Part I
Let’s initialize a database with the following conditions:
- install pg_stat_statements to see our results
- initialize benchmarking
- generate data solely on the server-side
- create primary keys
- create foreign key constraints
- range partition table “accounts” creating five child tables
Open postgresql.conf and edit the following runtime parameters. Restart the server, then add the pg_stat_statements extension to the database with CREATE EXTENSION pg_stat_statements.
```shell
# pg_conftool 13 main postgresql.conf edit
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000
pg_stat_statements.track = all
```
```shell
systemctl restart postgresql@13-main
```
Initialize an empty database:
```shell
export PGHOST=pg1-POC13 PGPORT=5432 PGDATABASE=db01 PGUSER=postgres PGPASSWORD=mypassword
createdb db01
pgbench -i -s 10 -I dtGvpf -F 90 --partition-method=hash --partitions=5
```
This query should return a nice summary of the commands thus far executed:
```sql
SELECT query, calls, total_exec_time::int, rows,
       (100.0 * shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0))::int AS hit_percent
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;
```
```
-[ RECORD 1 ]---+------------------------------------------------------------------------------------
query           | SELECT abalance FROM pgbench_accounts WHERE aid = $1
calls           | 1560162
total_exec_time | 73958
rows            | 1560162
hit_percent     | 94
-[ RECORD 2 ]---+------------------------------------------------------------------------------------
query           | insert into pgbench_accounts(aid,bid,abalance,filler) select aid, (aid - $1) / $2 + $3, $4, $5 from generate_series($7, $8) as aid
calls           | 1
total_exec_time | 2250
rows            | 1000000
hit_percent     | 100
-[ RECORD 3 ]---+------------------------------------------------------------------------------------
query           | create database db03
calls           | 1
total_exec_time | 2092
rows            | 0
hit_percent     | 90
-[ RECORD 4 ]---+------------------------------------------------------------------------------------
query           | vacuum analyze pgbench_accounts
calls           | 1
total_exec_time | 1591
rows            | 0
hit_percent     | 92
-[ RECORD 5 ]---+------------------------------------------------------------------------------------
query           | alter table pgbench_accounts add primary key (aid)
calls           | 1
total_exec_time | 1086
rows            | 0
hit_percent     | 59
```
Part II
Execute the benchmarking for a duration of five minutes. To make the results clearer, you can first reset the stats in view pg_stat_statements.
```shell
# clear the stats before starting the benchmarking
# SELECT pg_stat_statements_reset();
pgbench -c 40 -j 7 -T 300 -b tpcb-like db01 -P 60
```
New SQL statements representing the DML operations are now listed:
```sql
SELECT query, calls, total_exec_time::int, rows,
       (100.0 * shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0))::int AS hit_percent
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;
```
```
-[ RECORD 1 ]---+-----------------------------------------------------------------------------------------------------------
query           | UPDATE pgbench_branches SET bbalance = bbalance + $1 WHERE bid = $2
calls           | 42767
total_exec_time | 6203809
rows            | 42767
hit_percent     | 100
-[ RECORD 2 ]---+-----------------------------------------------------------------------------------------------------------
query           | UPDATE pgbench_tellers SET tbalance = tbalance + $1 WHERE tid = $2
calls           | 42767
total_exec_time | 3146175
rows            | 42767
hit_percent     | 100
-[ RECORD 3 ]---+-----------------------------------------------------------------------------------------------------------
query           | UPDATE pgbench_accounts SET abalance = abalance + $1 WHERE aid = $2
calls           | 42767
total_exec_time | 28281
rows            | 42767
hit_percent     | 95
-[ RECORD 4 ]---+-----------------------------------------------------------------------------------------------------------
query           | INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES ($1, $2, $3, $4, CURRENT_TIMESTAMP)
calls           | 42767
total_exec_time | 22797
rows            | 42767
hit_percent     | 100
-[ RECORD 5 ]---+-----------------------------------------------------------------------------------------------------------
query           | SELECT $2 FROM ONLY "public"."pgbench_branches" x WHERE "bid" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x
calls           | 42767
total_exec_time | 2347
rows            | 42767
hit_percent     | 100
```
Notice how the child tables have grown in size:
```
                                      List of relations
 Schema |        Name        |       Type        |  Owner   | Persistence |  Size   | Description
--------+--------------------+-------------------+----------+-------------+---------+-------------
 public | pgbench_accounts   | partitioned table | postgres | permanent   | 0 bytes |
 public | pgbench_accounts_1 | table             | postgres | permanent   | 28 MB   |
 public | pgbench_accounts_2 | table             | postgres | permanent   | 29 MB   |
 public | pgbench_accounts_3 | table             | postgres | permanent   | 28 MB   |
 public | pgbench_accounts_4 | table             | postgres | permanent   | 28 MB   |
 public | pgbench_accounts_5 | table             | postgres | permanent   | 28 MB   |
 public | pgbench_branches   | table             | postgres | permanent   | 168 kB  |
 public | pgbench_history    | table             | postgres | permanent   | 2384 kB |
 public | pgbench_tellers    | table             | postgres | permanent   | 272 kB  |
```
About Logical Replication
We now explore another very useful addition to PostgreSQL 13: the transparent replication of partitioned tables via logical decoding. It is no longer necessary to expend valuable time manually adding triggers and supplementary logic to replicate the child tables.
Example
Using the aforementioned pgbench example, execute the following. Note that table pgbench_history must first be given a primary key; otherwise, it cannot be replicated:
```shell
# UPDATE TABLE, ADD PK
alter table public.pgbench_history add primary key (tid,bid,aid,mtime);

# CREATE DATABASE ON SUBSCRIBER NODE
createdb -h pg4-POC13 -U postgres db01
```
```shell
# COPY DATABASE SCHEMA
pg_dump -h pg1-POC13 -U postgres -s db01 | psql 'host=pg4-POC13 user=postgres dbname=db01'

#
# PROVIDER pg1-POC13: DB01
#
psql 'host=pg1-POC13 user=postgres password=mypassword dbname=db01' <<_eof_
\set ON_ERROR_STOP
create publication publication1 for all tables;
_eof_

#
# SUBSCRIBER pg4-POC13: DB01
#
psql 'host=pg4-POC13 user=postgres password=mypassword dbname=db01' <<_eof_
\set ON_ERROR_STOP
create subscription subscript_set1
    connection 'host=pg1-POC13 dbname=db01 user=postgres password=mypassword'
    publication publication1
    with (copy_data=true, create_slot=true, enabled=true, slot_name=myslot1);
_eof_
```
And here we see the child accounts tables have been correctly replicated:
```
                                       List of relations
 Schema |        Name        |       Type        |  Owner   | Persistence |    Size    | Description
--------+--------------------+-------------------+----------+-------------+------------+-------------
 public | pgbench_accounts   | partitioned table | postgres | permanent   | 0 bytes    |
 public | pgbench_accounts_1 | table             | postgres | permanent   | 28 MB      |
 public | pgbench_accounts_2 | table             | postgres | permanent   | 29 MB      |
 public | pgbench_accounts_3 | table             | postgres | permanent   | 28 MB      |
 public | pgbench_accounts_4 | table             | postgres | permanent   | 28 MB      |
 public | pgbench_accounts_5 | table             | postgres | permanent   | 28 MB      |
 public | pgbench_branches   | table             | postgres | permanent   | 8192 bytes |
 public | pgbench_history    | table             | postgres | permanent   | 2352 kB    |
 public | pgbench_tellers    | table             | postgres | permanent   | 8192 bytes |
```
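To double-check that the stream stays healthy after the initial copy, the standard statistics views can be queried on both nodes. Here is a sketch reusing the example hosts above:

```shell
# On the provider, check the replication slot and walsender status
psql 'host=pg1-POC13 user=postgres dbname=db01' <<_eof_
select slot_name, active from pg_replication_slots;
select application_name, state from pg_stat_replication;
_eof_

# On the subscriber, check the subscription worker and its replay position
psql 'host=pg4-POC13 user=postgres dbname=db01' <<_eof_
select subname, received_lsn, latest_end_lsn from pg_stat_subscription;
_eof_
```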
About Streaming Replication
Because of the large number of new features added to streaming replication, I'm limiting the focus to two enhancements:
- Restarting a STANDBY in order to point it to a newly promoted PRIMARY, after updating primary_conninfo and primary_slot_name, is no longer necessary as a reload will suffice.
- pg_rewind has a new option, --write-recovery-conf, that simplifies the steps of re-provisioning a failed PRIMARY as a viable STANDBY, similar to the option found in pg_basebackup.
Recall the three-node streaming replication cluster we’ve been using:
```
pg1-POC13: PRIMARY (read-write)
pg2-POC13: REPLICA (read-only, streaming)
pg3-POC13: REPLICA (read-only, streaming)
```
Example 1: STANDBY, pg3, Points to New PRIMARY pg2
Suppose a failover promotes pg2. This new feature adds robustness to the entire database cluster: REPLICA pg3 continues service without interruption as it is redirected from pg1 to pg2. Previous versions of postgres required host pg3 to be restarted in order to apply new primary_conninfo settings.
Step 1
pg3-POC13; run a benchmark of SELECTs on host pg3
```shell
pgbench -h pg3-POC13 -U postgres -c 40 -j 7 -T 300 -b select-only db01 -P 5
```
Step 2
pg2-POC13; promote the host while the benchmarking on pg3 is active
```sql
-- create a new slot for pg3
select * from pg_create_physical_replication_slot('pg3');
-- confirm slot is inactive
select * from pg_get_replication_slots();
-- promote host
select pg_promote();
-- confirm read-write state
select pg_is_in_recovery();
```
TIP: this assumes the runtime parameter wal_keep_size is set to a non-zero value large enough to retain the WAL needed while pg3 is redirected away from pg1 towards pg2.
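A minimal sketch of that tip, run against the newly promoted PRIMARY pg2 (the 1GB value is illustrative; size it to your write volume):

```shell
# wal_keep_size is new in PostgreSQL 13 (it replaces wal_keep_segments)
# and only needs a reload, not a restart
psql 'host=pg2-POC13 user=postgres' <<_eof_
alter system set wal_keep_size = '1GB';
select pg_reload_conf();
show wal_keep_size;
_eof_
```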
Step 3
pg3-POC13; point to pg2-POC13
```sql
-- make the updates
alter system set primary_conninfo = 'user=postgres host=10.231.38.112';
alter system set primary_slot_name = 'pg3';
select pg_reload_conf();
-- confirm replication is active
select * from pg_stat_wal_receiver;
```
Step 4
pg2-POC13; validate replication from pg2->pg3
```sql
-- confirm slot is active
select * from pg_get_replication_slots();
-- confirm replication is active
select * from pg_stat_replication;
```
Example 2
When re-provisioning a failed PRIMARY, i.e. pg1, as a new STANDBY, it used to be necessary to edit the requisite recovery runtime configuration parameters after executing pg_rewind but before starting the host.
PostgreSQL version 13 now simplifies the exercise by providing the --write-recovery-conf switch, which updates postgresql.auto.conf with the correct runtime parameters.
TIP: pg_rewind requires the runtime parameter wal_log_hints=on (or a cluster initialized with data checksums) on the failed PRIMARY before re-provisioning can take place.
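A sketch of checking and, if needed, enabling that parameter; note that wal_log_hints requires a restart, not just a reload, so it is best enabled cluster-wide from the start:

```shell
# check the current setting on the cluster that will be rewound
psql -h pg1-POC13 -U postgres -c "show wal_log_hints"

# if it reports 'off', enable it and restart the server
psql -h pg1-POC13 -U postgres -c "alter system set wal_log_hints = on"
systemctl restart postgresql@13-main
```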
Step 1
pg2-POC13; add a slot on PRIMARY pg2 for the new STANDBY pg1
```sql
-- as postgres, create a new slot
select * from pg_create_physical_replication_slot('pg1');
select * from pg_get_replication_slots();
```
Step 2
pg1-POC13; re-provision pg1
```shell
# as postgres, perform the rewind
/usr/lib/postgresql/13/bin/pg_rewind \
    --target-pgdata=/var/lib/postgresql/13/main \
    --source-server="host=pg2-POC13 port=5432 user=postgres dbname=postgres" \
    --write-recovery-conf \
    --progress
```
You should get messaging similar to the following:
```
pg_rewind: connected to server
pg_rewind: servers diverged at WAL location 0/6CDCEA88 on timeline 1
pg_rewind: rewinding from last common checkpoint at 0/6CDCE9D8 on timeline 1
pg_rewind: reading source file list
pg_rewind: reading target file list
pg_rewind: reading WAL in target
pg_rewind: need to copy 206 MB (total source directory size is 422 MB)
211594/211594 kB (100%) copied
pg_rewind: creating backup label and updating control file
pg_rewind: syncing target data directory
pg_rewind: Done!
```
Now you can restart host pg1-POC13 and bring it back into service:
```shell
# as root, restart the server
systemctl start postgresql@13-main
```
Login pg1 and confirm replication:
```sql
-- confirm replication is active
select * from pg_stat_wal_receiver;
```
Caveat
In the case of failure, check the following:
- pg2-POC13: as the PRIMARY, check that a replication slot is installed and ready before starting up the re-provisioned pg1 as a new STANDBY
- pg1-POC13:
- check to see if the server started up, e.g. ‘netstat -tlnp’
- check postgresql.auto.conf if you get a FATAL “server is starting up” message
- check the slot name
- check connectivity info is correct
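The postgresql.auto.conf checks above can be scripted. Here is a self-contained sketch against a mock file; the contents mimic what --write-recovery-conf writes, and all paths and values are illustrative:

```shell
# Build a mock postgresql.auto.conf resembling pg_rewind's output
conf=$(mktemp)
cat > "$conf" <<'EOF'
primary_conninfo = 'user=postgres host=pg2-POC13 port=5432'
primary_slot_name = 'pg1'
EOF

# Count the entries we expect: the slot created on the primary,
# and a conninfo pointing at the new primary
slot_ok=$(grep -c "primary_slot_name = 'pg1'" "$conf")
conn_ok=$(grep -c "host=pg2-POC13" "$conf")
rm -f "$conf"

echo "slot entries: $slot_ok, conninfo entries: $conn_ok"
```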
Conclusion
Although not germane to features and capabilities, I was a little curious to see the differences between PostgreSQL 13 and earlier versions. Here's a table comparing the time to create a data cluster and a database, the sizes of the data cluster and binaries, and the time required to initialize pgbench. It's interesting to see how performance times have improved over the previous versions:
```
       initdb     datacluster   binaries      createdb   pgbench -i
 ver   time       size(bytes)   size(bytes)   time       time
 9.6   0m0.889s   38,877,134    38,844,934    0m0.311s   0m0.236s
 10    0m0.729s   39,598,542    42,054,339    0m0.725s   0m0.240s
 11    0m0.759s   40,844,747    41,336,566    0m0.683s   0m0.212s
 12    0m0.592s   41,560,196    43,853,282    0m0.179s   0m0.213s
 13    0m0.502s   41,266,877    65,652,665    0m0.188s   0m0.168s
```
Frankly, I find it amazing how the size of both the binaries and the data cluster has remained so compact over the years. I remember, way back in 2000, working with a proprietary RDBMS that upon initialization created an empty instance 1 GB in size – I wonder how big it gets these days?
Happy Hacking!
References:
Our white paper “Why Choose PostgreSQL?” looks at the features and benefits of PostgreSQL and presents some practical usage examples. We also examine how PostgreSQL can be useful for companies looking to migrate from Oracle.