Replication sits at the core of database high availability (HA). While on-disk data can be replicated through various hardware- or software-based methods, whether disk-, file-, or block-based, an HA setup cannot be achieved without an online copy of the database.

This blog digs deeper into the challenges and conflicts that impact PostgreSQL replication, but before we do that, let’s understand some key concepts and the evolution of replication in PostgreSQL.

Evolution of replication in PostgreSQL

It comes as no surprise that the PostgreSQL community has, over the years, invested in building solid replication features that help deliver both HA and disaster recovery (DR). The following is a timeline of how the replication core evolved, starting with Write-Ahead Logging (WAL) and ending with synchronous streaming replication.

Write-Ahead Logging (WAL)

Write-Ahead Logging (WAL) was added to PostgreSQL in 2001 as part of the 7.1 release. That was a major step toward durability, delivering the “D” in ACID compliance. Beyond durability, it also improved performance and transaction response times by reducing the need to write the same page to disk multiple times.

Replication features

It was quite a few years after the introduction of WAL that the first major replication feature, warm standby, was implemented in PostgreSQL.

Warm standby or warm standby log shipping

The warm standby feature was introduced in the 8.2 release in 2006. This feature brought in the concept of continuous archiving and continuous recovery.

On standby, the server would be in continuous recovery. This means that although the server is online, it cannot accept any connections. All it does is process the WAL segment files it receives from the primary server’s continuous archiving process.

This was asynchronous replication. Multiple WAL records were packaged into a file and shipped out to the standby. Data loss was possible in case of a primary server crash, as some WAL records might not have been shipped yet.
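
Roughly, the wiring looks like this (the archive path below is a placeholder): the primary copies each completed WAL segment somewhere safe via archive_command, and the standby pulls segments back via restore_command. On modern servers the archiver’s progress can also be checked from SQL:

    -- Primary (postgresql.conf):
    --   archive_mode = on
    --   archive_command = 'cp %p /mnt/wal_archive/%f'
    -- Standby (recovery.conf back then; postgresql.conf plus standby.signal today):
    --   restore_command = 'cp /mnt/wal_archive/%f %p'
    SELECT archived_count, last_archived_wal, failed_count
    FROM pg_stat_archiver;   -- archiving progress on the primary (9.4+)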

So this was PostgreSQL’s first step toward an HA solution!

Warm to hot: Streaming replication

With the 9.0 release in 2010, streaming replication and hot standby were introduced. Two new background processes, walsender and walreceiver, were added. This was essentially the same concept as continuous archiving and continuous recovery, but implemented through dedicated background processes. It also enabled the standby to accept incoming connections and run read-only queries.
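
For example, on a hot standby you can confirm that the server is in recovery and still run read-only queries, with something like:

    SELECT pg_is_in_recovery();              -- returns true on a standby
    SELECT count(*) FROM pg_stat_activity;   -- read-only queries are allowed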

These processes also handle the WAL on disk: the walsender streams WAL that the primary has already flushed, while the walreceiver writes and flushes the received WAL to disk on the standby.

Two functions for monitoring replication were added as part of this release: pg_last_xlog_receive_location and pg_last_xlog_replay_location.
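
On current releases (10 and later) these functions are named pg_last_wal_receive_lsn() and pg_last_wal_replay_lsn(); a quick check on the standby might look like this:

    -- Run on the standby; the xlog/location names became wal/lsn in PostgreSQL 10
    SELECT pg_last_wal_receive_lsn() AS received,
           pg_last_wal_replay_lsn()  AS replayed;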

However, the replication remained asynchronous and data loss was possible in case of a primary server crash.

Synchronous streaming replication

The potential for data loss was eliminated with the 9.1 release in 2011 through synchronous streaming replication. This provided the strict data integrity guarantees suitable for an HA system. The synchronous mechanism can be relaxed to asynchronous for an individual transaction by issuing “SET LOCAL synchronous_commit TO OFF” within it.

Additionally, this release introduced the concepts of quorum, priorities, connected, and active standbys.
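
As a minimal sketch (the standby name ‘standby1’ is just an example application_name), synchronous replication is driven by synchronous_standby_names on the primary, and an individual transaction can opt out of the synchronous wait as mentioned above:

    -- On the primary: wait for the standby whose application_name is 'standby1'
    ALTER SYSTEM SET synchronous_standby_names = 'standby1';
    SELECT pg_reload_conf();

    -- Per-transaction opt-out from the synchronous wait:
    BEGIN;
    SET LOCAL synchronous_commit TO OFF;
    -- ... writes that can tolerate asynchronous durability ...
    COMMIT;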

Setting up streaming replication

Setting up streaming replication is actually rather simple. It’s a straightforward five-step process (a sketch of the commands follows the list):

  • Create a user for replication. We’ll be using rep_user for this example.
  • Add the replication user to pg_hba.conf on the primary server.
  • Reload pg_hba.conf on the primary server through “pg_ctl reload”, the SQL function pg_reload_conf(), or “kill -HUP”.
  • Take a backup of the primary database using pg_basebackup with the “-R” option so that the replication configuration is written into the new data directory.
  • Start the standby server.
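
Here is a minimal sketch of those steps as they might be run today (the host address, directories, and password are placeholders; the shell commands are shown as comments):

    -- Step 1, on the primary: create the replication role
    CREATE ROLE rep_user WITH REPLICATION LOGIN PASSWORD 'secret';

    -- Step 2: add a line like this to pg_hba.conf on the primary
    --   host  replication  rep_user  192.168.0.10/32  scram-sha-256

    -- Step 3: reload the configuration
    SELECT pg_reload_conf();

    -- Steps 4 and 5, on the standby host (shell):
    --   pg_basebackup -h <primary-host> -U rep_user -D <standby-data-dir> -R -X stream
    --   pg_ctl -D <standby-data-dir> start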

And voilà, streaming replication is enabled. The official PostgreSQL documentation explains the streaming replication configuration in more detail here.

Replication views and configurations

In terms of configuring replication, the process becomes a little tricky as one has to configure the primary, sending, and standby servers optimally to meet the performance and uptime requirements.

A sending server is any PostgreSQL server in a replication setup that is shipping WAL segments to standby servers. This may be the primary server or an intermediate standby server in a cascading replication configuration.

Configuration

There are about 20 different configuration options for primary, sending, and standby servers. These options cover WAL, vacuum, connections, promotion, replication delays, and replication feedback.
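
As an illustration (the values, host name, and user are placeholders, not tuning advice; several of these, e.g. wal_level, max_wal_senders, and hot_standby, require a restart rather than a reload), a few of the commonly adjusted settings look like this:

    -- Primary / sending server
    ALTER SYSTEM SET wal_level = replica;
    ALTER SYSTEM SET max_wal_senders = 10;
    ALTER SYSTEM SET wal_keep_size = '1GB';      -- version 13+; wal_keep_segments before that

    -- Standby server
    ALTER SYSTEM SET hot_standby = on;
    ALTER SYSTEM SET hot_standby_feedback = on;  -- replication feedback to reduce query conflicts
    ALTER SYSTEM SET primary_conninfo = 'host=primary.example.com user=rep_user';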

Views

The pg_stat_replication and pg_stat_wal_receiver views provide information about replication on the primary and standby servers, respectively.

On the primary server, pg_stat_replication contains one row per walsender process, i.e. one row per connected standby. Each row shows the standby’s application name and client connection information, along with the replication state, the write, flush, and replay LSNs reported by that standby, and the corresponding lag intervals. It also shows the sent LSN, the last WAL position the primary has sent on that connection.
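
For example, on the primary (column names as of PostgreSQL 10 and later):

    -- One row per connected standby (walsender)
    SELECT application_name, client_addr, state,
           sent_lsn, write_lsn, flush_lsn, replay_lsn,
           write_lag, flush_lag, replay_lag, sync_state
    FROM pg_stat_replication;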

On the standby server, pg_stat_wal_receiver exposes very similar connection and LSN information, and additionally shows the replication slot in use, if any. It has only a single row, for the walreceiver process.
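
And on the standby, a similar check might be:

    -- Single row for the walreceiver process
    SELECT status, receive_start_lsn, latest_end_lsn, slot_name, conninfo
    FROM pg_stat_wal_receiver;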

Replication conflicts

Replication conflicts only occur on standby servers; there are no conflicts on the primary. They arise on a standby when WAL replay (the redo performed by the startup process on the WAL that the walreceiver delivers) needs exclusive access to a resource, such as a lock or a buffer, that is held by a long-running query or transaction. The replay waits until a timeout is reached, after which any backends standing in the way of WAL replay are cancelled or terminated. These cancellations are classified as replication conflicts, and their nature can vary considerably.
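
How long replay is willing to wait before cancelling conflicting backends is governed by two settings (the values shown here are simply the defaults, for illustration):

    -- How long WAL replay may wait for conflicting standby queries before cancelling them
    ALTER SYSTEM SET max_standby_streaming_delay = '30s';  -- WAL received via streaming
    ALTER SYSTEM SET max_standby_archive_delay = '30s';    -- WAL restored from the archive
    SELECT pg_reload_conf();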

PostgreSQL has the pg_stat_database_conflicts view, which tells us what types of conflicts have occurred and how many.

We can conveniently ignore the pg_stat_database view, as it only has a single bigint column (conflicts) maintaining the total conflict count. It doesn’t really provide any insight into the nature of these conflicts.

pg_stat_database_conflicts

On a standby server, pg_stat_database_conflicts has one row per database, showing database-wide statistics about queries cancelled due to conflicts with recovery. It’s a fairly simple view with datid, datname, and five bigint counters for the different types of conflicts (see the query after the list):

  • Tablespace,
  • Lock,
  • Snapshot,
  • Bufferpin, and
  • Deadlock.
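
A quick way to see which kinds of conflicts a standby has been hitting might be:

    -- Run on the standby; all counters are cumulative since the last stats reset
    SELECT datname,
           confl_tablespace, confl_lock, confl_snapshot,
           confl_bufferpin, confl_deadlock
    FROM pg_stat_database_conflicts
    ORDER BY datname;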

Tablespace conflict

Tablespace conflicts are likely to occur if a tablespace listed in temp_tablespaces is dropped on the primary while queries on the standby are still using it. temp_tablespaces specifies where temporary tables, temporary indexes, and temporary files for sorting are created.

These types of conflicts are somewhat uncommon, as such operations tend to be few and far between.

Lock conflict

Lock conflicts may occur when you are querying a table on the standby while that same table is altered on the primary.

This leads to lock contention: to replay the change, recovery on the standby (the startup process) needs to acquire an ACCESS EXCLUSIVE lock on the relation or index, while the standby query is still holding an ACCESS SHARE lock on it.
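
As an illustration (big_table is a hypothetical table), the classic sequence might look like this:

    -- Session on the standby: a long-running read holding ACCESS SHARE
    BEGIN;
    SELECT count(*) FROM big_table;   -- assume this scan runs for a long time

    -- Session on the primary: DDL that needs ACCESS EXCLUSIVE when replayed
    ALTER TABLE big_table ADD COLUMN note text;

    -- Once replay on the standby has waited past max_standby_streaming_delay, the
    -- standby query is cancelled: "canceling statement due to conflict with recovery"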

Snapshot conflict

Snapshot conflicts are generated when a backend process on the standby server tries to access rows that have already been vacuumed away on the primary.

Bufferpin conflict

These are mostly caused by HOT chain pruning or, in version 14, possibly by the cleanup of duplicate index tuple versions in B-tree indexes (bottom-up deletion), since replaying these operations requires exclusive access to a buffer page that a standby query may still have pinned.

These types of conflicts are not that common.

Deadlock conflict

These are conflicts due to query cancellations that occur because of deadlocks on the standby.

Other conflicts

There may be other types of conflicts, but those aren’t captured by the pg_stat_database_conflicts view. For example, one may experience a database conflict where a database is dropped on the primary while the standby still has active connections using it.

Survey: Issues faced with PostgreSQL replication

Based on a survey conducted amongst friends and colleagues, I see that more than 50% of the respondents have experienced issues with PostgreSQL replication.

The types of issues and conflicts experienced also give us insight into what to focus on while configuring replication.

If you wish to share your experience with PostgreSQL replication with me, then you may do so using this Google form.

Conclusion

There will be more blogs coming from me in which I’ll discuss how some of these replication conflicts occur and how the standby server reacts internally. The key really is to get a better under-the-hood perspective so that we can improve the replication configuration to avoid such conflicts, leading to improved performance, better uptime, and simpler debugging.
