FLUSH and LOCK Handling

In this blog post, we'll look at how Percona XtraDB Cluster (PXC) handles FLUSH and LOCK statements.

Introduction

Percona XtraDB Cluster is a multi-master solution that allows transactions to execute in parallel on multiple nodes at the same point in time. Given these semantics, it is important to understand how Percona XtraDB Cluster handles FLUSH and LOCK statements, which operate at the node level.

The sections below list the different flavors of these statements and their PXC semantics.

FLUSH TABLE WITH READ LOCK

  • FTWRL is normally used for backup purposes.
  • Execution of this command establishes a global-level read lock.
  • This read lock cannot be preempted by the background applier thread.
  • PXC moves the node to the DESYNC state (thereby blocking the emission of flow-control) and also pauses the node.

  • Other nodes of the cluster continue to process the workload.
  • The DESYNCed and paused node continues to receive replication traffic. Though it doesn't process the write-sets, they are appended to the Galera cache for future processing.
  • Fallback: when FTWRL is released (through UNLOCK TABLES), and if the workload is active on other nodes of the cluster, the node that executed FTWRL may start emitting flow-control while it works through the backlog. A typical flow is sketched below.
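
As an illustration, a minimal FTWRL-based backup flow on a single node might look like the following sketch (the state check from a second session is optional, and the expected state value is an assumption based on how DESYNC is normally reported):

  -- On the node chosen for backup: acquire the global read lock.
  FLUSH TABLES WITH READ LOCK;

  -- From another session, the node's Galera state can be checked;
  -- while the lock is held it typically reports Donor/Desynced.
  SHOW STATUS LIKE 'wsrep_local_state_comment';

  -- ... run the backup / file copy here ...

  -- Release the lock; the node re-syncs and applies the write-sets
  -- queued in the Galera cache.
  UNLOCK TABLES;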

FLUSH TABLE <tablename> (WITH READ LOCK|FOR EXPORT)

  • It is meant to take a read lock on the specified table only, local to the node it is executed on. The command is not replicated, so pxc_strict_mode = ENFORCING blocks its execution.
  • This read lock cannot be preempted by the background applier thread.
  • Execution of this command will cause the node to pause.
  • If the node executing the FLUSH command is the same node processing the workload, the node pauses immediately.
  • If the node executing the FLUSH command is different from the node processing the workload, the write-sets are queued in the incoming queue and flow-control causes the pause.
  • The end result is that the cluster stalls in both cases.

  • Once the lock is released (through UNLOCK TABLES), the node resumes applying write-sets.
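
A rough sketch of this, assuming a hypothetical table mydb.t1 and that pxc_strict_mode is temporarily relaxed (otherwise ENFORCING blocks the statement):

  -- ENFORCING would block the statement, so relax strict mode first
  -- (PERMISSIVE is a hypothetical choice for this illustration).
  SET GLOBAL pxc_strict_mode = PERMISSIVE;

  -- Take a node-local read lock on a single (hypothetical) table.
  FLUSH TABLES mydb.t1 WITH READ LOCK;

  -- ... copy the table files while the lock is held
  -- (FOR EXPORT is the variant used for InnoDB transportable tablespaces) ...

  -- Release the lock; the node resumes applying queued write-sets.
  UNLOCK TABLES;

  -- Restore the original setting.
  SET GLOBAL pxc_strict_mode = ENFORCING;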

LOCK TABLE <tablename> READ/WRITE

  • The LOCK TABLE command locks the specified table in the specified mode (READ or WRITE).
  • Again, the lock established by this command cannot be preempted by the applier thread.
  • The lock is taken at the node level (the command is not replicated), so pxc_strict_mode = ENFORCING blocks this command.
  • There is no state change in PXC on the execution of this command.
  • If the lock is taken on a table that is not touched by the active workload, the workload can continue to progress. If the lock is taken on a table that is part of the workload, the transactions in the workload that touch it will wait for the lock to be released, which in turn causes the complete workload to halt. A minimal sketch follows below.
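
For example, assuming a hypothetical table mydb.t1 and a non-ENFORCING pxc_strict_mode:

  -- Node-local lock; blocked when pxc_strict_mode = ENFORCING.
  LOCK TABLE mydb.t1 READ;

  -- Applier threads that need to modify mydb.t1 now wait behind this lock;
  -- write-sets touching other tables continue to be applied.

  -- Release the lock so any backlog on this table can be applied.
  UNLOCK TABLES;
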
GET_LOCK
  • It is a named lock and follows the same semantics as LOCK TABLE in PXC. (The base MySQL semantics are slightly different.)
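
For instance (the lock name and timeout below are hypothetical):

  -- Acquire a named, node-local lock, waiting up to 10 seconds; returns 1 on success.
  SELECT GET_LOCK('my_app_lock', 10);

  -- ... work guarded by the lock ...

  -- Release it explicitly (it is also released when the connection closes).
  SELECT RELEASE_LOCK('my_app_lock');
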
LOCK TABLES FOR BACKUP
  • As the name suggests, this lock is specifically meant for backups and blocks non-transactional changes (such as updates to non-transactional engines like MyISAM, and DDL changes).
  • PXC doesn't add any special semantics for this command.
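
A minimal usage sketch (the lock is released with UNLOCK TABLES):

  -- Block DDL and writes to non-transactional tables; InnoDB DML continues.
  LOCK TABLES FOR BACKUP;

  -- ... copy non-transactional tables and other files ...

  UNLOCK TABLES;
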
LOCK BINLOG FOR BACKUP
  • This statement blocks writes to the binlog. PXC always generates a binlog (whether it is persisted to disk is controlled by the log-bin setting). If you disable log-bin, then PXC enables emulation-based binlogging.
  • This effectively means this command can cause the cluster to stall.
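
A minimal usage sketch (note that this lock is released with UNLOCK BINLOG rather than UNLOCK TABLES):

  -- Block writes to the binary log, e.g. to capture a consistent binlog position.
  LOCK BINLOG FOR BACKUP;

  -- ... read the binlog position, copy files, etc. ...

  -- Release the binlog lock.
  UNLOCK BINLOG;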

Tracking active lock/flush

  • If you have executed a FLUSH or LOCK command and want to confirm it, you can do so using the Com_% status counters, as shown below. These counters are connection-specific, so check them from the same client connection that executed the commands. Also, these counters are aggregate counters and only ever increment.
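
For example, from the same connection that ran the statements:

  -- Each counter shows how many times the corresponding command was issued
  -- on this connection.
  SHOW SESSION STATUS LIKE 'Com_flush';
  SHOW SESSION STATUS LIKE 'Com_lock_tables';
  SHOW SESSION STATUS LIKE 'Com_unlock_tables';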

Conclusion

By now, we can conclude that users should be a bit more careful when executing these node-local lock and flush commands, understanding their semantics and effects. Careful execution of these commands will help them serve their intended purpose.