Hi and thanks to all who attended my webinar on Tuesday, January 26th titled Using PMM to Identify & Troubleshoot Problematic MySQL Queries!
Like we do after all our webinars, we compile the list of questions that were answered verbally, along with those that were posed but remained unanswered because we ran out of time during the broadcast. Before we get to the questions, I wanted to make sure to include a link to the RED Method for MySQL Queries dashboard by Peter Zaitsev, Percona’s CEO:
https://grafana.com/grafana/dashboards/12470
Hi Michael, you suggested that table create and update times should be ignored. Surely these values come from information_schema.tables? Does that not reflect what I would see if I do ls -l in datadir?
Yes, I did make this suggestion, but after further research I ought to qualify my response. TL;DR: you will only see useful information in the CREATE_TIME field.
As per the MySQL Manual for SHOW TABLE STATUS, which defines the fields CREATE_TIME, UPDATE_TIME, and CHECK_TIME, you will find that only CREATE_TIME for InnoDB tables provides accurate information for when the table was originally created. You will see either NULL or a recent-ish timestamp value for UPDATE_TIME, but this cannot be trusted: features such as InnoDB Change Buffering will skew this value, so the timestamp will not necessarily reflect when the SQL write happened, but only when the delayed write to the ibd file occurred. Further, if you have your table stored in the system tablespace (like the example below) you will continue to see NULL for the UPDATE_TIME.
```
mysql> select * from information_schema.tables where table_name = 't1'\G
*************************** 1. row ***************************
  TABLE_CATALOG: def
   TABLE_SCHEMA: michael
     TABLE_NAME: t1
     TABLE_TYPE: BASE TABLE
         ENGINE: InnoDB
        VERSION: 10
     ROW_FORMAT: Dynamic
     TABLE_ROWS: 0
 AVG_ROW_LENGTH: 0
    DATA_LENGTH: 16384
MAX_DATA_LENGTH: 0
   INDEX_LENGTH: 0
      DATA_FREE: 0
 AUTO_INCREMENT: NULL
    CREATE_TIME: 2021-01-28 19:26:27
    UPDATE_TIME: NULL
     CHECK_TIME: NULL
TABLE_COLLATION: utf8mb4_0900_ai_ci
       CHECKSUM: NULL
 CREATE_OPTIONS:
  TABLE_COMMENT:
1 row in set (0.00 sec)
```
To your point about ls -l on the datadir, or the stat command: you cannot rely on this information with any level of accuracy. Since the timestamp ls -l shows is equivalent to the Modify field of the output of stat, we’ll use stat to show the behaviour once you create the table, and what it reports after you restart mysqld on your datadir. So let’s see this in action via an example.

Before restarting, you’ll notice that the Access time is equivalent to what Percona Server for MySQL reports for CREATE_TIME:
```
$ stat /var/lib/mysql/michael/t1.ibd
  File: /var/lib/mysql/michael/t1.ibd
  Size: 114688    Blocks: 160        IO Block: 4096   regular file
Device: fd01h/64769d    Inode: 30016418    Links: 1
Access: (0640/-rw-r-----)  Uid: ( 1001/   mysql)   Gid: ( 1001/   mysql)
Access: 2021-01-28 19:26:27.571903770 +0000
Modify: 2021-01-28 19:28:07.488597476 +0000
Change: 2021-01-28 19:28:07.488597476 +0000
 Birth: -
```
However, after you restart mysqld you will no longer be able to tell the create time: MySQL will have updated the Access time on disk, and the values no longer bear much material relevance to the access patterns on the table.
```
$ stat /var/lib/mysql/michael/t1.ibd
  File: /var/lib/mysql/michael/t1.ibd
  Size: 114688    Blocks: 160        IO Block: 4096   regular file
Device: fd01h/64769d    Inode: 30016418    Links: 1
Access: (0640/-rw-r-----)  Uid: ( 1001/   mysql)   Gid: ( 1001/   mysql)
Access: 2021-01-28 19:30:08.557438038 +0000
Modify: 2021-01-28 19:28:07.488597476 +0000
Change: 2021-01-28 19:28:07.488597476 +0000
 Birth: -
```
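To see the relationship between ls -l and stat directly, here is a minimal sketch using a throwaway temp file in place of a real .ibd (assumes GNU coreutils stat and date, as on most Linux systems):

```shell
# Minimal sketch: the timestamp column `ls -l` prints is the file's
# mtime, i.e. the same value stat reports in its Modify field.
f=$(mktemp)                    # scratch file standing in for t1.ibd
stat -c 'Modify: %y' "$f"      # stat's Modify field (mtime)
date -r "$f"                   # the same mtime, which is what ls -l displays
rm -f "$f"
```

Note that neither tool can tell you when the table was created once mysqld has touched the file again.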
Can I use Percona Monitoring and Management (PMM) with an external bare-metal server of Clickhouse?
PMM leverages an instance of ClickHouse inside the Docker container (or your AMI, or your OVF destination) for storage of MySQL query data. At this time we ship PMM as an appliance, and therefore we don’t provide instructions on how to connect Query Analytics to an external instance of ClickHouse.
If the question is about “can I monitor ClickHouse database metrics using PMM?”, the answer is: yes, absolutely you can! In fact, PMM will work with any of the Prometheus exporters, and the way to enable this is via the feature we call External Services – take a look at our documentation for the correct syntax to use! External Services will get you pretty metrics, while Grafana (which is what we use in PMM to provide the visuals) already contains a native ClickHouse datasource which you can use to run SQL queries against ClickHouse from within PMM. Simply define the datasource and you’re done!
Are all PMM2 features compatible with MySQL 8?
The latest release of PMM 2.14 (January 28th, 2021) supports MySQL 8 and Percona Server for MySQL 8. PMM now supports not only traditional asynchronous replication but also MySQL InnoDB Group Replication, and of course Percona’s own Percona XtraDB Cluster (PXC) write-set replication (aka wsrep via Galera). When using Query Analytics with Percona Server for MySQL or PXC, you’ll also benefit from the Extended Slow Log Format, which provides a very detailed view of activity at the InnoDB storage engine level.
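To give a feel for the extra detail, here is a hand-written, abbreviated sample of the per-query annotations the extended slow log records (the exact set of fields you see depends on your log_slow_verbosity setting; the numbers below are illustrative, not real output):

```
# Query_time: 0.412  Lock_time: 0.000  Rows_sent: 10  Rows_examined: 100000
# InnoDB_IO_r_ops: 6415  InnoDB_IO_r_bytes: 105103360  InnoDB_IO_r_wait: 0.001279
# InnoDB_rec_lock_wait: 0.000000  InnoDB_queue_wait: 0.000000
# InnoDB_pages_distinct: 6430
```

Fields like InnoDB_IO_r_ops and InnoDB_pages_distinct are what let Query Analytics surface storage-engine-level behaviour per query.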
I added several databases to PMM; however, QAN shows only a few of them and not all. What could be the issue? How do I approach Percona Support for help with such things?
There could be a few things going on here that you’ll want to review from Percona’s Documentation:
- Do you have a user provisioned with appropriate access permissions in MySQL?
- If sourcing from PERFORMANCE_SCHEMA, is P_S actually enabled & properly configured?
- Are long_query_time and the other slow log settings properly configured to write events?
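A quick way to check the second and third points on a running server is with standard SHOW statements (a sketch; adjust the patterns to your datasource):

```sql
-- Is the slow log datasource enabled and configured to capture events?
SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';
SHOW GLOBAL VARIABLES LIKE 'long_query_time';

-- Is PERFORMANCE_SCHEMA available if that is your datasource?
SHOW GLOBAL VARIABLES LIKE 'performance_schema';
```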
Slow Log Configuration
These are the recommended settings for the slow log on Percona Server for MySQL. I prefer the slow log vs P_S because you get the InnoDB storage engine information along with other extended query properties (which are not available in upstream MySQL, nor in RDS, or via PERFORMANCE_SCHEMA):
```
log_output=file
slow_query_log=ON
long_query_time=0
log_slow_rate_limit=100
log_slow_rate_type=query
log_slow_verbosity=full
log_slow_admin_statements=ON
log_slow_slave_statements=ON
slow_query_log_always_write_time=1
slow_query_log_use_global_control=all
innodb_monitor_enable=all
userstat=1
```
User Permissions
You’ll want to use these permissions for the PMM user:
```
CREATE USER 'pmm'@'localhost' IDENTIFIED BY 'pass' WITH MAX_USER_CONNECTIONS 10;
GRANT SELECT, PROCESS, SUPER, REPLICATION CLIENT, RELOAD ON *.* TO 'pmm'@'localhost';
```
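After creating the user, you can confirm the grants took effect as MySQL sees them (substitute your own user and host if they differ):

```sql
-- Verify the PMM user's privileges
SHOW GRANTS FOR 'pmm'@'localhost';
```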
PERFORMANCE_SCHEMA
Using PERFORMANCE_SCHEMA is less detailed but comes with the benefit that you’re writing to and reading from an in-memory-only object, so you’re saving IOPS to disk. Further, if you’re in AWS or another DBaaS you generally don’t get raw access to the on-disk slow log, so PERFORMANCE_SCHEMA can be your only option.
PERFORMANCE_SCHEMA turned on
The latest versions of Percona Server for MySQL and Community MySQL ship with PERFORMANCE_SCHEMA enabled by default, but sometimes users disable it. If you find it is disabled, a restart of mysqld is required in order to enable it.
You want to make sure your my.cnf includes:
```
performance_schema=ON
```
You can check via a running MySQL instance by executing:
```
mysql> show global variables like 'performance_schema';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| performance_schema | ON    |
+--------------------+-------+
1 row in set (0.04 sec)
```
PERFORMANCE_SCHEMA configuration
You’ll need to make sure you enable the following consumers so that mysqld writes events to the relevant P_S tables:
```
mysql> select * from performance_schema.setup_consumers WHERE ENABLED='YES';
+---------------------------+---------+
| NAME                      | ENABLED |
+---------------------------+---------+
| events_statements_current | YES     |
| events_statements_history | YES     |
| global_instrumentation    | YES     |
| thread_instrumentation    | YES     |
| statements_digest         | YES     |
+---------------------------+---------+
5 rows in set (0.00 sec)
```
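If any of those consumers show as disabled, you can switch them on at runtime with an UPDATE against setup_consumers. Note this only persists until the next restart; to make it permanent, set the matching performance-schema-consumer-* options in my.cnf.

```sql
-- Enable the consumers Query Analytics relies on (runtime only)
UPDATE performance_schema.setup_consumers
   SET ENABLED = 'YES'
 WHERE NAME IN ('events_statements_current',
                'events_statements_history',
                'statements_digest');
```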
Are there any specific limitations when using PMM for monitoring AWS Aurora?
The most significant limitation is that you cannot access the slow log, and thus must configure PERFORMANCE_SCHEMA as the query datasource. See the previous section on how to configure PERFORMANCE_SCHEMA as needed for PMM Query Analytics.
One great feature of PMM is our native support for AWS Aurora. We have a specific dashboard for those Aurora-only features.
Thanks for attending!
If you attended (or watched the video), please share via comments any takeaways or further questions you may have! And let me know if you enjoyed my jokes 🙂