In this post, we will discuss the best way to update InnoDB table statistics manually.

As a support engineer, I often see situations where the cardinality of a table is not correct. When InnoDB calculates the cardinality of an index, it does not scan the full table by default. Instead, it looks at random pages, as determined by the options innodb_stats_sample_pages, innodb_stats_transient_sample_pages and innodb_stats_persistent_sample_pages, or by the CREATE TABLE option STATS_SAMPLE_PAGES. The default value for persistent statistics is 20. This approach works fine when the number of unique values in your secondary key grows in step with the size of the table. But what if you have a column with a comparatively small number of unique values? This could be a many-to-many relationship table for a common service, for example, or just a table containing a list of sell orders that belong to one of a dozen shops owned by the company. Such tables can grow to billions of rows with a small (fewer than 100) number of unique shop IDs.

At some point, InnoDB will report the wrong values for such indexes. Really! If 20 sampled pages contain 100 unique shop IDs, how many unique shop IDs would 20,000 pages have? 100 × 1,000 = 100,000? This extrapolation seems logical to the sampling algorithm, and after a certain number of rows such indexes end up with extraordinarily large cardinality values.

ANALYZE TABLE will not help, because it uses the same algorithm. Increasing the number of "stats" sample pages would help, but it has its own downside: the more pages it has to examine, the slower ANALYZE TABLE runs. While this command is not blocking, it still creates side effects, as described in this blog post. And the longer it runs, the less control you have.

There is another issue with InnoDB statistics: even if they are persistent and STATS_AUTO_RECALC is set to 0, values still get added for secondary indexes, as shown in lp:1538765. Eventually, after you insert millions of rows, your statistics get corrupted. ANALYZE TABLE can fix this only if you specify a very large number of "stats" sample pages.

Updating InnoDB Table Statistics Manually

InnoDB stores statistics in the “mysql” database, in the tables innodb_table_stats and innodb_index_stats. Since they are regular MySQL tables, privileged users can access them. We can update them and modify statistics as we like. And these statistics are used by the Optimizer!

I created a small example showing how to do this trick. I used Percona Server for MySQL version 5.7.19, but the trick will work on any supported MySQL and Percona Server for MySQL version.

First, let’s create test tables. The first table has shops, with a few shop profiles with the shop ID and name:

The second table refers to the “shops” table:
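
Again a sketch with assumed columns; the important part is the composite secondary key on (shop_id, create_date):

```sql
-- Sketch of the large "goods" table, with a low-cardinality shop_id
-- column and a composite secondary key on it.
CREATE TABLE goods (
    id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    shop_id     INT UNSIGNED    NOT NULL,
    create_date DATETIME        NOT NULL,
    comment     VARCHAR(255),
    KEY shop_id (shop_id, create_date)
) ENGINE=InnoDB;
```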

Let’s check how many unique shops we have:

With 100 distinct shops, and a key on (shop_id, create_date), we expect the cardinality in table goods to be not much different from this query result:
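
The per-prefix distinct counts look like this (same assumed schema as above):

```sql
-- Real number of distinct values for each prefix of the
-- (shop_id, create_date) index:
SELECT COUNT(DISTINCT shop_id)              AS n_diff_shop_id,
       COUNT(DISTINCT shop_id, create_date) AS n_diff_full_key
FROM   goods;
```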

However, SHOW INDEX returns dramatically different values for the column shop_id:
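
For example:

```sql
SHOW INDEX FROM goods;
-- The Cardinality column for the shop_id key part is hugely inflated
-- compared to the real 100 distinct values.
```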

ANALYZE TABLE does not help:
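
Re-running the analysis leaves the estimate just as wrong:

```sql
ANALYZE TABLE goods;
SHOW INDEX FROM goods;  -- the cardinality of shop_id is still inflated
```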

As a result, if we join the two tables, the Optimizer chooses the wrong JOIN order and query execution plan:
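
A join of roughly this shape shows it (the exact query text from the original post is an assumption here):

```sql
-- With the inflated cardinality, the Optimizer may drive the join
-- from the wrong table:
EXPLAIN
SELECT COUNT(*)
FROM   shops
JOIN   goods USING (shop_id);
```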

Compare this to the STRAIGHT_JOIN order:
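
The hint forces the join to run in the order the tables are listed:

```sql
-- STRAIGHT_JOIN executes the join in FROM-clause order:
-- shops first, then goods.
EXPLAIN
SELECT STRAIGHT_JOIN COUNT(*)
FROM   shops
JOIN   goods USING (shop_id);
```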

The time difference for a small 8M-row table is around sixfold! For a big table with many columns, it would be even larger.

Is STRAIGHT_JOIN the only solution for this case?

No! It’s also not a great solution because if the query is complicated and involves more than two tables, it may be affected by bug fixes and improvements in the Optimizer code. Then the query order might not be optimal for new versions and updates. Therefore, you’ll need to test such queries at each upgrade, including minor ones.

So why does ANALYZE TABLE not work? Because the default number of pages it uses to calculate statistics is too small to capture such a skewed distribution. You can increase the table option STATS_SAMPLE_PAGES until you find a value that works. The drawback is that the greater you set STATS_SAMPLE_PAGES, the longer ANALYZE TABLE takes to finish. Also, if you update a large portion of the table, you are again affected by lp:1538765. At some point, the statistics will once more be inaccurate.

Now let’s try our manual statistics update trick

InnoDB stores its persistent statistics in the tables mysql.innodb_table_stats  and mysql.innodb_index_stats:
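
We can inspect them directly:

```sql
-- Current persistent statistics for the goods table:
SELECT * FROM mysql.innodb_table_stats WHERE table_name = 'goods';
SELECT * FROM mysql.innodb_index_stats WHERE table_name = 'goods';
```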

And we can update these tables directly:
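
For example, to set the number of distinct values for the first key part of the shop_id index to the real count of 100 (the database name test is an assumption):

```sql
-- n_diff_pfx01 = distinct values of the first key part (shop_id);
-- a second statement on n_diff_pfx02 would cover (shop_id, create_date).
UPDATE mysql.innodb_index_stats
SET    stat_value = 100
WHERE  database_name = 'test'
  AND  table_name    = 'goods'
  AND  index_name    = 'shop_id'
  AND  stat_name     = 'n_diff_pfx01';
```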

I took index values from earlier, as calculated by this query:
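
That is, the per-prefix distinct counts:

```sql
SELECT COUNT(DISTINCT shop_id)              AS n_diff_pfx01,
       COUNT(DISTINCT shop_id, create_date) AS n_diff_pfx02
FROM   goods;
```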

Now the statistics are up to date, but not yet used:
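
For example:

```sql
SHOW INDEX FROM goods;
-- Still shows the old, inflated cardinality: the server keeps a cached
-- copy of the statistics for the open table.
```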

To finalize the changes, we need to run FLUSH TABLE goods:
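
After the flush, the reopened table picks up the values from the statistics tables:

```sql
FLUSH TABLE goods;
SHOW INDEX FROM goods;  -- now reports the manually set values
```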

Now everything is good.

But FLUSH TABLE is a blocking operation, right? Won't it block queries and create a scenario even worse than the one described for ANALYZE TABLE in this post?

At first glance this is true. But we can use the same trick Percona Toolkit uses: set lock_wait_timeout to 1 and call FLUSH in a loop. To demonstrate how it works, I use a scenario similar to the one described in the ANALYZE TABLE blog post.

First, let’s reset the statistics to ensure our FLUSH works as expected:

Then we update the mysql.innodb_*_stats tables manually and check that the Optimizer still sees the outdated statistics:
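
The same update as before (again assuming database test):

```sql
UPDATE mysql.innodb_index_stats
SET    stat_value = 100
WHERE  database_name = 'test' AND table_name = 'goods'
  AND  index_name = 'shop_id' AND stat_name = 'n_diff_pfx01';

SHOW INDEX FROM goods;  -- the Optimizer still sees the outdated values
```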

Now let’s start a long running query in one session that blocks our FLUSH TABLE command:

And let’s run FLUSH TABLE in a loop:

Now let’s ensure we can access the table:

We cannot! We cannot even connect to the database where the table is stored.

The reason is that while the FLUSH TABLE command was waiting, and was eventually killed due to the metadata lock wait timeout, it had already requested a table lock for flushing, which blocked other incoming queries.

But we can enclose FLUSH TABLE in LOCK TABLES ... WRITE; ... UNLOCK TABLES; operations. In this case, the LOCK TABLES command is blocked until all queries release their metadata locks on the table. Then it locks the table exclusively, FLUSH TABLE runs, and the script immediately unlocks the table. Since closing the session causes an implicit unlock, I used a PHP one-liner to keep everything in a single session:
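
In essence, the one-liner executes this statement sequence in its single connection (the retry loop around LOCK TABLES lives in the script):

```sql
SET SESSION lock_wait_timeout = 1;  -- the script retries LOCK TABLES on timeout
LOCK TABLES goods WRITE;            -- waits until all metadata locks are released
FLUSH TABLE goods;                  -- the actual statistics reload
UNLOCK TABLES;                      -- release the lock immediately
```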

We can confirm that a parallel session can still access the table:
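
For example:

```sql
-- A parallel session is only blocked for the instant the table is
-- actually locked, so normal queries keep working:
SELECT COUNT(*) FROM goods WHERE shop_id = 1;
```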

After the PHP script finishes its job, statistics are corrected:
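
```sql
SHOW INDEX FROM goods;
-- The shop_id cardinality now matches the real number of distinct shops.
```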

Conclusion

We can manually update persistent InnoDB statistics to fix Optimizer plans for our queries, with almost no impact on a live server.

Comments
Fernando Mattera

Very useful article Sveta, but I prefer the RDBMS to manage statistics.
I hope every new MySQL version improves this, and/or gives the ANALYZE command more options to do this well.

Leonid Sokolovskiy

I am new to MySQL, but to me this looks like a "hacky" way to handle statistics. Is it a recommended approach?
I agree with Fernando that it should be the engine's responsibility to do this.
On the other hand, there may be situations where you will always end up with a bad plan if you do not hack the statistics 🙁

Randy Meisner

How does this work if partitioned tables are involved? Do you treat each partition as a separate table and hope that MySQL can add up all of the individual index statistics properly?