Comments on: MySQL Memory Management, Memory Allocators and Operating System
https://www.percona.com/blog/mysql-memory-management-memory-allocators-and-operating-system/

By: Sveta Smirnova
https://www.percona.com/blog/mysql-memory-management-memory-allocators-and-operating-system/#comment-10970834
Mon, 27 May 2019 14:41:29 +0000

@james We reported a bug at https://bugs.mysql.com/bug.php?id=95065. It is now up to Oracle to decide how to fix it.

By: james
https://www.percona.com/blog/mysql-memory-management-memory-allocators-and-operating-system/#comment-10970824
Fri, 24 May 2019 08:55:21 +0000

jemalloc did not fix my issue.
Please suggest option 1 to Oracle.

Thanks

By: Simon Mudd
https://www.percona.com/blog/mysql-memory-management-memory-allocators-and-operating-system/#comment-10970740
Thu, 02 May 2019 17:36:26 +0000

Hi Sveta,

Over the last few months I have bumped into an issue that is very similar, if not the same.
The related bug is https://bugs.mysql.com/94647, also reported as not a bug (https://jira.mariadb.org/browse/MDEV-14050).

In my case, a server upgraded from MariaDB 10.1 to 10.3 suddenly started exhibiting higher memory usage, triggering OOM conditions. Changing the memory allocator resolved the issue. (I used tcmalloc.)

What somewhat surprised me was that this happened on the same underlying OS, so the access patterns to glibc’s malloc() must have changed to trigger it (the queries did not change, and 10.1 had been working fine). The symptoms were recognised as real, but since the “problem was the memory allocator” it was “not an issue”.

Oracle MySQL, Percona Server and MariaDB users want to use as much memory _as possible_ on their systems. They want that memory usage to be stable, and ideally capped at a configured value, because the alternative is either OOM situations or having to under-utilise memory to protect the system from dying unexpectedly.

In the specific bug I mention there’s a suggestion to try malloc_trim(), which would release memory, since the brief look I took at the malloc code indicated that free() is lazy (by design). That’s fine for many applications, but not for memory-hungry ones like a database server, and I’m not sure why this wasn’t tickled by older versions of MySQL or MariaDB.
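To show concretely what I mean by “lazy”, here is a rough standalone sketch (Linux + glibc only, block sizes and counts picked arbitrarily, nothing MySQL-specific): freed memory stays resident until malloc_trim() is called.

    /* trim_demo.c - rough sketch: glibc keeps freed memory resident until
     * malloc_trim() hands whole free pages back to the kernel.
     * Build: gcc trim_demo.c -o trim_demo    (Linux + glibc only)
     */
    #include <malloc.h>    /* malloc_trim() */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define NBLOCKS 100000
    #define BLOCKSZ 4096

    static char *blocks[NBLOCKS];

    static void print_rss(const char *label) {
        char line[256];
        FILE *f = fopen("/proc/self/status", "r");
        if (!f) return;
        while (fgets(line, sizeof line, f))
            if (strncmp(line, "VmRSS:", 6) == 0)
                printf("%s%s", label, line + 6);
        fclose(f);
    }

    int main(void) {
        for (int i = 0; i < NBLOCKS; i++) {
            blocks[i] = malloc(BLOCKSZ);
            memset(blocks[i], 1, BLOCKSZ);     /* touch the pages */
        }
        print_rss("after malloc: ");

        for (int i = 0; i < NBLOCKS; i += 2)   /* free every other block so the heap */
            free(blocks[i]);                   /* is fragmented, as on a busy server */
        print_rss("after free:   ");           /* RSS barely moves: free() is lazy */

        malloc_trim(0);                        /* release whole free pages to the OS */
        print_rss("after trim:   ");           /* RSS drops noticeably */
        return 0;
    }

On my understanding malloc_trim() has done more than just trim the top of the heap since glibc 2.8, so even a fragmented heap like this gives pages back; that is exactly the kind of call the server could make after dropping a large cache.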

If we accept that we have to use an alternative memory allocator to prevent such issues, then basically we are saying that glibc’s allocator is broken for database usage. Having to manually configure the server to use such “custom” memory allocators is extra work and entirely unhelpful; perhaps MySQL or MariaDB should ship with a pre-built alternative allocator “which works”, while still allowing people to override that if they so choose.
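For what it’s worth, the “custom allocator” route is plain symbol interposition: the preloaded library’s malloc()/free() get bound before glibc’s, which is all that jemalloc or tcmalloc need. A minimal sketch of that mechanism (a counting shim only, file names made up, not a real allocator):

    /* shim.c - sketch of how an LD_PRELOAD'ed allocator takes over a process:
     * the dynamic linker binds malloc()/free() to this library before glibc.
     * Build: gcc -shared -fPIC shim.c -o libshim.so
     * Run:   LD_PRELOAD=./libshim.so ls
     */
    #include <stddef.h>
    #include <stdio.h>

    /* glibc also exports its allocator under these aliases, so the shim can
     * forward to it without calling itself recursively. */
    extern void *__libc_malloc(size_t size);
    extern void  __libc_free(void *ptr);

    static unsigned long calls;        /* not thread-safe; illustration only */

    void *malloc(size_t size) {        /* overrides glibc's malloc process-wide */
        calls++;
        return __libc_malloc(size);
    }

    void free(void *ptr) {
        __libc_free(ptr);
    }

    __attribute__((destructor))
    static void report(void) {
        fprintf(stderr, "malloc() was called %lu times\n", calls);
    }

jemalloc and tcmalloc are picked up the same way, for example via mysqld_safe --malloc-lib or an LD_PRELOAD environment setting in the systemd unit.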

I haven’t had time to check whether “appropriate usage of malloc_trim()” would resolve the issue, though perhaps that is worth considering. However, it has not happened yet “as there’s no bug…”, and I find that disappointing.

Coming back to InnoDB: people assume that memory usage is governed by the size of the buffer pool. You clearly show that’s not the case, as InnoDB has other memory regions outside the buffer pool that can change in size, and that change can be significant. So I would really like to see all InnoDB memory usage managed together, with a size limit, so that the issues you describe and other similar ones I have seen would also be controlled.

Longer term, I guess it makes sense to have a single allocator, usable by MySQL/MariaDB, with a configurable memory cap; even external plugins and storage engines should use it. Then memory sizing would be under control.

That clearly then brings up all the problems of what to do under memory pressure, but when you have lots of cache the system should be able to handle it: free “less important” areas to make space, or simply report an error such as “insufficient memory to complete request”. Reporting an error is better than the server being killed by the kernel or systemd.
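As a crude illustration of the behaviour I’d like (a toy wrapper only, not how I expect the server would actually implement it): an allocation front end that tracks total usage against a configurable cap and refuses the request, so the caller can return “insufficient memory” instead of the OOM killer picking a victim.

    /* capped_alloc.c - toy sketch of a capped allocation front end: refuse
     * requests over a configured limit so the caller can fail gracefully,
     * rather than letting the kernel's OOM killer choose a victim.
     * Build: gcc -std=c11 capped_alloc.c -o capped_alloc
     */
    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    static atomic_size_t used_bytes;
    static const size_t memory_cap = 512UL * 1024 * 1024;   /* e.g. a 512 MB cap */

    /* Returns NULL when the cap would be exceeded; the caller decides how to
     * degrade: shrink a "less important" cache, fail the statement, etc. */
    static void *capped_malloc(size_t size) {
        size_t before = atomic_fetch_add(&used_bytes, size);
        if (before + size > memory_cap) {
            atomic_fetch_sub(&used_bytes, size);
            return NULL;                  /* "insufficient memory to complete request" */
        }
        void *p = malloc(size);
        if (!p)
            atomic_fetch_sub(&used_bytes, size);
        return p;
    }

    static void capped_free(void *p, size_t size) {   /* toy: caller passes the size */
        free(p);
        atomic_fetch_sub(&used_bytes, size);
    }

    int main(void) {
        void *big = capped_malloc(600UL * 1024 * 1024);
        printf("600 MB request: %s\n", big ? "granted" : "refused (cap reached)");
        void *ok = capped_malloc(100UL * 1024 * 1024);
        printf("100 MB request: %s\n", ok ? "granted" : "refused (cap reached)");
        capped_free(ok, 100UL * 1024 * 1024);
        return 0;
    }

The point is simply that the failure shows up as an error the server can act on, not as a SIGKILL.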

Until we do that we’ll keep bumping into these issues, and they’ll be frustrating and take up a lot of time for everyone: users, people in support, the upstream developers checking their code, and their managers figuring out whether there’s a “bug” and what to do about it.

I do hope that progress will be made in managing total memory usage, and doing that in InnoDB, where most memory is already used, seems like a good starting point. Extending that to the server as a whole would make all of our lives easier and let us use the memory we give MySQL most efficiently, for caching data and providing the best performance possible.

Filing these things as “not a bug” may be technically true, but doing so doesn’t really address the underlying problem the user is experiencing and wants to resolve.
