Troubleshooting crashes is never a fun task, especially when MySQL does not report the cause of the crash, as often happens when a MySQL memory issue shows up. Peter Zaitsev wrote a blog post in 2012, Troubleshooting MySQL Memory Usage, with lots of useful tips. With the newer versions of MySQL (5.7+) and performance_schema, we can troubleshoot MySQL memory allocation much more easily.

MySQL Memory Error

First of all, there are three major cases in which MySQL will crash due to running out of memory:

  1. MySQL tries to allocate more memory than is available because we specifically told it to do so. For example: you did not set innodb_buffer_pool_size correctly. This is very easy to fix (a rough sanity check is sketched after this list).
  2. Some other process(es) on the server are allocating RAM. It can be the application (Java, Python, PHP), the web server, or even the backup (i.e. mysqldump). Once the source of the problem is identified, it is straightforward to fix.
  3. Memory leaks in MySQL. This is the worst-case scenario, and we need to troubleshoot it.
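For case 1, a quick sanity check from inside MySQL is to compare the global caches with the per-connection buffers multiplied by max_connections. This is only a rough sketch; the variables below are illustrative, not an exhaustive list of everything MySQL allocates:

  SELECT @@innodb_buffer_pool_size/1024/1024/1024 AS buffer_pool_gb,
         @@innodb_log_buffer_size/1024/1024       AS log_buffer_mb,
         @@max_connections                        AS max_connections,
         (@@read_buffer_size + @@read_rnd_buffer_size
          + @@sort_buffer_size + @@join_buffer_size)/1024/1024 AS per_connection_buffers_mb;
  -- A very rough worst case is buffer_pool_gb plus
  -- per_connection_buffers_mb * max_connections; if that exceeds the RAM
  -- available to MySQL, this is case 1.

If the configured totals clearly exceed the available RAM, fixing the configuration is all that is needed.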

Where to start troubleshooting MySQL memory leaks

Here is what we can start with (assuming it is a Linux server):

Part 1: Linux OS and config check
  1. Identify the crash by checking the MySQL error log and the Linux log files (i.e. /var/log/messages or /var/log/syslog). You may see an entry saying that the OOM Killer killed MySQL. Whenever MySQL has been killed by the OOM Killer, “dmesg” also shows details about the circumstances surrounding it.
  2. Check the available RAM:
    • free -g
    • cat /proc/meminfo
  3. Check which applications are using RAM: “top” or “htop” (look at resident vs. virtual memory)
  4. Check the MySQL configuration: check /etc/my.cnf, or in general /etc/my* (including /etc/mysql/* and other files). MySQL may be running with a different my.cnf (run ps ax | grep mysql); the values the running server actually uses can also be confirmed from inside MySQL, as shown after this list
  5. Run vmstat 5 5 to see if the system is reading/writing via virtual memory and if it is swapping
  6. For non-production environments we can use other tools (like Valgrind, gdb, etc.) to examine MySQL memory usage
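Complementing item 4, the settings the running server actually uses can be confirmed from inside MySQL, regardless of which my.cnf was read at startup. The variables below are just examples of memory-related settings worth checking:

  SHOW GLOBAL VARIABLES WHERE Variable_name IN
      ('innodb_buffer_pool_size', 'innodb_log_buffer_size',
       'tmp_table_size', 'max_heap_table_size',
       'table_open_cache', 'max_connections');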
Part 2: Checks inside MySQL

Now we can check things inside MySQL to look for potential MySQL memory leaks.

MySQL allocates memory in many places. Especially the following (a consolidated set of checks follows the list):

  • Table cache
  • Performance_schema (run: show engine performance_schema status and look at the last line). That may be the culprit on systems with a small amount of RAM, i.e. 1G or less
  • InnoDB (run show engine innodb status and check the buffer pool section: memory allocated for buffer_pool and related caches)
  • Temporary tables in RAM (find all in-memory tables by running: select * from information_schema.tables where engine='MEMORY')
  • Prepared statements, when they are not deallocated (compare the number of prepared statements with deallocations by running: show global status like 'Com_prepare_sql'; show global status like 'Com_dealloc_sql')
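As a convenience, the checks mentioned in the list can be run together in one pass; this simply repeats the commands above plus a size calculation for the MEMORY tables:

  -- Performance Schema's own footprint (see the last line of the output)
  SHOW ENGINE PERFORMANCE_SCHEMA STATUS;

  -- InnoDB buffer pool and related caches
  SHOW ENGINE INNODB STATUS;

  -- In-memory (MEMORY engine) tables and their approximate size
  SELECT table_schema, table_name,
         ROUND((data_length + index_length)/1024/1024, 2) AS size_mb
  FROM information_schema.tables
  WHERE engine = 'MEMORY';

  -- Prepared statements: PREPAREs should roughly match DEALLOCATEs
  SHOW GLOBAL STATUS LIKE 'Com_prepare_sql';
  SHOW GLOBAL STATUS LIKE 'Com_dealloc_sql';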

The good news is that starting with MySQL 5.7, we have memory allocation instrumentation in performance_schema. Here is how we can use it:

  1. First, we need to enable collecting memory metrics by updating performance_schema.setup_instruments (see the sketch after this list).
  2. Run the report from the sys schema (also shown in the sketch below).
  3. Usually this will point to the place in the code where memory is allocated, and it is often self-explanatory. In some cases we can search for known bugs, or we might need to check the MySQL source code.
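A minimal sketch of steps 1 and 2, assuming MySQL 5.7+ with the sys schema installed. Note that enabling the instruments at runtime only accounts for memory allocated after the change; to instrument from startup, performance-schema-instrument='memory/%=ON' can be set in my.cnf:

  -- Step 1: enable the memory instruments
  UPDATE performance_schema.setup_instruments
  SET ENABLED = 'YES'
  WHERE NAME LIKE 'memory/%';

  -- Step 2: top allocations by currently allocated bytes
  SELECT event_name, current_count, current_alloc
  FROM sys.memory_global_by_current_bytes
  LIMIT 10;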

For example, for the bug where memory was over-allocated in triggers (https://bugs.mysql.com/bug.php?id=86821), the select showed the sp_head allocations near the top of the report.

The largest chunk of RAM is usually the buffer pool, but ~3G in stored procedures seems to be too high.

According to the MySQL source code documentation, sp_head represents one instance of a stored program, which might be of any type (stored procedure, function, trigger, or event). In the above case, we have a potential memory leak.
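To drill into that specific area, the same sys view can be filtered by instrument name. The LIKE pattern below assumes the 5.7 naming of the stored-program instruments (memory/sql/sp_head...); adjust it to whatever shows up in your own report:

  SELECT event_name, current_count, current_alloc
  FROM sys.memory_global_by_current_bytes
  WHERE event_name LIKE 'memory/sql/sp_head%';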

In addition, we can get a total for each higher-level event if we want a bird’s-eye view of what is eating memory:
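One way to do that, sketched here directly against performance_schema.memory_summary_global_by_event_name, is to group the instrument names by their second path component and sum the currently used bytes:

  SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(event_name, '/', 2), '/', -1) AS event_type,
         ROUND(SUM(current_number_of_bytes_used)/1024/1024, 2) AS mb_currently_used
  FROM performance_schema.memory_summary_global_by_event_name
  GROUP BY event_type
  HAVING mb_currently_used > 0;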

I hope these simple steps can help release memory in MySQL.


3 Comments
Peter Zaitsev

If you ever wondered whether it is MySQL that consumed a lot of memory, consider looking at Per Process Memory Usage as described in this blog post: https://www.percona.com/blog/2018/05/21/capturing-per-process-metrics-with-percona-monitoring-and-management-pmm/

I’ve seen many cases where other processes were consuming a lot of memory but MySQL got blamed.

doublemarket

Hello, thanks for the useful post about how we should start investigating memory issues!

Since this is nice, I’ve translated this into Japanese and posted it on my site yakst.com/ja/posts/5212

If it’s a problem, please let me know!

Andre

Hi,

according to the sentence “The largest chunk of RAM is usually the buffer pool but ~3G in stored procedures seems to be too high.”

How can I find the cause of the high memory usage of stored procedures?

Greetings
Andre