This post builds on the “How to obtain the LES (Last Executed Statement) from an Optimized Core Dump?” post written about a year ago.

A day after that post was released, Shane Bester wrote an improved version, “How to obtain all executing queries from a core file”, on his blog. Reading that post is key to understanding what follows.

I am faced with some complex bugs which would benefit from SQL testcases. Extracting the last executed statement (and ideally all queries running at the time of the crash/assert) is crucial for generating good testcases. For example, you may have a full SQL trace from RQG or elsewhere, but what if the actual crashing statement never made it to the logs? This is often the case, especially if you use an older version of RQG or use --log-output=FILE logging.

The idea here is to extract all running queries (whether they crashed mysqld or not) from a core file, and then append these SQL statements to the end of a testcase to ensure the crash/assert/Valgrind issue is reproduced.

Now, I found that not all mysqld core files have the global threads variable available, which rules out using Shane’s script as a universal automated solution. So it looks like we need another solution that gets all queries out and works in all cases.

Using Shane’s idea of macros seemed like the way forward (i.e. loop through the threads, and through the frames of each thread), but the next problem was that gdb stops as soon as an error is encountered. This is a known shortcoming of gdb’s macro language. The outcome is that at some frame x in thread y, gdb will stop execution because “there is no such thread/frame”. Also, one does not know upfront which frame number the do_command frame will have, or even whether there is one. Either will give an error and cause gdb macros to abort immediately (see the sketch below). Not usable, thus.
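To illustrate, a naive macro along these lines might look as follows (a hypothetical sketch, not Shane’s actual script; the limit of 100 threads and the frame number 2 are arbitrary assumptions). It aborts at the first thread number that does not exist, or at the first thread whose stack does not have the guessed frame:

# Hypothetical sketch of the naive macro approach;
# gdb aborts the whole macro on the first error it hits
define print_all_queries
  set $i = 1
  while $i <= 100
    thread $i
    frame 2
    print thd->query_string.string.str
    set $i = $i + 1
  end
end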

There seemed to be only one other solution mentioned online: using the Python interface to gdb, as Python offers try/except exception handling. However, this is not really a good general solution either, as it requires gdb to be compiled with --with-python, and so it requires per-server customization of gdb.
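For completeness, a minimal sketch of that approach (hypothetical; it assumes gdb was built --with-python, and the thread/frame bounds are arbitrary) would use try/except to skip missing threads and frames instead of aborting:

python
# Hypothetical: loop over threads and frames, skipping errors via try/except
for t in range(1, 101):
    try:
        gdb.execute("thread %d" % t)
        for f in range(0, 30):
            try:
                gdb.execute("frame %d" % f)
                gdb.execute("print thd->query_string.string.str")
            except gdb.error:
                pass  # no such frame, or no thd in this frame
    except gdb.error:
        pass  # no such thread
end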

Looking further, I found a workaround: one can qualify a variable with the name of the function in whose scope it lives, like this:

print do_command::thd->query_string.string.str

This is a great way to avoid having to cycle through all frames when checking statements in an automated script. It also allows one to run this print for all threads by sending the following input to gdb (which you can do using the < shell redirector):


thread 1
print do_command::thd->query_string.string.str
thread 2
print do_command::thd->query_string.string.str
thread 3
etc...

The idea here is that the redirector into gdb keeps executing statements one after the other, even if an error occurs. The result: a nice list of all running queries at the time of the assert/crash.
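As a sketch, such an input file can be generated with a small shell loop (the name my_extract_query.gdb and the upper bound of 500 threads are arbitrary assumptions; non-existent thread numbers merely produce errors that, as described above, do not stop gdb):

# Hypothetical generator for the gdb input file
for i in $(seq 1 500); do
  echo "thread $i"
  echo "print do_command::thd->query_string.string.str"
done > my_extract_query.gdb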

For the full version of the script, branch lp:percona-qa (bzr branch lp:percona-qa) and check out the extract_query.gdb file. For an example of how to post-process the results, you could use something similar to these shell statements:


# Run the extract_query.gdb script against the binary and core file
gdb ${BIN} ${CORE} >/dev/null 2>&1 < extract_query.gdb
# Keep only gdb value-history lines ('$1 = 0x... "..."'), strip the prefix and trailing quote, and terminate each query with ';'
grep '^\$' /tmp/gdb_PARSE.txt | sed 's/^[$0-9a-fx =]*"//;s/[" \t]*$//;s/$/;/' | grep -v '^$' > ./queries.txt

In this example, ${BIN} points to the mysqld binary, ${CORE} points to the core file, and /tmp/gdb_PARSE.txt is generated by the extract_query.gdb gdb script. The queries.txt file will contain all queries, nicely terminated by the usual “;” SQL terminator used by MySQL.
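For illustration, the raw lines in /tmp/gdb_PARSE.txt follow gdb’s value-history output format ($N = address "string"); the addresses and queries below are made up, but this is the shape the grep/sed pipeline above expects:

$2 = 0x7f9c5800b661 "SELECT c1 FROM t1 WHERE c2 = 5"
$5 = 0x7f9c58014d20 "INSERT INTO t1 VALUES (1,2)"

After post-processing, queries.txt would then contain:

SELECT c1 FROM t1 WHERE c2 = 5;
INSERT INTO t1 VALUES (1,2);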

Enjoy!

Comments

Michael Mior

Is this script actually posted anywhere? It seems like it would be quite useful, but I couldn’t find a link.

Nickolay Ihalainen

Note the backslash in the first grep ('^\$'): without it, grep matches only empty lines and you get empty output.

If the output is still empty, check /tmp/gdb_PARSE.txt: it could say that the thd variable is optimized out. In this case you should find the frame where the thd value is defined; I hit this, for example, with Percona Server 5.6.38-83.0 running on CentOS 7.4.
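A generic way to hunt for such a frame might look like this (a hypothetical sketch; the frame number 4 is made up and will differ per build):

thread 2
bt
# pick a frame from the backtrace that lists thd among its arguments
frame 4
print thd->query_string.string.str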

How to easily test the extract_query.gdb script:

docker pull percona:latest
docker run --privileged --rm -it --name p57 -e MYSQL_ALLOW_EMPTY_PASSWORD=1 percona:latest   # replace latest with a specific version if needed
docker exec -it p57 bash   # log in to the just-created docker container

Good luck, and have your database instrumented before you get a crash!