Slow MySQL Queries

In this blog post, we'll discuss how to improve the performance of slow MySQL queries using Apache Spark.

In my previous blog post, I wrote about using Apache Spark with MySQL for data analysis and showed how to transform and analyze a large volume of data (text files) with Apache Spark. Vadim also performed a benchmark comparing the performance of MySQL and Spark with the Parquet columnar format (using Air traffic performance data). That works great, but what if we don't want to move our data from MySQL to another storage system (i.e., a columnar format), and instead want to run "ad hoc" queries on top of an existing MySQL server? Apache Spark can help here as well.

Speed up Slow MySQL Queries

Using Apache Spark on top of the existing MySQL server(s) (without the need to export or even stream data to Spark or Hadoop), we can increase query performance more than ten times. Using multiple MySQL servers (replication or Percona XtraDB Cluster) gives us an additional performance increase for some queries. You can also use the Spark cache function to cache the whole MySQL query results table.

The idea is simple: Spark can read MySQL data via JDBC and can also execute SQL queries, so we can connect it directly to MySQL and run the queries. Why is this faster? For long-running (i.e., reporting or BI) queries, it can be much faster as Spark is a massively parallel system. MySQL can only use one CPU core per query, whereas Spark can use all cores on all cluster nodes. In my examples below, MySQL queries are executed inside Spark and run 5-10 times faster (on top of the same MySQL data).

In addition, Spark can add "cluster" level parallelism. In the case of MySQL replication or Percona XtraDB Cluster, Spark can split the query into a set of smaller queries (in the case of a partitioned table, it will run one query per partition, for example) and run those in parallel across multiple slave servers or multiple Percona XtraDB Cluster nodes. Finally, it will use map/reduce-type processing to aggregate the results.

I’ve used the same “Airlines On-Time Performance” database as in previous posts. Vadim created some scripts to download data and upload it to MySQL. You can find the scripts here: https://github.com/Percona-Lab/ontime-airline-performance. I’ve also used Apache Spark 2.0, which was released July 26, 2016.

Apache Spark Setup

Starting Apache Spark in standalone mode is easy. To recap:

  1. Download Apache Spark 2.0 and unpack it somewhere.
  2. Start the master.
  3. Start the slave (worker) and attach it to the master.
  4. Start the application (in this case spark-shell or spark-sql).

Example:
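A minimal sketch of the standalone setup, assuming Spark 2.0 is unpacked into /usr/local/spark and the master runs on the same host (paths and hostname are assumptions):

  cd /usr/local/spark
  ./sbin/start-master.sh
  ./sbin/start-slave.sh spark://$(hostname):7077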

To connect to Spark we can use spark-shell (Scala), pyspark (Python) or spark-sql. Since spark-sql is similar to the MySQL CLI, it is the easiest option to use (even "show tables" works). I also wanted to work with Scala in interactive mode, so I've used spark-shell as well. In all the examples I'm using the same SQL query in MySQL and Spark, so working with Spark is not that different.

To work with a MySQL server in Spark we need Connector/J for MySQL. Download the package and copy the mysql-connector-java-5.1.39-bin.jar to the Spark directory, then add the class path to conf/spark-defaults.conf:
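For example, the two class-path entries in conf/spark-defaults.conf (the /usr/local/spark path is an assumption):

  spark.driver.extraClassPath   /usr/local/spark/mysql-connector-java-5.1.39-bin.jar
  spark.executor.extraClassPath /usr/local/spark/mysql-connector-java-5.1.39-bin.jar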

Running MySQL queries via Apache Spark

For this test I was using one physical server with 12 CPU cores (older Intel(R) Xeon(R) CPU L5639 @ 2.13GHz), 48GB of RAM and SSD disks. I've installed MySQL and started the Spark master and a Spark slave (worker) on the same box.

Now we are ready to run MySQL queries inside Spark. First, start the shell (from the Spark directory, /usr/local/spark in my case):
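For example (a sketch; the master URL matches the standalone setup above):

  cd /usr/local/spark
  ./bin/spark-shell --master spark://$(hostname):7077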

Then we will need to connect to MySQL from spark and register the temporary view:
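A sketch in Scala using the Spark 2.0 JDBC data source; the connection URL matches the one used later in this post, while the fetch size and the yearD bounds are assumptions:

  val jdbcDF = spark.read.format("jdbc").options(
    Map("url" -> "jdbc:mysql://localhost:3306/ontime?user=root&password=xxx",
        "dbtable" -> "ontime.ontime_part",
        "fetchSize" -> "10000",
        "partitionColumn" -> "yearD",
        "lowerBound" -> "1988",
        "upperBound" -> "2016",
        "numPartitions" -> "26")).load()
  // register the DataFrame as the Spark table "ontime"
  jdbcDF.createOrReplaceTempView("ontime")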

So we have created a "datasource" for Spark (or in other words, a "link" from Spark to MySQL). The Spark table name is "ontime" (linked to the MySQL ontime.ontime_part table), and we can run SQL queries in Spark, which in turn will be parsed and translated into MySQL queries.

The "partitionColumn" option is very important here. It tells Spark to run multiple queries in parallel, one query per partition.

Now we can run queries against this view; the examples below use the same reporting query in both MySQL and Spark.

MySQL Query Example

Let’s go back to MySQL for a second and look at the query example. I’ve chosen the following query (from my older blog post):
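Here it is (the same query is listed again in the benchmark section below):

  select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt,
         sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed,
         round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate
  FROM ontime_part
  WHERE DayOfWeek not in (6,7)
    and OriginState not in ('AK', 'HI', 'PR', 'VI')
    and DestState not in ('AK', 'HI', 'PR', 'VI')
  GROUP by carrier
  HAVING cnt > 1000 and max_year > '1990'
  ORDER by rate DESC, cnt desc
  LIMIT 10;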

The query finds the total number of delayed flights for each airline. In addition, the query calculates a smart "ontime" rating, taking into consideration the number of flights (we do not want to compare smaller air carriers with the large ones, and we want to exclude the older airlines that are not in business anymore).

The main reason I've chosen this query is that it is hard to optimize in MySQL. All the conditions in the "where" clause only filter out about 30% of the rows, so roughly 100M rows still have to be processed. I've done a basic calculation:
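A rough check using row counts (totals as reported for the dataset used in this post):

  select count(*) FROM ontime;
  -- 152657276 rows in total

  select count(*) FROM ontime
  WHERE DayOfWeek not in (6,7)
    and OriginState not in ('AK', 'HI', 'PR', 'VI')
    and DestState not in ('AK', 'HI', 'PR', 'VI');
  -- 108776741 rows match the "where" conditions (roughly 70% of the table)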

Table structure:

Even with a "covered" index, MySQL will have to scan ~70M-100M rows and create a temporary table.

What is the query response time in MySQL?

19 minutes is definitely not great.

SQL in Spark

Now we want to run the same query inside Spark and let Spark read data from MySQL. We will create a “datasource” and execute the query:
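A sketch in Scala, reusing the "ontime" view registered earlier and running the same query (the Spark view name replaces ontime_part):

  val sqlDF = spark.sql("""
    select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt,
           sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed,
           round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate
    FROM ontime
    WHERE DayOfWeek not in (6,7)
      and OriginState not in ('AK', 'HI', 'PR', 'VI')
      and DestState not in ('AK', 'HI', 'PR', 'VI')
    GROUP by carrier
    HAVING cnt > 1000 and max_year > '1990'
    ORDER by rate DESC, cnt desc
    LIMIT 10""")
  sqlDF.show()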

spark-shell does not show the query time. This can be retrieved from the Web UI or from spark-sql. I've re-run the same query in spark-sql.

So the response time of the same query is almost 10x faster (on the same server, just one box). But how was this query translated into MySQL queries, and why is it so much faster? Here is what is happening inside MySQL:

Inside MySQL

Spark:

MySQL:

Spark is running 26 queries in parallel, which is great. As the table is partitioned, each query uses only one partition, but scans that whole partition.

In this case, as the box has 12 CPU cores / 24 threads, it efficiently executes 26 queries in parallel and the partitioned table helps to avoid contention issues (I wish MySQL could scan partitions in parallel, but it can’t at the time of writing).

Another interesting thing is that Spark can "push down" some of the conditions to MySQL, but only those inside the "where" clause. All group by/order by/aggregations are done inside Spark. Spark needs to retrieve the data from MySQL to perform those operations and will not push down group by/order by/etc. to MySQL.

That also means that queries without "where" conditions (for example "select count(*) as cnt, carrier from ontime group by carrier order by cnt desc limit 10") will have to retrieve all data from MySQL and load it into Spark (whereas running the query directly in MySQL would do all the group by work inside MySQL). Running it in Spark might be slower or faster (depending on the amount of data and use of indexes), but it also requires more resources and potentially more memory dedicated to Spark. The above query is translated into 26 queries, each of the form "select carrier from ontime_part where (yearD >= N AND yearD < N)".

Pushing down the whole query into MySQL

If we want to avoid sending all data from MySQL to Spark we have the option of creating a temporary table on top of a query (similar to MySQL’s create temporary table as select …). In Scala:
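A sketch, assuming the same JDBC options as before; the inner query matches the one discussed in the note below, and names like tableQuery and ontime_tmp are illustrative:

  val tableQuery = "(select yeard, count(*) from ontime_part group by yeard) tmp"
  val jdbcDFtmp = spark.read.format("jdbc").options(
    Map("url" -> "jdbc:mysql://localhost:3306/ontime?user=root&password=xxx",
        "dbtable" -> tableQuery,
        "fetchSize" -> "10000")).load()
  // the whole group by runs inside MySQL; Spark only sees the small result
  jdbcDFtmp.createOrReplaceTempView("ontime_tmp")
  spark.sql("select * from ontime_tmp").show()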

In Spark SQL:
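A sketch of the equivalent temporary view in Spark SQL (the view name and fetchSize value are illustrative):

  CREATE TEMPORARY VIEW ontime_tmp
  USING org.apache.spark.sql.jdbc
  OPTIONS (
    url "jdbc:mysql://localhost:3306/ontime?user=root&password=xxx",
    dbtable "(select yeard, count(*) from ontime_part group by yeard) tmp",
    fetchSize "1000"
  );
  select * from ontime_tmp;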

Please note:

  1. We do not want to use “partitionColumn” here, otherwise we will see 26 queries like this in MySQL: “SELECT yeard, count(*) FROM (select yeard, count(*) from ontime_part group by yeard) tmp where (yearD >= N AND yearD < N)” (obviously not optimal)
  2. This is not a good use of Spark, more like a “hack.” The only good reason to do it is to be able to have the result of the query as a source of an additional query.

Query cache in Spark

Another option is to cache the result of the query (or even the whole table) and then use .filter in Scala for faster processing. This requires sufficient memory dedicated to Spark. The good news is that we can add additional nodes to Spark and get more memory for the Spark cluster.

Spark SQL example:
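A sketch, assuming a temporary view defined over only the p2013 and p2014 MySQL partitions (the view name is illustrative; MySQL 5.6+ accepts the PARTITION clause in a table reference):

  CREATE TEMPORARY VIEW ontime_latest
  USING org.apache.spark.sql.jdbc
  OPTIONS (
    url "jdbc:mysql://localhost:3306/ontime?user=root&password=xxx",
    dbtable "ontime.ontime_part PARTITION (p2013, p2014)",
    fetchSize "1000"
  );
  CACHE TABLE ontime_latest;
  select dayofweek, count(*) from ontime_latest group by dayofweek;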

Here we cache the p2013 and p2014 partitions in Spark. This retrieves the data from MySQL and loads it into Spark. After that, all queries run on the cached data and are much faster.

With Scala we can cache the result of a query and then use filters to only get the information we need:
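A sketch (the aggregate query and the filter are illustrative):

  import org.apache.spark.sql.functions._

  val sqlDF = spark.sql("select dayofweek, count(*) as cnt from ontime group by dayofweek")
  sqlDF.cache()    // mark the result for caching in Spark memory
  sqlDF.count()    // the first action materializes the cache (pulls the data from MySQL)
  // subsequent filters run against the cached data, not against MySQL
  sqlDF.filter("dayofweek not in (6, 7)").orderBy(desc("cnt")).show()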

Using Spark with Percona XtraDB Cluster

As Spark can be used in cluster mode and scales with more and more nodes, reading data from a single MySQL server becomes a bottleneck. We can use MySQL replication slave servers or Percona XtraDB Cluster (PXC) nodes as a Spark datasource. To test it out, I've provisioned a Percona XtraDB Cluster with three nodes on AWS (I've used m4.2xlarge Ubuntu instances) and also started Apache Spark on each node:

  1. Node1 (pxc1): Percona Server + Spark Master + Spark worker node + Spark SQL running
  2. Node2 (pxc2): Percona Server + Spark worker node
  3. Node3 (pxc3): Percona Server + Spark worker node

All the Spark worker nodes use the memory configuration option:
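For example, in conf/spark-env.sh on each worker (the value is an assumption sized for m4.2xlarge instances):

  SPARK_WORKER_MEMORY=24g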

Then I can start spark-sql (also need to have connector/J JAR file copied to all nodes):
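A sketch, assuming the Spark master runs on pxc1 and the JAR sits in the Spark directory (both are assumptions):

  ./bin/spark-sql --master spark://pxc1:7077 \
    --driver-class-path /usr/local/spark/mysql-connector-java-5.1.39-bin.jar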

When creating a table, I still use localhost to connect to MySQL (url "jdbc:mysql://localhost:3306/ontime?user=root&password=xxx"). As the Spark worker nodes run on the same instances as the Percona XtraDB Cluster nodes, they use the local connection. Running a Spark SQL query then evenly distributes all 26 MySQL queries among the three MySQL nodes.

Alternatively, we can run the Spark cluster on a separate host and connect it to HAProxy, which in turn will load balance selects across multiple Percona XtraDB Cluster nodes.

Query Performance Benchmark

Finally, here is the query response time test on the three AWS Percona XtraDB Cluster nodes:

Query 1: select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime_part WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT 10;

Query / Index type             | MySQL Time       | Spark Time (3 nodes) | Times Improvement
No covered index (partitioned) | 19 min 16.58 sec | 192.17 sec           | 6.02
Covered index (partitioned)    | 2 min 10.81 sec  | 48.38 sec            | 2.7

 

Query 2:  select dayofweek, count(*) from ontime_part group by dayofweek;

Query / Index type             | MySQL Time       | Spark Time (3 nodes) | Times Improvement
No covered index (partitioned) | 19 min 15.21 sec | 195.058 sec          | 5.92
Covered index (partitioned)    | 1 min 10.38 sec  | 27.323 sec           | 2.58

 

Now, this looks really good, but it can be better. With three m4.2xlarge nodes we have 8*3 = 24 cores total (although they are shared between Spark and MySQL). We could expect a ~10x improvement, especially without a covered index.

However, on m4.2xlarge the amount of RAM did not allow MySQL to serve the dataset from memory, so all reads came from EBS non-provisioned IOPS, which only gave me ~120MB/sec. I've redone the test on a set of three dedicated servers:

  • 28 cores E5-2683 v3 @ 2.00GHz
  • 240GB of RAM
  • Samsung 850 PRO

This time, the test ran entirely from RAM:

Query 1 (from the above)

Query / Index type             | MySQL Time       | Spark Time (3 nodes) | Times Improvement
No covered index (partitioned) | 3 min 13.94 sec  | 14.255 sec           | 13.61
Covered index (partitioned)    | 2 min 2.11 sec   | 9.035 sec            | 13.52

 

Query 2:  select dayofweek, count(*) from ontime_part group by dayofweek;

Query / Index type             | MySQL Time       | Spark Time (3 nodes) | Times Improvement
No covered index (partitioned) | 2 min 0.36 sec   | 7.055 sec            | 17.06
Covered index (partitioned)    | 1 min 6.85 sec   | 4.514 sec            | 14.81

 

With this number of cores, and running entirely from RAM, we actually do not have enough concurrency, as the table only has 26 partitions. So I've also tried the unpartitioned table with an ID primary key, using 128 Spark partitions.

Note about partitioning

I've used a partitioned table (partitioned by year) in my tests to help reduce MySQL-level contention. At the same time, the "partitionColumn" option in Spark does not require that the MySQL table be partitioned. For example, if a table has a primary key, we can use this CREATE VIEW in Spark:
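A sketch, assuming the unpartitioned ontime.ontime table with an auto-increment id primary key; the id bounds are assumptions (roughly the row count):

  CREATE TEMPORARY VIEW ontime
  USING org.apache.spark.sql.jdbc
  OPTIONS (
    url "jdbc:mysql://localhost:3306/ontime?user=root&password=xxx",
    dbtable "ontime.ontime",
    partitionColumn "id",
    lowerBound "1",
    upperBound "152657276",
    numPartitions "128"
  );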

Assuming we have enough MySQL servers (i.e., nodes or slaves), we can increase the number of partitions and improve the parallelism (as opposed to only 26 partitions when partitioning by year). Actually, the above test gives an even better response time: 6.44 seconds for query 1.

Where Spark doesn’t work well

For faster queries (those that use indexes or can efficiently use an index) it does not make sense to use Spark. Retrieving data from MySQL and loading it into Spark is not free, and this overhead can be significant for fast queries. For example, a query like select count(*) from ontime_part where YearD = 2013 and DayOfWeek = 7 and OriginState = 'NC' and DestState = 'NC'; will only scan 1300 rows and returns instantly (0.00 seconds reported by MySQL).

An even better example is this: select max(id) from ontime_part. In MySQL, the query will use the index and all calculations will be done inside MySQL. Spark, on the other hand, will have to retrieve all IDs (select id from ontime_part) from MySQL and calculate the maximum. That took 24.267 seconds.

Conclusion

Using Apache Spark as an additional engine layer on top of MySQL can help speed up slow reporting queries and add much-needed scalability for long-running select queries. In addition, Spark can help with query caching for frequent queries.

PS: Visual explain plan with Spark

The Spark Web GUI provides lots of ways to monitor Spark jobs. For example, it shows the job progress:

[Image: Spark Web UI showing job progress]

And SQL visual explain details:

[Image: Spark SQL visual explain details]

Comments
Justin Swanhart

Or you could use Shard-Query and just talk over the wire to your MySQL server using MySQL proxy or by using the stored procedure execution method. No need for an extra JDBC layer if, for example, you have a GO or PHP app.

Shard-Query also adds support for scale-out (sharding), window functions, and more.

utdrmac

It would be awesome, and helpful, to see this post repeated in similar detail using Shard-Query.

Muhammad Anzar

Good Article. It is helpful for MySQL db optimization.

Jonathan

I am happy about this article and using this technology, but I would like to make a comment as to how it can be used in the right context and the right use case:

1) If you had a report that only needed to be run once a year, bi-yearly, quarterly, monthly or even weekly, then waiting 19 minutes is not so bad vs. setting up a new server with a new technology and maintaining it.
2) If you need the report more frequently, then updating an intermediate or summary table once a day, a couple of times a day, or hourly would be more cost, time and resource effective.
3) If the data is too big for that AND you have sharded your data across a few servers (let's say, a server per destination/continent), then this technology is absolutely amazing and exactly what you would need.

Tim Cline

I loaded the airline data using the scripts posted on 9/14/16.

The tables have different row counts?

For example:

Your query:
mysql> select count(*) FROM ontime;
+-----------+
| count(*)  |
+-----------+
| 152657276 |
+-----------+

Mine:
SQL> select count(*) from ontime;

COUNT(*)
----------
163841725

Elapsed: 00:00:00.02

Your query:
mysql> select count(*) FROM ontime WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI');
+-----------+
| count(*)  |
+-----------+
| 108776741 |
+-----------+

Mine:
SQL> select count(*) FROM ontime WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI');

COUNT(*)
----------
116642066

For example, in Q1: just partitioning, no indexes, IMDB, DOP=4, cold cache with no warming; I left off the limit clause to see all the rows so we can compare values.

SQL> select min(yeard) min_year,
max(yeard) max_year,
Carrier,
count(*) as cnt,
sum(case
when ArrDelayMinutes > 30 THEN 1
else 0
end
)as flights_delayed,
round(sum(case
when ArrDelayMinutes > 30 THEN 1
else 0
end
)/count(*), 2) as rate
FROM ontime
WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI')
and DestState not in ('AK', 'HI', 'PR', 'VI')
GROUP by Carrier
HAVING count(*) > 1000
and max(yeard) > 1990
ORDER by rate DESC, count(*) desc
/

MIN_YEAR MAX_YEAR CA CNT FLIGHTS_DELAYED RATE
---------- ---------- -- ---------- --------------- ----------
2015 2015 NK 82151 15476 .19
2003 2015 EV 3989473 617177 .15
2003 2015 B6 1597104 239929 .15
2001 2015 MQ 5047969 689042 .14
2006 2011 XE 1615266 230977 .14
2003 2005 DH 501056 69833 .14
2004 2010 OH 1307404 175258 .13
2006 2013 YV 1136242 145135 .13
2003 2006 RU 1007248 126733 .13
1987 2015 UA 11033601 1375462 .12
2003 2014 FL 1766874 219155 .12
2003 2006 TZ 136735 16496 .12
1987 2015 AA 13019769 1476071 .11
1987 2011 CO 6441970 720484 .11
1987 2001 TW 2563283 271861 .11
2007 2013 9E 998404 105391 .11
2005 2015 F9 673387 77295 .11
2012 2015 VX 170041 19156 .11
1987 2015 DL 15247116 1460094 .1
1988 2015 US 11345314 1086779 .1
1987 2009 NW 7430117 709326 .1
2003 2015 OO 5317133 552988 .1
1987 2015 WN 17958919 1690362 .09
1987 2005 HP 2467155 219645 .09
1987 2015 AS 1839322 160592 .09
1987 1991 PA 215108 20258 .09
1991 1991 ML 51867 3608 .07
2001 2008 AQ 4408 198 .04

28 rows selected.

Elapsed: 00:00:02.76

Query 2:
SQL> select dayofweek, count(*) from ontime group by dayofweek;

DAYOFWEEK COUNT(*)
---------- ----------
2 23889775
4 24085159
7 22813815
6 20792612
5 24113594
1 24146309
3 24000461

7 rows selected.

Elapsed: 00:00:00.72

Hardware: 2011 Macbook Pro laptop running Oracle 12c on VirtualBox VM – 4 core, 10GB RAM, single SSD. I plan on loading this dataset up on S7 Server to see what kind of DAX offloading/benefits you will receive with SparkSQL – advertised is 9X gains.

Tim cline

As you can see Oracle 12.2 in-memory parallel query is much faster! Small laptop destroying many larger server configurations with sparksql.

On sparc, 12.2, I’m getting 180-220B rows second per core by leveraging the DAX database accelerators. The good news is the DAX API natively supports Spark so it fully offloads to the offloading gpu’s. So you could use the sparc chips to run faster mysql/ sparksql by 10x over Intel.

Otherwise, just leverage big data sql and create in memory external table of the files on hdfs or perform smartscan on hdfs or nosql tables. Use regular sql/jdbc and a single security model for all your data in different polyglot systems.

Paul

When Spark loads data from a table, does the load happen on a single node in the cluster, or should the query work be spread across several nodes depending on the size of the results? I have read references regarding MySQL saying that data is loaded on a single node and then parallelized. I am interested from an Oracle side, but I think this is a general question. I get executor failures when trying to load a full large table which is approximately the size of the RAM on a single node in the cluster. Any help appreciated!

Leon

Unfortunately, Spark and MySQL do not love each other. Spark 1.6 cannot, by default, process large MySQL tables that do not fit in memory. You have to set fetchsize to Integer.MIN_VALUE. It will force the fetch size to 1, and slowly but steadily it can deal with large MySQL tables.
Now, Spark 2 decided to check for negative parameters, so it does not allow Integer.MIN_VALUE, and it just does not work.

Jay Prajapati

Hello Alexander. Can you tell me roughly how much RAM is needed for a given amount of data? For example, I have an 18 GB RAM machine and need to work with 5 GB of data (3 crore 50 lakh, i.e., 35 million, rows). Will that be possible?

Can you tell me how much RAM is sufficient for how much data?

Thanks

Tim Cline

I updated the results to reflect Oracle 12.2 with the latest airline data pulled (1987-2017) on a new Macbook Pro with 9 other Pluggable Databases running. I created a Pluggable database called Kraken.

-- Used External Table Preprocessor so I can parallel unzip, load into in-memory, query.
SQL> conn airline/airline@kraken
Connected.
SQL> set timing on
SQL> select count(*) from ontime;

COUNT(*)
----------
173277113

Elapsed: 00:00:00.15

Query1

SQL> select min(yeard) min_year,
max(yeard) max_year,
Carrier,
count(*) as cnt,
sum(case
when ArrDelayMinutes > 30 THEN 1
else 0
end
)as flights_delayed,
round(sum(case
when ArrDelayMinutes > 30 THEN 1
else 0
end
)/count(*), 2) as rate
FROM ontime
WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI')
and DestState not in ('AK', 'HI', 'PR', 'VI')
GROUP by Carrier
HAVING count(*) > 1000
and max(yeard) > 1990
ORDER by rate DESC, count(*) desc
/

MIN_YEAR MAX_YEAR CA CNT FLIGHTS_DELAYED RATE
---------- ---------- -- ---------- --------------- ----------
2015 2017 NK 252352 42433 .17
2003 2017 B6 1916168 297666 .16
2003 2017 EV 4548031 689699 .15
2001 2015 MQ 5047969 689042 .14
2006 2011 XE 1615266 230977 .14
2003 2005 DH 501056 69833 .14
2004 2010 OH 1307404 175258 .13
2006 2013 YV 1136242 145135 .13
2003 2006 RU 1007248 126733 .13
2012 2017 VX 253244 33336 .13
1987 2017 UA 11695349 1457354 .12
2003 2014 FL 1766874 219155 .12
2005 2017 F9 788494 94298 .12
2003 2006 TZ 136735 16496 .12
1987 2017 AA 14095227 1604675 .11
1987 2011 CO 6441970 720484 .11
2003 2017 OO 6107403 645357 .11
1987 2001 TW 2563283 271861 .11
2007 2013 9E 998404 105391 .11
1987 2017 WN 19575287 1868474 .1
1987 2017 DL 16372845 1558507 .1
1988 2015 US 11345314 1086779 .1
1987 2009 NW 7430117 709326 .1
1987 2005 HP 2467155 219645 .09
1987 2017 AS 1986491 172876 .09
1987 1991 PA 215108 20258 .09
1991 1991 ML 51867 3608 .07
2001 2008 AQ 4408 198 .04

28 rows selected.

Elapsed: 00:00:00.84

Query2
SQL> select DayOfWeek, count(*) from ontime group by DayOfWeek;

DAYOFWEEK COUNT(*)
---------- ----------
6 21935781
1 25545423
4 25491699
5 25522447
7 24132802
3 25388310
2 25260651

7 rows selected.

Elapsed: 00:00:00.23

I’m working on a sharded 12.2 (3 node) DB to see next.

Anand

Very nice article, sir. Could you please give a similar tutorial for Spark where real-time data has to be calculated, sorted and made ready for the user to view? For example, in a fantasy sports application there are many teams being created, and the scores for those teams need to be updated based on real-time match information. I am assuming that before the calculation all the data can be loaded into Spark memory and then the update operation can be performed in real time, but I cannot find an update for spark-sql.

Anand Gupta

Sir, I think that can be achieved via transformation, meaning transforming one dataframe into another and deleting the old one. Please see https://gamechanger.dream11.in/blog/leaderboard-dream11/; they have used Spark for point calculation on a real-time basis.