MySQL Optimizer: Naughty Aberrations on Queries Combining WHERE, ORDER BY and LIMIT

by Jean-François Gagné

Sometimes, the MySQL Optimizer chooses a wrong plan, and a query that should execute in less than 0.1 second ends up running for 12 minutes! This is not a new problem: bugs about this can be traced back to 2014, and a blog post on the subject was published in 2015. But even though this is old news, because this problem recently came to my attention yet again, and because it is still not fixed in MySQL 5.7 and 8.0, it is a subject worth writing about.


The MySQL Optimizer

Before looking at the problematic query, we have to say a few words about the optimizer. The Query Optimizer is the part of query execution that chooses the query plan. A Query Execution Plan is the way MySQL chooses to execute a specific query. It includes index choices, join types, table join order, temporary table usage, sorting type… You can get the execution plan for a specific query using the EXPLAIN command.
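As an aside, the concept is easy to experiment with, since most SQL engines expose their chosen plan. Below is a small illustration using SQLite (bundled with Python) rather than MySQL; the table and index names mirror the ones used later in this post, and MySQL's EXPLAIN plays the same role with different output columns.

```python
import sqlite3

# Illustration with SQLite (bundled with Python), not MySQL: every SQL engine
# exposes its chosen execution plan. Table/index names mirror the blog post.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, id1 INT, id2 INT)")
con.execute("CREATE INDEX key1 ON t (id1, id2)")

# Ask the engine how it would execute the query, without running it.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM t WHERE id1 = 42 AND id2 IS NOT NULL"
).fetchall()
for row in plan:
    print(row[-1])  # e.g. "SEARCH t USING INDEX key1 (...)"
```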

A Case in Question

Now that we know what the Query Optimizer and a Query Execution Plan are, I can introduce you to the table we are querying. The SHOW CREATE TABLE for our table is below.

mysql> SHOW CREATE TABLE _test_jfg_201907\G
*************************** 1. row ***************************
Table: _test_jfg_201907
Create Table: CREATE TABLE `_test_jfg_201907` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`str1` varchar(150) DEFAULT NULL,
`id1` int(10) unsigned NOT NULL,
`id2` bigint(20) unsigned DEFAULT NULL,
`str2` varchar(255) DEFAULT NULL,

[...many more id and str fields...]

`create_datetime` datetime NOT NULL,
`update_datetime` datetime DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `key1` (`id1`,`id2`)
) ENGINE=InnoDB

1 row in set (0.00 sec)

And this is not a small table (it is not very big either though…):

# ls -lh _test_jfg_201907.ibd
-rw-r----- 1 mysql mysql 11G Jul 23 13:21 _test_jfg_201907.ibd

Now we are ready for the problematic query (I ran PAGER cat > /dev/null beforehand to skip printing the result):

mysql> SELECT * FROM _test_jfg_201907
WHERE id1 = @v AND id2 IS NOT NULL
ORDER BY id DESC LIMIT 20;
20 rows in set (27.22 sec)

Hum, this query takes a long time (27.22 sec) considering that the table has an index on id1 and id2. Let’s check the query execution plan:

mysql> EXPLAIN SELECT * FROM _test_jfg_201907
WHERE id1 = @v AND id2 IS NOT NULL
ORDER BY id DESC LIMIT 20\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: _test_jfg_201907
partitions: NULL
type: index
possible_keys: key1
key: PRIMARY
key_len: 4
ref: NULL
rows: 13000
filtered: 0.15
Extra: Using where
1 row in set, 1 warning (0.00 sec)

What? The query is not using the index key1, but is scanning the whole table (key: PRIMARY in the EXPLAIN above)! How can this be? The short explanation is that the optimizer thinks (or should I say hopes) that scanning the whole table, which is already sorted by the id field, will find the 20 rows of the LIMIT quickly enough, and that this will avoid a sort operation. So by trying to avoid a sort, the optimizer ends up losing time scanning the table.
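To make the cost asymmetry concrete, here is a back-of-the-envelope simulation (Python, with made-up numbers; not MySQL internals). It assumes the matching rows all sit at the old end of the id range, which is exactly the kind of correlation that makes the descending PRIMARY KEY scan pathological:

```python
# Back-of-the-envelope simulation (made-up numbers, not MySQL internals) of
# rows examined by each plan for:
#   WHERE id1 = @v AND id2 IS NOT NULL ORDER BY id DESC LIMIT 20
N = 100_000          # rows in the table
LIMIT = 20
# Assume the 200 matching rows all have low (old) ids: a correlation the
# optimizer knows nothing about.
matching_ids = set(range(1, 201))

def rows_read_pk_scan():
    """Optimizer's plan: walk the PRIMARY KEY in descending id order and
    stop once LIMIT matching rows are found (no sort needed)."""
    read = found = 0
    for id_ in range(N, 0, -1):
        read += 1
        if id_ in matching_ids:
            found += 1
            if found == LIMIT:
                break
    return read

def rows_read_key1():
    """Wanted plan: fetch all matching rows via key1, then sort those 200."""
    return len(matching_ids)

print(rows_read_pk_scan())  # 99820: almost the whole table
print(rows_read_key1())     # 200
```

With these numbers, the "sort-avoiding" plan examines almost 500 times more rows than the index plan it rejected.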

Some Solutions

How can we solve this? The first solution is to hint MySQL to use key1, as shown below. Now the query is almost instant, but this is not my favourite solution because if we drop the index, or if we change its name, the query will fail.

mysql> SELECT * FROM _test_jfg_201907 USE INDEX (key1)
WHERE id1 = @v AND id2 IS NOT NULL
ORDER BY id DESC LIMIT 20;
20 rows in set (0.00 sec)

A more elegant, but still very hack-ish, solution is to prevent the optimizer from using an index for the ORDER BY. This can be achieved with the modified ORDER BY clause below (thanks to Shlomi Noach for suggesting this solution on a MySQL Community Chat). This is the solution I prefer so far, even if it is still somewhat of a hack.

mysql> SELECT * FROM _test_jfg_201907
WHERE id1 = @v AND id2 IS NOT NULL
ORDER BY id + 0 DESC LIMIT 20;
20 rows in set (0.00 sec)

A third solution is to use the Late Row Lookups trick. Even if the post about this trick is 10 years old, it is still useful — thanks to my colleague Michal Skrzypecki for bringing it to my attention. This trick basically forces the optimizer to choose the good plan, because the query is rewritten to make that plan explicit. It is an elegant hack, but as it makes the query more complicated to understand, I prefer not to use it.

mysql> SELECT y.* FROM (
SELECT id FROM _test_jfg_201907
WHERE id1 = @v AND id2 IS NOT NULL
ORDER BY id DESC LIMIT 20) x
JOIN _test_jfg_201907 y ON x.id = y.id
ORDER BY y.id DESC;
20 rows in set (0.00 sec)

The ideal solution…

Well, the best solution would be to fix the bugs below. I claim Bug#74602 is not fixed even if it is marked as such in the bug system, but I will not make too much noise about this, as Bug#78612 also draws attention to this problem:

PS-4935 is a duplicate of PS-1653, which I opened a few months ago. In that report, I mention a query that takes 12 minutes because of a bad choice by the optimizer (with the good plan, the query takes less than 0.1 second).

One last thing before ending this post: I only gave a short explanation above of the reason for this bad choice by the optimizer. A longer explanation has already been written by Domas Mituzas in 2015, so I refer you to his post on ORDER BY optimization for more details.

Photo by Jamie Street on Unsplash

The content in this blog is provided in good faith by members of the open source community. The content is not edited or tested by Percona, and views expressed are the authors’ own. When using the advice from this or any other online resource, test ideas before applying them to your production systems, and always secure a working backup.


Øystein Grøvlen July 29, 2019 at 9:43 am
Hi JF,

I think this behavior may be expected if there is a correlation between the columns. For example, that id2 is more likely to be NULL for high (recent?) values of id. The MySQL optimizer does not have any statistics on how columns are correlated. Hence, it is not able to effectively determine how many rows it needs to read to find the first 20 rows that satisfy the WHERE clause.

Bug#74602 describes a scenario where column values were not correlated. This particular problem was fixed in 5.7. Bug#78612 seems to be caused by the use of a prefix index, which does not seem to be relevant here.

However, there are probably other bug reports that describe the problem you are facing. In order to address this problem, I think MySQL needs to add statistics on correlation between columns.

Unfortunately, as Domas describes, the optimizer trace does not contain any information on the cost calculations made when it decides to switch to an index that provides sorting. Hence, it is not straightforward to verify why this choice was made.

Jean-François Gagné July 29, 2019 at 9:59 am
Hi Øystein, thanks for the details about Bug#74602 and Bug#78612.

In order to address this problem, I think MySQL needs to add statistics on correlation between columns.

This might be a solution, but I am sure there are others. Tracking correlations might be very complicated. A simpler solution might be to identify plans that are “probabilistic” (like the worst case I show in this post) and to not let queries using those plans run for too long before trying an alternative plan. Also, in the case of plans that might have a very bad worst case (like the one in this post), running both queries in parallel and killing one when the other completes might be another way to avoid this problem.
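For what it is worth, the race-the-plans idea in the comment above can be sketched in a few lines. This is a hypothetical illustration in Python, not a MySQL feature; in a real server the losing plan would be killed, while this sketch merely discards its result:

```python
import concurrent.futures
import time

# Hypothetical sketch (not a MySQL feature): run two candidate plans
# concurrently and keep whichever finishes first. In a real server the
# loser would be killed; Python threads cannot be, so its result is
# simply discarded here.
def race(plans):
    """plans: dict mapping plan name -> zero-argument callable."""
    with concurrent.futures.ThreadPoolExecutor(len(plans)) as pool:
        futures = {pool.submit(fn): name for name, fn in plans.items()}
        done, _ = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        winner = next(iter(done))
        return futures[winner], winner.result()

def risky_pk_scan():   # fast on average, pathological in the worst case
    time.sleep(2.0)
    return ["row"] * 20

def safe_key1_plan():  # predictable cost: key1 lookup plus a small sort
    time.sleep(0.1)
    return ["row"] * 20

name, rows = race({"pk_scan": risky_pk_scan, "key1": safe_key1_plan})
print(name)  # key1
```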

Øystein Grøvlen July 30, 2019 at 5:13 am

I think it is an interesting idea to let the optimizer have a fallback plan, in case its original estimates are off. The challenge is how to detect in time that the estimates are off. Maybe it would be easier to just switch to the safer plan if the execution takes longer than the estimate for the safe plan. (Unfortunately, it is not straightforward to translate query cost to execution time in MySQL.) Another aspect is diagnostics. There must be a way for the user to determine which plan was actually used.

Maybe the optimizer could be a bit more cautious, and choose a safe plan over a riskier but potentially quicker plan. In your case, there will be a pretty accurate estimate for the number of rows that need to be read when using the secondary index, while the number of rows that need to be read using the primary index depends on how the interesting rows are distributed.

Jeremy July 31, 2019 at 4:08 pm
While in an ideal world I would like to see this fixed, my solution is to turn toward the application. It is easier to grow application servers than database servers. After all, with solutions like Nginx, one can easily have a farm of whatever application servers (PHP, Python, Java) and just keep adding more.

I generally keep queries super simple and let the application server(s) do the heavy lifting. Sorting, for example: I almost never ask the database server(s) to sort in my own applications. That is wasting DB cycles on something the application layer can do quite easily and faster. So I just dump the raw data from the DB to the app.
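As a trivial Python sketch of the app-side sorting described above (with made-up rows; the application, not the database, does the equivalent of ORDER BY id DESC LIMIT 2):

```python
# Made-up result set, as if fetched unsorted from the database.
rows = [
    {"id": 3, "id1": 42, "id2": 7},
    {"id": 1, "id1": 42, "id2": 9},
    {"id": 2, "id1": 42, "id2": None},
]
# Equivalent of ORDER BY id DESC LIMIT 2, done in the application layer.
top = sorted(rows, key=lambda r: r["id"], reverse=True)[:2]
print([r["id"] for r in top])  # [3, 2]
```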

Thus keeping in mind: “The fastest query is the query you do NOT run”. I prefer to dump as much heavy lifting onto the application and let the database layer handle as little as possible. As stated I would rather spin up another app server than a DB server.

Of course I understand in some limited cases this isn’t always possible. Still in the vast majority of deployments there is an application layer. Also one could turn toward solutions like ProxySQL to cache bad queries although that doesn’t address the bug.

Finally, as stated, I would like to see this bug fixed. However I still wouldn’t ask the DB to sort in most cases.

s August 3, 2019 at 4:27 am
also see: optimizer prefers index for ORDER BY rather than filtering (70x slower)
Jean-François Gagné July 28, 2021 at 5:48 pm
Another blog post on the same subject (with a patch that was merged in 5.7 and 8.0):

Jean-François Gagné

Jean-François (J-F, JF or Jeff for short, not just Jean please) is a System/Infrastructure Engineer and a MySQL Expert. In July 2018, he joined MessageBird, an IT telco startup in Amsterdam, with the mission of scaling the MySQL infrastructure. Before that, J-F worked for five years on growing the MySQL and MariaDB installations, including dealing with replication bottlenecks (he also works on many other non MySQL related projects that are less relevant here). Some of his latest projects are cracking Master Automatic Failover, making Parallel Replication run faster and promoting Binlog Servers. He also has a good understanding of replication in general and a respectable understanding of InnoDB, MySQL, Linux and TCP/IP. Before, he worked as a System/Network/Storage Administrator in a Linux/VMWare environment, as an Architect for a Mobile Services Provider, and as a C and Java Programmer in an IT Service Company. Even before that, when he was learning computer science, Jeff studied cache consistency in distributed systems and network group communication protocols.
Jean-François’ MySQL Blog | J-F’s LinkedIn Profile | Jeff’s Twitter Account

See all posts by Jean-François Gagné »

