I believe the InnoDB storage engine architecture is great for many online workloads; however, there are no silver bullets in technology, and every design choice has its trade-offs. In this blog post I’m going to talk about one important InnoDB limitation that you should consider.

InnoDB is a multiversion concurrency control (MVCC) storage engine, which means many versions of a single row can exist at the same time. In fact, there can be a huge number of such row versions. Depending on the isolation level you have chosen, InnoDB might have to keep all row versions going back to the earliest active read view, but at the very least it will have to keep all versions going back to the start of the SELECT query which is currently running.
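
As a rough illustration (these are standard MySQL statements; the choice of levels below is just an example), the isolation level controls how long a read view, and therefore the old row versions it depends on, has to be kept around:

    -- With REPEATABLE READ (the default), one read view is created at the first read
    -- and kept for the whole transaction, so undo must cover the entire transaction.
    SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;

    -- With READ COMMITTED, each statement gets a fresh read view, so InnoDB only needs
    -- row versions reaching back to the start of the currently running statement.
    SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;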

In most cases this is not a big deal – if you have many short transactions happening you will have only a few row versions to deal with. If you just use the system for reporting queries but do not modify data aggressively at the same time, you also will not have many row versions. However, if you mix heavy updates with slow reporting queries running at the same time, you can get into a lot of trouble.

Consider for example an application with a hot row (something like an actively updated counter) which gets 1000 updates per second, together with some heavy batch job that takes 1000 seconds to run. In such a case we will have about 1M versions of that row to deal with.
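
A minimal sketch of that scenario (the table and column names here are hypothetical):

    -- Foreground: a hot counter updated ~1000 times per second.
    UPDATE counters SET hits = hits + 1 WHERE counter_id = 1;

    -- Background: a reporting query that runs for ~1000 seconds and must keep seeing
    -- the counter row as it was when the query started.
    SELECT SUM(hits) FROM counters;

    -- 1000 updates/sec * 1000 sec of query runtime = ~1,000,000 undo versions of that
    -- single row which cannot be purged until the query finishes.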

Let’s now talk about how those old row versions are stored in InnoDB: they live in the undo space as essentially a linked list, where each row version points to the previous row version, together with transaction visibility information that helps to decide which version will be visible to a given query. Such a design favors short new queries, which will typically need to see one of the newer versions and so do not have to go far down this linked list. This might not be the case for reporting queries that need to read a rather old row version corresponding to the time when the query was started, or for logical backups that use consistent reads (think mysqldump or mydumper), which often need to access such very old row versions.
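
You can get a rough feel for how much unpurged history has piled up and which open transactions are holding it back; a sketch using standard MySQL facilities:

    -- "History list length" in the TRANSACTIONS section is the number of undo log units
    -- not yet purged; a large and growing value means long version chains.
    SHOW ENGINE INNODB STATUS\G

    -- Long-running transactions whose read views force InnoDB to keep old versions:
    SELECT trx_id, trx_started,
           TIMESTAMPDIFF(SECOND, trx_started, NOW()) AS age_seconds
    FROM information_schema.innodb_trx
    ORDER BY trx_started;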

So going through the linked list of versions is expensive, but how expensive can it get? A lot depends on whether the undo space fits in memory, in which case the list can be traversed efficiently, or whether it does not, in which case you might be looking at massive disk IO. Keep in mind that the undo space is not clustered by PRIMARY KEY the way normal data in InnoDB tables is, so if you’re updating multiple rows at the same time (the typical case) the row-version chains end up spread across many pages, often with as little as one row version per page, requiring either massive IO or a large number of undo space pages to be present in the InnoDB buffer pool.

Where it can get even worse is an index scan. This is because indexes in InnoDB are structured to include entries for all row versions corresponding to a key value, current and past. This means, for example, that the index entries for KEY=5 will point to all rows that either have the value 5 now or had the value 5 at some point in the past and have not been purged yet. Where it can really bite is the following: InnoDB needs to know which of the entries stored for the key are visible to the current transaction, and that might mean going through long version chains for each of the keys.
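
A hypothetical illustration of why this hurts: when an indexed value changes, the old secondary index entry is only delete-marked and stays in the index until purge, so reads on the old value still have to examine it and work out its visibility (this sketch assumes the standard sysbench schema, where sbtest1 has a secondary index on column k):

    UPDATE sbtest1 SET k = 5 WHERE id = 42;  -- adds entry (k=5, id=42) to the index on k
    UPDATE sbtest1 SET k = 7 WHERE id = 42;  -- adds (k=7, id=42); (k=5, id=42) is only delete-marked

    -- Until purge removes the delete-marked entry, this lookup still has to examine it
    -- and determine (via the clustered index and undo) whether it is visible.
    SELECT COUNT(*) FROM sbtest1 WHERE k = 5;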

This is all theory, so let’s see how we can simulate such a workload and find out how bad things can really get in practice.

I have created a 1 billion row “sysbench” table, which takes some 270GB of space, and I will use a small buffer pool – 6GB. I will run sysbench with 64 threads and a pareto distribution (hot rows) while concurrently running a full table scan query: select avg(k) from sbtest1. Here is the exact sysbench run, done after the prepare step.

Here is the EXPLAIN for the “reporting” query, which you would expect to be a rather efficient index scan. With just 4 bytes per value, 1 billion values would be just 4GB (really more because of InnoDB overhead), which is not a big deal for modern systems.
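
As a sketch (assuming the standard sysbench schema, where sbtest1 carries a secondary index on k, usually named k_1), the plan can be checked like this:

    EXPLAIN SELECT AVG(k) FROM sbtest1;
    -- With the k_1 index present this should come back as type=index using key k_1,
    -- i.e. a full scan of the small secondary index rather than of the 270GB table itself.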

Two days have passed and the “reporting” query is still running… furthermore, the foreground workload has started to look absolutely bizarre:

As you can see, we have long stretches of time when no queries complete at all, followed by spikes of higher performance. This is how it looks on a graph:

Corresponding CPU usage:

This shows that we are not only observing something we would expect given the InnoDB design, but that there also seems to be some severe starvation happening in this case, which we can confirm:

Waiting for a given buffer pool block to become available for more than 3 minutes is a big issue – this lock should never be held for more than a few microseconds.

SHOW PROCESSLIST confirms that even the most basic selects by primary key can get stalled for a long time:

How do I know it is an undo-space-related issue in this case? Because the undo log ends up taking the majority of the buffer pool:

And it does so in a rather crazy way: when there is almost no work being done, the UNDO_LOG portion of the buffer pool grows very rapidly, while when we are getting some work done, INDEX pages take up a lot more space. To me this suggests that as the index scan proceeds it touches some hot rows and some not-so-hot ones, which contain fewer row versions and so do not put much pressure on the undo space.
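
If you want to check this breakdown on your own system, the page types in the buffer pool can be pulled from INFORMATION_SCHEMA (note that this query walks the whole buffer pool, so it is not free on a busy server; the size estimate assumes the default 16KB page size):

    SELECT page_type,
           COUNT(*)                        AS pages,
           ROUND(COUNT(*) * 16 / 1024, 1)  AS approx_mb   -- assumes 16KB pages
    FROM information_schema.innodb_buffer_page
    GROUP BY page_type
    ORDER BY pages DESC;
    -- The page types of interest here are UNDO_LOG (old row versions) and INDEX (table and index data).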

Takeaway: Now you might argue that this particular workload and situation are rather artificial and rather narrow. That may well be. My main point is this: if you’re looking at just part of your workload, such as your main short application queries, and not taking reporting or backups into account “because their performance is not important,” you might be in for a big surprise. Those background activities might take much longer than you would expect, and in addition they might have a much more severe impact on your main application workload, as in the case above.

P.S. I’ve done more experiments to validate how bad the problem really is, and I can reproduce it rather easily even without pushing the system into overdrive. Even if I run sysbench injecting just 25% of the transactions the system is capable of handling at peak, the “select avg(k) from sbtest1” query on the 1 billion row table never completes, as it looks like new entries are injected into the index faster than InnoDB can examine which of them are visible.

Comments

Henrik Ingo

It is not an artificial workload at all. I was seeing exactly this back in my days at Nokia for one app: a large database, heavy on updates and of course reads, and a few background processes that wanted to do count(*) on some large tables. These count(*) queries took 7-10 hours to run.

Changing the isolation level helped us get over most of the pain, and I think ultimately we also started avoiding the count(*) queries altogether. (The app has since been completely rewritten to a more denormalized schema…)

I hadn’t considered the effect of undo records on the situation. We were at a similar level though, with hundreds of updates per second.

Thanks for a thorough explanation once again!

Garret Niel

We were experiencing this problem at a smaller scale on a very active forum, while trying to calculate stats in the background.

Lakshma

Peter, similar experience. We ran into that issue, but the internal reason for it was not clear until now. Thank you, I appreciate your taking the time to explain. Frankly speaking, familiarity with this kind of situation is very rare.

Vladimir Fedorkov

That’s in particular why XtraBackup is better than mysqldump --single-transaction.

sheyda

We ran into a similar issue with heavy updates on a given table, and since the table was a session table we decided to convert it into a ‘memory’ table.
I understand each case would be different, however, what would be the best approach? Avoiding mixing heavy updates with slow reporting queries if possible, or changing the isolation level?

ives

Great post, Peter. Very interesting. Yet another reason to leverage Percona’s ability to show the buffer pool breakdown.

Ben Krug

Have you been talking to Domas? http://bugs.mysql.com/bug.php?id=74919
This idea seems to be in the air this holiday season. 🙂

Daniël van Eeden

What about changing the transaction isolation from REPEATABLE READ to READ COMMITTED?

Henrik Ingo

If I remember correctly, in my case we changed everything to READ UNCOMMITTED and that helped a lot.

If I understand the problem correctly – especially now with Peter’s excellent explanation – then setting only the long running query to READ UNCOMMITTED would suffice.

Henrik Ingo

Peter: In the project I was working with, switching everything to READ UNCOMMITTED did have a significant positive impact. But I will have to be the first one to admit I didn’t fully understand why; I was just happy that the problem was solved. At the time I also suspected InnoDB gap locking, another area I don’t feel confident I fully understand.

Henrik Ingo

I think we tried both REPEATABLE READ and READ COMMITTED, but only READ UNCOMMITTED solved the problem. (Then the workload was of course heavily disk bound anyway.)

Simon J Mudd

Another related issue that comes from a lot of undo information is the amount of undo space that gets used on disk on the server. This information is stored in the ibdata file, which makes it grow. Like many others I use innodb_file_per_table, so the ibdata1 file should in theory be very small, but I have a few servers where this file has grown to 300GB because the undo information grew so much. Later, even if the undo information is no longer used, the ibdata1 file cannot be shrunk or resized. When I clone one server to make a new one, this requires me to copy this additional useless space onto the new server. mysqldump + load is just too slow for this type of server, so I’m stuck with the wasted disk space.
MySQL 5.7 promises a separate undo tablespace, which allows these files to be cleared, but there is no migration path from the current setup to the new one. It would be nice if work were done on that, as while the database is down (during a clone or regular maintenance) there is an opportunity, if this undo space is now mainly “empty,” to rewrite the ibdata1 file or rearrange the tablespaces to take advantage of this new feature. Such a migration facility would save me a few hundred GB here and there on different servers, and that would be really useful.

Domas Mituzas

Well, as mentioned in https://www.facebook.com/MySQLatFacebook/posts/10152610642241696, this is a problem that should be looked at. I filed a bug at http://bugs.mysql.com/bug.php?id=74919 – let us all put pressure on it.

Mark Callaghan

Excellent post. Without more details, this statement from your text above can be scary:
“This is because Indexes are structured in InnoDB to include all row versions corresponding to the key value, current and past.”

Fortunately an old post from InnoDB has more details – https://blogs.oracle.com/mysqlinnodb/entry/mysql_5_5_innodb_change

Henrik Ingo

This is an old post but for some reason I was thinking about this last week.

I now remember what we did in the project I mentioned in this thread. My answer above is wrong.

Changing from REPEATABLE READ to READ COMMITTED did help with the performance drag created by the long running queries.

However, when running in READ COMMITTED mode we then did a pt-query-digest and there was one frequent query that was using up more than 50% of the execution time. (I don’t remember if this was a SELECT or UPDATE.) This seemed abnormal so we changed to READ UNCOMMITTED and got a more balanced situation in pt-query-digest and overall some additional boost in performance. (…which is of course not surprising.)

Still, I think for the problem discussed in this blog post, it actually did help to move from REPEATABLE READ to READ UNCOMMITTED.