Comments on: Innodb Double Write https://www.percona.com/blog/innodb-double-write/

By: Ian https://www.percona.com/blog/innodb-double-write/#comment-1251926 Fri, 08 Feb 2013 19:01:15 +0000 https://www.percona.com/blog/2006/08/04/innodb-double-write/#comment-1251926 If this buffer has a capacity of ~2MB, how is a TEXT/BLOB that is larger than that size handled?

By: Baron Schwartz https://www.percona.com/blog/innodb-double-write/#comment-804976 Mon, 18 Apr 2011 16:46:34 +0000 https://www.percona.com/blog/2006/08/04/innodb-double-write/#comment-804976 Zonker, the doublewrite buffer works as a kind of two-phase commit. It allows a failed write to be recovered. The write either succeeded or it didn’t, and if it didn’t, on recovery the page will be restored from either the doublewrite buffer or the redo logs. It doesn’t work the way you think, and the scenario you listed can’t happen.

By: Nick Peirson https://www.percona.com/blog/innodb-double-write/#comment-804959 Mon, 18 Apr 2011 12:27:16 +0000 https://www.percona.com/blog/2006/08/04/innodb-double-write/#comment-804959 Zonker,

As I understand it there are two failure modes:

1. A write to the doublewrite buffer fails. In this case the data in the table hasn’t changed, so the redo log can be applied to update the data.

2. If the write to the doublewrite buffer succeeds and the write to the table fails, the data can be read from the doublewrite buffer to update the table.

If we were writing directly to the table, a failed write would leave the table in an inconsistent state where the redo log couldn’t be applied, and there would be no successfully written buffer to read the data back from. The doublewrite buffer means that we always have a way of bringing the data up to date after a failed write, regardless of the failure mode.

I haven’t looked at the internals, so this is an educated guess based on the post and comments. I’d be grateful if someone more knowledgeable could confirm.
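To make the two failure modes above concrete, the recovery decision can be sketched roughly as follows. This is only an illustration based on the post and the comments, not InnoDB source code; the page layout, the toy checksum, and the helper names are all made up for the example.

/* Illustrative sketch only -- not InnoDB source code. Models a 16 KiB page
 * whose last 4 bytes store a (toy) checksum of the rest, and shows how a
 * torn tablespace page is repaired from an intact doublewrite copy. */
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 16384

static uint32_t page_checksum(const uint8_t *page)
{
    uint32_t sum = 0;                      /* toy checksum, not InnoDB's */
    for (size_t i = 0; i < PAGE_SIZE - 4; i++)
        sum = sum * 31u + page[i];
    return sum;
}

static int page_is_intact(const uint8_t *page)
{
    uint32_t stored;
    memcpy(&stored, page + PAGE_SIZE - 4, sizeof stored);
    return stored == page_checksum(page);
}

/* One page's recovery decision, following the two failure modes above. */
static void recover_page(uint8_t *table_page, const uint8_t *dblwr_page)
{
    if (!page_is_intact(table_page) && page_is_intact(dblwr_page)) {
        /* Failure mode 2: the tablespace write was torn, but the
         * doublewrite copy is good -- restore the page from it. */
        memcpy(table_page, dblwr_page, PAGE_SIZE);
    }
    /* Failure mode 1: the doublewrite copy is the torn one; the tablespace
     * page is then still the old, consistent version and is left alone.
     * Either way, redo log records can be applied to a consistent page. */
}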

By: Zonker Harris https://www.percona.com/blog/innodb-double-write/#comment-804863 Sat, 16 Apr 2011 22:19:20 +0000 https://www.percona.com/blog/2006/08/04/innodb-double-write/#comment-804863 I guess I’m not buying it.

I write to the doublewrite buffer: AAAAAAAAAA

then flush to the table: AAAAAAAAAA

fine, all’s fine. But now I write again to the doublewrite buffer and the write fails halfway:

BBBBB…..

We “recover” and find that the doublewrite buffer is BBBBB….., with a bad checksum, or however we determine that it isn’t complete, so we “just discard it” per the mysql docs.

Really? This is good?

Now I’ve got AAAAAAAAAA in the table where I expect to have BBBBBBBBBB. AAAAAAAAAA may be “consistent”, but it isn’t “correct”, and I still have a problem. If that data is correlated with other data, I still have to recover from a backup to get back to true consistency, i.e. all my data being consistent with each other.

So I don’t really see how this benefits me.

To take the example further, if doublewrite is good, why wouldn’t triplewrite or fourplewrite be even better? Answer: they aren’t, for exactly the reasons I’ve described above. You still have bad data. “Consistent”, perhaps, but not correct.

And now the obligatory “Or am I simply not understanding this?”

By: Vadim https://www.percona.com/blog/innodb-double-write/#comment-803343 Fri, 01 Apr 2011 15:00:42 +0000 https://www.percona.com/blog/2006/08/04/innodb-double-write/#comment-803343 markgoat,

Let me try to answer your question.

The problem comes from the fact that when we issue pwrite() and there is a crash during this operation, there is no way to check the state of the operation. It may happen that for a 16K operation we wrote only 4K or 8K.

So we may end up in a situation where half of the page contains new information and the other half old information. InnoDB will of course detect the corruption using the checksum, but that won’t help much, as the page is broken.

So the solution is to keep 2 copies of the page: if we crash while writing one of the copies, we always have the other, consistent copy to work with.
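Vadim’s point can also be sketched in code. The outline below only illustrates the ordering he describes (write the 16K page to the doublewrite area first, make it durable, then write it to its final position); the function name, file descriptor, and offsets are hypothetical, and real InnoDB batches many pages per doublewrite flush.

/* Illustrative sketch of the doublewrite ordering -- not InnoDB source code.
 * A crash can interrupt either pwrite(), but not both, so at least one
 * intact 16K copy of the page survives for recovery. */
#include <unistd.h>

#define PAGE_SIZE 16384

int doublewrite_page(int fd, const char *page,
                     off_t dblwr_offset, off_t final_offset)
{
    /* 1. Write the page into the doublewrite area and make it durable. */
    if (pwrite(fd, page, PAGE_SIZE, dblwr_offset) != PAGE_SIZE)
        return -1;
    if (fsync(fd) != 0)
        return -1;

    /* 2. Only now write the page to its real location. If we crash here
     *    and only 4K or 8K of the 16K reaches disk, the copy written in
     *    step 1 is still intact and can be used to repair this page. */
    if (pwrite(fd, page, PAGE_SIZE, final_offset) != PAGE_SIZE)
        return -1;
    return fsync(fd);
}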

By: Baron Schwartz https://www.percona.com/blog/innodb-double-write/#comment-803315 Fri, 01 Apr 2011 10:02:55 +0000 https://www.percona.com/blog/2006/08/04/innodb-double-write/#comment-803315 The redo logs in InnoDB don’t have complete page images. This is different from some other database servers.

By: markgoat https://www.percona.com/blog/innodb-double-write/#comment-803284 Fri, 01 Apr 2011 05:38:15 +0000 https://www.percona.com/blog/2006/08/04/innodb-double-write/#comment-803284 Sorry, I still don’t understand why we need this.
Each entry in the log should be redone during recovery, right? I think even if the page is “partially written” it doesn’t matter, because the redo log knows the after image. So regardless of the state of the page itself, redo will apply every change again. If part of the page was not written due to a crash, redo will do the write, won’t it? In other words, the “consistent page” is saved in the redo log. Unless you mean a “partial write” can modify something not saved in the redo log?
I mean, EVERY change is in the redo log entries. A partial write, in my mind, means that some changes failed to make it to disk. But when we apply the redo log, we will redo the planned actions.
For example, say we have 4 blocks per page and we update 2 rows in two of the blocks, so we have two entries in the redo log, followed by a commit record. As a result of a partial write, only block 1 was updated; blocks 2, 3, and 4 were not. But I don’t see any issue in this case, because redo will re-image block 1 and block 3 correctly, won’t it? We don’t care whether the disk is consistent or not; this is what redo, in theory, is supposed to handle, isn’t it?
So I think I still don’t understand why InnoDB needs “double write”. I am new to InnoDB and don’t know any internal details; I am just thinking about this from a textbook point of view ^_^

Thank you. It has been a long time since you posted this great article, so I don’t know if you can still answer me. I have been thinking about this for days, but I can’t understand it yet.

By: Peter Zaitsev https://www.percona.com/blog/innodb-double-write/#comment-766444 Thu, 10 Jun 2010 17:38:25 +0000 https://www.percona.com/blog/2006/08/04/innodb-double-write/#comment-766444 There are different techniques that can be used instead of the doublewrite buffer.

By: qihua https://www.percona.com/blog/innodb-double-write/#comment-758511 Tue, 04 May 2010 06:58:37 +0000 https://www.percona.com/blog/2006/08/04/innodb-double-write/#comment-758511 Nice article. But why doesn’t Oracle need it?

By: Robert Milkowski https://www.percona.com/blog/innodb-double-write/#comment-705681 Mon, 04 Jan 2010 15:50:11 +0000 https://www.percona.com/blog/2006/08/04/innodb-double-write/#comment-705681 BTW: if MySQL is running on ZFS, then you can safely disable InnoDB doublewrites, as ZFS always guarantees that either the entire write completes or nothing is updated.

By: k2s https://www.percona.com/blog/innodb-double-write/#comment-28552 Sat, 13 Jan 2007 11:57:21 +0000 https://www.percona.com/blog/2006/08/04/innodb-double-write/#comment-28552 It is a great article, but it is difficult to find because you do not mention either of the keywords skip-innodb_doublewrite and innodb_doublewrite. Maybe this comment will make it show up in searches.
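For reference, the option names k2s mentions are what you would put in my.cnf to turn the doublewrite buffer off, which (as Robert Milkowski notes above) is generally only safe when the storage layer itself prevents torn pages, e.g. ZFS. A rough sketch only; the exact spelling depends on the MySQL version:

[mysqld]
# Disable the InnoDB doublewrite buffer (older startup-option syntax):
skip-innodb_doublewrite
# On versions where it is exposed as a boolean variable, the equivalent is:
# innodb_doublewrite = 0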

By: Peter Zaitsev https://www.percona.com/blog/innodb-double-write/#comment-1800 Tue, 15 Aug 2006 09:14:48 +0000 https://www.percona.com/blog/2006/08/04/innodb-double-write/#comment-1800 Toby,

Checksums are also used. Checksums make it possible to find which pages are corrupted, i.e. partially written, but checksums can’t help recover a page once it has been corrupted.

By: Toby https://www.percona.com/blog/innodb-double-write/#comment-1788 Mon, 14 Aug 2006 22:33:17 +0000 https://www.percona.com/blog/2006/08/04/innodb-double-write/#comment-1788 Could a checksum achieve the same end?

By: Ratheesh Kaniyala https://www.percona.com/blog/innodb-double-write/#comment-806016 Fri, 04 Aug 2006 07:00:01 +0000 https://www.percona.com/blog/2006/08/04/innodb-double-write/#comment-806016

Hi Baron,

Could you please help me understand this a bit better? Is this doublewrite buffer a special file on disk, and can we see this file on the filesystem using an ls command?
Or is it just space within ibdata1 and/or ib_logfile? Because if you say it is useful during recovery, then it has to be a file within the MySQL space.


Ratheesh
