Checkpointing, in short, means that the changes in the WAL file are actually applied to the data page files.
The explanation is as follows:
The question here is: can I control how data pages are flushed to disk? And the answer is yes, this is called checkpointing, where a new WAL record called the checkpoint record is written after all the data pages in memory are flushed to disk. This is a very delicate process; more frequent checkpointing does make startup faster, but it takes a toll on your I/O, since checkpointing is an I/O-heavy operation (as we mentioned before, data pages are very large).
The explanation is as follows. The pages are written to disk and that moment is recorded:
Technically speaking, we are consistent in memory, because we write to the pages as we write the WAL, and because we always read and write from and to memory first, we get the latest and greatest. So the data file and index pages are up to date in memory, but they are out of date on disk.
So in a happy path, I issue a commit, my WAL is flushed successfully to disk, and the database returns OK to the client (me) who issued the commit. So the WAL is updated. Then, moments later, the database starts flushing my full pages to disk, and now both the data files and the WAL are in sync on disk. We call this a checkpoint, and the process that creates this checkpoint is checkpointing. The database will actually create a checkpoint record that says: I hereby declare that at this moment in time, everything written to the WAL is completely in sync with what we have in the data files on disk.
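To make this flow concrete, here is a minimal toy sketch in Python of the WAL-first write, the commit, and the later checkpoint. All of the names and record formats here (ToyDatabase, write, commit, checkpoint, CHANGE, CHECKPOINT) are invented purely for illustration; they are not any real database's API.

# Toy model: writes hit the WAL and memory first, commit flushes only the WAL,
# and a later checkpoint brings the data files on disk back in sync.
class ToyDatabase:
    def __init__(self):
        self.pages_on_disk = {}      # "data files": page_id -> row value
        self.pages_in_memory = {}    # buffer pool: always read/written first
        self.dirty = set()           # pages changed in memory but not on disk
        self.wal_buffer = []         # WAL records not yet flushed
        self.wal_on_disk = []        # WAL records safely on disk

    def write(self, page_id, value):
        # WAL record first, then the in-memory page; data files are untouched.
        self.wal_buffer.append(("CHANGE", page_id, value))
        self.pages_in_memory[page_id] = value
        self.dirty.add(page_id)

    def commit(self):
        # Flushing the (small) WAL is all a commit needs before returning OK.
        self.wal_on_disk.extend(self.wal_buffer)
        self.wal_buffer.clear()
        return "OK"

    def checkpoint(self):
        # Moments later: flush the (large) dirty pages, then record the sync point.
        for page_id in list(self.dirty):
            self.pages_on_disk[page_id] = self.pages_in_memory[page_id]
            self.dirty.remove(page_id)
        self.wal_on_disk.append(("CHECKPOINT",))

db = ToyDatabase()
db.write("page-1", "row v2")
print(db.commit())          # OK -- durable, even though pages_on_disk is still empty
db.checkpoint()             # now data files and WAL agree on disk
print(db.pages_on_disk)     # {'page-1': 'row v2'}

Note that the commit is cheap (a small sequential WAL flush), while the checkpoint is the expensive part, which is exactly the trade-off described above.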
The explanation is as follows. If the database crashes during checkpointing, some of the dirty pages in memory will have been written to disk and some will not:
OK, here is another edge case for you: I flush my WAL and commit, a checkpointer process kicks in and starts flushing dirty pages to disk (dirty pages are pages in memory that have been written to), and midway through checkpointing the database crashes!
You now have half of the data pages updated and the other half out of date with respect to the WAL. The WAL only has the changes; it has no clue what the original rows looked like. So for that we need another log that records the original state of the row before we made the change, and that is the undo log.
Now, with the undo log, the database first checks for transactions that started but didn't commit and rolls those back; any changes made by those transactions must be discarded from the WAL. Any of their changes that made it to the data files MUST be undone, so we go back to a consistent state where all rows in the data pages are trusted and committed. And finally we redo the committed WAL changes on those original pages to bring the pages back in sync with the WAL.
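Here is a toy sketch of that recovery order (undo uncommitted changes using the original row images, then redo committed WAL changes). The record layouts, page names, and transaction ids are invented purely for illustration.

# State on disk right after the crash: the checkpointer managed to flush
# page-2 (touched by the uncommitted tx-2) but not page-1 (touched by the
# committed tx-1).
pages_on_disk = {"page-1": "row A v1", "page-2": "row B v2"}

# Undo log: original images of the rows each transaction changed.
undo_log = [
    ("tx-1", "page-1", "row A v1"),
    ("tx-2", "page-2", "row B v1"),
]

# WAL: tx-1 committed before the crash, tx-2 never did.
wal = [
    ("tx-1", "CHANGE", "page-1", "row A v2"),
    ("tx-1", "COMMIT"),
    ("tx-2", "CHANGE", "page-2", "row B v2"),
    # crash: no COMMIT record for tx-2
]

committed = {rec[0] for rec in wal if rec[1] == "COMMIT"}

# 1. UNDO: changes by uncommitted transactions that reached the data files
#    are rolled back to the original row images from the undo log.
for tx, page, original in undo_log:
    if tx not in committed:
        pages_on_disk[page] = original

# 2. REDO: replay committed WAL changes so the pages catch up with the WAL.
for rec in wal:
    if rec[1] == "CHANGE" and rec[0] in committed:
        _, _, page, value = rec
        pages_on_disk[page] = value

print(pages_on_disk)   # {'page-1': 'row A v2', 'page-2': 'row B v1'}

The order matters: undo first restores trusted, committed rows everywhere, and only then are the committed WAL changes replayed on top of them.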
The undo logs are also used by transactions that need MVCC visibility into an old version of a row. This means a long-running transaction that makes massive changes may slow down read transactions at the repeatable read/serializable isolation levels, as those have to go to the undo log and restore the original row before using it.
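Here is a small, hypothetical sketch of how such an MVCC read might fall back to the undo log when the current row version is not visible to its snapshot. The data structures and the snapshot_visible check are simplifications for illustration, not a real engine's implementation.

# Current row versions, each tagged with the transaction that wrote it.
current_rows = {"row-42": ("tx-9", "balance = 50")}

# Undo log entries: (writer_tx, row_id, previous_value).
undo_log = [("tx-9", "row-42", "balance = 100")]

def read(row_id, snapshot_visible):
    # Return the newest version of row_id that this snapshot is allowed to see;
    # snapshot_visible(tx) tells us whether tx's writes are visible to it.
    writer, value = current_rows[row_id]
    if snapshot_visible(writer):
        return value
    # Not visible: restore the older image from the undo log. This is the extra
    # work that can slow down long-running repeatable read / serializable readers.
    for tx, rid, old_value in reversed(undo_log):
        if rid == row_id and tx == writer:
            return old_value
    raise LookupError("no visible version of " + row_id)

# A snapshot taken before tx-9 committed cannot see tx-9's write:
print(read("row-42", snapshot_visible=lambda tx: tx != "tx-9"))   # balance = 100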
Postgres's clever architecture can get away without undo logs entirely (it keeps old row versions in the table itself rather than in a separate undo log); I explain it here.