HBase and HDFS go hand in hand to provide HBase's durability and consistency guarantees. One way of looking at this setup is that HDFS handles the distribution and storage of your data whereas HBase handles the distribution of CPU cycles and provides a consistent view of that data.
As described in many other places, HBase relies on HDFS's sync support for its durability guarantees.
HDFS sync has a colorful history, with the support needed by HBase long available only in an unreleased "append" branch of HDFS. (Note that the append and sync features are independent and, contrary to common belief, HBase only relies on the sync feature.) See also this Cloudera blog post.
In order to understand what HDFS provides, let's take a look at how a DFSClient (the client) interacts with a Datanode (DN).
In a nutshell a DN just waits for commands. One of these commands is WRITE_BLOCK. When the DN receives a WRITE_BLOCK command it instantiates a BlockReceiver thread.
The BlockReceiver then simply waits for packets on an InputStream and flushes the data to OS buffers. An open block is maintained at the DN as an open file. When a block is filled, the block and hence its associated file is closed, and the BlockReceiver ends. For all practical purposes the DN then forgets that the block existed.
Replication to the replica DNs is done via pipelining: the first DN forwards each packet to the next DN in the chain before the data is flushed locally, and waits for the downstream DN to respond. The default length of the replication chain is 3.
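To make that flow concrete, here is a toy sketch of the receive loop. This is not the actual HDFS BlockReceiver code: the packet format is made up, and the handling of acks from the downstream DN is omitted.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

/**
 * Toy illustration of the receive loop described above -- not real HDFS code.
 * "in" is the stream from the upstream client/DN, "mirrorOut" the stream to
 * the next DN in the pipeline (null on the last DN), and "blockFile" the
 * local file backing the currently open block.
 */
class ToyBlockReceiver {
    private final DataInputStream in;
    private final DataOutputStream mirrorOut;
    private final FileOutputStream blockFile;

    ToyBlockReceiver(DataInputStream in, DataOutputStream mirrorOut,
                     FileOutputStream blockFile) {
        this.in = in;
        this.mirrorOut = mirrorOut;
        this.blockFile = blockFile;
    }

    void receiveBlock() throws IOException {
        boolean lastPacket = false;
        while (!lastPacket) {
            // Toy packet format: [length:int][lastPacketInBlock:boolean][payload]
            int len = in.readInt();
            lastPacket = in.readBoolean();
            byte[] payload = new byte[len];
            in.readFully(payload);

            // 1. Forward the packet to the next DN in the chain first ...
            if (mirrorOut != null) {
                mirrorOut.writeInt(len);
                mirrorOut.writeBoolean(lastPacket);
                mirrorOut.write(payload);
                mirrorOut.flush();
            }

            // 2. ... then flush the payload into the local OS buffers.
            //    Note: no fsync here; the data may still live only in the page cache.
            blockFile.write(payload);
            blockFile.flush();
        }

        // The block is full: close its file; the DN effectively forgets about it.
        blockFile.close();
    }
}
```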
The other side of the equation is the DFSClient, which batches changes until a packet is full and then sends it down the pipeline.
Since HADOOP-6313 a Syncable supports both hflush and hsync: hflush flushes all outstanding data (i.e. the current, possibly unfinished, packet) from the client to the OS buffers of all replica DNs, while hsync is supposed to additionally force that data to disk via fsync. At the time of this writing, however, HDFS implements hsync simply as hflush.
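As a minimal usage sketch (assuming a Hadoop client library on the classpath and a reachable HDFS; the path and payload are made up), the hflush/hsync calls on the FSDataOutputStream returned by FileSystem.create look like this:

```java
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SyncableExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // FSDataOutputStream implements Syncable since HADOOP-6313.
        try (FSDataOutputStream out = fs.create(new Path("/tmp/wal-example"))) {
            out.write("edit-1".getBytes(StandardCharsets.UTF_8));

            // hflush: push the current (possibly partial) packet to all DNs in
            // the pipeline; the data then sits in the DNs' OS buffers, not on disk.
            out.hflush();

            out.write("edit-2".getBytes(StandardCharsets.UTF_8));

            // hsync: like hflush, but intended to also force the data to disk.
            // As noted above, plain HDFS currently implements it as hflush.
            out.hsync();
        }
    }
}
```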
For HBase and similar applications with durability guarantees this can be insufficient. If three or more DN machines crash at the same time (assuming three replicas), for example due to a multi-rack or data center power outage, data that is still sitting in OS buffers might be lost.
Further, since HBase constantly compacts older, smaller HFiles into newer, larger ones, this potential data loss is not limited to new data.
(But note that, like most database setups, HBase should be deployed with redundant power supplies anyway, so this is not necessarily an issue.)
Due to the inner workings of the DN it is difficult to implement 100% Posix fsync semantics. Imagine a client that writes many blocks' worth of data and then issues an hsync. In order to sync the data correctly to disk, either the client or all involved DNs would need to keep track of all blocks (full or partial) written so far that have not yet been sync'ed.
This would be a significant change to how either the client or the DN works, and would lead to more complicated code. It would also require keeping the block files open in order to retain the file descriptors, so that an fsync could potentially be issued in the future. Since the client might in fact never issue a sync request, the number of open files to retain is unbounded.
The other option (similar to Posix's O_SYNC) is to have the DNs call fsync upon receipt of every single packet, leading to many unnecessary fsyncs.
In HDFS-744 I propose a hybrid solution: a data stream can be created with a SYNC_BLOCK flag. This flag causes the DFSClient to set a "sync" flag on the last packet of each block, i.e. the block file is fsync'ed upon close.
The same flag is also set when the client issues an hsync. If the client has outstanding data, the current packet is tagged with the "sync" flag and sent immediately; otherwise an empty packet with the flag set is sent.
When a DN receives such a packet, it will immediately flush the currently open file (representing the current block - full on close or partial on hsync - being written) to disk.
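For illustration, this is roughly how a client would request that behavior via CreateFlag.SYNC_BLOCK as the flag eventually surfaced in Hadoop's public FileSystem API. This is a sketch against Hadoop 2.x, not code from the patch; the path, permission, buffer size and replication factor are arbitrary.

```java
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class SyncBlockExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/tmp/hfile-example");

        // Create the stream with SYNC_BLOCK: each block file is fsync'ed on the
        // DNs when the block is closed (because it is full or the stream closes).
        FSDataOutputStream out = fs.create(
            path,
            new FsPermission((short) 0644),
            EnumSet.of(CreateFlag.CREATE, CreateFlag.SYNC_BLOCK),
            4096,                              // buffer size
            (short) 3,                         // replication
            fs.getDefaultBlockSize(path),      // block size
            null);                             // no progress callback

        out.write(new byte[] { 1, 2, 3 });

        // hsync tags the current packet with the "sync" flag, forcing the
        // partial block to disk on all replicas.
        out.hsync();

        out.close();   // the last block is fsync'ed as part of close
    }
}
```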
In summary: with this compromise a client can guarantee - byte-by-byte if needed - which portion of an open file is on a durable medium, while avoiding either syncing every packet to disk or keeping track of past unsync'ed blocks.
For HBase this would conveniently deal with compactions, as blocks are sync'ed upon close, and also with WAL edits, as it correctly allows sync'ing the current block.
The downside is that upon close each block needs to be sync'ed to disk, even though the client might never issue a sync request for this stream; this leads to potentially unneeded fsyncs.
HBASE-5954 proposes matching changes to HBase to make use of this new HDFS feature. This issue introduces a WAL sync config option and an HFile sync option.
The former causes HBase to issue an hsync when a batch of WAL entries is written. The latter makes sure HFiles (generated from memstore flushes or compactions) are guaranteed to be on a durable medium when the stream is closed.
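A sketch of how such options might be turned on from a client or server configuration. Note that the property keys below are placeholders of my own, not confirmed names; the actual option names are defined in the HBASE-5954 patch.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class DurableSyncConfig {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();

        // Placeholder keys -- see the HBASE-5954 patch for the real names.
        // 1. hsync the WAL after each batch of edits is written.
        conf.setBoolean("hbase.regionserver.wal.durable.sync", true);
        // 2. guarantee HFiles are on a durable medium when their stream closes.
        conf.setBoolean("hbase.regionserver.hfile.durable.sync", true);
    }
}
```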
There are also a few simple performance tests listed in that issue.
Future optimizations are possible.