Hadoop on Remote Storage

By 0x0FFF


The question of running Hadoop on remote storage is raised again and again by independent developers, enterprise users and vendors. And there are still many discussions in the community, with completely opposite opinions. I'd like to state here my personal view on this complex problem.

In this article I will call remote storage "NAS" for simplicity. I will also take as a given that remote storage is not the same as HDFS, but something completely different – from standard storage arrays with LUNs mounted to the servers, to various distributed storage systems. I assume all of these systems are remote because, unlike HDFS, they don't allow you to run your custom code on the storage nodes. And they are mostly "storages", so they usually apply some kind of erasure coding to save space and make the solution more competitive.

If you have been reading my blog for a long time, you might have noticed that this is the second version of this article. Over the last year I have been constantly thinking about this problem, and my position has shifted a bit, mostly based on real-world practice and experience.

Read IO Performance. For most Hadoop clusters the limiting factor in performance is IO. The more IO bandwidth you have, the faster your cluster works. You won't be surprised if I tell you that IO bandwidth mostly depends on the number of disks you have and their type. For example, a single SATA HDD can deliver around 50 MB/sec in sequential scans, a SAS HDD can give you 90 MB/sec, and an SSD might achieve 300 MB/sec. Given these numbers, calculating the total platform bandwidth is simple math. Comparing DAS with NAS does not make much sense in this context, because both a NAS and a cluster with DAS might have the same number of disks and thus would deliver comparable bandwidth. So again, assuming infinite network bandwidth with zero latency, the same RAID controllers and the same number and type of drives, DAS and NAS solutions would deliver the same read IO performance.
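
To make that math concrete, here is a minimal back-of-the-envelope sketch; the node count, disk count and dataset size are made-up assumptions for illustration, not sizing advice:

    # Back-of-the-envelope aggregate scan bandwidth (illustrative numbers only)
    nodes = 20                 # assumed number of data nodes
    disks_per_node = 12        # assumed disks per node
    disk_mb_per_sec = 90       # SAS HDD sequential scan rate from the figures above

    total_mb_per_sec = nodes * disks_per_node * disk_mb_per_sec
    dataset_gb = 10_000        # assumed 10 TB dataset

    scan_seconds = dataset_gb * 1024 / total_mb_per_sec
    print(f"Aggregate bandwidth: {total_mb_per_sec / 1024:.1f} GB/sec")
    print(f"Full scan of {dataset_gb} GB takes about {scan_seconds:.0f} seconds")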
Write IO Performance. Here things get a bit more complicated, and you should understand how exactly your NAS solution works to be able to compare it with Hadoop on DAS. HDFS stores a number of exact copies of the data, 3 by default. So if you write X GB of data, they will in fact occupy 3*X GB of disk space. And of course, the process of writing 3 copies of the data is 3 times slower than writing a single copy. How do most NAS storages work? NAS is an old industry, and its vendors clearly understood that storing many exact copies of the data is very wasteful, so most of them use some kind of erasure coding (like Reed-Solomon). This allows you to achieve redundancy similar to storing 3 exact copies of the data, but with only a 40% overhead in case of RS(10,4). But everything comes at a cost, and the cost here is performance. To write a single block in HDFS you just have to write it 3 times. With RS(10,4), to write a single block you have to calculate erasure codes for it, either by reading the other 9 blocks of its group and writing out 4 parity blocks, or by having some kind of caching layer with replication and a background compaction process. In short, writing to it will always be slower than writing to a cluster with replication – it is like comparing RAID10 with RAID5, the same logic of replication vs erasure coding.
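
A small sketch of the capacity side of that trade-off, using only the scheme definitions above (the 100 TB figure is an assumption for illustration):

    # Raw capacity needed for the same user data under both redundancy schemes
    user_data_tb = 100                          # assumed amount of user data

    replication_raw_tb = user_data_tb * 3       # 3 exact copies of every block
    rs_raw_tb = user_data_tb * (10 + 4) / 10    # RS(10,4): 4 parity blocks per 10 data blocks

    print(f"3x replication: {replication_raw_tb:.0f} TB raw "
          f"({replication_raw_tb / user_data_tb - 1:.0%} overhead)")
    print(f"RS(10,4):       {rs_raw_tb:.0f} TB raw "
          f"({rs_raw_tb / user_data_tb - 1:.0%} overhead)")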
Read IO Performance (degraded). If you have lost a single machine or a single drive in a Hadoop cluster with DAS, your read performance is not affected – you read the same data from a different node that is still alive. But what happens in a NAS with RS(10,4)? Right, to restore a single block with RS(10,4) you have to read up to 13 other blocks, which can make your system up to 13 times slower. Of course, in most cases you encode sequential blocks and then read sequential blocks, so you can restore the missing one more cheaply. But still, your performance would degrade 2x in the best scenario and up to 13x in the worst.
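
To see where that range comes from, here is a rough sketch of the data read to serve one lost block, assuming 128 MB blocks and a decoder that needs at least 10 surviving blocks of the RS(10,4) group:

    # Data read to serve one lost 128 MB block (degraded read)
    block_mb = 128

    replication_read_mb = 1 * block_mb     # just read the block from another replica
    rs_read_mb_min = 10 * block_mb         # RS(10,4) decode needs at least 10 group blocks
    rs_read_mb_max = 13 * block_mb         # worst case: read the whole surviving group

    print(f"Replication: {replication_read_mb} MB read")
    print(f"RS(10,4):    {rs_read_mb_min}-{rs_read_mb_max} MB read, plus decode CPU")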

And if you think that the degraded case is not very relevant for you, take a look at the failure statistics of the Facebook Hadoop cluster.

Data Recovery. When you lose a node and replace it, how long does it take to recover the redundancy of your system? With HDFS on DAS you are just copying the data of the under-replicated blocks to the new node. With RS(10,4) you have to restore the missing blocks by reading all the other blocks in their groups and performing computations on top of them. Usually it is 5x-10x slower.
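
A rough sketch of the recovery traffic, assuming 10 TB of block data was lost with the failed node:

    # Traffic needed to re-protect the data of one failed node
    lost_tb = 10                                     # assumed block data lost with the node

    repl_read_tb, repl_write_tb = lost_tb, lost_tb   # copy each block from a surviving replica
    rs_read_tb, rs_write_tb = lost_tb * 10, lost_tb  # rebuild each block from ~10 group blocks

    print(f"Replication: read {repl_read_tb} TB, write {repl_write_tb} TB")
    print(f"RS(10,4):    read {rs_read_tb} TB, write {rs_write_tb} TB, plus decode CPU")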

Network. When you run a Hadoop cluster with DAS, the Hadoop framework itself tries to schedule executors as close to the data as possible, usually preferring local IO. In a cluster with NAS, your IO is always remote, with no exceptions. So the network becomes a big pain point – you should plan it very carefully, with no oversubscription both between the compute nodes and between compute and storage. The network rarely becomes a bottleneck if you have enough 10GbE interfaces, but the switches have to be good, and you need many more of them than in a solution with DAS. Cisco has a presentation covering exactly this subject.
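
As a rough illustration of why the network has to be planned so carefully, here is a sketch comparing the scan bandwidth one node can demand from remote storage with a single 10GbE interface (disk counts and speeds are the same assumptions as above):

    # Per-node network bandwidth when every read is remote
    disks_per_node = 12              # assumed number of disks the node should keep busy
    disk_mb_per_sec = 90             # SAS HDD sequential scan rate
    needed_mb_per_sec = disks_per_node * disk_mb_per_sec   # ~1080 MB/sec

    nic_mb_per_sec = 10 * 1000 / 8   # one 10GbE interface, theoretical maximum

    print(f"Needed from remote storage: {needed_mb_per_sec} MB/sec per node")
    print(f"One 10GbE NIC:              {nic_mb_per_sec:.0f} MB/sec before any overhead")
    # With DAS and good locality most of this traffic never leaves the node, so the
    # same NIC is mostly left for shuffle traffic instead of every single scan.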

Local Storage. Having remote HDFS might look like a good option, but what about the local storage on the "compute" nodes? People usually forget that MapReduce stores its intermediate data on local storage, and that Spark puts all of its intermediate shuffle data on local storage as well. Plus, Hive and Pig are translated into MR or Tez or Spark jobs, storing their intermediate results on local storage too. Thus even "compute" nodes should have enough local storage, and the safest option is to have the same amount of raw …
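
As an illustration, here is a hedged PySpark sketch of pointing shuffle spill at several local disks via spark.local.dir; the directory paths are made-up examples, and note that under YARN the node manager's local directories take precedence over this setting:

    from pyspark import SparkConf, SparkContext

    # Spread shuffle and spill files across several local disks on the compute nodes.
    # The paths are illustrative and must exist on every executor node.
    conf = (SparkConf()
            .setAppName("shuffle-heavy-job")
            .set("spark.local.dir", "/data/1/tmp,/data/2/tmp,/data/3/tmp"))
    sc = SparkContext(conf=conf)

    # groupByKey forces a shuffle; its intermediate blocks land in the local
    # directories configured above, not in HDFS or on the remote storage.
    pairs = sc.parallelize(range(1_000_000)).map(lambda x: (x % 100, x))
    sizes = pairs.groupByKey().mapValues(len).collect()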


