Latest Posts

Hadoop on Remote Storage


The question of running Hadoop on remote storage is raised again and again by independent developers, enterprise users and vendors. There are still many discussions in the community, with completely opposite opinions. I'd like to state here my personal view on this complex problem.

In this article I will call remote storage "NAS" for simplicity. I will also take it as a given that remote storage is not the same as HDFS, but something completely different: anything from standard storage arrays with LUNs mounted to the servers to various distributed storage systems. I consider all of these systems remote because, unlike HDFS, they don't allow you to run your custom code on the storage nodes. And they are mostly "storage" products, so they use some kind of erasure coding to save space and make the solution more competitive.

If you have been reading my blog for a long time, you might notice that this is the second version of this article. Over the last year I kept thinking about this problem, and my position has shifted a bit, mostly based on real-world practice and experience.

Read IO Performance. For most Hadoop clusters the limiting factor in performance is IO. The more IO bandwidth you have, the faster your cluster will work. You won't be surprised if I tell you that IO bandwidth mostly depends on the number of disks you have and their type. For example, a single SATA HDD can deliver around 50 MB/sec in sequential scans, a SAS HDD around 90 MB/sec, and an SSD might achieve 300 MB/sec. Given these numbers, calculating the total platform bandwidth is simple math. Comparing DAS with NAS does not make much sense in this context, because both a NAS and a cluster with DAS might have the same number of disks and would thus deliver comparable bandwidth. So, assuming infinite network bandwidth with zero latency, the same RAID controllers, and the same number and type of drives, DAS and NAS solutions deliver the same read IO performance.
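
To make the "simple math" concrete, here is a minimal sketch in Python. The per-disk throughput figures are the rough numbers quoted above; the cluster shape (40 nodes with 12 disks each) is a hypothetical example, not a recommendation.

```python
# Back-of-the-envelope aggregate scan bandwidth, using the per-disk figures
# quoted above. The cluster shape (40 nodes x 12 disks) is a made-up example.

PER_DISK_MB_S = {"SATA HDD": 50, "SAS HDD": 90, "SSD": 300}

def aggregate_bandwidth_gb_s(nodes: int, disks_per_node: int, disk_type: str) -> float:
    """Ideal sequential-scan bandwidth, ignoring network, RAID controller and CPU limits."""
    return nodes * disks_per_node * PER_DISK_MB_S[disk_type] / 1024

for disk_type in PER_DISK_MB_S:
    print(disk_type, round(aggregate_bandwidth_gb_s(40, 12, disk_type), 1), "GB/s")
```
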
Write IO Performance. Here things get a bit more complicated, and you have to understand how exactly your NAS solution works to be able to compare it with Hadoop on DAS. HDFS stores a number of exact copies of the data, 3 by default. So if you write X GB of data, it in fact occupies 3*X GB of disk space. And of course, writing 3 copies of the data is 3 times slower than writing a single copy. How do most NAS systems work? NAS is an old industry, and vendors understood long ago that storing many exact copies of the data is very wasteful, so most of them use some kind of erasure coding (like Reed-Solomon). This lets you achieve redundancy similar to storing 3 exact copies of the data with only 40% overhead with RS(10,4). But everything comes at a cost, and the cost here is performance. To write a single block in HDFS you just write it 3 times. With RS(10,4), to write a single block you have to calculate erasure codes for it, either by reading the other 9 blocks of the stripe and writing out 4 parity blocks, or by having some kind of caching layer with replication and a background compaction process. In short, writing to it will always be slower than writing to a cluster with replication; this is like comparing RAID10 with RAID5, the same replication-versus-erasure-coding logic.
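
A minimal sketch of the space overhead behind those numbers; the parameters are the defaults mentioned in the text (3 replicas, RS with 10 data and 4 parity blocks), and the function names are mine:

```python
# Raw capacity consumed per byte of user data: replication stores full copies,
# Reed-Solomon stores the data blocks plus the parity blocks of each stripe.

def replication_overhead(copies: int = 3) -> float:
    return float(copies)                                  # 3x raw capacity by default

def rs_overhead(data_blocks: int = 10, parity_blocks: int = 4) -> float:
    return (data_blocks + parity_blocks) / data_blocks    # 1.4x for RS(10,4)

print("3x replication:", replication_overhead(), "x raw capacity")
print("RS(10,4):      ", rs_overhead(), "x raw capacity (~40% overhead)")
```
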
Read IO Performance (degraded). If you lose a single machine or a single drive in a Hadoop cluster with DAS, your read performance is not affected: you read the same data from a different node that is still alive. But what happens in a NAS with RS(10,4)? Right, to restore a single block with RS(10,4) you have to read up to 13 blocks, which can make your system up to 13 times slower! Of course, in most cases you encode sequential blocks and then read sequential blocks, so you can restore the missing one more easily. But still, your performance would degrade by roughly 2x in the best scenario and up to 13x in the worst, as the sketch below illustrates.
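
A rough sketch of where that 2x-13x range can come from, assuming a standard MDS code such as Reed-Solomon: reconstructing a lost block needs any 10 surviving blocks of an RS(10,4) stripe, and in the worst case all 13 survivors get touched, whereas under replication a degraded read is still a single read from another replica. The numbers are illustrative, not measurements.

```python
# Read amplification for serving a block whose primary copy is lost.
# Real systems differ in which surviving blocks they pick and what they cache.

def replication_degraded_reads() -> int:
    return 1                                       # just read another replica

def rs_degraded_reads(data_blocks: int = 10, parity_blocks: int = 4) -> tuple:
    minimum = data_blocks                          # an MDS code decodes from any k blocks
    worst_case = data_blocks + parity_blocks - 1   # every surviving block of the stripe
    return minimum, worst_case

print("replication:", replication_degraded_reads(), "block read per block served")
print("RS(10,4):   ", rs_degraded_reads(), "block reads per block served (min, worst case)")
```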

And if you think that the degraded case is not very relevant for you, here are the statistics from Facebook's Hadoop cluster:

Data Recovery. When you lose a node and replace it, how long does it take to restore the redundancy of your system? With HDFS on DAS you just copy the data of the under-replicated blocks to the new node. With RS(10,4) you have to reconstruct the missing blocks by reading the other blocks in their stripes and performing computations on top of them. Usually this is 5x-10x slower; the sketch below shows why.
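
A minimal sketch of that gap, assuming recovery is limited by the extra reads and ignoring the CPU cost of the decode; the amount of lost data and the rebuild bandwidth are hypothetical numbers chosen only to show the ratio:

```python
# Time to re-protect the data of one failed node: re-replication reads one
# surviving copy per lost block, RS(10,4) reads ~10 blocks per lost block.

def rebuild_hours(lost_tb: float, rebuild_gb_s: float, read_amplification: float) -> float:
    effective_gb_s = rebuild_gb_s / read_amplification
    return lost_tb * 1024 / effective_gb_s / 3600

LOST_TB = 8.0        # data that lived on the failed node (hypothetical)
REBUILD_GB_S = 2.0   # bandwidth the cluster can spend on rebuild (hypothetical)

print("replication:", round(rebuild_hours(LOST_TB, REBUILD_GB_S, 1), 1), "hours")
print("RS(10,4):   ", round(rebuild_hours(LOST_TB, REBUILD_GB_S, 10), 1), "hours")
```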

Network. When you run a Hadoop cluster with DAS, the Hadoop framework itself tries to schedule executors as close to the data as possible, usually preferring local IO. In a cluster with NAS, your IO is always remote, with no exceptions. So the network becomes a big pain point: you should plan it very carefully, with no oversubscription either between the compute nodes or between compute and storage. The network rarely becomes a bottleneck if you have enough 10GbE interfaces, but the switches have to be good, and you need many more of them than in a solution with DAS. Here's the slide from Cisco's presentation on this subject:
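
As a rough illustration of the oversubscription math such planning involves, here is a minimal sketch; the port counts and speeds are hypothetical and not taken from the Cisco material:

```python
# Oversubscription of a leaf/top-of-rack switch: bandwidth the attached nodes
# can generate versus the uplink bandwidth toward the spine/core.

def oversubscription(nodes: int, nic_gbps: int, uplinks: int, uplink_gbps: int) -> float:
    return (nodes * nic_gbps) / (uplinks * uplink_gbps)

# 24 nodes with 10GbE NICs behind 4x40GbE uplinks -> 1.5:1 oversubscription
print(f"{oversubscription(24, 10, 4, 40)}:1")
```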

Local Storage. Having a remote HDFS might look like a good option, but what about the local storage on the "compute" nodes? People usually forget that MapReduce stores intermediate data on local storage, and that Spark puts all of its shuffle data on local storage too. On top of that, Hive and Pig are translated into MR, Tez or Spark jobs, which store their intermediate results on local storage as well. Thus even "compute" nodes should have enough local storage, and the safest option is to have the same amount of raw …read more

MPP vs Hadoop Talk

Today I gave a talk at the Hadoop User Group Ireland meetup in Dublin; it was an adapted and refactored version of my article on the same subject, MPP vs Hadoop. Here are the slides:

Feel free to comment and share your opinion on this subject.

…read more

Big Data & Brews: Anil Chakravarthy Diagrams the Big Data Ecosystem

Our last installment of Big Data & Brews with Anil touches on a cool topic. Of course, I like that we get to use the chalkboard, but we also had a chance to break down how Informatica sees the ecosystem (hint: the data intelligence layer is the most promising). We also talked about what he sees happening in the next 10 years that will really accelerate change in the industry.

The full conversation is just a click away – tune in!

TRANSCRIPT:

Stefan: What would be interesting to see is, in this ecosystem of data technologies, right, where you guys are sitting and then where you see the Hadoops, Teradatas, Microstrategies, Datameers. I kind of see you as the fabric that brings it all together. Is there a central brain of that fabric?

Anil: Right. You know, we believe so. Let me just take a stab at how we think of the world. This is obviously a logical view and it has to be translated based on … We see the world as, let's start with this: think of this as data persistence. This world is obviously changing very rapidly. It was basically the databases of the world. It could be anything from a mainframe database to a relational database, etc. Now it's Hadoop and NoSQL, and this world could be either on premise or in the cloud or a combination.

Then we see what we think of as data infrastructure. So this is the world which we have traditionally played in, and this world is also changing rapidly, because obviously, when this changes, this has to change here. You have things like data ingestion, which is changing very rapidly. Somebody once joked to me that whatever IBM worked on in the 1970s will always be useful at some point, so it's like that. Concepts like change data capture, concepts like real time, streaming, etc., all of those are coming back, right?

You have ingestion. You have data integration; obviously that's where you put it together, the aggregation, etc. Then you have a lot of work around data quality, which is increasingly "How do you do quality, especially on unstructured data?" and things like that. That becomes a lot of work to …read more

Hadoop Manufacturing Innovation & IoT

Grant Bodley

The advent of connected manufacturing has ushered in an era where low-cost machine sensors take thousands of measurements per second at many points across the manufacturing process. This stream of sensor data enables manufacturers to quickly detect emerging anomalies and solve issues before they impact yield and quality.

Big Data insights enable predictive analytics for those rapid, proactive process adjustments. Manufacturers can capitalize on this opportunity by following an approach that combines the power of Teradata with Hortonworks Data Platform’s storage and compute efficiencies at extreme scale. Working together, our technologies enable big data insights that can dramatically improve existing manufacturing processes.

Register for the Teradata Partners Event

On Wednesday October 21st from 12:00-12:45, I will be presenting a webinar along with Dale Glover, Teradata VP of Industry Consulting. Join us for 45 minutes to learn more about how manufacturing companies are utilizing Hadoop to:

Establish a Single View of data on products throughout their entire lifecycles
Build a 360° view of lifetime customer value
Optimize manufacturing quality and yield
Proactively maintain equipment to minimize the risk of downtime
Event Details
Presentation Title: Hadoop for Manufacturing Innovation & IoT
Session Number: 3719
Date & Time: Wednesday, October 21st, 12:00-12:45 PM PST
Location: 202 AB
About the Speakers

Grant Bodley: Hortonworks GM for Global Manufacturing Solutions

As General Manager of Global Manufacturing Industry Solutions at Hortonworks, Grant Bodley brings over 25 years of manufacturing experience, working with leading automotive, industrial, high-tech, and aerospace manufacturers to leverage Big Data insights and high-impact use cases to transform their businesses. Prior to Hortonworks, Grant was Vice President of Manufacturing Industry Solutions at SAP for more than 10 years.

Dale Glover: Vice President of Industry Consulting for Teradata

Dale Glover is a Vice President of Industry Consulting for Teradata. His Industry Consulting team is responsible for helping clients successfully implement business intelligence and analytics to drive business process impact and value. He is leading the transformation of this organization to support an analytics consulting focus across a broad ecosystem of platforms and tools. His advanced Applied Analytic Team is helping organizations move from Big Data insights to the realization of value from advanced analytics in day-to-day operations.


…read more

From Mechanical Engineer to Oil & Gas Data Scientist

I recently had the pleasure of visiting with Arvind Battula, Sr. Data Scientist at Schlumberger. We discussed his background as a chemical and mechanical engineer, his move onto the Data and Analytics team as a data scientist, his focus areas for data science in oil and gas, and the technologies that he believes will help transform the industry. The following is a transcript of my conversation with Arvind.

Kohlleffel: Arvind, you entered the data science world recently on the Schlumberger Data and Analytics team and have a very interesting background coming from both chemical engineering and mechanical engineering disciplines. Tell me about your experience and engineering background.

Battula: Certainly, my background is diverse. I started my formal training as a chemical engineer. After my bachelor's, I applied for graduate school in mechanical engineering to deal mostly with computational fluid dynamics. I wanted to pursue a Ph.D. in the same area, but my doctoral work changed direction to focus on nanophotonics, which is the interaction of nanometer-scale objects with light.

Kohlleffel: That makes for quite a compelling base of experience for your data science work. Now that you’ve moved to the Data and Analytics team, where have you focused so far?

Battula: My mechanical engineering background has been very helpful at Schlumberger, since we are designing products that are used in the harshest conditions imaginable on the planet. In everything we do, we must consider very minute design details to ensure the most robust end product. Before we design and build parts and assemblies, we are very thorough in our calculations and modeling to quantify our engineering and physics assumptions. This is where we leverage data and analytics to bring new rigor to the process and move beyond some standard linear assumptions, which can be obstacles to efficiently modeling complex phenomena across all variables.

For example, factors like high temperature, high pressure, stress, vibration, corrosion, and aging all act in parallel on mechanical systems. We can look deeply into that data to better understand the combinations of these variables that are causing mechanical failures, and then we can bring together the data streams for both physics and engineering.

This non-linear root cause analysis shows us the real world we deal with on a daily basis. It is ideally suited to leveraging big data and analytics and it benefits multiple groups within our company including engineering, manufacturing, sustaining and maintenance.

In …read more

Strata Hadoop World New York

What I saw at Strata in New York! For whatever reason, these are the things that struck me immediately: Kafka has gained a lot of momentum, and I might wager that 6 months from now, you will be using it…
Read more

Big Data Expo comes to Utrecht, Netherlands


There's excitement in the air as one of Benelux's largest Big Data conferences, "Big Data Expo", comes to Utrecht in the Netherlands.

We’re sponsoring and you’ll find our experts Chris Harris and Jhon Masschelein presenting such topics as “5 Steps for Effective use of Apache Spark in Hortonworks Data Platform 2.3” and “Lessons Learned: 5 Common Hadoop Use Cases”. You can register here.

As Hortonworks continues to extend its footprint in Europe, we're seeing some exciting use cases and increasing momentum in enterprise adoption of Hadoop. The Hadoop Summit that we organized in Brussels earlier this year showcased some of the great European use cases. Here's a short overview of one of my favorites:

ING Bank: Destroying Data Silos for Creating a Predictive Bank

Hellmar Becker, a Utrecht resident, discusses breaking down data silos and creating a centralized data lake at ING. He also discusses the modernization of their data centers, migrating away from legacy systems within their governance and security framework.

Bart Buler, Hellmar's co-presenter, discusses the bank's steps toward becoming a truly predictive bank. Bart also shares some dos, don'ts and difficulties in this journey and talks about the future for the bank, including "integrating analytics as part of data flows", "showing interactive results to individuals without access to the cluster" and many more.

You can find more videos listed here.

To conclude, Big Data Expo will showcase an array of new technologies, exciting case studies and organizations making the most of their data. Come visit us at Stand 21, where my colleague Alfie Murray-Dudgeon (pictured below) awaits.


…read more