The Ultimate Cheat Sheet On Hadoop

This Hadoop cheat sheet presents the top 20 frequently asked questions for testing your Hadoop knowledge. Try answering each question on your own first, then check your answers against the ones given here.

Question #1 

You have written a MapReduce job that will process 500 million input records and generate 500 million key-value pairs. The data is not uniformly distributed. Your MapReduce job will create a significant amount of intermediate data that it needs to transfer between mappers and reducers which is a potential bottleneck. A custom implementation of which of the following interfaces is most likely to reduce the amount of intermediate data transferred across the network?

A. Writable
B. WritableComparable
C. InputFormat
D. OutputFormat
E. Combiner
F. Partitioner
Ans: E. A Combiner aggregates map output locally before the shuffle, so far fewer intermediate records need to cross the network.
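
To make the answer concrete, here is a minimal word-count-style sketch (class names are illustrative, not taken from the question) that reuses the reducer as a combiner, so counts are pre-aggregated on each mapper before being shuffled to the reducers:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      for (String token : value.toString().split("\\s+")) {
        word.set(token);
        context.write(word, ONE); // one record per token: lots of intermediate data
      }
    }
  }

  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) sum += v.get();
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenMapper.class);
    // The combiner runs on each mapper's output before the shuffle;
    // because summing is associative and commutative, the reducer
    // class can simply be reused here.
    job.setCombinerClass(SumReducer.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Note the design constraint: reusing the reducer as a combiner only works because summing is associative and commutative. A combiner must not change the final result no matter how many times the framework chooses to run it (including zero times).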

Question #2 

Where is the Hive metastore stored by default?

A. On the client machine as a flat file.
B. On the client machine in a Derby database.
C. In the lib directory of HADOOP_HOME, requiring HADOOP_CLASSPATH to be modified.
Ans: B. Out of the box, Hive keeps its metastore in an embedded Derby database created in a local metastore_db directory, which is why only one Hive session can use it at a time.
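
The Derby default lives in Hive's configuration. This hive-site.xml excerpt shows roughly the stock settings; production deployments typically replace the connection URL with a JDBC URL pointing at a shared MySQL or PostgreSQL metastore:

<!-- Embedded Derby metastore: Hive's out-of-the-box default -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <!-- creates a local metastore_db directory wherever Hive is launched -->
  <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.apache.derby.jdbc.EmbeddedDriver</value>
</property>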


Hadoop: The Processor of Unstructured Data

Hadoop comes into the picture to process large volumes of unstructured data; structured data is already well handled by traditional databases.

Role of Traditional Databases

Traditional relational databases have long been able to store massive data sets. An Oracle 10g database can store over 8 petabytes, while DB2 databases have for many years been capable of storing well over 500 petabytes. Of course, this is all theoretical.

  • No customer has an Oracle or DB2 database that approaches sizes even close to that. Why? Because the speed, or velocity, at which data can be loaded and queries can be executed approaches zero well before then.
  • Similarly, all traditional relational databases can store any variety of data as text or binary large objects. The problem is that large volumes of unstructured data cannot be moved fast enough to enable rapid search and retrieval.
Role of the Data Warehouse

Running constant and predictable workloads is what your existing data warehouse has been all about. And as a solution for meeting the demands of structured data—data that can be entered, stored, queried, and analyzed in a simple and straightforward manner—the data warehouse will continue to be a viable solution. Storing, managing, and analyzing massive volumes of semi-structured and unstructured data is what Hadoop was purpose-built to do.

  • Unlike structured data, found within the tidy confines of records, spreadsheets, and files, semi-structured and unstructured data is raw, complex, and pours in from multiple sources such as emails, text documents, videos, photos, social media posts, Twitter feeds, sensors and clickstreams.
  • Hadoop and MapReduce enable organizations to distribute a search simultaneously across many machines, reducing the time needed to find relevant nuggets of information in large volumes of data in a scalable way (see the distributed-grep sketch below). That's why Hadoop is being adopted by bleeding-edge enterprises moving into the multi-petabyte club. Some environments already break the 100-petabyte level and can, in theory, continue to scale.
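
To illustrate what "distributing the search" looks like in practice, here is a minimal sketch of a distributed grep in Hadoop MapReduce (the grep.pattern property name and class names are illustrative). Each mapper scans its own HDFS block in parallel and emits only the matching lines, so search time scales with the number of nodes rather than the total data size:

import java.io.IOException;
import java.util.regex.Pattern;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class DistributedGrep {

  public static class GrepMapper extends Mapper<LongWritable, Text, NullWritable, Text> {
    private Pattern pattern;

    @Override
    protected void setup(Context context) {
      // "grep.pattern" is an illustrative property name set in main() below
      pattern = Pattern.compile(context.getConfiguration().get("grep.pattern"));
    }

    @Override
    protected void map(LongWritable offset, Text line, Context context)
        throws IOException, InterruptedException {
      // Each mapper scans only its own HDFS block, so the search runs
      // in parallel across every node that holds a piece of the data.
      if (pattern.matcher(line.toString()).find()) {
        context.write(NullWritable.get(), line);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("grep.pattern", args[2]); // e.g. "ERROR|WARN"
    Job job = Job.getInstance(conf, "distributed grep");
    job.setJarByClass(DistributedGrep.class);
    job.setMapperClass(GrepMapper.class);
    job.setNumReduceTasks(0); // map-only job: matches go straight to HDFS
    job.setOutputKeyClass(NullWritable.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Because no aggregation is needed, the job runs map-only (zero reducers), so matching lines are written straight to HDFS without a shuffle phase.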