
Big Data Quiz-2: Top Hadoop Interview Questions

I hope you enjoyed my previous post. This is the second set of questions, exclusively for big data engineers.

Read QUIZ-1.

QUESTION 1
You have submitted a job on an input file that has 400 input splits in HDFS. How many map tasks will run?
A. At most 400.
B. At least 400.
C. Between 400 and 1200.
D. Between 100 and 400.
Ans: B (one map task runs per input split; failed or speculative attempts can only add to that number, so at least 400)

QUESTION 2
What is not true about LocalJobRunner mode? (Choose two)
A. It requires the JobTracker to be up and running.
B. It runs the Mapper and Reducer in a single process.
C. It stores output in the local file system.
D. It allows use of the Distributed Cache.
Ans: A, D

QUESTION 3
What command will you use to run a driver named “SalesAnalysis”, whose compiled code is packaged in the jar file “SalesAnalytics.jar”, with input data in the directory “/sales/data” and output in the directory “/sales/analysis”?
A. hadoop fs -jar SalesAnalytics.jar SalesAnalysis -input /sales/data -output /sales/analysis
B. hadoop fs jar SalesAnalytics.jar -input /sales/data -output /sales/analysis
C. hadoop -jar SalesAnalytics.jar SalesAnalysis -input /sales/data -output /sales/analysis
D. hadoop jar SalesAnalytics.jar SalesAnalysis /sales/data /sales/analysis
Ans: D
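
For context, below is a minimal driver sketch that the command in option D would launch. The class name, jar name and directory names come from the question; everything else is illustrative and not the author's code.

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Driver class named SalesAnalysis, packaged into SalesAnalytics.jar.
public class SalesAnalysis extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // args[0] = input directory, args[1] = output directory,
        // exactly as they are passed on the command line in option D.
        Job job = Job.getInstance(getConf(), "sales analysis");
        job.setJarByClass(SalesAnalysis.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // Launched as: hadoop jar SalesAnalytics.jar SalesAnalysis /sales/data /sales/analysis
        System.exit(ToolRunner.run(new SalesAnalysis(), args));
    }
}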

QUESTION 4
A MapReduce program takes a text file in which each line is treated as one complete record, with the line offset as the key. The map method parses each record into words and, for every word, emits multiple key-value pairs where the key is the word itself and the values are the individual characters of that word. The reducer then finds the characters used in each unique word. The program works correctly, but it creates many intermediate key-value pairs in the mappers' output from a single input key-value pair. This increases which of the following? (Select one)
A. Disk I/O and network traffic.
B. Memory footprint of the mappers and network traffic.
C. Disk I/O and memory footprint of the mappers.
D. Block size and disk I/O.
Ans: A
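
A rough sketch of the mapper described above (illustrative only, assuming the standard text input key/value types): for every word it emits one (word, character) pair per character, so a single input record fans out into many intermediate records that must be spilled to local disk and shuffled across the network.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Illustrative mapper: one (word, character) output pair per character of every word.
public class WordCharMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        for (String word : line.toString().split("\\s+")) {
            for (char c : word.toCharArray()) {
                // A single input record fans out into many intermediate records,
                // all of which are spilled to local disk and shuffled to reducers.
                context.write(new Text(word), new Text(String.valueOf(c)));
            }
        }
    }
}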

QUESTION 5
What is true about HDFS? (Select one)
A. It is suitable for storing a large number of small files.
B. It is suitable for storing a small number of small files.
C. It is suitable for storing a large number of large files.
D. It is suitable for storing a small number of large files.
Ans: C

QUESTION 6
You have just executed a MapReduce job. Where is the intermediate data written after being emitted from the Mapper's map method?
A. The intermediate data is transmitted directly to the reducer and is not written to disk anywhere.
B. The intermediate data is written to HDFS.
C. The intermediate data is written to in-memory buffers that spill over to the local file system of the TaskTracker machine where the mapper task runs.
D. The intermediate data is written to in-memory buffers that spill over to the local file system of the TaskTracker machine where the reducer task runs.
E. The intermediate data is written to in-memory buffers that spill over to HDFS on the TaskTracker machine where the reducer task runs.
Ans: C (map output is spilled to the mapper's local disk; it is never written to HDFS)
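
The in-memory buffer and local-disk spill described in option C are configurable. Below is a hedged sketch of the relevant settings, using the MRv1-era property names that match the question's TaskTracker terminology (MRv2 renamed them to mapreduce.task.io.sort.mb and mapreduce.map.sort.spill.percent).

import org.apache.hadoop.conf.Configuration;

// Map-side buffer settings that control the spill to the local file system.
public class SpillSettings {

    public static Configuration withSpillTuning() {
        Configuration conf = new Configuration();
        conf.setInt("io.sort.mb", 200);                 // size of the in-memory sort buffer, in MB
        conf.setFloat("io.sort.spill.percent", 0.80f);  // fill ratio at which the buffer spills to local disk
        return conf;
    }
}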

QUESTION 7
You are developing a MapReduce job for reporting. The mapper will process input keys representing the year (IntWritable) and input values representing product identifiers (Text). Identify what determines the data types used by the Mapper for a given job.
A. The key and value types specified in the JobConf.setMapInputKeyClass and JobConf.setMapInputValueClass methods.
B. The data types specified in the HADOOP_MAP_DATATYPES environment variable.
C. The mapper-specification.xml file submitted with the job determines the mapper's input key and value types.
D. The InputFormat used by the job determines the mapper's input key and value types.
Ans: D
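
As a sketch of why option D is the answer: the mapper's input types must match whatever the job's InputFormat produces. Here a SequenceFileInputFormat holding (IntWritable year, Text product) records is assumed to fit the scenario; the class and method names below are illustrative, not the author's code.

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;

public class ReportJobSketch {

    // Input key/value types (IntWritable, Text) must agree with what the
    // InputFormat emits; they are not set through any JobConf method.
    public static class YearProductMapper
            extends Mapper<IntWritable, Text, IntWritable, Text> {
    }

    public static void configure(Job job) {
        // Sequence files of (IntWritable year, Text product) records are assumed;
        // the InputFormat is what hands those types to the mapper.
        job.setInputFormatClass(SequenceFileInputFormat.class);
        job.setMapperClass(YearProductMapper.class);
    }
}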

QUESTION 8
What types of algorithms are difficult to express in MapReduce v1 (MRv1)?
A. Algorithms that require applying the same mathematical function to large numbers of individual binary records.
B. Relational operations on large amounts of structured and semi-structured data.
C. Algorithms that require a global, shared state.
D. Large-scale graph algorithms that require one-step link traversal.
E. Text analysis algorithms on large collections of unstructured text (e.g. a web crawl).
Ans: C

QUESTION 9
You wrote a map function that throws a runtime exception when it encounters any control character in the input data. The input you supplied contains 12 such characters spread across five input splits: the first four splits have two control characters each, and the fifth split has four.
Identify the number of failed task attempts if the job is run with mapred.max.map.attempts = 4.
A. You will have 48 failed task attempts.
B. You will have 12 failed task attempts.
C. You will have 5 failed task attempts.
D. You will have 20 failed task attempts.
E. You will have 17 failed task attempts.
Ans: D
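
The arithmetic behind option D: each of the five input splits gets one map task, every attempt of every task hits a control character and throws, and each task is allowed mapred.max.map.attempts = 4 attempts, so 5 x 4 = 20 failed task attempts. A sketch of such a mapper and the retry setting follows; it is illustrative, not the author's code.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;

// Illustrative mapper that fails on control characters, plus the retry setting.
public class ControlCharMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        for (char c : line.toString().toCharArray()) {
            if (Character.isISOControl(c)) {
                // Every attempt of a task whose split contains a control character fails here.
                throw new RuntimeException("Control character in input near offset " + offset);
            }
        }
    }

    public static void configure(Job job) {
        // MRv1 property name from the question; MRv2 calls it mapreduce.map.maxattempts.
        job.getConfiguration().setInt("mapred.max.map.attempts", 4);
    }
}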

QUESTION 10
Which programming languages are supported for MapReduce?
A. The most common programming language is Java, but scripting languages are also supported via Hadoop Streaming.
B. Any programming language that can comply with the MapReduce concept can be supported.
C. Only Java is supported, since Hadoop was written in Java.
D. Currently MapReduce supports Java, C, C++ and COBOL.
Ans: A, B

QUESTION 11
What is true about LocalJobRunner?
A. It can be configured with as many reducers as it needs.
B. You can use “Partitioners”.
C. It can use the local file system as well as HDFS.
D. It can only use the local file system.
Ans: D
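
LocalJobRunner mode is selected through configuration. Below is a hedged sketch of the switch, using the MRv2 property name (MRv1 used mapred.job.tracker=local); the mapper, reducer and file I/O then all run in a single local JVM.

import org.apache.hadoop.conf.Configuration;

// Switching a job to LocalJobRunner.
public class LocalModeConfig {

    public static Configuration localConf() {
        Configuration conf = new Configuration();
        conf.set("mapreduce.framework.name", "local"); // MRv2; MRv1 used mapred.job.tracker=local
        conf.set("fs.defaultFS", "file:///");          // keep input and output on the local file system
        return conf;
    }
}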
