
Big data: Quiz-2 Hadoop Top Interview Questions

I hope you enjoyed my previous post. This is the second set of questions, exclusively for big data engineers.

Read QUIZ-1.

QUESTION 1
You have submitted a job on an input file that has 400 input splits in HDFS. How many map tasks will run?
A. At most 400.
B. At least 400
C. Between 400 and 1200.
D. Between 100 and 400.
Ans: b (one map task runs per input split, so at least 400; speculative execution may launch additional attempts)
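The split count itself follows from file size divided by split size (by default the HDFS block size). A quick illustrative sketch, with the 128 MB split size and the 50 GB file size chosen as assumptions to produce exactly 400 splits:

```python
import math

def num_input_splits(file_size_bytes, split_size_bytes=128 * 1024 * 1024):
    """Approximate Hadoop's split count: one split per split-size chunk."""
    return max(1, math.ceil(file_size_bytes / split_size_bytes))

# A 50 GB file with 128 MB splits yields 400 splits, hence 400 map tasks at minimum.
print(num_input_splits(50 * 1024**3))  # 400
```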

QUESTION 2

Which statements are not true about LocalJobRunner mode? (Choose two)
A. It requires JobTracker up and running.
B. It runs Mapper and Reducer in one single process
C. It stores output in local file system
D. It allows use of Distributed Cache.

Ans: a, d

QUESTION 3
What command will you use to run a driver named "SalesAnalysis", whose compiled code is in the jar file "SalesAnalytics.jar", with input data in directory "/sales/data" and output in directory "/sales/analysis"?
A. hadoop fs -jar SalesAnalytics.jar SalesAnalysis -input /sales/data -output /sales/analysis
B. hadoop fs jar SalesAnalytics.jar -input /sales/data -output /sales/analysis
C. hadoop -jar SalesAnalytics.jar SalesAnalysis -input /sales/data -output /sales/analysis
D. hadoop jar SalesAnalytics.jar SalesAnalysis /sales/data /sales/analysis
Ans: d

QUESTION 4
A MapReduce program takes a text file in which each line is one complete record and the line's byte offset is the key. The map method parses each record into words and, for each word, emits multiple key-value pairs whose key is the word itself and whose values are the characters in the word. The reducer finds the characters used in each unique word. The program works correctly, but it creates many intermediate key-value pairs in the mappers' output from a single input pair. Which of the following does this increase? (Select one)
A. Disk-io and network traffic.
B. Memory foot-print of mappers and network traffic.
C. Disk-io and memory foot print of mappers
D. Block size and disk-io
Ans: a (the larger intermediate output means more spill to disk and more data shuffled across the network)
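The fan-out described above can be sketched as plain Python functions (illustrative only, not actual Hadoop code), showing how one input record becomes many intermediate pairs:

```python
def map_fn(offset, line):
    """Emit one (word, character) pair per character of each word."""
    pairs = []
    for word in line.split():
        for ch in word:
            pairs.append((word, ch))
    return pairs

def reduce_fn(word, chars):
    """Collect the unique characters used in a word."""
    return (word, sorted(set(chars)))

# A single 3-word input record fans out into 15 intermediate pairs.
pairs = map_fn(0, "hadoop map reduce")
print(len(pairs))  # 15
```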

QUESTION 5
What is true about HDFS? (Select one)
A. It is suitable for storing a large number of small files.
B. It is suitable for storing a small number of small files.
C. It is suitable for storing a large number of large files.
D. It is suitable for storing a small number of large files.
Ans: c
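The reason small files hurt is NameNode memory: a commonly cited rule of thumb (taken here as an assumption) is roughly 150 bytes of NameNode heap per file, directory, or block object. A back-of-the-envelope comparison of the same data stored as many small files versus a few large ones:

```python
BYTES_PER_OBJECT = 150  # rough rule of thumb for NameNode heap per file/block object

def namenode_bytes(num_files, blocks_per_file):
    # one metadata object per file plus one per block
    return (num_files + num_files * blocks_per_file) * BYTES_PER_OBJECT

# ~1 TB as ten million 100 KB files (1 block each)
small = namenode_bytes(10_000_000, 1)
# ~1 TB as eighty 12.5 GB files (100 blocks each)
large = namenode_bytes(80, 100)
print(small, large)  # small files cost orders of magnitude more heap
```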

QUESTION 6
You have just executed a MapReduce job. Where is the intermediate data written after being emitted from the mapper's map method?
A. The intermediate data is directly transmitted to reducer and is not written anywhere in the disk.
B. The intermediate data is written to HDFS.
C. The intermediate data is written to the in-memory buffers which spill over to the local file system of the tasktracker’s machine where the mapper task is run.
D. The intermediate data is written to the in-memory buffers which spill over to the local file system of the tasktracker’s machine where the reducer task is run.
E. The intermediate data is written to the in-memory buffers which spill over to HDFS of the tasktracker’s machine where the reducer task is run.
Ans: c (map output spills from in-memory buffers to the local file system of the node running the map task; it is never written to HDFS)
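The spill mechanism works on a threshold: in classic MRv1, the sort buffer is sized by io.sort.mb (default 100 MB) and spills to local disk once io.sort.spill.percent (default 0.80) of it fills. A toy simulation of that rule, with the defaults above as assumptions:

```python
def count_spills(record_sizes, buffer_mb=100, spill_percent=0.80):
    """Count how many times the map-side sort buffer spills to local disk."""
    threshold = buffer_mb * 1024 * 1024 * spill_percent
    used, spills = 0, 0
    for size in record_sizes:
        used += size
        if used >= threshold:
            spills += 1   # background thread writes the buffer to local disk
            used = 0      # buffer space reclaimed after the spill completes
    return spills

# 1000 records of 1 MB each against an 80 MB effective threshold -> 12 spills
print(count_spills([1024 * 1024] * 1000))  # 12
```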

QUESTION 7
You are developing a MapReduce job for reporting. The mapper will process input keys representing the year (IntWritable) and input values representing product identities (Text). Identify what determines the data types used by the mapper for a given job.
A. The key and value types specified in the JobConf.setMapInputKeyClass and JobConf.setMapInputValueClass methods.
B. The data types specified in HADOOP_MAP_DATATYPES environment variable.
C. The mapper-specification.xml file submitted with the job determine the mapper’s input key and value types.
D. The InputFormat used by the job determines the mapper’s input key and value types.
Ans: d
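For example, TextInputFormat hands the mapper a LongWritable byte offset as the key and a Text line as the value. A Python analogue of that record-reader contract (illustrative sketch only, not Hadoop's implementation):

```python
def text_input_format(data: bytes):
    """Yield (byte_offset, line) records, mimicking TextInputFormat's
    LongWritable-key / Text-value contract."""
    offset = 0
    for line in data.splitlines(keepends=True):
        yield offset, line.rstrip(b"\r\n").decode()
        offset += len(line)

records = list(text_input_format(b"year=2012\nyear=2013\n"))
print(records[1])  # (10, 'year=2013')
```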

QUESTION 8
What types of algorithms are difficult to express in MapReduce v1 (MRv1)?
A. Algorithms that require applying the same mathematical function to large numbers of individual binary records.
B. Relational operations on large amounts of structured and semi-structured data.
C. Algorithms that require a global, shared state.
D. Large-scale graph algorithms that require one-step link traversal.
E. Text analysis algorithms on large collections of unstructured text (e.g., a web crawl).
Ans: c

QUESTION 9
You wrote a map function that throws a runtime exception when it encounters any control character in the input data. The input you supplied has 12 such characters spread across five input splits: the first four splits have 2 control characters each, and the fifth has 4.
Identify the number of failed tasks if the job is run with mapred.max.map.attempts=4.
A. You will have 48 failed tasks.
B. You will have 12 failed tasks.
C. You will have 5 failed tasks.
D. You will have 20 failed tasks.
E. You will have 17 failed tasks.
Ans: d (every attempt fails on the first control character it hits, so each of the 5 map tasks is attempted 4 times: 20 failed attempts)
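The number of control characters per split is a distractor: an attempt throws on the first one it encounters, so only the task count and the retry limit matter. The arithmetic, on that reading of the question:

```python
def failed_attempts(num_failing_tasks, max_map_attempts=4):
    # each failing task is attempted max_map_attempts times before the job fails
    return num_failing_tasks * max_map_attempts

# 5 map tasks (one per split), each retried 4 times
print(failed_attempts(5))  # 20
```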

QUESTION 10
What are the supported programming languages for MapReduce?
A.  The most common programming language is Java, but scripting languages are also supported via Hadoop streaming.
B.  Any programming language that can comply with Map Reduce concept can be supported.
C. Only Java is supported, since Hadoop was written in Java.
D.  Currently Map Reduce supports Java, C, C++ and COBOL.
Ans: a, b
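Hadoop Streaming runs any executable that reads records on stdin and writes tab-separated key-value pairs to stdout, which is how scripting languages plug in. A minimal streaming-style word-count mapper in Python (the stdin/stdout contract is real; the in-script sample input is an assumption for illustration):

```python
import sys

def stream_map(lines, out=sys.stdout):
    """Hadoop Streaming mapper: emit 'word<TAB>1' for every word seen."""
    for line in lines:
        for word in line.split():
            out.write(f"{word}\t1\n")

# In a real job this script would be passed to hadoop-streaming.jar via -mapper
# and read sys.stdin; here we feed it sample lines directly.
stream_map(["hello hadoop", "hello streaming"])
```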

QUESTION 11

What is true about LocalJobRunner?
A. It can be configured with as many reducers as needed.
B. You can use “Partitioners”.
C. It can use local file system as well as HDFS.
D. It can only use local file system.
Ans: d
