
4 Layers of AWS Architecture: A Quick Answer

I have collected real interview questions on the key AWS architecture components: S3, EC2, SQS, and SimpleDB. AWS is one of the most in-demand skills in cloud computing, and many companies are recruiting software developers to work on it.

AWS Key Architecture Components

AWS is the leading cloud platform, and knowing it makes it easier to learn other cloud platforms. Below are questions asked in recent interviews.
What are the components involved in AWS?

Amazon S3. Provides storage: you can retrieve stored information by its key, and the output produced while building a cloud architecture can also be stored in this component under the key you specify.

Amazon EC2. Provides compute: helpful for running a large distributed system such as a Hadoop cluster. Automatic parallelization and job scheduling can be achieved with this component.

Amazon SQS. This component acts as a mediator between different controllers. It is also used for buffering requirements that are obt…
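The SQS role described above, a buffer between producers and consumers, can be illustrated with a local stand-in built on Python's queue module. This is only an analogy for the buffering idea, not the real SQS API (which you would reach through a client library such as boto3):

```python
import queue

# Local stand-in for an SQS queue: producers enqueue messages,
# consumers drain them at their own pace, decoupling the two sides.
def produce(q, messages):
    for m in messages:
        q.put(m)

def consume(q):
    drained = []
    while not q.empty():
        drained.append(q.get())
    return drained

buffer_q = queue.Queue()
produce(buffer_q, ["req-1", "req-2", "req-3"])
print(consume(buffer_q))  # -> ['req-1', 'req-2', 'req-3']
```

The point of the pattern is that the producing side never talks to the consuming side directly; the queue absorbs bursts of requests.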

Big Data Quiz-2: Top Hadoop Interview Questions

I hope you enjoyed my previous post. This is the second set of questions, exclusively for big data engineers.

Read QUIZ-1.

Q.1) You have submitted a job on an input file that has 400 input splits in HDFS. How many map tasks will run?
A. At most 400.
B. At least 400.
C. Between 400 and 1200.
D. Between 100 and 400.
Ans: C
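The rule behind this question: the framework launches one map task per input split, so the split count drives the map-task count (speculative execution and failed attempts can add more task attempts). A toy sketch of how the split count falls out of file size and split size, with hypothetical numbers:

```python
import math

def num_splits(file_size_bytes, split_size_bytes):
    """One input split per split-size chunk; one map task per split."""
    return math.ceil(file_size_bytes / split_size_bytes)

# e.g. a 50 GiB file with a 128 MiB split size:
print(num_splits(50 * 1024**3, 128 * 1024**2))  # -> 400
```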


What is not true about LocalJobRunner mode? (Choose two)
A. It requires the JobTracker to be up and running.
B. It runs the Mapper and Reducer in a single process.
C. It stores output in the local file system.
D. It allows use of the Distributed Cache.

Ans: A, D

What is the command you will use to run a driver named “SalesAnalysis” whose compiled code is available in the jar file “SalesAnalytics.jar”, with input data in the directory “/sales/data” and output in the directory “/sales/analytics”?
A. hadoopfs  –jar  SalesAnalytics.jar  SalesAnalysis  -input  /sales/data  -output /sales/analysis
B. hadoopfs  jar  SalesAnalytics.jar    -input  /sales/data  -output /sales/analysis
C. hadoop    –jar  SalesAnalytics.jar  SalesAnalysis  -input  /sales/data  -output /sales/analysis
D. hadoop  jar  SalesAnalytics.jar  SalesAnalysis   /sales/data   /sales/analysis

A MapReduce program takes a text file in which each line is one complete record, with the line offset as the key. The map method parses each record into words, and for each word it emits a key-value pair whose key is the word itself and whose value is the characters in the word. The reducer then finds the characters used in each unique word. The program may not be elegant, but it works correctly. Its problem is that it creates many key-value pairs in the mappers' intermediate output from a single input key-value pair. Which of the following does this increase? (Select the correct answer)
A. Disk I/O and network traffic.
B. Memory footprint of the mappers and network traffic.
C. Disk I/O and memory footprint of the mappers.
D. Block size and disk I/O.
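The blow-up described in the question is easy to see in a simplified, streaming-style mapper sketch (hypothetical code, not the program from the question): one input record fans out into one key-value pair per word, and every extra intermediate pair is data that must be spilled to local disk and shuffled across the network.

```python
def map_record(offset, line):
    """Emit (word, list-of-characters) for every word in one record.
    A single (offset, line) input pair fans out into many pairs."""
    pairs = []
    for word in line.split():
        pairs.append((word, list(word)))
    return pairs

pairs = map_record(0, "hadoop map reduce")
print(len(pairs))  # -> 3 intermediate pairs from 1 input pair
```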

What is true about HDFS? (Select one)
A. It is suitable for storing a large number of small files.
B. It is suitable for storing a small number of small files.
C. It is suitable for storing a large number of large files.
D. It is suitable for storing a small number of large files.
Ans: C

You have just executed a MapReduce job. Where is the intermediate data written after being emitted from the mapper’s map method?
A. The intermediate data is transmitted directly to the reducer and is not written anywhere on disk.
B. The intermediate data is written to HDFS.
C. The intermediate data is written to in-memory buffers, which spill over to the local file system of the tasktracker’s machine where the mapper task runs.
D. The intermediate data is written to in-memory buffers, which spill over to the local file system of the tasktracker’s machine where the reducer task runs.
E. The intermediate data is written to in-memory buffers, which spill over to HDFS on the tasktracker’s machine where the reducer task runs.
Ans: C
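Map output goes into an in-memory buffer that spills to the local disk of the node running the map task, never to HDFS. That sort-and-spill behavior can be sketched with a toy buffer (a deliberate simplification with a made-up threshold; the real machinery is sized by the io.sort.mb setting and uses a background spill thread):

```python
class SpillBuffer:
    """Toy model of the map-side buffer: records accumulate in memory
    and are flushed ("spilled") to local disk when a threshold is hit."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.in_memory = []
        self.spills = []  # each entry stands in for a local-disk spill file

    def emit(self, key, value):
        self.in_memory.append((key, value))
        if len(self.in_memory) >= self.threshold:
            self.spills.append(sorted(self.in_memory))  # spills are sorted
            self.in_memory = []

buf = SpillBuffer(threshold=2)
for kv in [("b", 1), ("a", 1), ("c", 1)]:
    buf.emit(*kv)
print(len(buf.spills))  # -> 1 spill so far; ("c", 1) is still in memory
```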

You are developing a MapReduce job for reporting. The mapper will process input keys representing the year (IntWritable) and input values representing product identities (Text). Identify what determines the data types used by the Mapper for a given job.
A. The key and value types specified in the JobConf.setMapInputKeyClass and JobConf.setMapInputValueClass methods.
B. The data types specified in HADOOP_MAP_DATATYPES environment variable.
C. The mapper-specification.xml file submitted with the job determine the mapper’s input key and value types.
D. The InputFormat used by the job determines the mapper’s input key and value types.
Ans: D

What types of algorithms are difficult to express in MapReduce v1 (MRv1)?
A. Algorithms that require applying the same mathematical function to large numbers of individual binary records.
B. Relational operations on large amounts of structured and semi-structured data.
C. Algorithms that require a global, shared state.
D. Large-scale graph algorithms that require one-step link traversal.
E. Text analysis algorithms on large collections of unstructured text (e.g., a web crawl).
Ans: C

You wrote a map function that throws a runtime exception when it encounters any control character in the input data. The input you supplied had 12 such characters spread across five input splits: the first four splits have two control characters each, and the fifth split has four.
Identify the number of failed tasks if the job is run with mapred.max.map.attempts = 4.
A. You will have 48 failed tasks.
B. You will have 12 failed tasks.
C. You will have 5 failed tasks.
D. You will have 20 failed tasks.
E. You will have 17 failed tasks.
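Under the simplifying assumption that every failing map task is retried the full mapred.max.map.attempts times, the count of failed task attempts is a single multiplication. (In practice the job is killed as soon as one task exhausts its attempts, so fewer attempts may actually run.)

```python
def failed_attempts(num_failing_tasks, max_attempts):
    """Each failing task is attempted max_attempts times; every
    attempt fails as soon as it hits a control character."""
    return num_failing_tasks * max_attempts

# All 5 input splits contain control characters, so all 5 map tasks fail:
print(failed_attempts(5, 4))  # -> 20
```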

What are the supported programming languages for MapReduce?
A. The most common programming language is Java, but scripting languages are also supported via Hadoop Streaming.
B. Any programming language that can comply with the MapReduce concept can be supported.
C. Only Java is supported, since Hadoop was written in Java.
D. Currently MapReduce supports Java, C, C++ and COBOL.
Ans: A, B
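As option A notes, scripting languages plug in via Hadoop Streaming: a mapper is any executable that reads input records from stdin and writes tab-separated key-value pairs to stdout. A minimal word-count-style mapper sketch in Python (the file name and submission command below are illustrative, not from this post):

```python
import sys

def stream_map(lines):
    """Emit "word\t1" for every word -- the contract a Hadoop
    Streaming mapper executable is expected to follow."""
    out = []
    for line in lines:
        for word in line.split():
            out.append(word + "\t1")
    return out

if __name__ == "__main__":
    # Hadoop Streaming feeds input records on stdin.
    for record in stream_map(sys.stdin):
        print(record)
```

Saved as something like mapper.py, it would be submitted with the streaming jar, roughly `hadoop jar hadoop-streaming.jar -mapper mapper.py ...` (paths illustrative).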


What is true about LocalJobRunner?
A. It can be configured with as many reducers as it needs.
B. You can use Partitioners.
C. It can use the local file system as well as HDFS.
D. It can only use the local file system.
Ans: D

