Big data: Quiz-2 Hadoop Top Interview Questions

I hope you enjoyed my previous post. This is the second set of questions, exclusively for Big Data engineers.

Read QUIZ-1.

QUESTION 1
You have submitted a job on an input file that has 400 input splits in HDFS. How many map tasks will run?
A. At most 400.
B. At least 400.
C. Between 400 and 1200.
D. Between 100 and 400.
Ans: C

QUESTION 2
What is not true about LocalJobRunner mode? (Choose two)
A. It requires the JobTracker to be up and running.
B. It runs the Mapper and Reducer in a single process.
C. It stores output in the local file system.
D. It allows use of the Distributed Cache.
Ans: A, D

QUESTION 3
What command will you use to run a driver named "SalesAnalysis", whose compiled code is available in the jar file "SalesAnalytics.jar", with input data in the directory "/sales/data" and output in the directory "/sales/analysis"?
A. hadoopfs –jar SalesAnalytics.jar SalesAnalysis -input /sales/data -output /sales/analysis
B. hadoopfs jar SalesAnalytics.jar -input /sales/data -output /sales/analysis
C. hadoop –jar SalesAnalytics.jar SalesAnalysis -input /sales/data -output /sales/analysis
D. hadoop jar SalesAnalytics.jar SalesAnalysis /sales/data /sales/analysis
Ans: D
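
Option D follows the standard launcher form "hadoop jar <jar> <main-class> <args...>", with the driver reading its input and output paths from the argument list. Here is a minimal, hedged sketch of such a driver using the classic MRv1 API; the job name and the default identity mapper/reducer are illustrative assumptions, not taken from the question:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class SalesAnalysis {
    public static void main(String[] args) throws Exception {
        // Invoked as: hadoop jar SalesAnalytics.jar SalesAnalysis /sales/data /sales/analysis
        JobConf conf = new JobConf(SalesAnalysis.class);
        conf.setJobName("sales-analysis"); // illustrative name, not from the post
        FileInputFormat.setInputPaths(conf, new Path(args[0]));  // /sales/data
        FileOutputFormat.setOutputPath(conf, new Path(args[1])); // /sales/analysis
        JobClient.runJob(conf); // this sketch falls back to the identity mapper/reducer
    }
}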

QUESTION 4
A MapReduce program takes a text file where each line is treated as one complete record, with the line offset as the key. The map method parses each record into words, and for each word it creates multiple key-value pairs where the key is the word itself and the values are the characters in the word. The reducer then finds the characters used in each unique word. This may not be a perfect program, but it works correctly. Its problem is that it creates many key-value pairs in the mappers' intermediate output from a single input key-value pair. Which of the following does this increase? (Select the correct answer)
A. Disk I/O and network traffic.
B. Memory footprint of mappers and network traffic.
C. Disk I/O and memory footprint of mappers.
D. Block size and disk I/O.
Ans:
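
To picture the fan-out, here is a hedged sketch of a map method in this style (classic MRv1 API; the class name is illustrative). A single short line already produces one intermediate pair per character of every word, so the mappers' output volume grows much faster than the input:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class WordCharsMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {
    public void map(LongWritable offset, Text line,
                    OutputCollector<Text, Text> out, Reporter reporter)
            throws IOException {
        for (String word : line.toString().split("\\s+")) {
            // One (word, character) pair per character: one input record
            // fans out into many intermediate pairs.
            for (char c : word.toCharArray()) {
                out.collect(new Text(word), new Text(String.valueOf(c)));
            }
        }
    }
}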

QUESTION 5
What is true about HDFS? (Select one)
A. It is suitable for storing a large number of small files.
B. It is suitable for storing a small number of small files.
C. It is suitable for storing a large number of large files.
D. It is suitable for storing a small number of large files.
Ans: C

QUESTION 6
You have just executed a MapReduce job. Where is the intermediate data written after being emitted from the mapper's map method?
A. The intermediate data is transmitted directly to the reducer and is not written anywhere on disk.
B. The intermediate data is written to HDFS.
C. The intermediate data is written to in-memory buffers, which spill over to the local file system of the tasktracker machine where the map task runs.
D. The intermediate data is written to in-memory buffers, which spill over to the local file system of the tasktracker machine where the reduce task runs.
E. The intermediate data is written to in-memory buffers, which spill over to HDFS of the tasktracker machine where the reduce task runs.
Ans: C
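
The buffer in question is tunable. A hedged sketch using the classic MRv1 property names (the defaults shown are the historical ones):

import org.apache.hadoop.mapred.JobConf;

public class SpillTuning {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Map output is collected into a circular in-memory buffer; once it
        // fills past the threshold, it spills to mapred.local.dir on the
        // map task's node, never to HDFS.
        conf.setInt("io.sort.mb", 100);            // buffer size in MB (historical default: 100)
        conf.set("io.sort.spill.percent", "0.80"); // start spilling at 80% full
    }
}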

QUESTION 7
You are developing a MapReduce job for reporting. The mapper will process input keys representing the year (IntWritable) and input values representing product identities (Text). What determines the data types used by the Mapper for a given job?
A. The key and value types specified in the JobConf.setMapInputKeyClass and JobConf.setMapInputValueClass methods.
B. The data types specified in the HADOOP_MAP_DATATYPES environment variable.
C. The mapper-specification.xml file submitted with the job determines the mapper's input key and value types.
D. The InputFormat used by the job determines the mapper's input key and value types.
Ans: D
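
In other words, the mapper does not choose its own input types; the job's InputFormat does. A hedged MRv1 sketch (the class name is illustrative):

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.KeyValueTextInputFormat;
import org.apache.hadoop.mapred.TextInputFormat;

public class InputTypesDemo {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        // With TextInputFormat, the mapper receives <LongWritable offset, Text line>.
        conf.setInputFormat(TextInputFormat.class);
        // With KeyValueTextInputFormat, it receives <Text key, Text value>,
        // split on a tab character by default.
        conf.setInputFormat(KeyValueTextInputFormat.class);
    }
}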

QUESTION 8
What types of algorithms are difficult to express in MapReduce v1 (MRv1)?
A. Algorithms that require applying the same mathematical function to large numbers of individual binary records.
B. Relational operations on large amounts of structured and semi-structured data.
C. Algorithms that require global, shared state.
D. Large-scale graph algorithms that require one-step link traversal.
E. Text analysis algorithms on large collections of unstructured text (e.g. web crawls).
Ans: C

QUESTION 9
You wrote a map function that throws a runtime exception when it encounters any control character in the input data. The input you supplied has 12 such characters spread across five input splits: the first four input splits have 2 control characters each, and the 5th input split has 4 control characters.
Identify the number of failed tasks if the job is run with mapred.max.map.attempts = 4.
A. You will have 48 failed tasks.
B. You will have 12 failed tasks.
C. You will have 5 failed tasks.
D. You will have 20 failed tasks.
E. You will have 17 failed tasks.
Ans:
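
A hint on the arithmetic, since the attempt counts are what matter here: each task attempt dies on the first control character it meets, so the number of control characters per split is a distractor. Each of the 5 map tasks can be retried up to mapred.max.map.attempts = 4 times before the job fails, so if every task exhausts its attempts you get 5 × 4 = 20 failed task attempts.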

QUESTION 10
Which programming languages are supported for MapReduce?
A. The most common language is Java, but scripting languages are also supported via Hadoop Streaming.
B. Any programming language that can comply with the MapReduce concept can be supported.
C. Only Java is supported, since Hadoop was written in Java.
D. Currently MapReduce supports Java, C, C++ and COBOL.
Ans: A, B

QUESTION 11
What is true about LocalJobRunner?
A. It can configure as many reducers as it needs.
B. You can use "Partitioners".
C. It can use the local file system as well as HDFS.
D. It can only use the local file system.
Ans: D
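
Local mode is just a configuration switch. A hedged MRv1-style sketch (these are the classic property names):

import org.apache.hadoop.mapred.JobConf;

public class LocalModeDemo {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        // MRv1 switch for LocalJobRunner: the whole job runs in a single JVM,
        // so no JobTracker is needed and at most one reducer is used.
        conf.set("mapred.job.tracker", "local");
        // Point the default file system at local disk as well.
        conf.set("fs.default.name", "file:///");
    }
}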
