
Big data: Quiz-1 Hadoop Top Interview Questions

Q.1) How does Hadoop achieve scaling in terms of storage?
A. By increasing the hard disk capacity of the machine
B. By increasing the RAM capacity of the machine
C. By increasing both the hard disk and RAM capacity of the machine
D. By increasing the hard disk capacity of the machine and by adding more machines

Q.2) How is fault tolerance with respect to data achieved in Hadoop?
A. By breaking the data into smaller blocks and distributing these blocks across several machines
B. By adding extra nodes
C. By breaking the data into smaller blocks, copying each block several times, and distributing these replicas across several machines. This way, even if a machine fails, a replica is still present on some other machine
D. None of these

Q.3) In which parameters does Hadoop scale up?
A. Storage only
B. Performance only
C. Both storage and performance
D. Storage, performance, and IO bandwidth

Q.4) What is the scalability limit of Hadoop?
A. NameNode’s RAM
B. NameNode’s hard disk
C. Both Hard disk and RAM of the NameNode
D. Hadoop can scale up to any limit

Q.5) How does Hadoop make reads faster?
A. Hadoop uses high-end machines which have lower disk latency
B. Hadoop minimizes seek time by reading a full block of data at once
C. By adding more machines to the cluster, so that it can read the data faster
D. By increasing the hard disk size of the machine where the data is stored
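For reference, on a reasonably recent Hadoop 2.x/3.x install you can check the configured block size yourself (the usual default is 128 MB, reported in bytes):
    hdfs getconf -confKey dfs.blocksize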

Q.6) What is HDFS?
A. HDFS is a regular file system like any other file system, and you can perform any operation on it
B. HDFS is a layered file system on top of your native file system, and you can do all the operations you want
C. HDFS is a layered file system which modifies the local file system in such a way that you can perform any operation
D. HDFS is a layered file system on top of your local file system which does not modify the local file system, and there are some restrictions on the operations you can perform

Q.7) When you put a file on HDFS, what does it do?
A. The file is broken into blocks, each block is replicated, the replicas are distributed over the machines, and the NameNode updates its metadata
B. The file is replicated and distributed across several machines, and the NameNode updates its metadata
C. The file is broken into blocks, each block is replicated and distributed across machines, and the DataNodes update their metadata
D. The file is kept as it is on the machines, along with the replicas
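A quick way to see this in practice is the HDFS fsck report, which lists how a file was split into blocks and where each replica was placed. The path below is only a placeholder:
    hdfs fsck /user/demo/sample_hdfs -files -blocks -locations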

Q.8) When you put files on HDFS, where does HDFS store its blocks?
A. On HDFS
B. On the NameNode's local file system
C. On the DataNodes' local file systems
D. Blocks are placed on both the NameNode's and the DataNodes' local file systems, so that if a DataNode goes down, the NameNode can replicate the data from its own local file system
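As a rough illustration, on a DataNode the blocks end up as ordinary blk_* files under the directory configured by dfs.datanode.data.dir. The exact layout varies by Hadoop version, and /data/dfs/dn here is just an assumed path:
    ls /data/dfs/dn/current/BP-*/current/finalized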
  
Q.9) What happens if the NameNode goes down?
A. The Secondary NameNode takes charge and starts serving the DataNodes
B. The NameNode is a single point of failure; an administrator has to manually restart it, and until then HDFS is inaccessible
C. The Secondary NameNode asks one of the DataNodes to take up the role of the NameNode, so that there is no interruption in the service
D. None of these

Q.10) Does Hadoop efficiently solve every kind of problem?
A. Yes, it is like any other framework and is capable of solving any problem efficiently
B. Hadoop solves very efficiently those problems where the pieces of data are independent of each other
C. Hadoop can solve only data-intensive problems efficiently
D. Hadoop can solve only computation-intensive problems efficiently

Q.11) If a file is broken into blocks and distributed across machines, then how do you read back the file?
A. You search each of the DataNodes and ask them for their list of blocks, then check each block and read the appropriate ones
B. You ask the NameNode, and since the NameNode has the meta information, it reads the data from the DataNodes and gives the file back to you
C. You ask the NameNode, and since the NameNode has the meta information, it gives you the list of DataNodes hosting the blocks; you then go to each of those DataNodes and read the blocks
D. You directly read the file from HDFS
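In practice this whole exchange is hidden behind a single client command: the client asks the NameNode for the block locations and then streams the blocks directly from the DataNodes. The path below is only a placeholder:
    hadoop fs -cat /user/demo/sample_hdfs | head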

Q.12) What is the command to copy a file from the client's local machine to HDFS? Assume a file named "sample" is present under the "/usr/local" directory, and the client wants to copy it to HDFS under the name "sample_hdfs".
A. hadoop fs -cp /usr/local/sample sample_hdfs
B. hadoop fs -copyFromLocal /usr/local/sample sample_hdfs
C. hadoop fs -get sample_hdfs /usr/local/sample
D. hadoop fs -put sample_hdfs /usr/local/sample
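As a reminder, the two equivalent local-to-HDFS upload forms look like this, using the paths from the question:
    hadoop fs -copyFromLocal /usr/local/sample sample_hdfs
    hadoop fs -put /usr/local/sample sample_hdfs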

Q.13) Will the following command execute successfully, or will it throw an exception: "hadoop fs -setrep 0 sample", where sample is a file present on HDFS?
A. This command will not throw any exception
B. This command might throw an exception when the size of the sample file is greater than the block size
C. Yes, this command will throw an exception, as you cannot set the replication factor to 0
D. This command will throw an exception only when the size of the sample file is less than the block size
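For comparison, a valid invocation looks like the following, with the file name taken from the question; the optional -w flag simply waits until the new replication factor is actually reached:
    hadoop fs -setrep 2 sample
    hadoop fs -setrep -w 2 sample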

Q.14) There are two files, file_1 and file_2, on HDFS under the directory "foo". What is the result of the command "hadoop fs -getmerge foo foo"?
A. It will create a directory "foo" on the local file system, and file_1 and file_2 will be copied under this directory
B. It will create a file "foo" on the local file system with the contents of file_1 and file_2 merged into it
C. This will throw an exception, as the getmerge command works only on files, not on directories
D. This command will throw an exception because the source and destination names are the same; they need to be different for this operation to be performed
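As an illustration with a distinct local destination (/tmp/foo_merged is an arbitrary choice), getmerge concatenates every file under the HDFS directory into a single local file:
    hadoop fs -getmerge foo /tmp/foo_merged
    cat /tmp/foo_merged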
