
Big data: Quiz-1 Hadoop Top Interview Questions

Q.1) How does Hadoop achieve scaling in terms of storage?
A. By increasing the hard disk capacity of the machine
B. By increasing the RAM capacity of the machine
C. By increasing both the hard disk and RAM capacity of the machine
D. By increasing the hard disk capacity of the machines and by adding more machines

Q.2) How is fault tolerance with respect to data achieved in Hadoop?
A. By breaking the data into smaller blocks and distributing these smaller blocks across several machines
B. By adding extra nodes
C. By breaking the data into smaller blocks, copying each block several times, and distributing these replicas across several machines. This way, even if a machine fails, a replica of its data is present on some other machine
D. None of these

Q.3) Along which parameters does Hadoop scale up?
A. Storage only
B. Performance only
C. Both storage and performance
D. Storage, performance, and IO bandwidth

Q.4) What is the scalability limit of Hadoop?
A. The NameNode’s RAM
B. The NameNode’s hard disk
C. Both the hard disk and RAM of the NameNode
D. Hadoop can scale up to any limit

Q.5) How does Hadoop make reading faster?
A. Hadoop uses high-end machines which have lower disk latency
B. Hadoop minimizes disk seeks by reading a full block of data at once
C. By adding more machines to the cluster, so that it can read the data faster
D. By increasing the hard disk size of the machine where the data is stored

Q.6) What is HDFS?
A. HDFS is a regular file system like any other, and you can perform any operation on it
B. HDFS is a layered file system on top of your native file system, and you can do all the operations you want
C. HDFS is a layered file system which modifies the local file system in such a way that you can perform any operation
D. HDFS is a layered file system on top of your local file system which does not modify the local file system, and there are some restrictions on the operations you can perform

Q.7) When you put a file on HDFS, what does HDFS do?
A. The file is broken into blocks, each block is replicated, the replicas are distributed over the machines, and the NameNode updates its metadata
B. The file is replicated and distributed across several machines, and the NameNode updates its metadata
C. The file is broken into blocks, each block is replicated and distributed across machines, and the DataNodes update their metadata
D. The file is kept as it is on the machines, along with the replicas

Q.8) When you put files on HDFS, where does HDFS store their blocks?
B. On the NameNode’s local file system
C. On the DataNodes’ local file systems
D. Blocks are placed on both the NameNode’s and the DataNodes’ local file systems, so that if a DataNode goes down, the NameNode can replicate the data from its own local file system

Q.9) What if the NameNode goes down?
A. The Secondary NameNode takes charge and starts serving the DataNodes
B. The NameNode is a single point of failure; the administrator has to manually restart it. Until then, HDFS is inaccessible
C. The Secondary NameNode asks one of the DataNodes to take charge as the NameNode, so that there is no interruption in service
D. None of these

Q.10) Does Hadoop efficiently solve every kind of problem?
A. Yes, it is like any other framework and is capable of solving any problem efficiently
B. Hadoop can solve problems very efficiently where the data items are independent of each other
C. Hadoop can solve only data-intensive problems efficiently
D. Hadoop can solve only computation-intensive problems efficiently

Q.11) If a file is broken into blocks and distributed across machines, how do you read back the file?
A. You ask each of the DataNodes for its list of blocks, then check each block and read the appropriate ones
B. You ask the NameNode, and since the NameNode has the metadata, it reads the data from the DataNodes and gives the file back to you
C. You ask the NameNode, and since the NameNode has the metadata, it gives you the list of DataNodes hosting the blocks; you then go to each of those DataNodes and read the blocks
D. You read the file directly from HDFS
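The block locations that the NameNode hands out can also be inspected from the command line with the fsck tool. A quick sketch, assuming a running cluster; the file path here is hypothetical:

```shell
# Ask the NameNode which DataNodes host each block of a file.
# /user/demo/sample is a made-up path for illustration.
# -files -blocks -locations prints every block of the file together
# with the DataNodes that store its replicas.
hadoop fsck /user/demo/sample -files -blocks -locations
```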

Q.12) What is the command to copy a file from a client’s local machine to HDFS? Assume a file named “sample” is present in the “/usr/local” directory, and the client wants to copy it to HDFS under the name “sample_hdfs”.
A. hadoop fs -cp /usr/local/sample sample_hdfs
B. hadoop fs -copyFromLocal /usr/local/sample sample_hdfs
C. hadoop fs -get sample_hdfs /usr/local/sample
D. hadoop fs -put sample_hdfs /usr/local/sample
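As a sketch of the correct usage pattern: both -copyFromLocal and -put take the local source path first and the HDFS destination second. The paths follow the question’s example and assume a running cluster:

```shell
# Copy the local file /usr/local/sample to HDFS as sample_hdfs.
hadoop fs -copyFromLocal /usr/local/sample sample_hdfs

# -put is equivalent for this use case:
hadoop fs -put /usr/local/sample sample_hdfs

# Verify that the file landed on HDFS:
hadoop fs -ls sample_hdfs
</imports>
```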

Q.13) Will the following command execute successfully, or will it throw an exception: “hadoop fs -setrep 0 sample”, where sample is a file present on HDFS?
A. This command will not throw any exception
B. This command might throw an exception when the size of the sample file is greater than the block size
C. This command will throw an exception, as you cannot set the replication factor to 0
D. This command will throw an exception only when the size of the sample file is less than the block size
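A minimal sketch of how -setrep behaves, assuming a running cluster and a file named sample already present on HDFS:

```shell
# The replication factor must be at least 1, so this is rejected:
hadoop fs -setrep 0 sample

# Valid usage: set the replication factor of sample to 2.
hadoop fs -setrep 2 sample

# With -w, the command waits until the target replication is reached.
hadoop fs -setrep -w 3 sample
```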

Q.14) There are two files, file_1 and file_2, on HDFS under the directory “foo”. What is the result of the command hadoop fs -getmerge foo foo?
A. It will create a directory “foo” on the local file system, and file_1 and file_2 will be copied into it
B. It will create a file “foo” on the local file system with the contents of file_1 and file_2 merged into this file
C. This will throw an exception, as the getmerge command works only on files, not on directories
D. This command will throw an exception because the source and destination are the same; they need to be different for this operation to be performed
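getmerge concatenates every file under an HDFS source directory into a single file on the local file system. The same merging behaviour can be sketched locally with cat; the directory and file names here mirror the question but live on the local disk, not on HDFS:

```shell
# Create a local stand-in for the HDFS directory "foo" with two files.
mkdir -p foo
printf 'contents of file_1\n' > foo/file_1
printf 'contents of file_2\n' > foo/file_2

# getmerge-like behaviour: concatenate both files into one local file.
cat foo/file_1 foo/file_2 > merged

# merged now holds the contents of file_1 followed by file_2.
cat merged
```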

