
Big Data Quiz 1: Hadoop Top Interview Questions

Q.1) How does Hadoop achieve scaling in terms of storage?
A. By increasing the hard disk capacity of the machine
B. By increasing the RAM capacity of the machine
C. By increasing both the hard disk and RAM capacity of the machine
D. By increasing the hard disk capacity of the machine and by adding more machines

Q.2) How is fault tolerance with respect to data achieved in Hadoop?
A. By breaking the data into smaller blocks and distributing these smaller blocks across several machines
B. By adding extra nodes
C. By breaking the data into smaller blocks, copying each block several times, and distributing these replicas across several machines. This way, even if a machine fails, a replica of its data is still present on some other machine
D. None of these
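Background for this question: the number of copies HDFS keeps of each block is the replication factor, controlled by the dfs.replication property (3 by default). A minimal sketch for inspecting it from the shell, assuming a file named "sample" already exists on HDFS and a recent Hadoop release where the -stat %r format option is available:

    # Print the replication factor (%r) of an HDFS file
    hadoop fs -stat "%r" sample

    # For files, the second column of a listing also shows the replication factor
    hadoop fs -ls sample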

Q.3) Along which parameters does Hadoop scale up?
A. Storage only
B. Performance only
C. Both storage and performance
D. Storage, performance, and I/O bandwidth

Q.4) What is the scalability limit of Hadoop?
A. NameNode’s RAM
B. NameNode’s hard disk
C. Both Hard disk and RAM of the NameNode
D. Hadoop can scale up to any limit

Q.5) How does Hadoop make reads faster?
A. Hadoop uses high-end machines which have lower disk latency
B. Hadoop minimizes disk seeks by reading the full block of data at once
C. By adding more machines to the cluster, so that it can read the data faster
D. By increasing the hard disk size of the machine where the data is stored
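Related to option B above: the unit of those large sequential reads is the HDFS block, whose size is set by the dfs.blocksize property (128 MB by default on current Hadoop releases; older releases used 64 MB). A quick way to check it on a configured Hadoop client:

    # Print the configured HDFS block size in bytes
    hdfs getconf -confKey dfs.blocksize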

Q.6) What is HDFS?
A. HDFS is a regular file system like any other, and you can perform any operation on it
B. HDFS is a file system layered on top of your native file system, and you can do all the operations you want
C. HDFS is a layered file system which modifies the local file system in such a way that you can perform any operation
D. HDFS is a file system layered on top of your local file system; it does not modify the local file system, and there are some restrictions on the operations you can perform

Q.7) When you put a file on HDFS, what happens?
A. The file is broken into blocks, each block is replicated, the replicas are distributed across the machines, and the NameNode updates its metadata
B. The file is replicated and distributed across several machines, and the NameNode updates its metadata
C. The file is broken into blocks, each block is replicated and distributed across machines, and the DataNodes update their metadata
D. The file is kept as it is on the machines, along with the replicas

Q.8) When you put files on HDFS, where does HDFS store the blocks?
A. On HDFS
B. On the NameNode's local file system
C. On the DataNodes' local file systems
D. Blocks are placed on both the NameNode's and the DataNodes' local file systems, so that if a DataNode goes down, the NameNode can replicate the data from its own local file system
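To see the block placement this question is about, you can ask the NameNode for a file's block report; the output shows that block data sits on DataNodes while the NameNode holds only the metadata. A sketch, assuming a file named "sample" under a hypothetical HDFS home directory /user/hadoop:

    # List the file's blocks and the DataNodes hosting each replica
    hdfs fsck /user/hadoop/sample -files -blocks -locations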
  
Q.9) What happens if the NameNode goes down?
A. The Secondary NameNode takes charge and starts serving the DataNodes
B. The NameNode is a single point of failure; the administrator has to restart it manually, and until then HDFS is inaccessible
C. The Secondary NameNode asks one of the DataNodes to take over as the NameNode, so that there is no interruption in service
D. None of these

Q.10) Does Hadoop solve every kind of problem efficiently?
A. Yes, it is like any other framework and is capable of solving any problem efficiently
B. Hadoop solves problems very efficiently when the data records are independent of each other
C. Hadoop can solve only data-intensive problems efficiently
D. Hadoop can solve only computation-intensive problems efficiently

Q.11) If a file is broken into blocks and distributed across machines, how do you read the file back?
A. You search each of the DataNodes, ask each for its list of blocks, and then check each block and read the appropriate ones
B. You ask the NameNode, and since the NameNode has the meta information, it reads the data from the DataNodes and hands the file back to you
C. You ask the NameNode, and since the NameNode has the meta information, it gives you the list of DataNodes hosting the blocks; you then go to each of those DataNodes and read the blocks
D. You directly read the file from HDFS
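For context, this lookup-then-read sequence is what the everyday client commands perform for you: the client asks the NameNode for block locations and then streams the blocks directly from the DataNodes. A minimal sketch, assuming "sample_hdfs" exists on HDFS (the local destination path is made up for illustration):

    # Print the file; block locations come from the NameNode,
    # the bytes come straight from the hosting DataNodes
    hadoop fs -cat sample_hdfs

    # Or copy it back to the local file system
    hadoop fs -get sample_hdfs /tmp/sample_copy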

Q.12) What is the command to copy a file from a client's local machine to HDFS? Assume a file named "sample" is present in the "/usr/local" directory, and the client wants to copy it to HDFS under the name "sample_hdfs".
A. hadoop fs -cp /usr/local/sample sample_hdfs
B. hadoop fs -copyFromLocal /usr/local/sample sample_hdfs
C. hadoop fs -get sample_hdfs /usr/local/sample
D. hadoop fs -put sample_hdfs /usr/local/sample
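As a usage note on these commands: -copyFromLocal and -put both take the local source first and the HDFS destination second, while -get and -copyToLocal go the other way. A minimal sketch with the paths from the question:

    # Copy the local file onto HDFS under the name sample_hdfs
    hadoop fs -copyFromLocal /usr/local/sample sample_hdfs

    # Verify the upload
    hadoop fs -ls sample_hdfs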

Q.13) Will the following command execute successfully, or will it throw an exception: "hadoop fs -setrep 0 sample", where sample is a file present on HDFS?
A. This command will not throw any exception
B. This command might throw an exception when the size of the sample file is greater than the block size
C. This command will throw an exception, as you cannot set the replication factor to 0
D. This command will throw an exception only when the size of the sample file is less than the block size
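For context on -setrep: it changes the replication factor of a file that is already on HDFS, and the minimum meaningful value is 1. A hedged sketch, assuming a file named "sample" on HDFS:

    # Set the replication factor of sample to 3
    hadoop fs -setrep 3 sample

    # With -w, the command waits until the new replication target is actually met
    hadoop fs -setrep -w 2 sample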

Q.14) There are two files, file_1 and file_2, on HDFS under the directory "foo". What is the result of the command hadoop fs -getmerge foo foo?
A. It will create a directory "foo" on the local file system, and file_1 and file_2 will be copied into this directory
B. It will create a file "foo" on the local file system with the contents of file_1 and file_2 merged into this file
C. This will throw an exception, as the getmerge command works only on files, not on directories
D. This command will throw an exception as the source and destination names are the same; they need to be different for this operation to be performed
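Background on -getmerge: it takes an HDFS source directory and a local destination file, and concatenates the files in the source directory into that single local file. A minimal sketch with the names from the question:

    # Merge the files under HDFS directory foo into one local file named foo
    hadoop fs -getmerge foo foo

    # Inspect the merged result on the local file system
    cat foo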
