
Featured post

AWS EC2 Real Story on Elastic Cloud Computing

EC2 is the short name for Amazon Elastic Compute Cloud. You can keep this point as an interview question. The computing capacity has an elastic property: based on your requirement, you can increase or decrease computing power.
You need to be very attentive when you enable the Auto Scaling feature. It is a responsibility on admins. Making your existing hardware match the requirement is not always easy, so the EC2 feature in AWS helps you allocate computing power according to your needs. An AWS EC2 instance acts as your physical server. It has memory. You can increase the instance size in terms of CPU, memory, storage and GPU. EC2 Auto Scaling is a property where it automatically increases your computing power.
List of Top Security Features in EC2
1#. Virtual Private Cloud - The responsibility of a Virtual Private Cloud is to safeguard each instance separately. That means you cannot access another instance that was created by another organization.
2#. Network Access Control L…

Top 100 Hadoop Complex Interview Questions (Part 2 of 4)

I am giving a series of Hadoop interview questions. This is my 2nd set of questions. You can get a quick benefit by reading these questions from start to end.

hadoop part 2
1). If a data node is full, how is it identified?
Ans). When data is stored in a data node, the metadata of that data is stored in the NameNode. So the NameNode will identify when a data node is full.
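
As a rough illustration, the aggregate capacity and usage that the data nodes report to the NameNode can be read through the standard Hadoop FileSystem Java API. This is only a sketch; the NameNode address 'namenode-host:8020' is a placeholder for your own cluster.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;

public class ClusterCapacityCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address; adjust fs.defaultFS for your cluster.
        conf.set("fs.defaultFS", "hdfs://namenode-host:8020");
        FileSystem fs = FileSystem.get(conf);
        FsStatus status = fs.getStatus();   // aggregate numbers the data nodes report to the NameNode
        System.out.println("Capacity  : " + status.getCapacity());
        System.out.println("Used      : " + status.getUsed());
        System.out.println("Remaining : " + status.getRemaining());
        fs.close();
    }
}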

2). If data nodes increase, do we need to upgrade the NameNode?
Ans). While installing the Hadoop system, the NameNode is sized based on the size of the cluster. Most of the time we do not need to upgrade the NameNode, because it does not store the actual data, just the metadata, so such a requirement rarely arises.

3). Are the job tracker and task trackers present on separate machines?
Ans). Yes, the job tracker and the task trackers are present on different machines. The reason is that the job tracker is a single point of failure for the Hadoop MapReduce service. If it goes down, all running jobs are halted.

4). When we send data to a node, do we allow settling-in time before sending more data to that node?
Ans). Yes, we do.

Related: Hadoop Complex Questions part-1

5). Does Hadoop always require digital data to process?
Ans). Yes, Hadoop always requires digital data to be processed.

6). On what basis does the NameNode decide which data node to write on?
Ans). As the NameNode has the metadata (information) related to all the data nodes, it knows which data node is free.

7). Doesn’t Google have its very own version of DFS?
Ans). Yes, Google owns a DFS known as “Google File System (GFS)” developed by Google Inc. for its own use.

8). Who is a ‘user’ in HDFS?
Ans). A user is like you or me, who has some query or who needs some kind of data.

9). Is the client the end user in HDFS?
Ans). No. A client is an application that runs on your machine and is used to interact with the NameNode (job tracker) or the data nodes (task trackers).

10). What is the communication channel between the client and the name node/data node?
Ans). Hadoop's control scripts use SSH to start and stop the daemons, but the client itself communicates with the NameNode and the data nodes over Hadoop's own RPC protocol on top of TCP.
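
As a small, hedged illustration of that RPC channel: when a Java client opens an hdfs:// URI through the standard FileSystem API, the listing call below goes to the NameNode over Hadoop RPC. The NameNode address is a placeholder you would replace with your own.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListRoot {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The hdfs:// URI makes the client open a Hadoop RPC connection to the NameNode.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode-host:8020"), conf);
        for (FileStatus st : fs.listStatus(new Path("/"))) {
            System.out.println(st.getPath());   // metadata comes back from the NameNode
        }
        fs.close();
    }
}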

11). What is a rack?
Ans). A rack is a physical collection of data nodes stored at a single location, with all the data nodes put together. There can be multiple racks in a single location, and the racks of a cluster can themselves be located in different places.

12). On what basis will data be stored on a rack?
Ans). When the client is ready to load a file into the cluster, the content of the file is divided into blocks. The client then consults the NameNode and gets 3 data nodes for every block of the file, which indicate where each block should be stored. While placing the replicas, the key rule followed is "for every block of data, two copies will exist in one rack and the third copy in a different rack". This rule is known as the "Replica Placement Policy".
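
To see this placement for yourself, the block locations of a file can be listed with the Hadoop FileSystem Java API. This is a minimal sketch; the file path '/data/sample.txt' is hypothetical and the client is assumed to pick up your cluster configuration from the classpath.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowBlockPlacement {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/data/sample.txt");            // hypothetical file path
        FileStatus status = fs.getFileStatus(file);
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (int i = 0; i < blocks.length; i++) {
            // With the default policy each block typically sits on 3 data nodes:
            // two in one rack and the third in a different rack.
            System.out.println("Block " + i + " hosts: " + String.join(", ", blocks[i].getHosts()));
        }
        fs.close();
    }
}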

13). Do we need to place the 2nd and 3rd replicas in rack 2 only?
Ans). Yes. Keeping the second and third copies on a different rack from the first means the data survives even if an entire rack fails, not just a single data node.

14). What if rack 2 and a data node fail?
Ans). If both rack 2 and the data node in rack 1 fail, there is no chance of getting the data from them. To avoid such situations, we need to replicate the data more times instead of replicating it only thrice. This can be done by changing the value of the replication factor, which is set to 3 by default.
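
A minimal sketch of raising the replication factor, assuming the standard Hadoop FileSystem Java API; the file path '/data/critical.log' and the value 4 are only examples, and the cluster-wide default normally comes from dfs.replication in hdfs-site.xml.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RaiseReplication {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Default replication for files created by this client (cluster default is dfs.replication).
        conf.setInt("dfs.replication", 4);
        FileSystem fs = FileSystem.get(conf);
        // Raise the replication factor of an existing (hypothetical) file to 4.
        fs.setReplication(new Path("/data/critical.log"), (short) 4);
        fs.close();
    }
}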

15). What is a Secondary Namenode? Is it a substitute for the Namenode?
Ans). The Secondary NameNode periodically reads the file system metadata held by the NameNode (the in-memory state and the edit log) and merges it into a checkpoint on the hard disk. It is not a substitute for the NameNode, so if the NameNode fails, the entire Hadoop system goes down.

16). What is the difference between Gen1 and Gen2 Hadoop with regards to the Namenode?
Ans). In Gen 1 Hadoop, the NameNode is the single point of failure. In Gen 2 Hadoop, we have what is known as an Active and Passive NameNode structure. If the active NameNode fails, the passive NameNode takes over.

17). What is MapReduce?
Ans). MapReduce is the 'heart' of Hadoop. It consists of two parts, 'map' and 'reduce', which are programs for processing data. 'Map' processes the data first to give an intermediate output, which is further processed by 'reduce' to generate the final output. Thus, MapReduce allows for distributed processing of the map and reduce operations.

18). Can you explain how 'map' and 'reduce' work?
Ans). The NameNode takes the input, divides it into parts and assigns them to data nodes. These data nodes process the tasks assigned to them, produce key-value pairs and return the intermediate output to the reducer. The reducer collects the key-value pairs from all the data nodes, combines them and generates the final output.
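
For a concrete feel of map and reduce, here is a minimal sketch of the classic word-count pair written against the org.apache.hadoop.mapreduce API. The class names are my own; treat it as an illustration rather than the only way to write it.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map: emit (word, 1) for every word in the input split.
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);
            }
        }
    }
}

// Reduce: sum the counts for each word and emit (word, total).
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));
    }
}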

19). What is ‘Key value pair’ in HDFS?
Ans). A key-value pair is the intermediate data generated by the maps and sent to the reduces for generating the final output.

20). What is the difference between MapReduce engine and HDFS cluster?
Ans). The HDFS cluster is the name given to the whole configuration of master and slaves where data is stored. The MapReduce engine is the programming module that is used to retrieve and analyze data.
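
The MapReduce engine side is usually tied together by a small driver program that submits the job. A hedged sketch follows; it assumes the hypothetical WordCountMapper and WordCountReducer classes from the earlier example and input/output paths passed on the command line.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);    // mapper from the earlier sketch
        job.setReducerClass(WordCountReducer.class);  // reducer from the earlier sketch
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}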

21). Is map like a pointer?
Ans). No, Map is not like a pointer.

22). Do we require two servers for the Namenode and the data nodes?
Ans). Yes, we need different servers for the NameNode and the data nodes. This is because the NameNode requires a highly configured system, as it stores information about the location details of all the files stored on the different data nodes, whereas the data nodes only require low-configuration systems.

23). Why is the number of splits equal to the number of maps?
Ans). The number of maps is equal to the number of input splits because we want a map task to produce the key-value pairs for every input split.

24). Is a job split into maps?
Ans). No, a job is not split into maps. Splits are created from the input file, and the file is placed on the data nodes in blocks. For each split, one map task is needed.
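
A small back-of-the-envelope example of that relationship, assuming a hypothetical 1 GB input file and the common 128 MB block/split size:

public class SplitCount {
    public static void main(String[] args) {
        long fileSize  = 1024L * 1024 * 1024;   // hypothetical 1 GB input file
        long splitSize = 128L * 1024 * 1024;    // assumed 128 MB block/split size
        long splits = (fileSize + splitSize - 1) / splitSize;   // ceiling division
        System.out.println("Input splits (and therefore map tasks): " + splits);  // prints 8
    }
}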

25). Which are the two types of ‘writes’ in HDFS?
Ans). There are two types of writes in HDFS: posted and non-posted. A posted write is when we write it and forget about it, without worrying about the acknowledgement; it is similar to our traditional Indian post. In a non-posted write, we wait for the acknowledgement; it is similar to today's courier services. Naturally, the non-posted write is more expensive than the posted write, though both writes are asynchronous.

26). Why is 'reading' done in parallel in HDFS while 'writing' is not?
Ans). Reading is done in parallel because it lets us access the data fast. But we do not perform the write operation in parallel, because writing in parallel might result in data inconsistency. For example, if you have a file and two nodes are trying to write data into it in parallel, the first node does not know what the second node has written and vice versa. So it becomes unclear which data should be stored and accessed.

27). Can Hadoop be compared to NoSQL database like Cassandra?
Ans). Though NoSQL is the closest technology that can be compared to Hadoop, it has its own pros and cons. There is no DFS in NoSQL. Hadoop is not a database; it is a filesystem (HDFS) plus a distributed programming framework (MapReduce).

28). How can I install Cloudera VM in my system?
Ans). When you enroll for the Hadoop course at Eureka, you can download the 'Hadoop Installation steps.pdf' file from our Dropbox. This will be shared with you by e-mail.

29). Which are the three modes in which Hadoop can be run?
Ans). The three modes in which Hadoop can be run are:

  1. Standalone (local) mode
  2. Pseudo-distributed mode
  3. Fully distributed mode
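
One hedged way to check which mode a client installation is configured for is to read the relevant keys from the Hadoop Configuration object. This simply reflects whatever core-site.xml and mapred-site.xml are on the classpath; the interpretation in the comments is the usual convention, not a hard rule.

import org.apache.hadoop.conf.Configuration;

public class ModeCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration();   // loads *-site.xml files from the classpath
        // file:/// plus "local" usually means standalone; hdfs://localhost... means pseudo-distributed;
        // an hdfs:// URI pointing at a remote NameNode (plus YARN) usually means fully distributed.
        System.out.println("fs.defaultFS             = " + conf.get("fs.defaultFS", "file:///"));
        System.out.println("mapreduce.framework.name = " + conf.get("mapreduce.framework.name", "local"));
    }
}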



Popular posts from this blog

Blue Prism complete tutorials download now

Blue Prism is an automation tool useful to execute repetitive tasks without human effort. To learn this tool you need the right material. Provided below are quick reference materials to understand the detailed elements, the architecture and how to create new bots. Useful if you are a new learner trying to enter an automation career.
The number one and most popular tool in automation is Blue Prism. In this post, I have given references to popular materials and resources that you can use for your interviews.
I have given popular RPA Blue Prism tutorial resources in this post. You can download them quickly. Learning Blue Prism is a really good option if you are a learner of Robotic Process Automation.

RPA Advantages: RPA is also called "Robotic Process Automation". The real advantages are that you can automate any business process and complete customer requests in less time.

The Books Available on Blue Prism
Blue Prism resources
David Chappal PDF book
Blue Prism blogs


Python Syntax Rules Eliminate Errors Before you start debugging

In Python, if you know the syntax rules, you can eliminate errors. The basic mistakes programmers make are missing colons, adding extra commas, and extra spaces. Python is case sensitive, so using the wrong identifier gives an error.
Indentation is unique to Python. You cannot find this kind of rule in many other programming languages.
Python Syntax Cheat Sheet: These are the main areas you need to focus on while writing a Python program. You need to learn the rules; else you waste a lot of time fixing the issues or errors.
Indentation or Syntax Errors
Exceptions
Handling Exceptions
1. Indentation: If you do not follow the proper order, you will get an error. The statements of one block should follow in one vertical line. A sub-block should be indented inside of that.

In an if block, the if, elif, and else should have the same indentation. Moreover, the statements inside each of them should have the same indentation. Understand these examples; they are good material on indentation for you. 2. Exceptions: Python raises an exception wh…

Python Improved Logic Easy Way to Calculate Factorial

I am practicing Python programming. In this post you can see how to write the logic to calculate a factorial in a function. You can call this function a user-defined function. The file name is 'factorial.py'. In real time, you can write a program in a file and run it in the Python console. The main task of a developer is to create functions for reusable code and call these functions whenever they are needed. This is a factorial calculation program for a supplied input value.
Factorial Logic in Python: I have completed this logic in 3 steps:
Write factorial.py
Import it
Execute it
Write factorial.py: Here you need to define a function. Use 2 for loops, and write your logic. This is done on the Linux operating system.
After pressing ESC, use :wq to save and come out of the file.
Import factorial.py: Go to the Python console using the 'python' command. Use the 'import factorial' command.


Execute factorial.py: >>> factorial.fact(5) will show the result of the factorial. Bottom line: Factorial o…

Calculate Circle Area the Logic You Need to write in Python

In Python, you can calculate the area of a circle easily by using a function. Python is widely used in data analysis.


You need this logic in many areas. You can use it in your present finance projects or in new ones.

The benefit of a function is that you can re-use the same code any number of times. Area of circle = pi*r*r.
Area of Circle Steps - You Can Do It Using Two Methods, Both Explained: I have given the steps to calculate the area of a circle using two different methods. First, I created a user-defined function. Next, I ran the formula directly in the interpreter.


Method-1: Steps I have followed to calculate the area using a function:
Log into CentOS (Linux)
Create the .py module
Import the .py module into Python
Execute the .py module
1. Log in: I first logged into CentOS. You can see the '$' prompt there.

2. Creating the .py module: To create the .py module, you can use the vi editor command.

You need 'import decimal' to get Decimal values; else you will get only an integer.


I have given pwd comman…

Hyperledger Fabric Real Interview Questions Read Today

I am practicing Hyperledger. This is one of the top listed blockchains. This architecture follows the R3 Corda specifications. I am sharing the interview questions that I prepared for my interview.

Though Ethereum leads in real-time applications, the latest Hyperledger version is now ready for production applications and has become stable.
Hyperledger is now backed by IBM, but it is still open source. These interview questions help you to read quickly. The below set of interview questions works like a tutorial on Hyperledger Fabric.
Hyperledger Fabric Interview Questions
1). What are Nodes?
In Hyperledger, the communicating entities are called nodes.

2). What are the three different types of Nodes?
- Client Node
- Peer Node
- Order Node
The Client node initiates transactions. The peer node commits the transaction. The order node guarantees the delivery.

3). What is a Channel?
A channel in Hyperledger is a subnet of the main blockchain. You c…

Top 10 SCALA Quiz Questions for Programmers

Scala is an acronym for “Scalable Language”. This means that Scala grows with you. You can play with it by typing one-line expressions and observing the results. But you can also rely on it for large mission critical systems, as many companies, including Twitter, LinkedIn, or Intel do.


To some, Scala feels like a scripting language. Its syntax is concise and low ceremony; its types get out of the way because the compiler can infer them. There’s a REPL and IDE worksheets for quick feedback.

Developers like it so much that Scala won the ScriptBowl contest at the 2012 JavaOne conference. At the same time, Scala is the preferred workhorse language for many mission critical server systems. The generated code is on a par with Java’s and its precise typing means that many problems are caught at compile-time rather than after deployment.


At the root, the language's scalability is the result of a careful integration of object-oriented and functional language concepts. (Ref: What is Scala). View Su…

Automation developer these are top Skills you need to learn

Robotic process automation is an upcoming IT skill. Three tools are popular. It is difficult to learn all three tools, so learn any one tool to start your career in automation.
To get a job in this line, I found in my research that some programming skills and hands-on training on any one of the tools are required. Also, try to know the differences between the popular RPA tools.
Skills Companies Look for in Automation Engineers: All big companies are looking for candidates having experience in Automation Anywhere, Blue Prism and UiPath. It is not possible to learn all the tools. Learn any one tool and practice well.

Ok.

You may ask how to do it. Join a good training institute and learn one tool. Take online classes to learn faster.

To learn UiPath, try here. Also, you can enroll in an online course to learn UiPath.

UiPath GO
The list of IT skills you need:
Automation Anywhere/Blue Prism/UiPath
.Net/C#/Java/SQL skills
MS-Visio
Power Builder
Python scripts/Unix scripts/Perl scripts
HTML/CSS/J…