
Top 100 Hadoop Complex Interview Questions (Part 1 of 4)

The list below contains complex interview questions as part of this Hadoop tutorial. You can go through these questions quickly.

1. What is BIG DATA?
Ans). Big Data is an assortment of data so huge and complex that it becomes very tedious to capture, store, process, retrieve, and analyze it with on-hand database management tools or traditional data processing techniques.

2. Can you give some examples of Big Data?

Ans). There are many real-life examples of Big Data! Facebook generates 500+ terabytes of data per day, NYSE (New York Stock Exchange) generates about 1 terabyte of new trade data per day, and a jet airline collects 10 terabytes of sensor data for every 30 minutes of flying time. All of these are day-to-day examples of Big Data!

3. Can you give a detailed overview of the Big Data being generated by Facebook? 
Ans). As of December 31, 2012, there were 1.06 billion monthly active users on Facebook and 680 million mobile users. On average, 3.2 billion likes and comments are posted every day on Facebook. 72% of the web audience is on Facebook. And why not! There are so many activities going on Facebook, from wall posts, sharing images, and videos to writing comments and liking posts. In fact, Facebook started using Hadoop in mid-2009 and was one of its initial users.

4. According to IBM, what are the three characteristics of Big Data?

Ans). According to IBM, the three characteristics of Big Data are Volume: Facebook generates 500+ terabytes of data per day; Velocity: analyzing 2 million records each day to identify the reason for losses; and Variety: images, audio, video, sensor data, log files, etc.

5. How Big is ‘Big Data’?

Ans). With time, data volume is growing exponentially. Earlier we used to talk about megabytes or gigabytes. But the time has arrived when we talk about data volume in terms of terabytes, petabytes, and even zettabytes! Global data volume was around 1.8 ZB in 2011 and is expected to be 7.9 ZB in 2015. It is also said that global information doubles every two years!

6. How is the analysis of Big Data useful for organizations?
Ans). Effective analysis of Big Data provides a lot of business advantage, as organizations learn which areas to focus on and which areas are less important. Big Data analysis provides early key indicators that can save the company from a huge loss or help it grasp a great opportunity with open hands! A precise analysis of Big Data helps in decision making. For instance, nowadays people rely so much on Facebook and Twitter before buying any product or service. All thanks to the Big Data explosion.

7. Who are ‘Data Scientists’?   
Ans). Data scientists are soon replacing business analysts and data analysts. Data scientists are experts who find solutions to analyze data. Just as we have web analysts, we have data scientists with good business insight into how to handle a business challenge. Sharp data scientists are not only involved in dealing with business problems but also in choosing the relevant issues that can bring value addition to the organization.

8. What is Hadoop? 
Ans). Hadoop is a framework that allows for distributed processing of large data sets across clusters of commodity computers using a simple programming model. Hadoop is not an acronym and has no expanded form, unlike, say, 'OOPS'. The charming yellow elephant you see is named after Doug Cutting's son's toy elephant!

9. Why do we need Hadoop?
Ans). Every day, a large amount of unstructured data is getting dumped into our machines. The major challenge is not storing large datasets in our systems but retrieving and analyzing the big data in the organizations, and that too, data present on different machines at different locations. In this situation, the necessity for Hadoop arises. Hadoop has the ability to analyze data present on different machines at different locations very quickly and in a very cost-effective way. It uses the concept of MapReduce, which enables it to divide a query into small parts and process them in parallel. This is also known as parallel computing.

The link Why Hadoop gives you a detailed explanation of why Hadoop is gaining so much popularity! The Hadoop framework is written in Java. It is designed to solve problems that involve analyzing large data (e.g. petabytes). The programming model is based on Google's MapReduce, and the infrastructure is based on Google's distributed file system (GFS). Hadoop handles large files/data throughput and supports data-intensive distributed applications. Hadoop is scalable, as more nodes can easily be added to it.
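
To make the MapReduce idea concrete, here is a minimal word-count sketch against Hadoop's Java MapReduce API: mappers emit (word, 1) pairs in parallel over input splits, and reducers sum the counts. The class name and the command-line input/output paths are illustrative, not from the original post.

// Minimal word-count job: input is split, mappers emit (word, 1),
// reducers sum the counts for each word.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);   // emit (word, 1)
      }
    }
  }

  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();           // add up all counts for this word
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input dir
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output dir
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}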

10. Give examples of some companies that are using the Hadoop framework.

Ans). A lot of companies are using the Hadoop framework, such as Cloudera, EMC, MapR, Hortonworks, Amazon, Facebook, eBay, Twitter, Google, and so on.

11. What is the basic difference between traditional RDBMS and Hadoop?
Ans). A traditional RDBMS is used for transactional systems to report and archive data, whereas Hadoop is an approach to store huge amounts of data in a distributed file system and process it. An RDBMS will be useful when you want to seek one record from Big Data, whereas Hadoop will be useful when you want Big Data in one shot and perform analysis on it later.
  • Structured data is data that is easily identifiable because it is organized in a structure. The most common form of structured data is a database, where specific information is stored in tables, that is, rows and columns. Unstructured data refers to any data that cannot be identified easily.
  • It could be in the form of images, videos, documents, email, logs, and random text. It is not in the form of rows and columns. The core components of Hadoop are HDFS and MapReduce. HDFS is used to store large data sets, and MapReduce is used to process such large data sets. HDFS is a file system designed for storing very large files with streaming data access patterns, running on clusters of commodity hardware.
12. What are the key features of HDFS?
Ans). HDFS is highly fault-tolerant, provides high throughput, is suitable for applications with large data sets, offers streaming access to file system data, and can be built out of commodity hardware.

13. What is Fault Tolerance?

Ans). Suppose you have a file stored in a system, and due to some technical problem that file gets destroyed. Then there is no chance of getting back the data present in that file. To avoid such situations, Hadoop introduced the feature of fault tolerance in HDFS.

In Hadoop, when we store a file, it automatically gets replicated at two other locations also. So even if one or two of the systems collapse, the file is still available on the third system.

14. Replication causes data redundancy, so why is it pursued in HDFS?

Ans). HDFS works with commodity hardware (systems with average configurations) that has high chances of crashing at any time. Thus, to make the entire system highly fault-tolerant, HDFS replicates and stores data in different places. Any data on HDFS gets stored in at least 3 different locations. So, even if one of them is corrupted and another is unavailable for some time for any reason, the data can be accessed from the third one. Hence, there is no chance of losing the data. This replication factor helps us to attain the Hadoop feature called fault tolerance.
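
As a rough illustration of how the replication factor is exposed to applications, the sketch below uses Hadoop's Java FileSystem API to read and change the replication of a single file. The file path and the factor of 3 are assumptions for the example, not part of the original answer.

// Sketch: inspect and change the replication factor of one HDFS file.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationInfo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);

    Path file = new Path("/data/sample.txt");   // hypothetical HDFS file
    FileStatus status = fs.getFileStatus(file);
    System.out.println("Current replication: " + status.getReplication());

    // Set the replication factor for this one file to 3 copies.
    fs.setReplication(file, (short) 3);
  }
}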

15. Since the data is replicated thrice in HDFS, does it mean that any calculation done on one node will also be replicated on the other two?

Ans). Since there are 3 nodes, when we send the MapReduce programs, calculations will be done only on the original data. The master node will know which node exactly has that particular data. If one of the nodes is not responding, it is assumed to have failed. Only then will the required calculation be done on the second replica.

16. What is throughput? How does HDFS get a good throughput?
  • Throughput is the amount of work done in a unit of time. It describes how fast data is accessed from the system, and it is usually used to measure the performance of the system.
  • In HDFS, when we want to perform a task or an action, the work is divided and shared among different systems. So all the systems execute the tasks assigned to them independently and in parallel, and the work is completed in a very short period of time.
  • In this way, HDFS gives good throughput. By reading data in parallel, we decrease the actual time to read data tremendously.
  • As HDFS works on the principle of 'Write Once, Read Many', the feature of streaming access is extremely important in HDFS (see the read sketch after this list). HDFS focuses not so much on storing the data but on how to retrieve it at the fastest possible speed, especially while analyzing logs. In HDFS, reading the complete data is more important than the time taken to fetch a single record from the data.
  • Commodity hardware is inexpensive hardware that is not of high quality or high availability. Hadoop can be installed on any average commodity hardware. We don't need supercomputers or high-end hardware to work on Hadoop. Yes, commodity hardware includes RAM because there are some services which run in RAM.
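
A small sketch of the 'write once, read many' streaming access pattern described above, using the Java FileSystem API to open an HDFS file and read it sequentially; the log file path is hypothetical.

// Sketch of HDFS streaming access: open a file and read it as a stream.
import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StreamRead {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path logFile = new Path("/logs/access.log");   // hypothetical log file
    try (FSDataInputStream in = fs.open(logFile);
         BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
      String line;
      while ((line = reader.readLine()) != null) {  // sequential, streaming read
        // process each line here (e.g. count error entries)
      }
    }
  }
}
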
17. What is a Namenode?
Ans). The Namenode is the master node on which the job tracker runs, and it holds the metadata. It maintains and manages the blocks which are present on the data nodes. It is a high-availability machine and a single point of failure in HDFS.

18. Is Namenode also a commodity?
Ans). No. The Namenode can never be commodity hardware because the entire HDFS relies on it. It is the single point of failure in HDFS. The Namenode has to be a high-availability machine.

19. What is metadata?
Ans). Metadata is the information about the data stored in data nodes such as the location of the file, size of the file and so on.

20. Why do we use HDFS for applications having large data sets and not when there are a lot of small files?
Ans). HDFS is more suitable for a large amount of data in a single file than for small amounts of data spread across multiple files. This is because the Namenode is a very expensive, high-performance system, so it is not prudent to occupy space in the Namenode with the unnecessary amount of metadata that is generated for multiple small files. When there is a large amount of data in a single file, the Namenode occupies less space. Hence, for optimized performance, HDFS supports large data sets instead of multiple small files.

21. What is a daemon?

Ans). A daemon is a process or service that runs in the background. In general, we use this word in the UNIX environment. The equivalent of a daemon in Windows is a 'service', and in DOS it is a 'TSR'.

22. What is a job tracker?
Ans). The job tracker is a daemon that runs on the name node for submitting and tracking MapReduce jobs in Hadoop. It assigns the tasks to the different task trackers. In a Hadoop cluster, there will be only one job tracker but many task trackers. It is the single point of failure for Hadoop and the MapReduce service. If the job tracker goes down, all the running jobs are halted. It receives a heartbeat from the task trackers, based on which the job tracker decides whether an assigned task is completed or not.

The task tracker is also a daemon, and it runs on data nodes. Task trackers manage the execution of individual tasks on the slave nodes. When a client submits a job, the job tracker will initialize the job, divide the work, and assign it to different task trackers to perform MapReduce tasks. While performing this action, the task trackers will be simultaneously communicating with the job tracker by sending heartbeats. If the job tracker does not receive a heartbeat from a task tracker within the specified time, it will assume that the task tracker has crashed and assign that task to another task tracker in the cluster.

23. Is the Namenode machine the same as the data node machine in terms of hardware?

Ans). It depends upon the cluster you are trying to create. The Hadoop VM can be on the same machine or on another machine. For instance, in a single-node cluster, there is only one machine, whereas in a development or testing environment, the Namenode and the data nodes are on different machines.

24. What is a heartbeat in HDFS?

Ans). A heartbeat is a signal indicating that a node is alive. A data node sends a heartbeat to the Namenode, and a task tracker sends its heartbeat to the job tracker. If the Namenode or job tracker does not receive a heartbeat, they will decide that there is some problem in the data node or that the task tracker is unable to perform the assigned task. In a practical environment, the Namenode and the job tracker each run on a separate host.

25. What is a ‘block’ in HDFS?
Ans). A ‘block’ is the minimum amount of data that can be read or written. In HDFS, the default block size is 64 MB, in contrast to the block size of 8192 bytes in Unix/Linux. Files in HDFS are broken down into block-sized chunks, which are stored as independent units. HDFS blocks are large compared to disk blocks, particularly to minimize the cost of seeks.

26. If a particular file is 50 MB, will the HDFS block still consume 64 MB as the default size?
Ans). No, not at all! 64 MB is just the unit where the data will be stored. In this particular situation, only 50 MB will be consumed by the HDFS block, and 14 MB will be free to store something else. It is the master node that does data allocation in an efficient manner.
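
As a rough illustration of this point, the sketch below lists the block locations of a file through the Java FileSystem API; for a 50 MB file under a 64 MB block size it would report a single block of about 50 MB, not a full 64 MB. The file path is hypothetical.

// Sketch: list the blocks of an HDFS file to see how it is actually split.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockReport {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus status = fs.getFileStatus(new Path("/data/sample-50mb.dat"));

    BlockLocation[] blocks =
        fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation block : blocks) {
      System.out.println("offset=" + block.getOffset()
          + " length=" + block.getLength()              // actual bytes, not 64 MB
          + " hosts=" + String.join(",", block.getHosts()));
    }
  }
}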

27. What are the benefits of block transfer?

Ans). A file can be larger than any single disk in the network. There is nothing that requires the blocks of a file to be stored on the same disk, so they can take advantage of any of the disks in the cluster. Making the unit of abstraction a block rather than a file simplifies the storage subsystem. Blocks also provide fault tolerance and availability. To guard against corrupted blocks and disk and machine failure, each block is replicated to a small number of physically separate machines (typically three). If a block becomes unavailable, a copy can be read from another location in a way that is transparent to the client.

28. If we want to copy 10 blocks from one machine to another, but another machine can copy only 8.5 blocks, can the blocks be broken at the time of replication?
Ans). In HDFS, blocks cannot be broken down. Before copying the blocks from one machine to another, the Master node will figure out what is the actual amount of space required, how many blocks are being used, how much space is available, and it will allocate the blocks accordingly.

29. How is indexing done in HDFS?

Ans). Hadoop has its own way of indexing. Once the data is stored as per the block size, HDFS keeps storing the last part of the data, which indicates where the next part of the data will be. In fact, this is the basis of HDFS.
