Featured post

4 Layers of AWS Architecture: A Quick Answer

I have collected real interview questions on the key AWS architecture components: S3, EC2, SQS, and SimpleDB. AWS is one of the most sought-after skills in cloud computing, and many companies are recruiting software developers to work on it.

AWS Key Architecture Components

AWS is the top cloud platform, and knowing it makes it easier to learn other cloud platforms. Below are questions asked in recent interviews.
What are the components involved in AWS?

  • Amazon S3: With this, one can store and retrieve any amount of data using a key; the output an application produces can likewise be stored in this component under the specified key.
  • Amazon EC2: Helpful for running a large distributed system, such as a Hadoop cluster. Automatic parallelization and job scheduling can be achieved with this component.
  • Amazon SQS: This component acts as a mediator between different controllers. It is also used for buffering requirements that are obt…
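To make the S3 and SQS roles above concrete, here is a minimal sketch using the AWS SDK for Java v2. The bucket name, object key, and queue URL are hypothetical, and credentials are assumed to be configured in the environment; treat it as an illustration, not a production setup.

```java
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

public class AwsComponentsDemo {
    public static void main(String[] args) {
        // S3: store and retrieve data under a key.
        try (S3Client s3 = S3Client.create()) {
            s3.putObject(PutObjectRequest.builder()
                    .bucket("my-demo-bucket")     // hypothetical bucket
                    .key("results/run-1.txt")     // the key the data lives under
                    .build(),
                RequestBody.fromString("job output"));

            String body = s3.getObjectAsBytes(GetObjectRequest.builder()
                    .bucket("my-demo-bucket")
                    .key("results/run-1.txt")
                    .build())
                .asUtf8String();
            System.out.println("S3 returned: " + body);
        }

        // SQS: buffer a message between a producer and a consumer,
        // acting as the mediator between two components.
        try (SqsClient sqs = SqsClient.create()) {
            sqs.sendMessage(SendMessageRequest.builder()
                    .queueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/demo-queue") // hypothetical queue
                    .messageBody("work item #1")
                    .build());
        }
    }
}
```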

Apache Storm Architecture Tutorial Flowchart

There are two main reasons why Apache Storm is so popular: first, it can connect to many data sources; second, it is scalable. The other advantage is that it is fault tolerant, which means guaranteed data processing.


Apache Storm topologies

In Hadoop, MapReduce jobs do the data processing; in Storm, the topology is the real data processor.
The coordination between Nimbus and the Supervisors is handled by ZooKeeper.

Apache Storm

  1. Jobs in Hadoop are similar to topologies in Storm, but Hadoop jobs run on a defined schedule and eventually finish.
  2. In Storm, a topology runs forever.
  3. A topology consists of many worker processes spread across many machines.
  4. A topology is a pre-defined design for producing an end product from your data.
  5. A topology comprises two kinds of components: spouts and bolts (a minimal example in Java follows below).
  6. The spout is the funnel of the topology: it feeds the data stream in.
Storm Topology
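To make the spout-and-bolt split concrete, below is a minimal sketch of a topology, assuming the Storm 2.x Java API (org.apache.storm) and an in-process test cluster; the component names and sample data are made up for illustration.

```java
import java.util.Map;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

public class SketchTopology {

    // Spout: the funnel of the topology. It pulls data from a source
    // (here, a hard-coded array) and emits it as a stream of tuples.
    public static class SentenceSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        private final String[] sentences = {"storm topologies run forever", "hadoop jobs finish"};
        private int index = 0;

        @Override
        public void open(Map<String, Object> conf, TopologyContext context,
                         SpoutOutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void nextTuple() {
            Utils.sleep(100); // throttle the stream a little
            collector.emit(new Values(sentences[index++ % sentences.length]));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("sentence"));
        }
    }

    // Bolt: processes every tuple the spout emits; terminal here.
    public static class PrintBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            System.out.println(tuple.getStringByField("sentence"));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // emits nothing downstream
        }
    }

    public static void main(String[] args) throws Exception {
        // Wire the spout and bolt together into a topology.
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("sentences", new SentenceSpout(), 1);
        builder.setBolt("printer", new PrintBolt(), 2).shuffleGrouping("sentences");

        // Run in-process for testing; on a real cluster this would be
        // submitted to Nimbus via StormSubmitter.submitTopology(...).
        try (LocalCluster cluster = new LocalCluster()) {
            cluster.submitTopology("sketch", new Config(), builder.createTopology());
            Thread.sleep(10_000); // let it run for a bit, then shut down
        }
    }
}
```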

Two types of nodes in Storm

  1. Master node: similar to the Hadoop JobTracker. It runs a daemon called Nimbus.
  2. Worker node: it runs a daemon called the Supervisor. The Supervisor listens for the work assigned to its machine.

Master Node

  • Nimbus is responsible for distributing the code across the cluster
  • It monitors for failures
  • It assigns tasks to each machine

Worker Node

  • It listens for the work assigned to it by Nimbus.
  • It executes a subset of the topology.
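The daemons find each other through each node's storm.yaml configuration file. A minimal sketch, assuming standard Storm config keys and hypothetical hostnames:

```yaml
# storm.yaml (sketch; hostnames are hypothetical)
storm.zookeeper.servers:                # ZooKeeper ensemble coordinating Nimbus and the Supervisors
  - "zk1.example.com"
nimbus.seeds: ["nimbus1.example.com"]   # machine(s) running the Nimbus daemon
supervisor.slots.ports:                 # one worker process per port on this worker node
  - 6700
  - 6701
```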
