Featured Post

Best Practices for Handling Duplicate Elements in Python Lists

Here are three handy ways to remove duplicates from a list. They come up often in data analytics work.

01. Using a Set

Convert the list into a set, which automatically removes duplicates because a set only stores unique elements, and then convert the set back to a list. Note that this does not preserve the original order.

Solution:

    original_list = [2, 4, 6, 2, 8, 6, 10]
    unique_list = list(set(original_list))

02. Using a Loop

Iterate through the original list and append each element to a new list only if it has not been added before.

Solution:

    original_list = [2, 4, 6, 2, 8, 6, 10]
    unique_list = []
    for item in original_list:
        if item not in unique_list:
            unique_list.append(item)

03. Using List Comprehension

Build the new list with a list comprehension whose condition skips elements already present in it (the comprehension is used here for its append side effect).

Solution:

    original_list = [2, 4, 6, 2, 8, 6, 10]
    unique_list = []
    [unique_list.append(item) for item in original_list if item not in unique_list]

All three methods result in unique_list holding only the unique elements of original_list.
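As a quick sanity check, you can run the set and loop approaches side by side on the same input; the two agree on the elements, but only the loop guarantees first-seen order:

    original_list = [2, 4, 6, 2, 8, 6, 10]

    # Method 1: set-based (order not guaranteed)
    via_set = list(set(original_list))

    # Method 2: loop-based (keeps first-seen order)
    via_loop = []
    for item in original_list:
        if item not in via_loop:
            via_loop.append(item)

    print(sorted(via_set))  # [2, 4, 6, 8, 10]
    print(via_loop)         # [2, 4, 6, 8, 10]
    assert set(via_set) == set(via_loop)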

Apache Storm Architecture Tutorial Flowchart

There are two main reasons why Apache Storm is so popular: it can connect to many data sources, and it is scalable. A further advantage is that it is fault-tolerant, which means it guarantees data processing.


Apache Storm topologies

In Hadoop, MapReduce jobs handle the data processing; in Storm, the topology is the real data processor.
The coordination between Nimbus and the Supervisors is carried out by ZooKeeper.

Apache Storm

  1. A job in Hadoop is similar to a topology in Storm, except that Hadoop jobs run on a defined schedule and eventually finish.
  2. In Storm, a topology runs forever, until you kill it.
  3. A topology consists of many worker processes spread across many machines.
  4. A topology is a pre-defined design for turning your input data into an end product.
  5. A topology comprises two kinds of components: spouts and bolts.
  6. The spout is the funnel of the topology: it feeds the raw data stream in (see the sketch below).
Image: Storm topology
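To make the spout-and-bolt flow concrete, here is a minimal plain-Python sketch of the idea. It only models the data flow and is not Storm's real API (Storm topologies are normally written against the Java API):

    # Toy model of a Storm topology: a spout emits tuples,
    # and each bolt transforms the stream it receives.
    def sentence_spout():
        # The spout is the funnel: it feeds raw data into the topology.
        for sentence in ["storm is fast", "storm is fault tolerant"]:
            yield sentence

    def split_bolt(stream):
        # A bolt consumes tuples from upstream and emits new ones.
        for sentence in stream:
            for word in sentence.split():
                yield word

    def count_bolt(stream):
        # A terminal bolt aggregates the whole stream.
        counts = {}
        for word in stream:
            counts[word] = counts.get(word, 0) + 1
        return counts

    # Wire spout -> bolt -> bolt, like the edges of a topology graph.
    print(count_bolt(split_bolt(sentence_spout())))
    # {'storm': 2, 'is': 2, 'fast': 1, 'fault': 1, 'tolerant': 1}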

Two kinds of nodes in Storm

  1. Master node: similar to the Hadoop JobTracker. It runs a daemon called Nimbus.
  2. Worker node: it runs a daemon called the Supervisor, which listens for work assigned to its machine.

Master Node

  • Nimbus is responsible for distributing code across the cluster.
  • It monitors for failures.
  • It assigns tasks to each machine.

Worker Node

  • The Supervisor listens for work assigned by Nimbus.
  • Each worker process executes a subset of the topology.
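As a rough sketch of this division of labour, the toy scheduler below assigns a topology's tasks round-robin across supervisor machines. Real placement is decided by Nimbus and coordinated through ZooKeeper, so treat this purely as an illustration:

    # Toy model: Nimbus assigns the tasks of a topology across
    # supervisor machines; each worker runs a subset of the topology.
    tasks = ["spout-1", "split-bolt-1", "split-bolt-2",
             "count-bolt-1", "count-bolt-2", "count-bolt-3"]
    supervisors = ["machine-a", "machine-b", "machine-c"]

    assignment = {s: [] for s in supervisors}
    for i, task in enumerate(tasks):
        # Round-robin placement stands in for Nimbus's scheduler.
        assignment[supervisors[i % len(supervisors)]].append(task)

    for machine, subset in assignment.items():
        print(machine, "runs", subset)
    # machine-a runs ['spout-1', 'count-bolt-1']
    # machine-b runs ['split-bolt-1', 'count-bolt-2']
    # machine-c runs ['split-bolt-2', 'count-bolt-3']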


