Posts

Showing posts with the label apache-storm-topology-example

Featured Post

Mastering flat_map in Python with List Comprehension

Introduction
In Python, when working with nested lists or iterables, one common challenge is flattening them into a single list while applying a transformation. Many programming languages provide a built-in flatMap function, but Python does not have an explicit flat_map method. However, Python’s powerful list comprehensions offer an elegant way to achieve the same functionality. This article shows how to implement that behavior using Python’s list comprehensions and other methods.

What is flat_map?
In functional programming, flatMap is a combination of map and flatten. It transforms each element of a collection and flattens the resulting nested structure into a single sequence. For example, given a list of lists, flat_map applies a function to each sublist and returns a single flattened list.

Example in a functional programming language: List(List(1, 2), List(3, 4)).flatMap(x => x.map(_ * 2)) // Output: List(2, 4, 6, 8)

Implementing flat_map in Python Using List Comprehension
Python’...
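A minimal sketch of the idea, using only the standard library; the helper name flat_map is ours, since Python has no built-in of that name, and it mirrors the functional-language example above:

def flat_map(func, items):
    """Apply func to each element and flatten the results into one list."""
    return [result for item in items for result in func(item)]

nested = [[1, 2], [3, 4]]
print(flat_map(lambda sub: [x * 2 for x in sub], nested))  # [2, 4, 6, 8]

The double for clause in the comprehension is the flatten step: the outer loop walks the sublists, the inner loop walks each transformed sublist.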

Apache Storm Architecture Tutorial Flowchart

There are two main reasons why Apache Storm is so popular: it can connect to many data sources, and it is scalable. Another advantage is that it is fault-tolerant, which means guaranteed data processing. In Hadoop, map-reduce jobs do the data processing; in Storm, the topology is the real data processor. Coordination between Nimbus and the Supervisors is carried out by ZooKeeper.

Apache Storm
Hadoop jobs are similar to a Storm topology, but Hadoop jobs run on a defined schedule, while a topology in Storm runs forever. A topology is a pre-defined design for getting an end product from your data, and it consists of many worker processes spread across many machines. A topology comprises two parts: spouts and bolts. The spout is the funnel of the topology.

Two nodes in Storm
Master Node: similar to the Hadoop JobTracker. It runs a daemon called Nimbus.
Worker Node: it runs a daemon called Supervisor. The Supervisor listens for the work assigne...
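A minimal sketch of a spout feeding a bolt, assuming the third-party streamparse library (not mentioned in the post); the class names, output fields, and sample sentences are illustrative, not the post's own code:

from itertools import cycle
from streamparse import Spout, Bolt

class SentenceSpout(Spout):
    outputs = ['sentence']  # fields this spout emits

    def initialize(self, stormconf, context):
        # A real spout would read from a queue, log, or API instead.
        self.sentences = cycle(["storm processes streams", "spouts feed bolts"])

    def next_tuple(self):
        # Called by Storm in a loop; emit one tuple per call.
        self.emit([next(self.sentences)])

class SplitWordsBolt(Bolt):
    outputs = ['word']

    def process(self, tup):
        # Receive a sentence tuple and emit one tuple per word.
        for word in tup.values[0].split():
            self.emit([word])

In streamparse the topology that wires the spout to the bolt is declared separately; the sketch only shows the two processing units that the excerpt describes.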