
Featured post

Top Python Libraries You Need to Create an ML Model

To create a machine learning model in Python, you need two libraries. One is NumPy and the other is pandas.


For this project, we are using Python libraries to create a model. What are the key libraries you need? I have explained them in the steps below. You need two:

NumPy - gives you calculation (linear algebra) capabilities.
pandas - gives you data-processing capabilities.

To build a machine learning model, you need the right kind of data. So, before you use data in your project, the data should be refined; otherwise, the model will not give accurate results. This refinement covers two steps:

Data analysis
Data pre-processing

How to import the libraries in Python:

import numpy as np  # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)

How to check whether NumPy/pandas are installed: print the version attribute. After the '.', you need to put a double underscore on both sides of 'version'.
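The double-underscore rule above looks like this in practice:

```python
# Check the installed NumPy and pandas versions using the
# double-underscore ("dunder") __version__ attribute.
import numpy as np
import pandas as pd

print(np.__version__)
print(pd.__version__)
```

If either import raises ImportError, the library is not installed in your environment.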
How many types of data do you need? You need two: one set of data to build the model and another set to test the model. Data to build…
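A minimal sketch of splitting one dataset into build (train) and test sets with pandas; the column names and the 80/20 ratio here are illustrative assumptions, not from the post:

```python
import numpy as np
import pandas as pd

# Hypothetical dataset: 10 rows, one feature column, one label column.
df = pd.DataFrame({
    "feature": np.arange(10),
    "label": np.arange(10) * 2,
})

# Shuffle the rows, then keep 80% to build the model and 20% to test it.
shuffled = df.sample(frac=1, random_state=42)
split = int(len(shuffled) * 0.8)
train_df = shuffled.iloc[:split]
test_df = shuffled.iloc[split:]

print(len(train_df), len(test_df))  # 8 2
```

Shuffling before splitting matters: if the file is sorted, an unshuffled split can leave the test set unrepresentative of the data the model was built on.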

Topologies in Apache Storm: the Concepts You Need to Know

There are two main reasons why Apache Storm is so popular. Number one: it can connect to many data sources. Number two: it is scalable. The other advantage is that it is fault tolerant, which means guaranteed data processing.

Apache Storm topologies
In Hadoop, map-reduce jobs process the data analytics; in Storm, the topology is the real data processor.
The coordination between Nimbus and the Supervisor is carried out by ZooKeeper.

Topologies in Apache Storm

  1. Jobs in Hadoop are similar to topologies, but Hadoop jobs run on a defined schedule.
  2. In Storm, a topology runs forever.
  3. A topology consists of many worker processes spread across many machines.
  4. A topology is a pre-defined design to get an end product from your data.
  5. A topology comprises two parts: spouts and bolts.
  6. The spout is the funnel of the topology, feeding data into it.
Topology
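The spout-and-bolt pipeline described above can be sketched in plain Python. This is a conceptual illustration only, not the actual Apache Storm API (Storm topologies are normally written in Java, or in other languages via Storm's multi-lang protocol); all class names here are made up:

```python
class SentenceSpout:
    """The spout: the funnel that feeds raw data into the topology."""
    def __init__(self, sentences):
        self.sentences = sentences

    def emit(self):
        for sentence in self.sentences:
            yield sentence


class SplitBolt:
    """A bolt: transforms each incoming tuple (here, splits it into words)."""
    def process(self, sentence):
        return sentence.split()


class CountBolt:
    """A bolt: accumulates word counts coming from the upstream bolt."""
    def __init__(self):
        self.counts = {}

    def process(self, words):
        for word in words:
            self.counts[word] = self.counts.get(word, 0) + 1


# Wiring the topology: spout -> split bolt -> count bolt.
spout = SentenceSpout(["storm runs forever", "storm is scalable"])
split_bolt = SplitBolt()
count_bolt = CountBolt()
for sentence in spout.emit():
    count_bolt.process(split_bolt.process(sentence))

print(count_bolt.counts["storm"])  # 2
```

In real Storm, each bolt would run as many parallel tasks across worker processes; this sketch only shows the data flow, not the distribution.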

Two types of nodes in Storm

  1. Master Node: similar to the Hadoop JobTracker. It runs a daemon called Nimbus.
  2. Worker Node: it runs a daemon called Supervisor. The Supervisor listens for the work assigned to its machine.

Master Node

  • Nimbus is responsible for distributing the code
  • Monitors for failures
  • Assigns tasks to each machine

Worker Node

  • It listens for the work assigned by Nimbus.
  • It executes a subset of the topology.



Most Viewed

Hyperledger Fabric Real Interview Questions Read Today

I am practicing Hyperledger, one of the top-listed blockchains. Its architecture follows R3 Corda specifications. I am sharing the interview questions that I prepared for my own interview.

Though Ethereum leads in real-time applications, the latest Hyperledger version has become stable and is ready for production applications.

Hyperledger is now backed by IBM, but it is still open source. These interview questions help you read up quickly; the set below works like a tutorial on Hyperledger Fabric.

Hyperledger Fabric Interview Questions

1). What are Nodes?
In Hyperledger the communication entities are called Nodes.

2). What are the three different types of Nodes?
- Client node
- Peer node
- Orderer node
The client node initiates transactions, the peer node commits transactions, and the orderer node guarantees delivery.
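The three roles above can be sketched as a toy pipeline in plain Python. This is a conceptual illustration only, not the Hyperledger Fabric SDK; every class and method name here is made up:

```python
class ClientNode:
    """Initiates transactions."""
    def submit(self, tx):
        return {"tx": tx, "status": "submitted"}


class OrdererNode:
    """Guarantees ordered delivery of submitted transactions."""
    def __init__(self):
        self.queue = []

    def deliver(self, proposal):
        self.queue.append(proposal["tx"])
        return list(self.queue)


class PeerNode:
    """Commits delivered transactions to its copy of the ledger."""
    def __init__(self):
        self.ledger = []

    def commit(self, ordered_txs):
        self.ledger = list(ordered_txs)


# Client submits -> orderer delivers in order -> peer commits.
client = ClientNode()
orderer = OrdererNode()
peer = PeerNode()

proposal = client.submit("transfer 10 coins")
ordered = orderer.deliver(proposal)
peer.commit(ordered)
print(peer.ledger)  # ['transfer 10 coins']
```

In real Fabric, peers also endorse transaction proposals before ordering; this sketch only shows the submit/order/commit flow named in the answer above.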

3). What is a Channel?
A channel in Hyperledger is a subnet of the main blockchain. You c…