
Exclusive Apache Kafka Top Features

Here are the top features of Apache Kafka. It works on the principle of publishing messages: it routes real-time information to consumers quickly, and it connects heterogeneous applications by passing messages between them. The prime component (a.k.a. the message router) is the broker. You can read the top features below.




The exclusive Kafka features

The message broker provides seamless integration, but there are two collateral objectives: the first is to not block the producers and the second is to not let the producers know who the final consumers are.
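As a rough illustration of that decoupling (this sketch is not from the original post), here is a minimal producer written with the third-party kafka-python client. The broker address localhost:9092 and the topic name 'orders' are assumptions.

Code:

from kafka import KafkaProducer

# Assumes a Kafka broker is reachable at localhost:9092 (illustrative only).
producer = KafkaProducer(bootstrap_servers="localhost:9092")

# The producer only names a topic. It does not block waiting for consumers,
# and it never learns who the final consumers are.
producer.send("orders", b"order-id=42,status=created")
producer.flush()   # ensure the buffered message actually reaches the broker
producer.close()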

Apache Kafka is a real-time publish-subscribe messaging system: open source, distributed, partitioned, replicated, and commit-log based. Its main characteristics are as follows:

1. Distributed.

Cluster-centric design that supports the distribution of messages over the cluster members while maintaining the semantics, so you can grow the cluster horizontally without downtime (a sketch follows after this list).

2. Multiclient.


Easy integration with different clients from different platforms: Java, .NET, PHP, Ruby, Python, etc.

3. Persistent.


You cannot afford any data loss. Kafka is designed with efficient O(1) data structures that provide constant-time performance regardless of data size.

4. Real time.


The messages produced are immediately visible to consumer threads; this is the basis of the systems known as complex event processing (CEP).

5. Very high throughput.


Kafka is designed to run on commodity hardware and can handle hundreds of read and write operations per second from a large number of clients.
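To make the distributed and persistent features concrete, here is a hedged sketch, again with kafka-python and an assumed local broker; the topic name 'orders', the partition count, and the replication factor are illustrative choices, not values from the original post. It creates a partitioned, replicated topic and then replays the commit log from the earliest offset, receiving new messages as soon as they are produced.

Code:

from kafka import KafkaConsumer
from kafka.admin import KafkaAdminClient, NewTopic

# Create a topic whose partitions can be spread over the cluster members.
admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
admin.create_topics([NewTopic(name="orders", num_partitions=3, replication_factor=1)])
admin.close()

# Because messages are persisted in the commit log, a consumer started later
# can still replay earlier events; new messages arrive as they are produced.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating after 5 seconds of silence
)
for record in consumer:
    print(record.partition, record.offset, record.value)
consumer.close()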

