Featured post

How to Decode TLV Quickly

In TLV, the format is Tag, Length, and Value; EMV payment messages use this encoding. Here you will learn how to decode TLV data. According to IBM, TLV data has three parts: the Tag tells what type of data it is, the Length gives the length of the Value, and the Value holds the actual data.

Structure of TLV

TLV comprises three fields:

Tag - EMV defines the tags, and each has its own meaning. A tag is 1 or 2 bytes.
Length - the length of the Value in bytes.
Value - the actual data.

Usually, the Tag and Length together take 2 to 4 bytes.

An example of TLV

In the example below, you can find the sample TAG, LENGTH, and VALUE fields:

[Tag][Value Length][Value] (ex. "9F4005F000F0A001")

where

Tag Name = 9F40
Value Length (in bytes) = 05
Value (hex representation of the bytes; for example, "F0" is 1 byte) = F000F0A001

In the above message, tag 9F40 has a meaning defined by EMV. You can find full lists of EMV tags online.

How to read the TLV:

Tag: 1 or 2 bytes
Length: the length of the Value. F0-00-F0-A0-01 ==> 5 bytes
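To make those steps concrete, here is a minimal decoder sketch in Python. It follows the BER-TLV reading rules described above (multi-byte tags, short and long length forms); it assumes a clean hex string as input, and the function name decode_tlv is my own, not from any library.

```python
# A minimal BER-TLV decoder sketch (illustrative, not production code).
# Assumes the input is a well-formed hex string.

def decode_tlv(hex_string):
    data = bytes.fromhex(hex_string)
    i = 0
    fields = []
    while i < len(data):
        # Tag: 1 byte, or more when the low 5 bits of the first byte are all 1
        tag_start = i
        if data[i] & 0x1F == 0x1F:
            i += 1
            while data[i] & 0x80:  # continuation bit set => tag has more bytes
                i += 1
        i += 1
        tag = data[tag_start:i].hex().upper()

        # Length: short form (one byte < 0x80) or long form (0x8N + N length bytes)
        length = data[i]
        i += 1
        if length & 0x80:
            num_bytes = length & 0x7F
            length = int.from_bytes(data[i:i + num_bytes], "big")
            i += num_bytes

        # Value: the next `length` bytes
        value = data[i:i + length].hex().upper()
        i += length
        fields.append((tag, length, value))
    return fields

# The example from this post: tag 9F40, length 05, value F000F0A001
for tag, length, value in decode_tlv("9F4005F000F0A001"):
    print(f"Tag={tag}  Length={length}  Value={value}")
```

Running it prints Tag=9F40, Length=5, Value=F000F0A001, matching the breakdown above.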

Big Data: Top Hadoop Interview Questions (2 of 5)

Frequently asked Hadoop interview questions.

1. What is Hadoop?

Hadoop is a framework that gives users the power of distributed computing.

2. What is the difference between SQL and Hadoop?

SQL works with structured data and is most suitable for traditional, schema-based systems. Hadoop handles unstructured data and is well suited for modern technologies.

3. What is the Hadoop framework?

It is a distributed network of commodity servers (a cluster can contain multiple nodes, and each node runs on a commodity server).

4. What are the four properties of Hadoop?

Accessible - Hadoop runs on large clusters of commodity machines.
Robust - Hadoop assumes that low-cost commodity machines will fail often, and it handles these failures gracefully.
Scalable - Hadoop scales linearly to handle larger data by adding more nodes to the cluster.
Simple - Hadoop allows users to quickly write efficient parallel code.

5. What kind of data does Hadoop need?

A traditional RDBMS has a relational structure, with data residing in tables. In Hadoop, data should be in Key/Value pairs, as the sketch below illustrates.
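As a rough illustration (the field names here are invented for the example, not from any schema), compare a relational row with the Key/Value form Hadoop works with:

```python
# A relational row: named columns under a fixed schema (RDBMS style).
row = {"user_id": 42, "page": "/home", "duration_ms": 130}

# The same record as a (key, value) pair, the shape MapReduce processes.
# Keying by user_id sends all of one user's events to the same reducer.
key, value = row["user_id"], (row["page"], row["duration_ms"])

print((key, value))  # (42, ('/home', 130))
```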

6. Is Hadoop suitable for on-the-fly processing?

Hadoop is not suitable; it is suitable only for offline (batch) processing. That means we cannot use Hadoop on active web logs; we can use it on web log data that has already been generated. In this respect, Hadoop resembles traditional data warehouses.

7. What is MapReduce?

MapReduce is a data processing model that consists of mappers and reducers. It takes unstructured data as input and turns it into Key/Value pairs for processing on Hadoop.
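To see the model in action, here is the classic word-count example as a sketch. In real Hadoop Streaming the mapper and reducer run as separate scripts and the framework performs the sort/shuffle between them; this single self-contained script simulates that flow, and the file name wordcount.py is just an assumption for the usage line.

```python
import sys
from itertools import groupby

def mapper(lines):
    # Map step: emit a (word, 1) pair for every word in the input text.
    for line in lines:
        for word in line.strip().split():
            yield word, 1

def reducer(pairs):
    # Reduce step: sum the counts for each word. Hadoop would deliver
    # the pairs already sorted and grouped by key; sorting here
    # simulates the shuffle phase.
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    # Usage: cat some_logs.txt | python wordcount.py
    for word, count in reducer(mapper(sys.stdin)):
        print(f"{word}\t{count}")
```

For example, piping the text "a b a" through the script prints a count of 2 for "a" and 1 for "b".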

