Featured post

How to Decode TLV Quickly

TLV stands for Tag, Length, and Value, the three fields that make up the format. The TLV protocol encodes its data this way, and this post shows how to decode it. According to IBM, a TLV record has three parts: the tag tells what type of data it is, the length field gives the length of the value, and the value field holds the actual data.

Structure of TLV

TLV comprises three fields: Tag, Length, and Value. EMV defines many tags, each with its own meaning. Together, the tag and length usually take 1 to 4 bytes.

An example of TLV

The example below shows sample TAG, LENGTH, and VALUE fields laid out as [Tag][Value Length][Value] (e.g. "9F4005F000F0A001"), where
  • Tag name = 9F40
  • Value length (in bytes) = 05
  • Value (hex representation of bytes, where e.g. "F0" is 1 byte) = F000F0A001
In this message, tag 9F40 has a meaning defined by EMV. Here you can find a list of EMV tags.

How to read the TLV
  • Tag: 1 or 2 bytes
  • Length: the length of the value; F0-00-F0-A0-01 ==> 5 bytes
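To make those steps concrete, here is a minimal Python sketch of a TLV parser. It is a simplified illustration rather than a full BER-TLV/EMV implementation: it assumes tags of one or two bytes and short-form (single-byte) lengths, which is enough for the example above.

    def parse_tlv(hex_str):
        """Parse a hex string of concatenated TLV records into (tag, value) pairs."""
        data = bytes.fromhex(hex_str)
        records = []
        i = 0
        while i < len(data):
            # Tag: one byte; if its low five bits are all 1s, the tag continues
            # into a second byte (e.g. the two-byte EMV tag 9F40).
            tag = data[i]
            i += 1
            if tag & 0x1F == 0x1F:
                tag = (tag << 8) | data[i]
                i += 1
            # Length: short form only here, a single byte giving 0..127.
            length = data[i]
            i += 1
            # Value: the next `length` bytes.
            value = data[i:i + length]
            i += length
            records.append((format(tag, "X"), value.hex().upper()))
        return records

    print(parse_tlv("9F4005F000F0A001"))  # [('9F40', 'F000F0A001')]

For real EMV messages you would also need to handle long-form lengths and constructed (nested) tags, which this sketch leaves out.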

Apache YARN to Manage Resources: A Solution

Apache Hadoop is one of the most popular tools for big data processing. It has been successfully deployed in production by many companies for several years. 

Though Hadoop is considered a reliable, scalable, and cost-effective solution, it is constantly being improved by a large community of developers. As a result, the 2.0 version offers several revolutionary features, including Yet Another Resource Negotiator (YARN), HDFS Federation, and a highly available NameNode, which make the Hadoop cluster much more efficient, powerful, and reliable. 

Apache YARN

Apache Hadoop 2.0 includes YARN, which separates the resource management and processing components. The YARN-based architecture is not constrained to MapReduce.
New developments in the Hadoop 2.0 architecture with YARN:
  • ResourceManager instead of a cluster manager 
  • ApplicationMaster instead of a dedicated and short-lived JobTracker 
  • NodeManager instead of TaskTracker 
  • A distributed application instead of a MapReduce job 

Basic changes in Hadoop 2.0 architecture

  • The ResourceManager, the NodeManager, and a container are not concerned about the type of application or task.
  • All application framework-specific code is simply moved to its ApplicationMaster so that any distributed framework can be supported by YARN — as long as someone implements an appropriate ApplicationMaster for it.
  • Thanks to this generic approach, the dream of a Hadoop YARN cluster running many different workloads comes true. Imagine a single Hadoop cluster in your data center that can run MapReduce, Giraph, Storm, Spark, Tez/Impala, MPI, and more, as sketched below.
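As a small illustration of that generic approach, the Python sketch below asks the ResourceManager which applications it is currently tracking, using its REST API at /ws/v1/cluster/apps. The hostname is a placeholder, and an unsecured cluster listening on the default web port 8088 is assumed.

    import json
    from urllib.request import urlopen

    # Placeholder ResourceManager address; 8088 is the default web UI/REST port.
    RM_APPS_URL = "http://resourcemanager.example.com:8088/ws/v1/cluster/apps"

    with urlopen(RM_APPS_URL) as resp:
        apps = json.load(resp).get("apps") or {}

    # Each application reports the framework that submitted it (MAPREDUCE,
    # SPARK, ...), so one listing shows the mixed workloads sharing the cluster.
    for app in apps.get("app", []):
        print(app["id"], app["applicationType"], app["state"])

The same information is also available on the command line via yarn application -list.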
