Essential Features of Hadoop Data Joins (1 of 2)

Limitation of map-side joining:

The main limitation is that a record being processed by a mapper may need to be joined with a record that is not easily accessible (or even locatable) by that mapper.

What facilitates a map-side join:

Hadoop's org.apache.hadoop.mapred.join package contains helper classes that facilitate map-side joins.
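
To make that concrete, here is a minimal sketch (mine, not from the original post) of a map-side join configured with CompositeInputFormat from that package, using the older mapred API. It assumes both inputs are tab-separated key/value files that are already sorted and partitioned identically on the join key; the class name MapSideJoinJob and the argument layout are made up for the example.

```java
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.KeyValueTextInputFormat;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.hadoop.mapred.join.CompositeInputFormat;
import org.apache.hadoop.mapred.join.TupleWritable;

public class MapSideJoinJob {

  // The mapper receives the join key plus a TupleWritable holding one
  // record from each joined input; here it simply writes the pair out.
  public static class JoinMapper extends MapReduceBase
      implements Mapper<Text, TupleWritable, Text, Text> {
    public void map(Text key, TupleWritable value,
        OutputCollector<Text, Text> out, Reporter reporter) throws IOException {
      out.collect(key, new Text(value.get(0) + "\t" + value.get(1)));
    }
  }

  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(MapSideJoinJob.class);
    conf.setJobName("map-side-join");

    // Inner join of the two inputs; both must already be sorted and
    // partitioned identically on the join key.
    conf.set("mapred.join.expr", CompositeInputFormat.compose(
        "inner", KeyValueTextInputFormat.class,
        new Path(args[0]), new Path(args[1])));
    conf.setInputFormat(CompositeInputFormat.class);

    conf.setMapperClass(JoinMapper.class);
    conf.setNumReduceTasks(0);   // the join happens entirely on the map side

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(Text.class);
    conf.setOutputFormat(TextOutputFormat.class);
    FileOutputFormat.setOutputPath(conf, new Path(args[2]));

    JobClient.runJob(conf);
  }
}
```

Because each map task reads the matching partitions of both inputs directly, the join finishes in the map phase with no shuffle or reduce, which is exactly why the limitation above matters: this only works when the inputs are pre-partitioned and pre-sorted the same way.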

What is joining data in Hadoop:

You will often come across scenarios where you need to analyze data from multiple sources; in such cases Hadoop relies on data joining. In the database world, combining records from two or more tables is called a join. In Hadoop, joining data involves several different approaches.

Approaches:
  • Reduce-side join (a minimal sketch follows this list)
  • Replicated join using the distributed cache
  • Semi-join: a reduce-side join with map-side filtering
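
As a rough illustration of the first approach, below is a minimal reduce-side join sketch (my own example, not code from this post). Each source file gets its own mapper that tags every record with where it came from, and the reducer joins the records that share a key. The customers/orders file layout (comma-separated, keyed by customer id) and all class names are assumptions made for the example.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.lib.MultipleInputs;

public class ReduceSideJoin {

  // Tags each customer record with "C" so the reducer can tell the sources apart.
  public static class CustomerMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, Text> {
    public void map(LongWritable offset, Text line,
        OutputCollector<Text, Text> out, Reporter rep) throws IOException {
      String[] parts = line.toString().split(",", 2);   // custId,details
      out.collect(new Text(parts[0]), new Text("C\t" + parts[1]));
    }
  }

  // Tags each order record with "O"; both files are keyed by customer id.
  public static class OrderMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, Text> {
    public void map(LongWritable offset, Text line,
        OutputCollector<Text, Text> out, Reporter rep) throws IOException {
      String[] parts = line.toString().split(",", 2);   // custId,details
      out.collect(new Text(parts[0]), new Text("O\t" + parts[1]));
    }
  }

  // All records with the same customer id meet in one reduce call,
  // no matter which mapper or node produced them; emit the cross product.
  public static class JoinReducer extends MapReduceBase
      implements Reducer<Text, Text, Text, Text> {
    public void reduce(Text key, Iterator<Text> values,
        OutputCollector<Text, Text> out, Reporter rep) throws IOException {
      List<String> customers = new ArrayList<String>();
      List<String> orders = new ArrayList<String>();
      while (values.hasNext()) {
        String v = values.next().toString();
        if (v.startsWith("C\t")) customers.add(v.substring(2));
        else orders.add(v.substring(2));
      }
      for (String c : customers) {
        for (String o : orders) {
          out.collect(key, new Text(c + "\t" + o));
        }
      }
    }
  }

  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(ReduceSideJoin.class);
    conf.setJobName("reduce-side-join");
    // Route each input file to its own tagging mapper.
    MultipleInputs.addInputPath(conf, new Path(args[0]),
        TextInputFormat.class, CustomerMapper.class);
    MultipleInputs.addInputPath(conf, new Path(args[1]),
        TextInputFormat.class, OrderMapper.class);
    conf.setReducerClass(JoinReducer.class);
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(Text.class);
    FileOutputFormat.setOutputPath(conf, new Path(args[2]));
    JobClient.runJob(conf);
  }
}
```

Since every record from both datasets passes through the sort and shuffle, this is the most general but also the most expensive of the three approaches.
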
What is the functionality of a MapReduce job:

The traditional MapReduce job reads a set of input data, performs some transformations in the map phase, sorts the results, performs another transformation in the reduce phase, and writes a set of output data. The sorting stage requires data to be transferred across the network and also requires the computational expense of sorting. In addition, the input data is read from and the output data is written to HDFS. 
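
As a point of reference, the classic word-count job below (a standard textbook example, not code from this post) shows those stages in the older mapred API: the map phase turns each input line into (word, 1) pairs, the framework sorts and shuffles them across the network, and the reduce phase sums the counts and writes the result back to HDFS.

```java
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCount {

  // Map phase: read input records from HDFS and transform each line
  // into (word, 1) pairs.
  public static class TokenMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    public void map(LongWritable offset, Text line,
        OutputCollector<Text, IntWritable> out, Reporter rep) throws IOException {
      StringTokenizer tok = new StringTokenizer(line.toString());
      while (tok.hasMoreTokens()) {
        out.collect(new Text(tok.nextToken()), ONE);
      }
    }
  }

  // Reduce phase: the framework has already sorted and shuffled the map
  // output across the network, so all counts for a word arrive together.
  public static class SumReducer extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text word, Iterator<IntWritable> counts,
        OutputCollector<Text, IntWritable> out, Reporter rep) throws IOException {
      int sum = 0;
      while (counts.hasNext()) {
        sum += counts.next().get();
      }
      out.collect(word, new IntWritable(sum));   // written back to HDFS
    }
  }

  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");
    conf.setMapperClass(TokenMapper.class);
    conf.setReducerClass(SumReducer.class);
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    FileInputFormat.setInputPaths(conf, new Path(args[0]));   // read from HDFS
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));  // write to HDFS
    JobClient.runJob(conf);
  }
}
```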

The overhead of passing data between HDFS and the map phase, of moving data across the network during the sort stage, and of writing data back to HDFS at the end of the job encourages application design patterns with large, complex map methods and potentially complex reduce methods, so that the data is passed through the cluster as few times as possible.

Many processes require multiple steps, some of which need a reduce phase, leaving at least one input to the next job step already sorted. Having to re-sort this data can consume significant cluster resources. In my next post, I will cover the different joining methods in Hadoop.
