Essential features of Hadoop Data joins (1 of 2)

Limitation of map-side joining:

The main limitation is that a record being processed by a mapper may need to be joined with a record that is not easily accessible (or even locatable) by that mapper.

What facilitates a map-side join:

Hadoop's org.apache.hadoop.mapred.join package contains helper classes that facilitate this map-side join.
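For example, a map-side join of two sorted, identically partitioned inputs can be configured with CompositeInputFormat from that package. The sketch below uses the old mapred API; the input paths, the choice of KeyValueTextInputFormat, and the "mapred.join.expr" property name are illustrative assumptions rather than a complete, tested job.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.KeyValueTextInputFormat;
import org.apache.hadoop.mapred.join.CompositeInputFormat;
import org.apache.hadoop.mapred.join.TupleWritable;
import org.apache.hadoop.mapred.lib.IdentityMapper;

public class MapSideJoinConfig {
    public static JobConf configure() {
        JobConf conf = new JobConf(MapSideJoinConfig.class);

        // Both inputs must already be sorted and identically partitioned
        // on the join key for a map-side join to work.
        Path orders = new Path("/data/orders");        // assumed input path
        Path customers = new Path("/data/customers");  // assumed input path

        // Build an "inner" join expression over the two inputs.
        String joinExpr = CompositeInputFormat.compose(
                "inner", KeyValueTextInputFormat.class, orders, customers);
        conf.set("mapred.join.expr", joinExpr);

        // CompositeInputFormat hands the mapper one TupleWritable per join key,
        // holding the matching value from each source.
        conf.setInputFormat(CompositeInputFormat.class);
        conf.setMapperClass(IdentityMapper.class);
        conf.setMapOutputKeyClass(Text.class);
        conf.setMapOutputValueClass(TupleWritable.class);
        return conf;
    }
}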

What is data joining in Hadoop:

You will often come across scenarios where you need to analyze data from multiple sources; in such scenarios Hadoop performs data joining. In the database world, combining two or more tables on a common key is called a join. In Hadoop, joining data involves several different approaches.

Approaches:
  • Reduce-side join
  • Replicated join using the distributed cache (see the sketch after this list)
  • Semi-join: a reduce-side join with map-side filtering
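As a quick illustration of the replicated join, the sketch below loads a small data set, shipped to every mapper through the distributed cache, into memory during setup and joins each record of the large input against it. It uses the newer org.apache.hadoop.mapreduce API; the cache file name "small_table" and the tab-separated record layout are assumptions made for the example.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ReplicatedJoinMapper extends Mapper<LongWritable, Text, Text, Text> {

    private final Map<String, String> smallTable = new HashMap<>();

    @Override
    protected void setup(Context context) throws IOException {
        // The small data set was added in the driver, for example with
        // job.addCacheFile(new URI("/data/small_table.txt#small_table"));
        // so it is available in the task's working directory as "small_table".
        try (BufferedReader reader = new BufferedReader(new FileReader("small_table"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] parts = line.split("\t", 2);   // key <tab> value
                if (parts.length == 2) {
                    smallTable.put(parts[0], parts[1]);
                }
            }
        }
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Join each record of the large input against the in-memory table.
        String[] parts = value.toString().split("\t", 2);
        String match = smallTable.get(parts[0]);
        if (parts.length == 2 && match != null) {
            context.write(new Text(parts[0]), new Text(parts[1] + "\t" + match));
        }
    }
}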
What is the functionality of a MapReduce job:

The traditional MapReduce job reads a set of input data, performs some transformations in the map phase, sorts the results, performs another transformation in the reduce phase, and writes a set of output data. The sorting stage requires data to be transferred across the network and incurs the computational expense of sorting. In addition, the input data is read from HDFS and the output data is written back to HDFS.
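As a point of reference, the classic word count job below sketches that flow with the org.apache.hadoop.mapreduce API: the mapper emits (word, 1) pairs, the framework shuffles and sorts them by key, and the reducer sums the counts and writes the result back to HDFS. The input and output paths are assumptions.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountJob {

    // Map phase: emit (word, 1) for every word read from the input split.
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reduce phase: the framework has already sorted and grouped by key,
    // so each call receives one word and all of its counts.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountJob.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("/data/input"));     // assumed HDFS path
        FileOutputFormat.setOutputPath(job, new Path("/data/output"));  // assumed HDFS path
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}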

The overhead of passing data between HDFS and the map phase, moving data during the sort stage, and writing data to HDFS at the end of the job leads to application design patterns with large, complex map methods and potentially complex reduce methods, so that the data is passed through the cluster as few times as possible.

Many processes require multiple steps, some of which include a reduce phase, leaving at least one input to the next job step already sorted. Having to re-sort this data can consume significant cluster resources. In my next post I will cover the different joining methods in Hadoop.
