Featured post

3 Top Books Every Analytics Engineer Should Read

Many analytics jobs nowadays are in the financial domain. The top financial areas are banking, payments, and credit cards.
The best books are on:
  • SAS
  • UNIX
  • Python

The skills you need to work in data analytics are SAS, UNIX, Python, and JavaScript. I have selected three books for beginning data analysts.

1. Best SAS book
The best book I found is The Little SAS Book. It covers almost all the examples and critical macros you need for your job.

The best-selling Little SAS Book just got even better. Readers worldwide study this easy-to-follow book to help them learn the basics of SAS programming.

Now Rebecca Ottesen has teamed up with the original authors, Lora Delwiche and Susan Slaughter, to provide a new way to challenge and improve your SAS skills through thought-provoking questions, exercises, and projects.
2. Best UNIX book
You can find the basic UNIX commands anywhere; what you really need is how to execute macros and shell scripts. This is a good book so that you can automate…

Essential features of Hadoop Data joins (1 of 2)

Limitation of map-side joining: a record being processed by a mapper may need to be joined with a record that is not easily accessible (or even locatable) by that mapper. This is the main limitation.

What facilitates a map-side join:

Hadoop's org.apache.hadoop.mapred.join package contains helper classes that facilitate this map-side join.
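As a rough illustration only, here is a minimal map-side inner join sketch built on CompositeInputFormat from that package (the older mapred API). It assumes both inputs are tab-delimited key/value text files that are already sorted by the join key and partitioned identically; the class name, paths, and file layout are placeholders rather than a definitive implementation.

```java
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.KeyValueTextInputFormat;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.hadoop.mapred.join.CompositeInputFormat;
import org.apache.hadoop.mapred.join.TupleWritable;

public class MapSideJoin {

    // The composite input format hands the mapper the join key plus a tuple
    // holding the matching value from each input source.
    public static class JoinMapper extends MapReduceBase
            implements Mapper<Text, TupleWritable, Text, Text> {
        public void map(Text key, TupleWritable value,
                        OutputCollector<Text, Text> output, Reporter reporter)
                throws IOException {
            // value.get(0) came from the first input, value.get(1) from the second
            output.collect(key, new Text(value.get(0) + "\t" + value.get(1)));
        }
    }

    public static void main(String[] args) throws IOException {
        JobConf conf = new JobConf(MapSideJoin.class);
        conf.setJobName("map-side inner join");

        // Both inputs must already be sorted by the join key and partitioned
        // the same way, otherwise the composite reader cannot line them up.
        conf.setInputFormat(CompositeInputFormat.class);
        conf.set("mapred.join.expr", CompositeInputFormat.compose(
                "inner", KeyValueTextInputFormat.class,
                new Path(args[0]), new Path(args[1])));

        conf.setMapperClass(JoinMapper.class);
        conf.setNumReduceTasks(0);            // map-only job: no shuffle, no sort
        conf.setOutputFormat(TextOutputFormat.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);
        FileOutputFormat.setOutputPath(conf, new Path(args[2]));

        JobClient.runJob(conf);
    }
}
```

Because the join happens while the input is being read, the job can run with zero reducers, which avoids the shuffle entirely.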

What is data joining in Hadoop:

You will often come across scenarios where you need to analyze data from multiple sources; this is where data joining in Hadoop comes in. In the database world, combining two or more tables is called a join. In Hadoop, joining data involves several different approaches.

Approaches:
  • Reduce-side join
  • Replicated join using the distributed cache (a sketch follows this list)
  • Semi-join: a reduce-side join with map-side filtering
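To make the second approach concrete, here is a hedged sketch of a replicated join: the small dataset is shipped to every node through the distributed cache and loaded into an in-memory map, so the whole join happens in the mapper and no reduce phase is needed. The tab-separated layout, class names, and paths are assumptions for illustration.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ReplicatedJoin {

    public static class ReplicatedJoinMapper
            extends Mapper<LongWritable, Text, Text, Text> {

        private final Map<String, String> lookup = new HashMap<>();

        @Override
        protected void setup(Context context) throws IOException, InterruptedException {
            // The small dataset was distributed to every node; load it into memory
            // once per mapper. With the default symlink behaviour the localized
            // copy is visible in the task's working directory under its base name.
            URI[] cacheFiles = context.getCacheFiles();
            if (cacheFiles != null && cacheFiles.length > 0) {
                String fileName = new Path(cacheFiles[0].getPath()).getName();
                try (BufferedReader reader = new BufferedReader(new FileReader(fileName))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        String[] parts = line.split("\t", 2);   // key<TAB>value
                        if (parts.length == 2) {
                            lookup.put(parts[0], parts[1]);
                        }
                    }
                }
            }
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Join each record of the large input against the in-memory table
            String[] parts = value.toString().split("\t", 2);
            if (parts.length < 2) {
                return;
            }
            String match = lookup.get(parts[0]);
            if (match != null) {                        // inner join semantics
                context.write(new Text(parts[0]), new Text(parts[1] + "\t" + match));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "replicated join");
        job.setJarByClass(ReplicatedJoin.class);
        job.setMapperClass(ReplicatedJoinMapper.class);
        job.setNumReduceTasks(0);                       // map-only job
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));   // large dataset
        job.addCacheFile(new URI(args[1]));                     // small dataset
        FileOutputFormat.setOutputPath(job, new Path(args[2]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Replicated joins only work when the smaller dataset fits comfortably in each mapper's memory; otherwise a reduce-side join or a semi-join is the safer choice.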
What is the functionality of a MapReduce job:

A traditional MapReduce job reads a set of input data, performs some transformations in the map phase, sorts the results, performs another transformation in the reduce phase, and writes a set of output data. The sorting stage requires data to be transferred across the network and carries the computational expense of sorting. In addition, the input data is read from HDFS and the output data is written back to HDFS. The combined overhead of passing data between HDFS and the map phase, moving data during the sort stage, and writing data to HDFS at the end of the job leads to application design patterns with large, complex map methods and potentially complex reduce methods, so that the data is passed through the cluster as few times as possible.
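For reference, the classic word count program is the simplest end-to-end example of that read, map, shuffle/sort, reduce, write cycle. The sketch below follows the shape of the standard Hadoop tutorial example (new mapreduce API), with input and output paths supplied on the command line.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: transform each input line into (word, 1) pairs
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);   // shuffled and sorted by word before reduce
            }
        }
    }

    // Reduce phase: all values for one word arrive together, already sorted by key
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));   // written back to HDFS
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // read from HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // write to HDFS
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```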

Many processes require multiple steps, some of which need a reduce phase, leaving at least one input to the next job step already sorted. Having to re-sort this data can consume significant cluster resources. In my next post, I will cover the different joining methods in Hadoop.
