
11 Top PIG Interview Questions

Here are the top PIG interview questions. These are useful for your project and interviews.


1). What is PIG?

PIG is a platform for analyzing large data sets. It consists of a high-level language, Pig Latin, for expressing data analysis programs, coupled with infrastructure for evaluating these programs.


PIG’s infrastructure layer consists of a compiler that produces a sequence of MapReduce programs.
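
The idea above can be sketched with a minimal Pig Latin script; the file name and schema here are hypothetical:

```pig
-- Load a tab-separated file (path and schema are made up for illustration)
users = LOAD 'users.txt' AS (name:chararray, age:int);

-- Express the analysis in the high-level language; Pig's compiler
-- turns these statements into a sequence of MapReduce jobs
adults = FILTER users BY age >= 18;
STORE adults INTO 'output/adults';
```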

2). What is the difference between logical and physical plans?

Pig goes through several steps when a Pig Latin script is converted into MapReduce jobs. After performing basic parsing and semantic checking, it produces a logical plan.


The logical plan describes the logical operators that Pig has to execute. After this, Pig produces a physical plan. The physical plan describes the physical operators needed to execute the script.
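
You can inspect these plans yourself with the EXPLAIN utility; the relation below is a hypothetical example:

```pig
users  = LOAD 'users.txt' AS (name:chararray, age:int);
adults = FILTER users BY age >= 18;

-- EXPLAIN prints the logical, physical, and MapReduce plans
-- for the relation without actually running any job
EXPLAIN adults;
```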

3). Does ‘ILLUSTRATE’ run an MR job?

No, ILLUSTRATE does not launch any MR job; it works on internal sample data. On the console, ILLUSTRATE does not run the full job. It just shows the output of each stage, not the final output.
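
As a sketch (with a hypothetical input file), ILLUSTRATE is used like this:

```pig
users  = LOAD 'users.txt' AS (name:chararray, age:int);
adults = FILTER users BY age >= 18;

-- Samples a few input records and shows how each statement
-- transforms them, stage by stage; no MapReduce job is launched
ILLUSTRATE adults;
```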

4). Is the keyword ‘DEFINE’ like a function name?

Yes, the keyword ‘DEFINE’ is like a function name. Once you have registered, you have to define it. Whatever logic you have written in the Java program, you have an exported jar and also a jar registered by you. 


Now the compiler will check the function in an exported jar. When the function is not present in the library, it looks into your jar.
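
A typical REGISTER/DEFINE pair looks like the sketch below; the jar name, class name, and input file are all hypothetical:

```pig
-- Register the exported jar that contains the Java UDF
REGISTER myudfs.jar;

-- DEFINE gives the fully qualified UDF class a short function name
DEFINE ToUpper com.example.pig.ToUpper();

names       = LOAD 'users.txt' AS (name:chararray);
upper_names = FOREACH names GENERATE ToUpper(name);
```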

5). Is the keyword ‘FUNCTIONAL’ a User Defined Function (UDF)?

No, the keyword ‘FUNCTIONAL’ is not a User Defined Function (UDF). While writing a UDF, we have to override some functions; certainly, you have to do your job with the help of those functions only.


But the keyword ‘FUNCTIONAL’ is a built-in function, i.e., a pre-defined function, therefore it does not work as a UDF.

6). Why do we need MapReduce during Pig programming?

  • Pig is a high-level platform that makes many Hadoop data analysis tasks easier to solve. The language used on this platform is Pig Latin. 
  • A program written in Pig Latin is like a query written in SQL, and we need an execution engine to execute that query. So, when a program is written in Pig Latin, the Pig compiler converts the program into MapReduce jobs. Here, MapReduce acts as the execution engine.
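
To make the SQL analogy concrete, the sketch below is roughly equivalent to `SELECT city, COUNT(*) FROM users GROUP BY city`; the file and schema are hypothetical:

```pig
users   = LOAD 'users.txt' AS (name:chararray, city:chararray);
by_city = GROUP users BY city;
counts  = FOREACH by_city GENERATE group AS city, COUNT(users) AS cnt;

-- DUMP (or STORE) triggers compilation into MapReduce jobs,
-- with MapReduce acting as the execution engine
DUMP counts;
```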

7). Are there any problems that can only be solved by MapReduce and cannot be solved by Pig? In which scenarios are MR jobs more useful than Pig?

  • Let us take a scenario where we want to count the population of two cities. I have a data set and a sensor list of different cities.
  • I want to count the population using one MapReduce job for two cities. Let us assume that one is Bangalore and the other is Noida. So I need to treat the key of Bangalore city as equivalent to Noida’s, through which I can bring the population data of these two cities to one reducer.
  • The idea behind this is that somehow I have to instruct the MapReduce program: whenever you find a city with the name ‘Bangalore’ and a city with the name ‘Noida’, create an alias name that is common to both cities, so that a common key is created for them and the records get passed to the same reducer. For this, we have to write a custom partitioner.
  • In MapReduce, when you create a ‘key’ for the city, you have to consider ‘city’ as the key. So, whenever the framework comes across a different city, it considers it a different key. Hence, we need a customized partitioner.
  • There is a provision in MapReduce only, where you can write your custom partitioner and specify that if city = Bangalore or Noida, then return the same hash code. However, we cannot create a custom partitioner in Pig. As Pig is not a framework, we cannot direct its execution engine to customize the partitioner. In such scenarios, MapReduce works better than Pig.
 
8). Does Pig give any warning when there is a type mismatch or a missing field?

No, Pig does not show any warning when there is a missing field or a type mismatch; even if such a warning were logged, it would be difficult to find in the log file. When a mismatch is found, Pig assumes a null value.
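
This null behavior can be seen in a small sketch (hypothetical file and schema): if a value cannot be converted to the declared type, Pig substitutes null instead of failing.

```pig
-- 'age' is declared as int; non-numeric values in the file
-- become null rather than raising an error
users = LOAD 'users.txt' AS (name:chararray, age:int);

-- Records where the conversion failed carry a null age
no_age = FILTER users BY age IS NULL;
```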

9). What does COGROUP do in Pig?

COGROUP joins data sets by grouping each data set on a common field. It groups the elements by their common field and then returns a set of records containing two separate bags. 


The first bag consists of the records of the first data set that share the common field, and the second bag consists of the records of the second data set that share it.
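
A minimal COGROUP sketch, with hypothetical files and schemas:

```pig
owners = LOAD 'owners.txt' AS (owner:chararray, pet:chararray);
pets   = LOAD 'pets.txt'   AS (pet:chararray, sound:chararray);

-- One output record per key value:
-- (group, {bag of matching owners records}, {bag of matching pets records})
grouped = COGROUP owners BY pet, pets BY pet;
```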

10). Can we say a cogroup is a group of more than one data set?

Cogroup can group a single data set. But in the case of more than one data set, cogroup groups all the data sets and joins them based on the common field. Hence, we can say that cogroup is a group of more than one data set, and a join of those data sets as well.

11). What does FOREACH do?

FOREACH is used to apply transformations to the data and to generate new data items. The name itself indicates that for each element of a data bag, the respective action will be performed.

Syntax: FOREACH bagname GENERATE expression1, expression2, …..

The meaning of this statement is that the expressions mentioned after GENERATE will be applied to the current record of the data bag.
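
A short sketch of this syntax; the file, schema, and the total of 500 marks are hypothetical:

```pig
students = LOAD 'students.txt' AS (name:chararray, marks:int);

-- For each record, GENERATE emits the name and the marks
-- scaled to a percentage (assuming a maximum of 500)
results = FOREACH students GENERATE name, (marks * 100) / 500 AS percent;
```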
Bonus Question:

What is a bag?

A bag is one of the data models present in Pig. It is an unordered collection of tuples with possible duplicates. Bags are used to store collections while grouping. 

A bag can grow up to the size of the local disk, which means the size of a bag is limited. When a bag becomes too large for memory, Pig spills it to the local disk and keeps only part of the bag in memory. 

There is no necessity that the complete bag should fit into memory. We represent bags with “{}”.
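
Bags show up naturally when grouping; in the sketch below (hypothetical file and schema), each output record carries a bag of the grouped tuples:

```pig
users   = LOAD 'users.txt' AS (name:chararray, city:chararray);
by_city = GROUP users BY city;

-- DESCRIBE shows the nested schema: each record holds the group key
-- and a bag "{...}" of the users tuples that share that city
DESCRIBE by_city;
```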

Related:

Hadoop Complex Interview Questions Part 1 of 4
AWS Basics for Software Engineers
