Posts


Featured Post

SQL Interview Success: Unlocking the Top 5 Frequently Asked Queries

Here are the top five most commonly asked SQL queries in interviews. You can expect these in Data Analyst or Data Engineer interviews.

Top SQL Queries for Interviews

01. Joins

A commonly asked question: given two tables, determine how many rows each join type returns, and what the resulting rows are. (A runnable sketch follows this excerpt.)

Table1        Table2
------        ------
id            id
--            --
1             1
1             3
2             1
3             NULL

Output
------
Inner join: 5 rows will return. Each of the two 1s in Table1 matches both 1s in Table2 (2 x 2 = 4 rows), and the single 3 matches once (1 row).

The result will be:
===================
1    1
1    1
1    1
1    1
3    3

02. Substring and Concat

Here, we need to write an SQL query that upper-cases the first letter of each name and lower-cases the remaining letters.

Table1
------
ename
=====
raJu
venKat
kRIshna

Solution:
=========
SELECT CONCAT(UPPER(SUBSTRING(ename, 1, 1)), LOWER(SUBSTRING(ename, 2))) AS capitalized_name
FROM Table1;

Expected output: Raju, Venkat, Krishna.

03. Case statement

SQL Query (the excerpt cuts off mid-query; a hedged sketch of a complete CASE expression follows below):
=========
SELECT Code1, Code2,
     CASE
        WHEN Code1 = 'A' AND Code2 = 'AA' THEN "A" | "A
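To make the join arithmetic in question 01 concrete, here is a minimal, self-contained sketch you can run in most SQL engines. The CREATE and INSERT statements are assumptions added only to reproduce the tables from the excerpt:

-- Hypothetical setup reproducing Table1 and Table2 from the excerpt
CREATE TABLE Table1 (id INT);
CREATE TABLE Table2 (id INT);

INSERT INTO Table1 (id) VALUES (1), (1), (2), (3);
INSERT INTO Table2 (id) VALUES (1), (3), (1), (NULL);

-- Inner join: each 1 in Table1 pairs with each 1 in Table2 (2 x 2 = 4 rows),
-- plus one match on 3 -> 5 rows in total. The NULL in Table2 never matches.
SELECT t1.id, t2.id
FROM Table1 t1
INNER JOIN Table2 t2 ON t1.id = t2.id;

-- Left join: the 5 matched rows plus the unmatched 2 from Table1 -> 6 rows.
SELECT t1.id, t2.id
FROM Table1 t1
LEFT JOIN Table2 t2 ON t1.id = t2.id;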
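The excerpt for question 03 is truncated inside the CASE statement. As a minimal sketch of how a complete query of this shape typically looks (the table name Codes, the THEN branches, the CONCAT call, and the ELSE value are all assumptions, since the original is cut off):

-- Hypothetical completion: Code1 and Code2 come from the excerpt; the
-- table name and branch logic are assumed for illustration only.
SELECT Code1, Code2,
       CASE
           WHEN Code1 = 'A' AND Code2 = 'AA' THEN CONCAT(Code1, Code2)
           WHEN Code1 = 'B' AND Code2 = 'BB' THEN CONCAT(Code1, Code2)
           ELSE 'Other'
       END AS combined_code
FROM Codes;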

How Hadoop is Better for Legacy data

Here is an interview question on legacy data. As you know, a lot of data still lives on legacy systems, and you can use Hadoop to process that data for useful insights.

1. How should we be thinking about migrating data from legacy systems?

Treat legacy data as you would any other complex data type. HDFS acts as an active archive, enabling you to cost-effectively store data in any form for as long as you like and to access it whenever you wish to explore it. And with the latest generation of data wrangling and ETL tools, you can transform, enrich, and blend that legacy data with other, newer data types to gain a unique perspective on what’s happening across your business. (A hedged HiveQL sketch of this pattern follows this excerpt.)

2. What are your thoughts on getting combined insights from the existing data warehouse and Hadoop?

Typically, one of the starter use cases for moving relational data off a warehouse and into Hadoop is active archiving. This is the opportunity to take data that might have otherwise gone to the archive and keep it available
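The post describes HDFS as an active archive but shows no code. As one hedged illustration of the idea (the table name, schema, and HDFS path are all assumptions), a Hive external table can expose archived warehouse extracts for plain SQL queries without moving the files:

-- Hypothetical: warehouse extracts are archived to HDFS as delimited files,
-- and a Hive external table makes them queryable in place.
CREATE EXTERNAL TABLE archived_orders (
    order_id   INT,
    customer   STRING,
    amount     DECIMAL(10,2),
    order_date DATE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/archive/warehouse/orders';

-- Blend the archived history with newer data for a combined view.
SELECT order_date, SUM(amount) AS daily_total
FROM archived_orders
GROUP BY order_date;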