Posts

Featured Post

SQL Interview Success: Unlocking the Top 5 Frequently Asked Queries

Here are the top five commonly asked SQL queries in interviews. You can expect these in Data Analyst or Data Engineer interviews.

Top SQL Queries for Interviews

01. Joins

A commonly asked question gives you two tables and asks how many rows each join type returns, and what the resultant looks like.

```
Table1      Table2
------      ------
id          id
1           1
1           3
2           1
3           NULL
```

Inner join: 5 rows will return. The result will be:

```
1    1
1    1
1    1
1    1
3    3
```

02. Substring and Concat

Here, we need to write an SQL query that upper-cases the first letter of each name and lower-cases the remaining letters.

```
Table1
------
ename
-----
raJu
venKat
kRIshna
```

Solution:

```sql
SELECT CONCAT(UPPER(SUBSTRING(ename, 1, 1)), LOWER(SUBSTRING(ename, 2))) AS capitalized_name
FROM Table1;
```

03. Case statement

SQL Query:

```sql
SELECT Code1, Code2,
    CASE
        WHEN Code1 = 'A' AND Code2 = 'AA' THEN "A" | "A
```
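A quick way to try query 02 is with Python's built-in sqlite3 module. Note that SQLite has no CONCAT or SUBSTRING; the sketch below substitutes the `||` operator and `substr()`, which are equivalent here. The in-memory table is a stand-in for Table1:

```python
import sqlite3

# Build a throwaway in-memory table mirroring Table1 from the post.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Table1 (ename TEXT)")
conn.executemany("INSERT INTO Table1 VALUES (?)",
                 [("raJu",), ("venKat",), ("kRIshna",)])

# SQLite spelling of the CONCAT/SUBSTRING solution: || and substr().
rows = conn.execute(
    "SELECT upper(substr(ename, 1, 1)) || lower(substr(ename, 2)) "
    "AS capitalized_name FROM Table1"
).fetchall()
print([r[0] for r in rows])  # ['Raju', 'Venkat', 'Krishna']
```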

SQL Query: 3 Methods for Calculating Cumulative SUM

SQL provides various constructs for calculating cumulative sums, offering flexibility and efficiency in data analysis. In this article, we explore three distinct SQL queries that compute cumulative sums. Each query leverages different SQL constructs, catering to diverse analytical needs and preferences.

Using Window Functions (e.g., PostgreSQL, SQL Server, Oracle):

```sql
SELECT id, value,
       SUM(value) OVER (ORDER BY id) AS cumulative_sum
FROM your_table;
```

This query uses the SUM() window function with the OVER clause to calculate the cumulative sum of the value column ordered by the id column.

Using Subqueries (e.g., MySQL, SQLite):

```sql
SELECT t1.id, t1.value, SUM(t2.value) AS cumulative_sum
FROM your_table t1
JOIN your_table t2 ON t1.id >= t2.id
GROUP BY t1.id, t1.value
ORDER BY t1.id;
```

This query uses a self-join to calculate the cumulative sum. It joins the table with itself, matching rows where the id in the first table is greater than or
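The window-function form can be checked end to end with Python's built-in sqlite3 module (SQLite has supported window functions since version 3.25, which ships with modern Python builds). The table name and sample values below are made up for illustration:

```python
import sqlite3

# In-memory stand-in for your_table with three sample rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE your_table (id INTEGER, value INTEGER)")
conn.executemany("INSERT INTO your_table VALUES (?, ?)",
                 [(1, 10), (2, 20), (3, 30)])

# Running total of value, ordered by id.
rows = conn.execute(
    "SELECT id, value, SUM(value) OVER (ORDER BY id) AS cumulative_sum "
    "FROM your_table"
).fetchall()
print(rows)  # [(1, 10, 10), (2, 20, 30), (3, 30, 60)]
```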

AWS CLI and PySpark: A Beginner's Comprehensive Guide

AWS (Amazon Web Services) and PySpark are separate technologies, but they can be used together for certain purposes. Here is a beginner's guide to each.

AWS (Amazon Web Services)

Amazon Web Services (AWS) is a cloud computing platform that offers a wide range of services for computing power, storage, databases, machine learning, analytics, and more.

1. Create an AWS Account: Go to the AWS homepage, click "Create an AWS Account", and follow the instructions.
2. Set Up the AWS CLI: Install the AWS Command Line Interface (AWS CLI) on your local machine, then configure it with your AWS credentials using `aws configure`.
3. Explore AWS Services: AWS provides a variety of services. Familiarize yourself with core services like EC2 (Elastic Compute Cloud), S3 (Simple Storage Service), and IAM (Identity and Access Management).

PySpark

PySpark is the Python API for Apache Spark, a fast and general-purpose cluster computing system. It allows you
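As a sketch, steps 2 and 3 on a Linux machine look roughly like the following (the installer URL and flags are the ones AWS documents for the CLI v2; macOS and Windows use different installers):

```shell
# Download and install AWS CLI v2 (Linux x86_64).
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# Configure credentials interactively: prompts for access key,
# secret key, default region, and output format.
aws configure

# Sanity checks: confirm the identity in use and list S3 buckets.
aws sts get-caller-identity
aws s3 ls
```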

15 Top Data Analyst Interview Questions: Read Now

We will explore the world of data analysis using Python, covering topics such as data manipulation, visualization, machine learning, and more. Whether you are a beginner or an experienced data professional, join us on this journey as we dive into the realm of Python analytics and unlock the power of data-driven insights. Let's harness Python's versatility for extracting valuable information from datasets, and get ready to level up your data analysis skills.

Python Data Analyst Interview Questions

Q1: How do you import the pandas library in Python?
A: Use the following statement: `import pandas as pd`.

Q2: What is the difference between a Series and a DataFrame in pandas?
A: A Series in pandas is a one-dimensional labeled array, while a DataFrame is a two-dimensional labeled data structure with columns of potentially different
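The Series-versus-DataFrame distinction in Q2 is easy to see interactively; this minimal sketch (column names are invented for illustration) shows the dimensionality of each and that selecting one DataFrame column yields a Series:

```python
import pandas as pd

# A Series: one-dimensional labeled array.
s = pd.Series([10, 20, 30], name="price")
print(s.ndim)  # 1

# A DataFrame: two-dimensional labeled structure; each column is a Series.
df = pd.DataFrame({"price": [10, 20, 30], "qty": [1, 2, 3]})
print(df.ndim)             # 2
print(type(df["price"]))   # <class 'pandas.core.series.Series'>
```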

How to Deal With Missing Data: Pandas Fillna() and Dropna()

Here are the best examples of the Pandas fillna(), dropna(), and sum() methods. We have explained the process in two steps: counting and replacing the null values.

Count Nulls

```
## count null values column-wise
null_counts = df.isnull().sum()
print(null_counts)
```

Output:

```
Column1    1
Column2    1
Column3    5
dtype: int64
```

In the above code, we first create a sample Pandas DataFrame `df` with some null values. Then, we use the `isnull()` function to create a DataFrame of the same shape as `df`, where each element is a boolean value indicating whether that element is null or not. Finally, we use the `sum()` function to count the number of null values in each column of the resulting DataFrame. The output shows the count of null values column-wise.

Code snippet to count null values column-wise:

```
df.isnull().sum()
```

Code snippet to count null values row-wise:

```
df.isnull().sum(axis=1)
```

In the above code, `df` is the Pandas DataFrame for which you want to count the null values. The `isnu
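Putting the counting step together with the replacing step, here is a small self-contained sketch; the DataFrame and its column names are made up for illustration:

```python
import pandas as pd
import numpy as np

# Hypothetical DataFrame with some nulls.
df = pd.DataFrame({"Column1": [1, np.nan, 3],
                   "Column2": [np.nan, 2, 3]})

print(df.isnull().sum())        # column-wise null counts: 1 and 1
print(df.isnull().sum(axis=1))  # row-wise null counts: 1, 1, 0

filled = df.fillna(0)   # replace every null with 0
dropped = df.dropna()   # drop any row containing a null
print(len(dropped))     # 1 (only the last row is complete)
```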

How to Effectively Parse and Read Different Files in Python

Here is Python logic that shows how to parse and read different file formats in Python. The formats are XML, JSON, CSV, Excel, text, PDF, Zip files, images, SQLite, and YAML.

Python Reading Files

```python
import pandas as pd
import json
import xml.etree.ElementTree as ET
from PIL import Image
import pytesseract
import PyPDF2
from zipfile import ZipFile
import sqlite3
import yaml
```

Reading Text Files

```python
# Read text file (.txt)
def read_text_file(file_path):
    with open(file_path, 'r') as file:
        text = file.read()
    return text
```

Reading CSV Files

```python
# Read CSV file (.csv)
def read_csv_file(file_path):
    df = pd.read_csv(file_path)
    return df
```

Reading JSON Files

```python
# Read JSON file (.json)
def read_json_file(file_path):
    with open(file_path, 'r') as file:
        json_data = json.load(file)
    return json_data
```

Reading Excel Files

```python
# Read Excel file (.xlsx, .xls)
def read_excel_file(file_path):
    df = pd.read_excel(file_path)
    return df
```

Reading PDF Files

```python
# Read PDF file (.pdf)
def rea
```
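To see the text and JSON readers in action without touching real files, here is a small usage sketch that writes sample files into a temporary directory and reads them back (the sample file names and contents are invented):

```python
import json
import os
import tempfile

def read_text_file(file_path):
    with open(file_path, 'r') as file:
        return file.read()

def read_json_file(file_path):
    with open(file_path, 'r') as file:
        return json.load(file)

with tempfile.TemporaryDirectory() as tmp:
    # Write then read back a .txt file.
    txt_path = os.path.join(tmp, "sample.txt")
    with open(txt_path, "w") as f:
        f.write("hello")
    text = read_text_file(txt_path)
    print(text)  # hello

    # Write then read back a .json file.
    json_path = os.path.join(tmp, "sample.json")
    with open(json_path, "w") as f:
        json.dump({"a": 1}, f)
    data = read_json_file(json_path)
    print(data)  # {'a': 1}
```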

A Beginner's Guide to Pandas Project for Immediate Practice

Pandas is a powerful data manipulation and analysis library in Python that provides a wide range of functions and tools to work with structured data. Whether you are a data scientist, analyst, or just a curious learner, Pandas can help you efficiently handle and analyze data.

In this blog post, we will walk through a step-by-step guide on how to start a Pandas project from scratch. By following these steps, you will be able to import data, explore and manipulate it, perform calculations and transformations, and save the results for further analysis. So let's dive into the world of Pandas and get started with your own project!

Simple Pandas Project

Import the necessary libraries:

```python
import pandas as pd
import numpy as np
```

Read data from a file into a Pandas DataFrame:

```python
df = pd.read_csv('/path/to/file.csv')
```

Explore and manipulate the data. View the first few rows of the DataFrame:

```python
print(df.head())
```

Access specific columns or rows in the DataFrame:

```python
print(df['column_name'])
```
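The steps above can be run end to end without a CSV on disk by constructing the DataFrame in memory; the `product`/`sales` columns and values below are invented stand-ins for `/path/to/file.csv`:

```python
import pandas as pd

# In-memory stand-in for data read via pd.read_csv.
df = pd.DataFrame({
    "product": ["A", "B", "A", "B"],
    "sales": [100, 200, 150, 250],
})

print(df.head())          # first few rows
print(df["sales"].sum())  # 700

# A simple transformation: total sales per product.
summary = df.groupby("product")["sales"].sum().reset_index()
print(summary)  # A -> 250, B -> 450
```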