
Hadoop: How to find which file is healthy

Hadoop provides a file system health check utility called "fsck". It checks the health of all the files under a given path; run it against '/' (the root) to check the entire file system.
  • bin/hadoop fsck / - checks the health of all files in the file system
  • bin/hadoop fsck /test/ - checks the health of the files under the given path
By default, the fsck utility does not repair under-replicated or over-replicated blocks; Hadoop heals those blocks itself.
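On newer Hadoop releases the same utility is also exposed through the hdfs command, so the calls above can be written as, for example:

    hdfs fsck /    # same health check across the whole file system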

How to find which file is healthy

  • It prints a dot for each healthy file.
  • For each file that is not healthy it prints a message, and it also reports under-replicated, over-replicated, mis-replicated, and corrupted blocks.
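For example (the path below is just a placeholder):

    bin/hadoop fsck /test/
    # one '.' is printed per healthy file; an unhealthy file gets a
    # descriptive message instead, and the run ends with a summary
    # and an overall status for the path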

How to delete corrupted blocks

  • bin/hadoop fsck <path> -delete
  • It deletes the corrupted files found under the path.
  • bin/hadoop fsck <path> -move
  • It moves the corrupted files to the /lost+found directory.
  • Other options we can use with fsck (see the sketch after this list):
    • -files — prints a report for every file being checked
    • -blocks — prints the block report for each file
    • -locations — prints the datanode locations for every block
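A minimal end-to-end sketch, assuming HDFS is running and using placeholder paths (the corrupt-file listing flag -list-corruptfileblocks is part of standard fsck):

    # list the files that currently have corrupt or missing blocks
    bin/hadoop fsck / -list-corruptfileblocks

    # salvage what is readable: move the corrupted files to /lost+found
    bin/hadoop fsck / -move

    # or remove the corrupted files outright
    bin/hadoop fsck / -delete

    # detailed inspection of one subtree: per-file report, block IDs,
    # and the datanodes holding each block
    bin/hadoop fsck /test/ -files -blocks -locations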
