
Showing posts with the label HDFS

Featured Post

8 Ways to Optimize AWS Glue Jobs in a Nutshell

Improving the performance of AWS Glue jobs involves several strategies that target different aspects of the ETL (Extract, Transform, Load) process. Here are some key practices.

1. Optimize Job Scripts

Partitioning: Ensure your data is properly partitioned. Partitioning divides your data into manageable chunks, allowing parallel processing and reducing the amount of data scanned.

Filtering: Apply pushdown predicates to filter data early in the ETL process, reducing the amount of data processed downstream.

Compression: Use compressed file formats (e.g., Parquet, ORC) for your data sources and sinks. These formats not only reduce storage costs but also improve I/O performance.

Optimize Transformations: Minimize the number of transformations and actions in your script. Combine transformations where possible, and use DataFrame APIs, which are optimized for performance.

2. Use Appropriate Data Formats

Parquet and ORC: These columnar formats are efficient for storage and querying, significantly…
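To make the pushdown-predicate and Parquet points concrete, here is a minimal sketch of a Glue job script in PySpark. The catalog database, table, partition keys, and S3 path are hypothetical placeholders, and it assumes the source table is partitioned by year and month.

    import sys
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Pushdown predicate: only matching partitions are even read from S3,
    # so the filter is applied before data enters the job.
    events = glue_context.create_dynamic_frame.from_catalog(
        database="sales_db",        # hypothetical catalog database
        table_name="raw_events",    # hypothetical table partitioned by year/month
        push_down_predicate="year == '2024' AND month == '06'",
    )

    # Write columnar, compressed output: Parquet cuts both storage and I/O.
    glue_context.write_dynamic_frame.from_options(
        frame=events,
        connection_type="s3",
        connection_options={"path": "s3://my-bucket/curated/events/"},  # placeholder
        format="parquet",
    )

    job.commit()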

Hadoop HDFS Comics to Understand Quickly

The HDFS file system in Hadoop stores the data supplied as input, and its fault-tolerant design protects against data loss. The real story of HDFS fault tolerance is told here in comic-book form, so you can understand it in less time.

What is HDFS in Hadoop

HDFS is optimized for high-throughput streaming reads, and this comes at the expense of random-seek performance. If an application is reading from HDFS, it should avoid (or at least minimize) seeks; sequential reads are the preferred way to access HDFS files. HDFS supports only a limited set of operations on files: writes, deletes, appends, and reads, but not updates. It assumes that data will be written to HDFS once and then read multiple times. HDFS does not provide a mechanism for local caching of data; the overhead of caching is large enough that data should simply be re-read from the source, which is not a problem for applications that mostly do sequential reads of large data files…
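As a quick sketch of what "prefer sequential reads" means in client code, the snippet below streams an HDFS file front to back in large chunks using pyarrow's HDFS bindings. It assumes libhdfs is available from a local Hadoop installation; the host, port, and file path are placeholders.

    from pyarrow import fs

    # Connect to the NameNode (placeholder host and port).
    hdfs = fs.HadoopFileSystem(host="namenode", port=8020)

    CHUNK = 8 * 1024 * 1024  # large chunks favor streaming throughput over seeks

    total = 0
    # Read the file sequentially instead of seeking around in it.
    with hdfs.open_input_stream("/data/events/part-00000") as stream:
        while True:
            chunk = stream.read(CHUNK)
            if not chunk:
                break
            total += len(chunk)

    print(f"read {total} bytes sequentially")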

The Most Helpful HDFS File System Commands (2 of 4)

#Top-Selected-HDFS-file-system-commands

copyFromLocal
Works similarly to the put command, except that the source is restricted to a local file reference.
hdfs dfs -copyFromLocal <localsrc> URI
hdfs dfs -copyFromLocal input/docs/data2.txt hdfs://localhost/user/rosemary/data2.txt

copyToLocal
Works similarly to the get command, except that the destination is restricted to a local file reference.
hdfs dfs -copyToLocal [-ignorecrc] [-crc] URI <localdst>
hdfs dfs -copyToLocal data2.txt data2.copy.txt

count
Counts the number of directories, files, and bytes under the paths that match the specified file pattern.
hdfs dfs -count [-q] <paths>
hdfs dfs -count hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2

cp
Copies one or more files from a specified source to a specified destination. If you specify multiple sources, the specified destination must be a directory.
hdfs dfs -cp URI [URI …] <dest>
hdfs dfs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir

du
Displays…
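These shell commands are also easy to drive from a script. Here is a small Python sketch that shells out to hdfs dfs; the paths are placeholders, and it assumes the hdfs binary is on the PATH.

    import subprocess

    def hdfs_dfs(*args):
        """Run an 'hdfs dfs' subcommand and return its stdout."""
        result = subprocess.run(
            ["hdfs", "dfs", *args],
            check=True,           # raise CalledProcessError on failure
            capture_output=True,
            text=True,
        )
        return result.stdout

    # Copy a local file into HDFS (placeholder paths).
    hdfs_dfs("-copyFromLocal", "input/docs/data2.txt", "/user/rosemary/data2.txt")

    # Count directories, files, and bytes under a path, including quota info.
    print(hdfs_dfs("-count", "-q", "/user/rosemary"))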

The Most Helpful HDFS File System Commands (1 of 4)

#The-most-helpful-HDFS-file-system-commands

cat
hadoop fs -cat FILE [FILE ...]
Displays the file content. For reading compressed files, use the text command instead.

chgrp
hadoop fs -chgrp [-R] GROUP PATH [PATH ...]
Changes the group association for files and directories. The -R option applies the change recursively.
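To illustrate the cat-versus-text distinction, the sketch below reads a gzipped HDFS file from Python by shelling out to hadoop fs -text; the path is a placeholder.

    import subprocess

    # 'hadoop fs -text' decompresses known formats (e.g., gzip) before printing,
    # whereas 'hadoop fs -cat' would emit the raw compressed bytes.
    result = subprocess.run(
        ["hadoop", "fs", "-text", "/logs/2024/06/events.gz"],  # placeholder path
        check=True,
        capture_output=True,
        text=True,
    )
    print(result.stdout[:200])  # first few hundred decoded characters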