AWS Block vs Object Storage: Top Differences

In AWS, Block and Object are the two main storage types. Below are the differences between them, because storage is a core concept in any cloud environment.

Object vs Block Storage

Why are the names different? Because they describe two different ways of storing data: as objects and as blocks.


Object Storage


  • Object means the file is stored as one single object; it is not divided into pieces.
  • In AWS (for example, Amazon S3), object storage keeps your file as-is, no matter how big it is.
  • If your file is 10 MB, it is saved as one 10 MB object.
  • What happens when you update a 30 MB file? The old object is deleted and a brand new one is created.
  • Even for a small change, you must upload the whole file again (see the sketch after this list), so updates use a lot of resources.
  • Object storage is much better for big files that change rarely.
  • AWS manages the object storage infrastructure.
  • AWS has full control over the object storage service.
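
Here is a minimal sketch of that whole-object behaviour, assuming Amazon S3 as the object store and the boto3 Python SDK. The bucket name, key, and file names are made up for illustration.

import boto3

# Hypothetical bucket and key used only for this example.
BUCKET = "my-example-bucket"
KEY = "reports/monthly.csv"

s3 = boto3.client("s3")

# Upload a 10 MB file: S3 keeps it as one single object, exactly as-is.
with open("monthly.csv", "rb") as f:
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=f)

# Even a one-character change means uploading the whole file again;
# the new PUT replaces the old object stored under the same key.
with open("monthly_updated.csv", "rb") as f:
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=f)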


Block Storage


  • Block storage divides your file into fixed-size blocks.
  • Say you have chosen a block size of 512 KB. If you upload a 10 MB file, it is divided into 20 blocks.
  • When you change a single character, only the block containing that change is rewritten; the other blocks are untouched.
  • This saves network bandwidth compared with re-uploading the whole file.
  • When changes are frequent and you update data often, block storage is the better fit (see the sketch after this list).
  • In block storage (for example, Amazon EBS), volumes are mountable.
  • AWS has no visibility into the blocks inside a block storage volume.
  • It has visibility only on the block volumes themselves.
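
Here is a minimal sketch of the block arithmetic and a block-level update in Python. The 512 KB block size, file path, and helper function are assumptions for illustration; this is not an AWS API.

BLOCK_SIZE = 512 * 1024            # assumed 512 KB blocks, as in the example above
FILE_SIZE = 10 * 1024 * 1024       # a 10 MB file
print(FILE_SIZE // BLOCK_SIZE)     # prints 20 -> the file spans 20 blocks

def update_block(path, offset, new_bytes):
    # Rewrite only the block containing `offset`, leaving other blocks untouched.
    block_index = offset // BLOCK_SIZE        # which block the change falls in
    block_start = block_index * BLOCK_SIZE    # byte position where that block begins
    with open(path, "r+b") as f:
        f.seek(block_start)
        f.write(new_bytes[:BLOCK_SIZE])       # write at most one block

# Example: change data 3 MB into a file on a mounted block volume (hypothetical path).
# update_block("/mnt/data/bigfile.bin", 3 * 1024 * 1024, b"new contents for that block")

This is why frequent, small updates cost far less on block storage: only the changed block needs to travel over the network.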
