8 Ways to Optimize AWS Glue Jobs in a Nutshell

Improving the performance of AWS Glue jobs involves several strategies that target different aspects of the ETL (Extract, Transform, Load) process. Here are some key practices.

1. Optimize Job Scripts

Partitioning: Ensure your data is properly partitioned. Partitioning divides your data into manageable chunks, allowing parallel processing and reducing the amount of data scanned.

Filtering: Apply pushdown predicates to filter data early in the ETL process, reducing the amount of data processed downstream (see the sketch after this list).

Compression: Use compressed file formats (e.g., Parquet, ORC) for your data sources and sinks. These formats not only reduce storage costs but also improve I/O performance.

Optimize Transformations: Minimize the number of transformations and actions in your script. Combine transformations where possible and use DataFrame APIs, which are optimized for performance.

2. Use Appropriate Data Formats

Parquet and ORC: These columnar formats are efficient for storage and querying, significantly reducing the amount of data scanned.
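The first two practices lend themselves to a short example. Below is a minimal sketch, assuming a hypothetical sales_db catalog database, an orders table partitioned by year and month, and a hypothetical S3 output path: the push_down_predicate prunes partitions at read time, transformations use the DataFrame API, and the result is written as partitioned, Snappy-compressed Parquet.

```python
# Minimal sketch of pushdown predicates and partitioned Parquet output.
# Database, table, and bucket names below are hypothetical.
import sys
from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())

# Pushdown predicate: only the matching partitions are listed and read,
# instead of scanning the whole table and filtering afterwards.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db",       # hypothetical catalog database
    table_name="orders",       # hypothetical catalog table
    push_down_predicate="year == '2024' and month == '06'",
)

# Use the DataFrame API for transformations; it benefits from Spark's
# Catalyst optimizer.
df = dyf.toDF().filter("order_status = 'COMPLETED'")

# Write compressed, partitioned Parquet so downstream jobs can prune
# partitions and read only the columns they need.
(df.write
   .mode("overwrite")
   .partitionBy("year", "month")
   .option("compression", "snappy")
   .parquet("s3://my-bucket/curated/orders/"))  # hypothetical output path
```

Note that the predicate references partition columns (here, year and month); filters on non-partition columns cannot prune at the listing stage and are better applied as early DataFrame filters.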

How Hadoop Is Better for Legacy Data

Here are two interview questions on legacy data. A lot of data still lives on legacy systems, and you can use Hadoop to process that data for useful insights.

1. How should we be thinking about migrating data from legacy systems?

Treat legacy data as you would any other complex data type. HDFS acts as an active archive, enabling you to cost-effectively store data in any form for as long as you like and access it when you wish to explore it. And with the latest generation of data wrangling and ETL tools, you can transform, enrich, and blend that legacy data with other, newer data types to gain a unique perspective on what’s happening across your business (a short sketch follows below).

2. What are your thoughts on getting combined insights from the existing data warehouse and Hadoop?

Typically, one of the starter use cases for moving relational data off a warehouse and into Hadoop is active archiving. This is the opportunity to take data that might otherwise have gone to the archive and keep it available.
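To make the idea of blending archived legacy data with newer data concrete, here is a minimal PySpark sketch. The HDFS paths and column names (customer_id, legacy_segment) are hypothetical: a legacy CSV extract parked on HDFS is joined against a newer Parquet dataset so the old attributes enrich current records.

```python
# Minimal sketch of blending a legacy HDFS archive with newer data.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("legacy-blend").getOrCreate()

# Legacy extract loaded onto HDFS as-is, e.g. flat exports from an old system.
legacy = (spark.read
          .option("header", "true")
          .csv("hdfs:///archive/legacy/customers/"))   # hypothetical path

# Newer data already landed in a columnar format.
recent = spark.read.parquet("hdfs:///data/orders/")    # hypothetical path

# Enrich recent activity with attributes that exist only in the legacy archive.
blended = recent.join(legacy, on="customer_id", how="left")

blended.groupBy("legacy_segment").count().show()       # hypothetical column
```

Because the archive sits on HDFS rather than offline tape, this kind of join can run on demand, which is the practical payoff of the active-archive use case described above.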