Posts

Showing posts with the label List duplicates

Featured Post

8 Ways to Optimize AWS Glue Jobs in a Nutshell

Improving the performance of AWS Glue jobs involves several strategies that target different aspects of the ETL (Extract, Transform, Load) process. Here are some key practices.

1. Optimize Job Scripts

Partitioning: Ensure your data is properly partitioned. Partitioning divides your data into manageable chunks, allowing parallel processing and reducing the amount of data scanned.

Filtering: Apply pushdown predicates to filter data early in the ETL process, reducing the amount of data processed downstream.

Compression: Use compressed file formats (e.g., Parquet, ORC) for your data sources and sinks. These formats not only reduce storage costs but also improve I/O performance.

Optimize Transformations: Minimize the number of transformations and actions in your script. Combine transformations where possible, and use DataFrame APIs, which are optimized for performance.

2. Use Appropriate Data Formats

Parquet and ORC: These columnar formats are efficient for storage and querying, signif…
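To make the partitioning and filtering points concrete, here is a minimal PySpark sketch of a Glue job script that combines a pushdown predicate at read time with compressed, partitioned Parquet output. The database name "analytics", table name "sales", partition columns, and S3 path are illustrative assumptions, not values taken from the post.

from pyspark.context import SparkContext
from awsglue.context import GlueContext

sc = SparkContext.getOrCreate()
glue_context = GlueContext(sc)

# Filter at read time so only the matching partitions are scanned.
source = glue_context.create_dynamic_frame.from_catalog(
    database="analytics",      # hypothetical database name
    table_name="sales",        # hypothetical table name
    push_down_predicate="year = '2024' and month = '06'",
)

# Write compressed, columnar output, partitioned by year and month.
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={
        "path": "s3://example-bucket/sales-optimized/",  # hypothetical path
        "partitionKeys": ["year", "month"],
    },
    format="parquet",
)

Because the predicate is pushed down to the catalog partitions, Glue never reads the excluded data, and the partitioned Parquet output keeps downstream queries cheap.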

Python Delete Duplicates in List Faster Way

Removing duplicates from a list is simple with Python's built-in set type: all you need is set() and print(). Deduplicating data is a common task in data science projects.

What is a list

A list is a collection of elements, which may or may not contain duplicates. Today's task is to remove the duplicate elements from a list.

Faster way to remove list duplicates

1. Create a list
2. Apply set()
3. Print the result

List with duplicates

my_list = ['The', 'unanimous', 'Declaration', 'of', 'the', 'thirteen', 'united', 'States', 'of', 'America,', 'When', 'in', 'the', 'Course', 'of', 'human']

Apply the set method

>>> non_dupes = set(my_list)

Print the final result

>>> print(non_dupes)

If you look at the output, there are no duplicates left: set() keeps only the unique values (note that the result is an unordered set, not a list). Here 'the' is a duplicate value. That…
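For completeness, here is a small runnable sketch of the same idea. It also shows dict.fromkeys(), which removes duplicates while preserving the original order; the variable names are illustrative and not part of the original post.

# Minimal, runnable sketch of the set-based deduplication described above.
my_list = ['The', 'unanimous', 'Declaration', 'of', 'the', 'thirteen',
           'united', 'States', 'of', 'America,', 'When', 'in', 'the',
           'Course', 'of', 'human']

# set() drops duplicates but does not preserve order.
non_dupes = set(my_list)
print(non_dupes)

# If the original order matters, dict.fromkeys() keeps the first occurrence
# of each element and can be converted back to a list.
ordered_non_dupes = list(dict.fromkeys(my_list))
print(ordered_non_dupes)

The set-based version is the shortest, while the dict.fromkeys() variant is a handy alternative when the order of the remaining elements needs to match the original list.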