
Here's a Python Program for List Duplicates

Here is a program to find the item that occurs most frequently in a data structure. Why find the most frequent item? Maybe it is the most purchased item on your shopping site. Perhaps it is the web page that gets hit most often.

If you are a tester, it could easily be the test that has had the most failures over the last year. Whatever it is, you want an easy way to find the data you need, and Python is here to help you.




Here are the two simple lists:

list_1 = [1,2,3,2,3,2] 
list_2 = ['a', 'b', 'a', 'b', 'c']

  • We can't do simple math on the individual items, since the second list contains characters. For example, it could contain the words of a book, and you want to find the most commonly used word in the work. 
  • It might also be a list of UPC values for commonly purchased items. Whatever the data is, all we can guarantee is that the items are comparable, in that we can compare one item to another. Yet we still need to find the frequent items; a small sketch of preparing such data follows this list.
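For instance, a book's text can be reduced to such a list of comparable items with a couple of string methods. Here is a minimal sketch (the sample sentence is made up for illustration); the resulting list of words can be fed straight into the counting program below:

text = "to be or not to be"
words = text.lower().split()
print(words)   # ['to', 'be', 'or', 'not', 'to', 'be']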


Python program

Below, you will find a brute-force program that finds the most frequent value.

list_1 = [1, 2, 3, 2, 3, 2]
list_2 = ['a', 'b', 'a', 'b', 'c']

def most_common_brute_force(l):
    # First pass: count the occurrences of each element
    dict_of_counts = {}
    for i in l:
        if i in dict_of_counts:
            dict_of_counts[i] = dict_of_counts[i] + 1
        else:
            dict_of_counts[i] = 1

    # Second pass: find the element with the highest count
    max_count = -1
    max_value = None
    for k, v in dict_of_counts.items():
        if v > max_count:
            max_count = v
            max_value = k
    return max_value

print(most_common_brute_force(list_1))
print(most_common_brute_force(list_2))

Output

2
a
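For comparison, Python's standard library offers collections.Counter, which does the same job in a couple of lines. Here is a minimal sketch using the same two lists; when counts tie (as 'a' and 'b' do in list_2), most_common returns the element encountered first, matching the brute-force version above:

from collections import Counter

list_1 = [1, 2, 3, 2, 3, 2]
list_2 = ['a', 'b', 'a', 'b', 'c']

# most_common(1) returns a list holding the single (value, count) pair
print(Counter(list_1).most_common(1)[0][0])   # 2
print(Counter(list_2).most_common(1)[0][0])   # a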
