Posts


Featured Post

Best Practices for Handling Duplicate Elements in Python Lists

Here are three awesome ways to remove duplicates from a Python list. They come in handy when building data analytics solutions.

01. Using a Set

Convert the list into a set, which automatically removes duplicates because a set only holds unique elements, then convert the set back into a list.

Solution:

original_list = [2, 4, 6, 2, 8, 6, 10]
unique_list = list(set(original_list))

02. Using a Loop

Iterate through the original list and append each element to a new list only if it hasn't been added already.

Solution:

original_list = [2, 4, 6, 2, 8, 6, 10]
unique_list = []
for item in original_list:
    if item not in unique_list:
        unique_list.append(item)

03. Using List Comprehension

Build the new list with a list comprehension that appends an element only when it is not already present in the new list.

Solution:

original_list = [2, 4, 6, 2, 8, 6, 10]
unique_list = []
[unique_list.append(item) for item in original_list if item not in unique_list]

All three methods result in unique_list containing each element only once.
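One difference between the approaches is worth a quick runnable check: the set-based version does not preserve the original element order, while the loop and comprehension versions keep first-seen order. A minimal sketch (the output comments assume CPython, where a set of small integers often happens to iterate in sorted order, but that is not guaranteed):

original_list = [2, 4, 6, 2, 8, 6, 10]

# Set conversion removes duplicates but may reorder elements
print(list(set(original_list)))   # e.g. [2, 4, 6, 8, 10]; order not guaranteed

# The loop (and comprehension) variant preserves first-seen order
unique_list = []
for item in original_list:
    if item not in unique_list:
        unique_list.append(item)
print(unique_list)                # [2, 4, 6, 8, 10]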

The Best Helpful HDFS File System Commands (3 of 4)

dus - hadoop fs -dus PATH
Reports the sum of the file sizes under PATH in aggregate rather than listing each file individually.

expunge - hadoop fs -expunge
Empties the trash. If the trash feature is enabled, a deleted file is first moved into the .Trash/ folder and is permanently removed from it only after a user-configurable delay.

get - hadoop fs -get [-ignorecrc] [-crc] SRC LOCALDST
Copies files from HDFS to the local file system.
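A rough usage sketch of the three commands (the HDFS path /user/demo/logs and the local destination /tmp/part-00000 are hypothetical placeholders, not taken from the post):

# Aggregate size, in bytes, of everything under the directory
hadoop fs -dus /user/demo/logs

# Permanently remove whatever is currently sitting in the trash
hadoop fs -expunge

# Copy a file from HDFS to local disk, skipping CRC verification
hadoop fs -get -ignorecrc /user/demo/logs/part-00000 /tmp/part-00000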