How to Check Column Nulls and Replace Them: Pandas

Here is a post that shows how to count null values and replace them with the value you want in a Pandas DataFrame. We have explained the process in two steps: counting and replacing the null values.

## Count null values (column-wise) in Pandas

```
# count null values column-wise
null_counts = df.isnull().sum()
print(null_counts)
```

Output:

```
Column1    1
Column2    1
Column3    5
dtype: int64
```

In the above code, `df` is a sample Pandas DataFrame with some null values. The `isnull()` function creates a DataFrame of the same shape as `df`, where each element is a boolean value indicating whether that element is null or not. The `sum()` function then counts the number of null values in each column of the resulting DataFrame. The output shows the count of null values column-wise.

## Code snippet to count null values column-wise:

```
df.isnull().sum()
```

## Code snippet to count null values row-wise:

```
df.isnull().sum(axis=1)
```

In the above code, `df` is the Pandas DataFrame; passing `axis=1` makes `sum()` count the null values along each row instead of down each column.
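
The second step, replacing the nulls, follows from the counts above. Here is a minimal, self-contained sketch using `fillna()`; the DataFrame contents are hypothetical sample data, chosen only so the column-wise counts match the output shown earlier:

```
import pandas as pd
import numpy as np

# hypothetical sample data matching the counts above (1, 1, and 5 nulls)
df = pd.DataFrame({
    "Column1": [1, np.nan, 3, 4, 5],
    "Column2": [1, 2, np.nan, 4, 5],
    "Column3": [np.nan] * 5,
})

# replace every null with a single fixed value
df_filled = df.fillna(0)

# or pass a dict to use a different replacement per column,
# e.g. the column mean for Column2 (mean() skips NaNs)
df_filled = df.fillna({"Column1": 0, "Column2": df["Column2"].mean(), "Column3": 0})

print(df_filled.isnull().sum())  # every count is now 0
```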

Apache Hive Top Features

Apache Hive facilitates the analysis of large datasets stored in Hadoop's HDFS and compatible file systems such as the Amazon S3 filesystem.


It provides an SQL-like language called HiveQL while maintaining full support for map/reduce. To accelerate queries, it provides indexes, including bitmap indexes.

By default, Hive stores metadata in an embedded Apache Derby database; other client/server databases such as MySQL can optionally be used.

Currently, four file formats are supported in Hive: TEXTFILE, SEQUENCEFILE, ORC, and RCFILE.

Other features of Hive include:
  • Indexing to provide acceleration; index types include compaction and bitmap index as of 0.10, with more index types planned.
  • Different storage types such as plain text, RCFile, HBase, ORC, and others.
  • Metadata storage in an RDBMS, significantly reducing the time to perform semantic checks during query execution.
  • Operating on compressed data stored in the Hadoop ecosystem, with algorithms including gzip, bzip2, snappy, etc.
  • Built-in user-defined functions (UDFs) to manipulate dates, strings, and other data-mining tools. Hive supports extending the UDF set to handle use cases not covered by the built-in functions.
  • SQL-like queries (HiveQL), which are implicitly converted into MapReduce jobs (see the sketch below).
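
As a minimal sketch of those last points: assuming the PyHive client library and a HiveServer2 instance at a hypothetical host and port, HiveQL statements can be issued from Python like this (the table name and columns are made up for illustration):

```
# a sketch, not a definitive setup: assumes `pip install pyhive` and a
# HiveServer2 instance reachable at the hypothetical localhost:10000
from pyhive import hive

conn = hive.Connection(host="localhost", port=10000, username="hive", database="default")
cursor = conn.cursor()

# HiveQL DDL: create a table stored as ORC, one of the four supported
# file formats mentioned above (table and columns are hypothetical)
cursor.execute("""
    CREATE TABLE IF NOT EXISTS page_views (
        user_id STRING,
        url STRING,
        view_time TIMESTAMP
    )
    STORED AS ORC
""")

# a SQL-like HiveQL query; Hive converts it into MapReduce jobs
cursor.execute("SELECT url, COUNT(*) AS views FROM page_views GROUP BY url")
for url, views in cursor.fetchall():
    print(url, views)
```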
