Posts

Showing posts with the label PySpark

Featured Post

Python map() and lambda Use Cases and Examples

In Python, map() and lambda functions are often used together for functional programming. Here are some examples to illustrate how they work.

Python map and lambda top use cases

1. Using map() with lambda
The map() function applies a given function to all items in an iterable (like a list) and returns a map object (which can be converted to a list).

Example: Doubling Numbers

numbers = [1, 2, 3, 4, 5]
doubled = list(map(lambda x: x * 2, numbers))
print(doubled)  # Output: [2, 4, 6, 8, 10]

2. Using map() to Convert Data Types

Example: Converting Strings to Integers

string_numbers = ["1", "2", "3", "4", "5"]
integers = list(map(lambda x: int(x), string_numbers))
print(integers)  # Output: [1, 2, 3, 4, 5]

3. Using map() with Multiple Iterables
You can also use map() with more than one iterable. The lambda function can take multiple arguments.

Example: Adding Two Lists Element-wise

list1 = [1, 2, 3]
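The excerpt breaks off at the start of the element-wise addition example. As a minimal sketch of how that third use case typically looks, assuming a second list named list2 (not shown in the excerpt):

list1 = [1, 2, 3]
list2 = [10, 20, 30]
# map() pulls one element from each iterable per call; the lambda adds them pairwise
sums = list(map(lambda x, y: x + y, list1, list2))
print(sums)  # Output: [11, 22, 33]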

AWS CLI and PySpark: A Beginner's Comprehensive Guide

AWS (Amazon Web Services) and PySpark are separate technologies, but they can be used together for certain purposes. Let me provide you with a beginner's guide for both AWS and PySpark separately.

AWS (Amazon Web Services):
Amazon Web Services (AWS) is a cloud computing platform that offers a wide range of services for computing power, storage, databases, machine learning, analytics, and more.

1. Create an AWS Account: Go to the AWS homepage, click on "Create an AWS Account", and follow the instructions.
2. Set Up AWS CLI: Install the AWS Command Line Interface (AWS CLI) on your local machine and configure it with your AWS credentials using the aws configure command.
3. Explore AWS Services: AWS provides a variety of services. Familiarize yourself with core services like EC2 (Elastic Compute Cloud), S3 (Simple Storage Service), and IAM (Identity and Access Management).

PySpark:
PySpark is the Python API for Apache Spark, a fast and general-purpose cluster computing system. It allows you
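As a hedged illustration of how the two can be used together, here is a minimal PySpark sketch that reads a CSV file stored in S3. The bucket name and object key are hypothetical, and it assumes the cluster has the S3 connector (hadoop-aws) available and AWS credentials already set up, for example via aws configure:

from pyspark.sql import SparkSession

# Start a Spark session (assumes PySpark is installed)
spark = SparkSession.builder \
    .appName("AWS PySpark Example") \
    .getOrCreate()

# Hypothetical bucket and key; replace with your own S3 path
df = spark.read.csv("s3a://my-example-bucket/data/Test1.csv",
                    header=True, inferSchema=True)

# Preview the first few rows
df.show(5)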

How to Handle Spaces in PySpark DataFrame Columns

In PySpark, you can run SQL queries by loading your CSV file data into a DataFrame. However, you might face problems when dealing with spaces in the DataFrame's column names. Fortunately, there is a solution available to resolve this issue.

Reading a CSV file into a DataFrame
Here is the PySpark code for reading a CSV file into a DataFrame and querying it through a temporary view.

from pyspark.sql import SparkSession

# Initiate a Spark session
spark = SparkSession.builder \
    .appName("PySpark Tutorial") \
    .getOrCreate()

# Read the CSV file into the df DataFrame
data_path = '/content/Test1.csv'
df = spark.read.csv(data_path, header=True, inferSchema=True)

# Create a temporary view for the DataFrame
df.createOrReplaceTempView("temp_table")

# Read data from the temporary view
spark.sql("select * from temp_table").show()

Output

+----------+-----+---------------+---------------+
|Student ID| Year|Semester1 Marks|Semester2 Marks|
+----------+-----+---------------+---------------+
|       si1|year1|          62.08|           62.4|
|       si1|year2|          75.94|          76.75|
|       si
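The excerpt ends before the post's own fix, so here is a minimal sketch of two common ways to handle a column name with a space, assuming the sample data above with a column named "Student ID": quote the name with backticks in the SQL query, or rename the column before querying.

# Option 1: wrap the column name in backticks inside the SQL query
spark.sql("select `Student ID`, `Semester1 Marks` from temp_table").show()

# Option 2: rename the column to remove the space, then query it normally
df_renamed = df.withColumnRenamed("Student ID", "Student_ID")
df_renamed.createOrReplaceTempView("temp_table2")
spark.sql("select Student_ID from temp_table2").show()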