
Spark SQL Query: How to Write It in Ten Steps

Spark SQL example
This post explains how to write a SQL query in Spark in ten steps. The example demonstrates how to use sqlContext.sql to create and load two tables and select rows from them into two DataFrames.

The next steps use the DataFrame API to filter the rows for salaries greater than 150,000 from one of the tables and show the resulting DataFrame. Then the two DataFrames are joined to create a third DataFrame. Finally, the new DataFrame is saved to a Hive table.

1. At the command line, copy the Hue sample_07 and sample_08 CSV files to HDFS:
$ hdfs dfs -put $HUE_HOME/apps/beeswax/data/sample_07.csv /user/hdfs
$ hdfs dfs -put $HUE_HOME/apps/beeswax/data/sample_08.csv /user/hdfs

where HUE_HOME defaults to /opt/cloudera/parcels/CDH/lib/hue (parcel installation) or /usr/lib/hue (package installation).

2. Start spark-shell:
$ spark-shell

3. Create Hive tables sample_07 and sample_08:

scala> sqlContext.sql("CREATE TABLE sample_07 (code string,description string,total_emp
 int,salary int) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' STORED AS TextFile")
scala> sqlContext.sql("CREATE TABLE sample_08 (code string,description string,total_emp
 int,salary int) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' STORED AS TextFile")


4. In Beeline, show the Hive tables:
0: jdbc:hive2://hostname.com:> show tables;
+------------+--+
| tab_name |
+------------+--+
| sample_07 |
| sample_08 |
+------------+--+


5. Load the data in the CSV files into the tables:
scala> sqlContext.sql("LOAD DATA INPATH '/user/hdfs/sample_07.csv' OVERWRITE INTO TABLE
 sample_07")
scala> sqlContext.sql("LOAD DATA INPATH '/user/hdfs/sample_08.csv' OVERWRITE INTO TABLE
 sample_08")

6. Create DataFrames containing the contents of the sample_07 and sample_08 tables:
scala> val df_07 = sqlContext.sql("SELECT * from sample_07")
scala> val df_08 = sqlContext.sql("SELECT * from sample_08")
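Before filtering, it can help to inspect what was read. printSchema() prints the column names and types, and show(5) displays the first five rows; both are standard DataFrame methods, and this check is optional:

scala> df_07.printSchema()
scala> df_07.show(5)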

7. Show all rows in df_07 with salary greater than 150,000:
scala> df_07.filter(df_07("salary") > 150000).show()
The output should be:
+-------+--------------------+---------+------+
| code| description|total_emp|salary|
+-------+--------------------+---------+------+
|11-1011| Chief executives| 299160|151370|
|29-1022|Oral and maxillof...| 5040|178440|
|29-1023| Orthodontists| 5350|185340|
|29-1024| Prosthodontists| 380|169360|
|29-1061| Anesthesiologists| 31030|192780|
|29-1062|Family and genera...| 113250|153640|
|29-1063| Internists, general| 46260|167270|
|29-1064|Obstetricians and...| 21340|183600|
|29-1067| Surgeons| 50260|191410|
|29-1069|Physicians and su...| 237400|155150|
+-------+--------------------+---------+------+
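The same filter can also be written as plain SQL through sqlContext; it returns an equivalent DataFrame, so which style you use is a matter of preference:

scala> sqlContext.sql("SELECT * FROM sample_07 WHERE salary > 150000").show()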

8. Create the DataFrame df_09 by joining df_07 and df_08, retaining only the code and description columns:
scala> val df_09 = df_07.join(df_08, df_07("code") === df_08("code")).select(df_07.col("code"), df_07.col("description"))
scala> df_09.show()

The new DataFrame looks like:
+-------+--------------------+
| code| description|
+-------+--------------------+
|00-0000| All Occupations|
|11-0000|Management occupa...|
|11-1011| Chief executives|
|11-1021|General and opera...|
|11-1031| Legislators|
|11-2011|Advertising and p...|
|11-2021| Marketing managers|
|11-2022| Sales managers|
|11-2031|Public relations ...|
|11-3011|Administrative se...|
|11-3021|Computer and info...|
|11-3031| Financial managers|
|11-3041|Compensation and ...|
|11-3042|Training and deve...|
|11-3049|Human resources m...|
|11-3051|Industrial produc...|
|11-3061| Purchasing managers|
|11-3071|Transportation, s...|
|11-9011|Farm, ranch, and ...|
+-------+--------------------+
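Note that === is Spark's Column equality operator; plain Scala == would compare the Column objects themselves rather than build a join condition. As an optional check, you can count the joined DataFrame to see how many codes matched in both tables:

scala> df_09.count()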

9. Save DataFrame df_09 as the Hive table sample_09:
scala> df_09.write.saveAsTable("sample_09")
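Before switching to Beeline, you can optionally verify the save from the same shell by querying the new table:

scala> sqlContext.sql("SELECT COUNT(*) FROM sample_09").show()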

10. In Beeline, show the Hive tables:
0: jdbc:hive2://hostname.com:> show tables;
+------------+--+
| tab_name |
+------------+--+
| sample_07 |
| sample_08 |
| sample_09 |
+------------+--+
