
10 Tricky Apache Storm Interview Questions

Apache Storm is a real-time computation system and a flagship project of the Apache Software Foundation. It can process unbounded streams of data and integrates easily with traditional databases. The tricky and highly useful interview questions in this post are given for your quick reference. A commonly cited benchmark for Storm is over a million tuples processed per second per node.

Tricky Interview Questions

1) What are the real uses of Storm?

A) You can use Storm for real-time analytics, online machine learning, continuous computation, distributed RPC, and ETL.

2) What are the different layers available on top of Storm?
  • Flux
  • SQL
  • Streams API (see the sketch below)
  • Trident
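
For the Streams API layer, here is a minimal word-count sketch, assuming Storm 2.x; RandomSentenceSpout is the sample spout shipped with the storm-starter examples module, and the stream and topology names are illustrative.

import java.util.Arrays;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.starter.spout.RandomSentenceSpout;
import org.apache.storm.streams.Pair;
import org.apache.storm.streams.StreamBuilder;
import org.apache.storm.streams.operations.mappers.ValueMapper;

public class StreamsApiWordCount {
    public static void main(String[] args) throws Exception {
        StreamBuilder builder = new StreamBuilder();

        // RandomSentenceSpout emits one sentence per tuple; ValueMapper extracts field 0 as a String.
        builder.newStream(new RandomSentenceSpout(), new ValueMapper<String>(0))
               .flatMap(sentence -> Arrays.asList(sentence.split(" "))) // split sentences into words
               .mapToPair(word -> Pair.of(word, 1))                     // (word, 1) pairs
               .countByKey()                                            // running count per word
               .print();                                                // log the counts

        // Run in Local Mode for a short while (see question 6 below).
        try (LocalCluster cluster = new LocalCluster()) {
            cluster.submitTopology("streams-word-count", new Config(), builder.build());
            Thread.sleep(20_000);
        }
    }
}
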
3) What is the real use of the SQL API on top of Storm?

A) You can run SQL queries directly on streaming data.

4) What are the most popular integrations with Storm?
  1. HDFS (see the sketch below)
  2. Cassandra
  3. JDBC
  4. Hive
  5. HBase
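
As a sketch of the HDFS integration above, the storm-hdfs module provides an HdfsBolt that streams tuples into HDFS files; the NameNode URL, output path, and upstream component name below are placeholders.

import org.apache.storm.hdfs.bolt.HdfsBolt;
import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
import org.apache.storm.hdfs.bolt.format.DelimitedRecordFormat;
import org.apache.storm.hdfs.bolt.format.FileNameFormat;
import org.apache.storm.hdfs.bolt.format.RecordFormat;
import org.apache.storm.hdfs.bolt.rotation.FileRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy.Units;
import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;
import org.apache.storm.hdfs.bolt.sync.SyncPolicy;
import org.apache.storm.topology.TopologyBuilder;

public class HdfsIntegrationSketch {
    public static void main(String[] args) {
        // Sync to HDFS after every 1000 tuples and start a new file once 5 MB is written.
        SyncPolicy syncPolicy = new CountSyncPolicy(1000);
        FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(5.0f, Units.MB);

        RecordFormat format = new DelimitedRecordFormat().withFieldDelimiter("|");
        FileNameFormat fileNameFormat = new DefaultFileNameFormat().withPath("/storm/output/");

        HdfsBolt hdfsBolt = new HdfsBolt()
                .withFsUrl("hdfs://namenode:8020")   // placeholder NameNode URL
                .withFileNameFormat(fileNameFormat)
                .withRecordFormat(format)
                .withRotationPolicy(rotationPolicy)
                .withSyncPolicy(syncPolicy);

        TopologyBuilder builder = new TopologyBuilder();
        // "events" is a placeholder for whatever spout or bolt feeds the HDFS writer.
        builder.setBolt("hdfs-writer", hdfsBolt).shuffleGrouping("events");
    }
}
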
5) What are the possible container and cluster-manager integrations with Storm?
  1. YARN
  2. Docker
  3. Mesos
6) What is Local Mode?

A) Local Mode means running a topology entirely within a single local process (JVM) instead of on a cluster; it is mainly used for development and testing.
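
A minimal Local Mode sketch, assuming Storm 2.x (where LocalCluster is AutoCloseable); the spout and bolt classes are placeholders for your own components.

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.topology.TopologyBuilder;

public class LocalModeSketch {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("sentences", new SentenceSpout());                         // placeholder spout
        builder.setBolt("printer", new PrinterBolt()).shuffleGrouping("sentences"); // placeholder bolt

        Config conf = new Config();
        conf.setDebug(true); // log every emitted tuple, which is handy in Local Mode

        // Local Mode: the whole topology runs inside this single JVM, no cluster required.
        try (LocalCluster cluster = new LocalCluster()) {
            cluster.submitTopology("local-demo", conf, builder.createTopology());
            Thread.sleep(10_000);          // let the topology run for a while
            cluster.killTopology("local-demo");
        }
    }
}

For production you would submit the same topology with StormSubmitter.submitTopology instead.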

7) Where are all the events stored in Storm?
A) The Event Logger mechanism saves all events. When event logging is enabled for a topology, the logged tuples can be inspected from the Storm UI.
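
A small sketch of allocating one event-logger executor for a topology, which turns on event logging; this assumes the standard Config key and that your Storm version exposes the event inspector in the UI.

import org.apache.storm.Config;

Config conf = new Config();
// One event-logger executor enables the topology event inspector / debug view in the Storm UI.
conf.put(Config.TOPOLOGY_EVENTLOGGER_EXECUTORS, 1);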

8) What are the serializable data types in Storm?
A) Out of the box, Storm can serialize primitive types, strings, byte arrays, ArrayList, HashMap, and HashSet; other types must be registered with Kryo, Storm's underlying serialization library.
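
A short sketch of registering a custom type with Kryo through the topology Config; TradeEvent and TradeEventKryoSerializer are hypothetical classes used only for illustration.

import org.apache.storm.Config;

public class SerializationConfigSketch {

    static Config buildConfig() {
        Config conf = new Config();

        // Primitives, strings, byte arrays, ArrayList, HashMap and HashSet work out of the box.
        // Custom tuple field types should be registered so Kryo can handle them efficiently.
        conf.registerSerialization(TradeEvent.class); // hypothetical domain class

        // Optionally pair the class with a custom Kryo serializer:
        // conf.registerSerialization(TradeEvent.class, TradeEventKryoSerializer.class);

        // Fail fast instead of silently falling back to slow Java serialization:
        conf.setFallBackOnJavaSerialization(false);
        return conf;
    }
}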

9) What are Hooks in Storm?
A) Hooks let you insert custom code into Storm that runs automatically on task-level events, such as when tuples are emitted, acked, or failed.
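
A sketch of a custom task hook, assuming the org.apache.storm.hooks API; it simply logs emits and acks. It can be enabled for every task by adding its class name to the topology.auto.task.hooks setting (Config.TOPOLOGY_AUTO_TASK_HOOKS), or per component via TopologyContext.addTaskHook.

import org.apache.storm.hooks.BaseTaskHook;
import org.apache.storm.hooks.info.BoltAckInfo;
import org.apache.storm.hooks.info.EmitInfo;

// Runs inside every task it is attached to; Storm invokes these callbacks automatically.
public class LoggingTaskHook extends BaseTaskHook {

    @Override
    public void emit(EmitInfo info) {
        System.out.println("emitted on stream " + info.stream + ": " + info.values);
    }

    @Override
    public void boltAck(BoltAckInfo info) {
        System.out.println("tuple acked after " + info.processLatencyMs + " ms");
    }
}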

10) What is joining of streams?
A) You can join streams coming from different sources on a particular join condition (typically a common field), evaluated over a window.
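
A sketch using the built-in JoinBolt, assuming two upstream components named "purchases" and "ads" that both emit a "userId" field; the selected fields and window length are illustrative.

import org.apache.storm.bolt.JoinBolt;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseWindowedBolt;
import org.apache.storm.tuple.Fields;

public class StreamJoinSketch {
    public static void main(String[] args) {
        // Inner join of the "purchases" and "ads" streams on the common userId field,
        // evaluated once per 10-second tumbling window.
        JoinBolt joiner = new JoinBolt("purchases", "userId")
                .join("ads", "userId", "purchases")
                .select("userId,product,adId")
                .withTumblingWindow(BaseWindowedBolt.Duration.seconds(10));

        TopologyBuilder builder = new TopologyBuilder();
        // Both sides are fields-grouped on the join key so matching tuples land in the same task.
        builder.setBolt("join", joiner)
               .fieldsGrouping("purchases", new Fields("userId"))
               .fieldsGrouping("ads", new Fields("userId"));
    }
}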

