The 15 Most Frequently Asked Java Interview Questions

1. What is JVM? Why is Java called the ‘Platform Independent Programming Language’?
JVM, or the Java Virtual Machine, is an interpreter that accepts ‘Bytecode’ and executes it.
Java has been termed a ‘Platform Independent Language’ as it primarily works on the notion of ‘write once, run anywhere’. Here is the sequence of steps that establishes the platform-independence feature in Java:
  • The Java compiler outputs non-executable code called ‘Bytecode’.
  • Bytecode is a highly optimized set of instructions that can be executed by the Java Virtual Machine (JVM).
  • The translation into Bytecode makes a program easy to execute across a wide range of platforms, since all we need is a JVM built for each particular platform.
  • JVMs for various platforms may differ in implementation, though they all understand the same set of Bytecode, thereby making a Java program ‘Platform Independent’ (see the sketch after this list).
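As a minimal sketch of this idea, the same compiled class runs unchanged on any platform’s JVM (the class name HelloPortability is just a placeholder for illustration):

// HelloPortability.java: compile once with "javac HelloPortability.java".
// The resulting HelloPortability.class (Bytecode) then runs on any platform
// that has a JVM: "java HelloPortability"
public class HelloPortability {
    public static void main(String[] args) {
        // os.name is a standard system property and differs per platform,
        // but the Bytecode executing this line is identical everywhere.
        System.out.println("Running on: " + System.getProperty("os.name"));
    }
}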
2. What is the Difference between JDK and JRE?
When asked typical Java interview questions, most beginner Java developers get confused between the JDK and the JRE. Eventually, they settle for ‘anything will do, as long as my program runs!’ Not quite right if you aspire to make a living and a career out of programming.
The “JDK” is the Java Development Kit, i.e., a bundle of software that you can use to develop Java-based software.
The “JRE” is the Java Runtime Environment, i.e., an implementation of the Java Virtual Machine which actually executes Java programs.
Typically, each JDK contains one (or more) JREs along with various development tools such as the Java source compiler, bundling and deployment tools, debuggers, and development libraries.
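One quick way to see the difference from code: the compiler API ships with a JDK but not with a bare JRE. A minimal sketch, using only the standard javax.tools API (the class name JdkOrJre is made up for illustration):

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class JdkOrJre {
    public static void main(String[] args) {
        // getSystemJavaCompiler() returns the compiler bundled with a JDK;
        // on a JRE-only installation there is no compiler, so it returns null.
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        System.out.println(compiler != null
                ? "JDK detected: compiler is available"
                : "JRE only: no compiler on this runtime");
    }
}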
3. What does the ‘static’ keyword mean?
We are sure you are well-acquainted with the Java basics by now. With the initial concepts settled, let’s look into the language-specific offerings.
A static variable is associated with the class itself, not with objects of that class. For example:
public class ExplainStatic {
      public static String name = "Look I am a static variable";
}
Here is another class in which we intend to access the static variable just defined:
public class Application {
        public static void main(String[] args) {
            System.out.println(ExplainStatic.name); // access via the class name, no object needed
        }
}
We don’t create an object of the class ExplainStatic to access the static variable. We directly use the class name itself: ExplainStatic.name. The sketch below shows how this class-level state is shared by every instance.
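To make the ‘one copy per class’ behaviour concrete, here is a small sketch (the class name InstanceCounter is made up for illustration) in which every new object increments a single shared counter:

public class InstanceCounter {
    // One copy, shared by every instance of the class.
    private static int count = 0;

    public InstanceCounter() {
        count++; // each new object bumps the same shared counter
    }

    public static void main(String[] args) {
        new InstanceCounter();
        new InstanceCounter();
        new InstanceCounter();
        // Prints 3: the variable belongs to the class, not to any one object.
        System.out.println("Instances created: " + InstanceCounter.count);
    }
}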
4. What are the Data Types supported by Java? What is Autoboxing and Unboxing?
This is one of the most common and fundamental Java interview questions, and something you should have right at your fingertips when asked. The eight primitive data types supported by Java are listed below (a short sketch after the list prints their ranges):
  • byte: 8-bit signed two’s complement integer. It has a minimum value of -128 and a maximum value of 127 (inclusive).
  • short: 16-bit signed two’s complement integer. It has a minimum value of -32,768 and a maximum value of 32,767 (inclusive).
  • int: 32-bit signed two’s complement integer. It has a minimum value of -2,147,483,648 and a maximum value of 2,147,483,647 (inclusive).
  • long: 64-bit signed two’s complement integer. It has a minimum value of -9,223,372,036,854,775,808 and a maximum value of 9,223,372,036,854,775,807 (inclusive).
  • float: single-precision 32-bit IEEE 754 floating point.
  • double: double-precision 64-bit IEEE 754 floating point.
  • char: a single 16-bit Unicode character, from '\u0000' (0) to '\uffff' (65,535).
  • boolean: has only two possible values, true and false.
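A quick sketch to verify these ranges using the constants that the wrapper classes expose (the class name PrimitiveRanges is just for illustration):

public class PrimitiveRanges {
    public static void main(String[] args) {
        // Each wrapper class exposes its primitive's range as constants.
        System.out.println("byte:  " + Byte.MIN_VALUE + " to " + Byte.MAX_VALUE);
        System.out.println("short: " + Short.MIN_VALUE + " to " + Short.MAX_VALUE);
        System.out.println("int:   " + Integer.MIN_VALUE + " to " + Integer.MAX_VALUE);
        System.out.println("long:  " + Long.MIN_VALUE + " to " + Long.MAX_VALUE);
        // char is unsigned; cast to int to print its numeric bounds.
        System.out.println("char:  " + (int) Character.MIN_VALUE + " to " + (int) Character.MAX_VALUE);
    }
}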
Autoboxing: The Java compiler automatically converts primitive types (int, float, double, etc.) into their object equivalents or wrapper types (Integer, Float, Double, etc.), for example when a primitive is assigned to a wrapper variable or added to a collection.
Unboxing: The automatic conversion of wrapper types back into their primitive equivalents is known as Unboxing. A minimal sketch of both conversions follows.
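Here is a small sketch of both conversions (the class and variable names are arbitrary):

import java.util.ArrayList;
import java.util.List;

public class BoxingDemo {
    public static void main(String[] args) {
        Integer boxed = 42;         // autoboxing: int -> Integer
        int unboxed = boxed;        // unboxing: Integer -> int

        // Collections store objects, so adding an int autoboxes it.
        List<Integer> numbers = new ArrayList<>();
        numbers.add(7);             // autoboxing
        int first = numbers.get(0); // unboxing
        System.out.println(unboxed + first); // prints 49
    }
}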
