
10 recently asked NoSQL database interview questions

1) Who was involved in developing NoSQL?

The NoSQL movement grew out of papers published by Amazon (Dynamo) and Google (Bigtable).

2) What is NoSQL?

NoSQL is an umbrella term for non-relational databases, such as columnar, key-value, document, and triple stores. Using NoSQL we can store and query data that does not fit the traditional relational model.

3) What are the unique features of NoSQL databases?

-There is no concept of relationships between records
-They handle unstructured (and semi-structured) data
-Individual records are stored independently, without enforced relationships to each other

4) How are NoSQL databases faster than a traditional RDBMS?

-They store the database across multiple servers, rather than storing the whole database on a single server
-By adding replicas on other servers, we can retrieve data faster, even if one of the servers crashes
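
The multiple-server idea above can be sketched as a simple hash-based shard router. This is a minimal Python illustration; the server names and the `route` helper are invented for this post, not any real product's API:

```python
import hashlib

# Hypothetical list of shard servers; a real cluster would hold connections.
servers = ["server-a", "server-b", "server-c"]

def route(key: str) -> str:
    """Pick the shard responsible for a key by hashing it.

    Hashing spreads keys evenly, so reads and writes are shared
    across all servers instead of hitting a single machine.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    shard_index = int(digest, 16) % len(servers)
    return servers[shard_index]

# The same key always routes to the same server.
assert route("user:42") == route("user:42")
```

Real products add replication on top of this, so a crashed server's keys can still be served from a copy elsewhere.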

5) What are the UNIQUE features of NoSQL?

-Open source (most products)
-ACID compliant (some products; many others relax this to eventual consistency)

6) What are the characteristics of a good NoSQL product?

  • High availability: Fault tolerance when a single server goes down
  • Disaster recovery: For when a data center goes down, or more likely someone digs up a network cable just outside the data center
  • Support: Someone to stand behind a product when it goes wrong (or it's used incorrectly!)
  • Services: Product experts who can advise on best practices and help determine how to use a product to address new or unusual business needs
  • Ecosystem: Availability of partners, experienced developers, and product information — to avoid being locked into a single vendor's expensive support and services contract

7) Reasons to go for NoSQL databases?

An RDBMS cannot fit all enterprise requirements, for example:
-Big data
-Schema redesign overhead
-Unstructured data explosion
-The sparse data problem (an RDBMS uses space for NULL values, but NoSQL simply ignores absent values)
-Dynamically changing relationships between attributes
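
The sparse data point can be illustrated with documents that simply omit missing attributes. A minimal Python sketch (the sample records below are made up):

```python
import json

# In an RDBMS, every row reserves a slot for every column, so missing
# values are stored as NULLs. A document store keeps only the fields
# each record actually has.
users = [
    {"name": "Alice", "email": "alice@example.com", "phone": "555-0100"},
    {"name": "Bob"},  # no email or phone: nothing is stored for them
]

for user in users:
    print(json.dumps(user))
```

Bob's record costs no storage at all for the attributes he lacks, where a relational row would still carry NULL placeholders.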

8) Benefits of NoSQL?

-Faster solutions for new-generation problems
-Lower cost
-A rich ecosystem: modern computer systems don't exist in a vacuum; they're always communicating with someone or something. NoSQL databases are commonly paired with complementary software, from search engines to semantic web technologies and Hadoop. Leveraging these technologies can make a NoSQL deployment more productive and useful.

9) What are the core types of NoSQL databases?

There are four core types:

Columnar: An extension of traditional table structures. Supports variable sets of columns (column families) and is optimized for column-wide operations (such as count, sum, and mean average).
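
Column-wide operations are fast because the values for one column sit together, so an aggregate never has to read whole rows. A toy Python sketch (the `orders` column store below is invented for illustration):

```python
# A toy column store: each column is a contiguous list, so an
# aggregate touches only the one column it needs.
orders = {
    "order_id": [1, 2, 3, 4],
    "quantity": [2, 5, 1, 4],
    "price":    [9.99, 4.50, 19.00, 2.25],
}

# Column-wide aggregates scan a single list, never whole rows.
total_quantity = sum(orders["quantity"])
mean_price = sum(orders["price"]) / len(orders["price"])

print(total_quantity)  # 12
print(mean_price)
```

A row-oriented store would have to load every order record just to sum one field; here the sum reads only the `quantity` column.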


Key-value: A very simple structure. Sets of named keys and their value(s), typically an uninterpreted chunk of data. Sometimes that simple value may in fact be a JSON or binary document.
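
A key-value store behaves much like a dictionary. A minimal Python sketch (the `put`/`get` helpers are illustrative, not a real client API):

```python
# A key-value store maps named keys to uninterpreted values.
# Here a plain dict stands in for the store.
store = {}

def put(key: str, value: bytes) -> None:
    """Store a value under a key; the store does not interpret it."""
    store[key] = value

def get(key: str) -> bytes:
    """Fetch the raw value back by its key."""
    return store[key]

# The value can be anything serialized to bytes, e.g. a JSON document.
put("session:42", b'{"user": "adam", "cart": ["cheese"]}')
print(get("session:42"))
```

The store never parses the value; whether it happens to be JSON, an image, or plain text is entirely the application's business.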


Triple: A single fact represented by three elements:

-The subject you're describing
-The name of its property or relationship to another subject (the predicate)
-The value (the object): either an intrinsic value (such as an integer) or the unique ID of another subject (if it's a relationship)

For example, Adam likes Cheese. Adam is the subject, likes is the predicate, and Cheese is the object.
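
The Adam-likes-Cheese fact can be modeled as a plain tuple. A toy Python sketch (the triple list and `query` helper are invented for illustration, not a real triple-store API):

```python
# A triple store holds (subject, predicate, object) facts.
triples = [
    ("Adam", "likes", "Cheese"),
    ("Adam", "livesIn", "London"),
    ("Cheese", "type", "Food"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern; None matches anything."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# What does Adam like?
print(query(subject="Adam", predicate="likes"))  # [('Adam', 'likes', 'Cheese')]
```

Real triple stores index all three positions, so any pattern of known and unknown elements can be answered efficiently.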

Document: XML, JSON, text, or binary blob. Any treelike structure can be represented as an XML or JSON document, including things such as an order that includes a delivery address, billing details, and a list of products and quantities.
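
The order example above might look like this as a single JSON document (a made-up sample record):

```python
import json

# A single order stored as one treelike document, rather than being
# split across several relational tables.
order = {
    "order_id": "ORD-1001",
    "delivery_address": {"street": "1 High Street", "city": "London"},
    "billing": {"card_last4": "4242", "currency": "GBP"},
    "items": [
        {"product": "Cheese", "quantity": 2},
        {"product": "Crackers", "quantity": 1},
    ],
}

print(json.dumps(order, indent=2))
```

Everything about the order travels together, so reading or writing it is one operation instead of a multi-table join.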


10) What are the most modern databases?

In-memory and flash databases: Some great advances have been made in real-time online transaction processing (OLTP) and analytics using in-memory databases. In-memory databases are very specialized and targeted at particular problem domains. Some NoSQL databases also take advantage of flash storage or memory caching to aid real-time analytics.


Complex proprietary stacks: Commercial NoSQL products such as Oracle NoSQL, MarkLogic, Microsoft's DocumentDB, and IBM Cloudant, which ship as complete, vendor-supported stacks.

NewSQL: This is a new database access paradigm. It applies the software design lessons of NoSQL to the RDBMS world, creating a new breed of products. This is a great idea, but fundamentally these products still use traditional relational math and structures, which is why they aren't usually classed as NoSQL.
