
5 Essential Features of HBase Storage Architecture

Many analytics programmers are confused about HBase. The question is: if we have HDFS, why do we need HBase? This post covers how HBase and HDFS are related in the Hadoop big data framework.

HBase is a distributed, versioned, column-oriented, multidimensional storage system, designed for high performance and high availability. To be able to successfully leverage HBase, you first must understand how it is implemented and how it works.
A region server's implementation has three main components, each covered below: the memstore, the write-ahead log (WAL), and HFiles.

HBase is an open source implementation of Google's BigTable architecture. Similar to traditional relational database management systems (RDBMSs), data in HBase is organized in tables. Unlike RDBMSs, however, HBase supports a very loose schema definition, and does not provide any joins, query language, or SQL.


Although HBase does not support real-time joins and queries, batch joins and queries can easily be implemented via MapReduce. In fact, they are well supported by higher-level systems such as Pig and Hive, which compile their query languages (Pig Latin and a limited SQL dialect, respectively) into such batch operations.
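
As an illustration, here is a minimal sketch of a batch query over an HBase table via MapReduce, using the TableMapReduceUtil helper from the HBase client library. The table name "users" and the row-count logic are hypothetical stand-ins for a real batch join or query.

```java
// Sketch: counting rows of an HBase table with MapReduce.
// The table "users" is hypothetical.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class RowCountJob {
    // Mapper emits one count per row scanned from the table.
    static class RowCountMapper extends TableMapper<Text, IntWritable> {
        private static final Text ROWS = new Text("rows");
        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(ImmutableBytesWritable key, Result value, Context context)
                throws IOException, InterruptedException {
            context.write(ROWS, ONE);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "hbase-row-count");
        job.setJarByClass(RowCountJob.class);

        Scan scan = new Scan();
        scan.setCaching(500);        // larger scanner caching for batch reads
        scan.setCacheBlocks(false);  // avoid polluting the block cache from MR

        TableMapReduceUtil.initTableMapperJob(
                "users", scan, RowCountMapper.class,
                Text.class, IntWritable.class, job);
        job.setNumReduceTasks(0);
        job.setOutputFormatClass(NullOutputFormat.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```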

The main focus of HBase is on Create, Read, Update, and Delete (CRUD) operations on wide, sparse tables. Currently, HBase does not support transactions (it provides only limited locking support and some single-row atomic operations) or secondary indexing (several community projects are trying to implement this functionality, but they are not part of the core HBase implementation). As a result, most HBase-based implementations use highly denormalized data.
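
To make the CRUD model concrete, here is a minimal sketch using the HBase Java client API. The table name "users", column family "cf", and the row/qualifier names are hypothetical.

```java
// Minimal CRUD sketch with the HBase Java client.
// Table "users", family "cf", and row key "row1" are hypothetical.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseCrudExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("users"))) {

            // Create/Update: a Put writes (or overwrites) a cell version.
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("name"), Bytes.toBytes("alice"));
            table.put(put);

            // Read: a Get fetches the latest version of the row's cells.
            Get get = new Get(Bytes.toBytes("row1"));
            Result result = table.get(get);
            byte[] name = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("name"));
            System.out.println("name = " + Bytes.toString(name));

            // Note: single-row atomic operations (e.g., check-and-put) exist,
            // but there are no multi-row transactions.

            // Delete: removes the cell (a tombstone marker until compaction).
            Delete delete = new Delete(Bytes.toBytes("row1"));
            table.delete(delete);
        }
    }
}
```
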
Similar to HDFS, HBase implements a master/slave (HMaster/region server) architecture.

HBase leverages HDFS for its persistent data storage. This allows HBase to take advantage of all the advanced features HDFS provides, including checksums, replication, and failover. HBase data management is implemented by distributed region servers, which are managed by the HBase master (HMaster).
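
For illustration only, the sketch below shows, in client code, the two settings that tie HBase to HDFS and ZooKeeper; in practice these live in hbase-site.xml, and the addresses shown are hypothetical.

```java
// Sketch: HBase persists its data under a root directory on HDFS.
// The NameNode address "namenode:8020" and ZooKeeper hosts are hypothetical.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class HBaseHdfsConfig {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // hbase.rootdir tells HBase where on HDFS to store HFiles and WALs;
        // normally this is set in hbase-site.xml rather than in code.
        conf.set("hbase.rootdir", "hdfs://namenode:8020/hbase");
        conf.set("hbase.zookeeper.quorum", "zk1,zk2,zk3");
        System.out.println("HBase root: " + conf.get("hbase.rootdir"));
    }
}
```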


The memstore is HBase's in-memory data cache, which improves overall performance by serving as much data as possible directly from memory. The memstore holds in-memory modifications to the store in the form of key/value pairs. A write-ahead log (WAL) records all changes to the data. This is important in case something happens to the primary storage: if the server crashes, it can replay the log to recover everything up to the point just before the crash. It also means that if writing a record to the WAL fails, the whole operation must be considered a failure.
One HBase optimization technique is disabling writes to the WAL, which represents a trade-off between performance and reliability: disabling WAL writes prevents recovery when a region server fails before a write operation completes. You should use such an optimization with care, and only when either data loss is acceptable or a write operation can be "replayed" from an additional data source.
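
A minimal sketch of that trade-off, using the per-mutation Durability setting in the HBase Java client; the table layout ("cf"/"hits") and row key are hypothetical.

```java
// Sketch: skipping the WAL for a single Put via the Durability enum.
// Column family "cf", qualifier "hits", and row "row1" are hypothetical.
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class SkipWalExample {
    public static Put buildUnloggedPut() {
        Put put = new Put(Bytes.toBytes("row1"));
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("hits"), Bytes.toBytes(1L));
        // SKIP_WAL: the edit goes only to the memstore. If the region server
        // crashes before the memstore is flushed to an HFile, this write is lost.
        put.setDurability(Durability.SKIP_WAL);
        return put;
    }
}
```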

HFile is a specialized HDFS file format for HBase. The implementation of HFile in a region server is responsible for reading and writing HFiles to and from HDFS.
