
The story: why data in Hadoop costs less than data put through ETL

Traditional data warehouse

That isn't to say that Hadoop can't be used for structured data that is readily available in a raw format, because it can. In addition, when you consider where data should be stored, you need to understand how data is stored today and what features characterize your persistence options.
  • Consider your experience with storing data in a traditional data warehouse. Typically, this data goes through a lot of rigor to make it into the warehouse.
  • Builders and consumers of warehouses have it etched in their minds that the data in their warehouses must shine with respect to quality; consequently, it's cleaned up via cleansing, enrichment, matching, glossary, metadata, master data management, modeling, and other services before it's ready for analysis.
  • Obviously, this can be an expensive process. Because of that expense, the data that lands in the warehouse is deemed not just high in value but broad in purpose: it's going to go places, showing up in reports and dashboards where the accuracy of that data is key. (A minimal sketch of what one such cleansing pass can look like follows this list.)
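To make that expense concrete, here is a minimal, purely illustrative sketch of the kind of cleansing, matching, and enrichment pass described above, written in Python with pandas. The column names, reference table, and rules are hypothetical; a real warehouse pipeline would add matching services, master data management, modeling, and governance on top of steps like these.

```python
# Illustrative only: a tiny slice of the cleansing/enrichment work that
# precedes loading data into a warehouse. Columns and rules are
# hypothetical stand-ins for a real pipeline.
import pandas as pd

raw = pd.DataFrame({
    "customer_id": [101, 101, 102, 103],
    "name": ["  Ann Smith", "Ann Smith", "BOB JONES", None],
    "country": ["US", "us", "U.S.", "CA"],
    "amount": ["19.99", "19.99", "250", "bad-value"],
})

# Cleansing: trim and standardize strings, normalize country codes,
# and coerce amounts to numbers.
clean = raw.assign(
    name=raw["name"].str.strip().str.title(),
    country=raw["country"].str.upper().replace({"U.S.": "US"}),
    amount=pd.to_numeric(raw["amount"], errors="coerce"),
)

# Matching/deduplication: collapse duplicate source records.
clean = clean.drop_duplicates(subset=["customer_id", "amount"])

# Enrichment: join a (hypothetical) reference table, then reject rows
# that still fail basic quality rules before they reach the warehouse.
regions = pd.DataFrame({"country": ["US", "CA"], "region": ["AMER", "AMER"]})
ready = clean.merge(regions, on="country", how="left")
ready = ready.dropna(subset=["name", "amount"])

print(ready)
```

Even this toy version shows why the process is costly: every rule has to be designed, reviewed, and maintained for every source feeding the warehouse.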
Big data in Hadoop

Big Data repositories rarely undergo (at least initially) the full quality-control rigor of data being injected into a warehouse. Prepping data for some of the newer analytic methods that characterize Hadoop use cases is cost prohibitive (which we talk about in the next chapter), and the data isn't likely to be distributed the way data warehouse data is. We could say that data warehouse data is trusted enough to be "public," while Hadoop data isn't as trusted ("public" here meaning vastly distributed within the company, not for external consumption). Although this will likely change in the future, experience suggests this is what characterizes these repositories today.

In a traditional warehouse, specific pieces of data are stored based on their perceived value, so any information beyond those pre-selected pieces is unavailable. This is in contrast to a Hadoop-based repository, where the entire business entity is likely to be stored and the fidelity of the Tweet, transaction, Facebook post, and more is kept intact.

Data in Hadoop might seem of low value today, or its value may not yet be quantified, but it can in fact be the key to questions yet unasked. IT departments pick and choose high-value data and put it through rigorous cleansing and transformation processes because they know that data has a high known value per byte (a relative phrase, of course).


Why else would a company put that data through so many quality control processes? 

Of course, since the value per byte is high, the business is willing to store it on relatively higher cost infrastructure to enable that interactive, often public, navigation with the end user communities, and the CIO is willing to invest in cleansing the data to increase its value per byte.
  • With Big Data, you should consider looking at this problem from the opposite view: with the volume and velocity of today's data, you simply can't afford the time and resources required to cleanse and document every piece of data properly; it isn't economical.

What’s more, how do you know if this Big Data is even valuable? 

Are you going to go to your CIO and ask her to increase her capital expenditure (CAPEX) and operational expenditure (OPEX) costs by fourfold to quadruple the size of your warehouse on a hunch? 

For this reason, we like to characterize the initial, nonanalyzed raw Big Data as having a low value per byte, and therefore, until it's proven otherwise, you can't afford to take the path to the warehouse. However, given the vast amount of data, the potential for great insight (and therefore greater competitive advantage in your own market) is quite high if you can analyze all of that data.
  • A related idea is cost per compute, which follows the same pattern as the value-per-byte ratio. If you consider the focus on quality data in traditional systems outlined earlier, you can conclude that the cost per compute in a traditional data warehouse is relatively high (which is fine, because it's applied to a proven, known, higher value per byte), versus the cost per compute in Hadoop, which is low. The back-of-the-envelope sketch below illustrates the comparison.
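Here is a back-of-the-envelope sketch of how the two ratios interact. Every figure in it (data sizes, estimated value, cost per terabyte) is a hypothetical assumption chosen only to show the shape of the comparison, not a benchmark of any real warehouse or Hadoop cluster.

```python
# Back-of-the-envelope comparison of value per byte and storage cost.
# All figures are hypothetical assumptions for illustration only.

TB = 1e12  # bytes per terabyte

def value_per_byte(estimated_value_usd: float, size_bytes: float) -> float:
    """Estimated business value divided by the bytes stored."""
    return estimated_value_usd / size_bytes

# Curated warehouse data: small, expensive to prepare, high known value.
warehouse_size = 10 * TB
warehouse_value = 5_000_000       # value already proven through use
warehouse_cost_per_tb = 20_000    # storage + cleansing + governance

# Raw data in Hadoop: large, cheap to land, value not yet proven.
hadoop_size = 1_000 * TB
hadoop_value = 5_000_000          # similar upside, but still unproven
hadoop_cost_per_tb = 500          # commodity storage, minimal prep

print("warehouse value/byte:", value_per_byte(warehouse_value, warehouse_size))
print("hadoop    value/byte:", value_per_byte(hadoop_value, hadoop_size))
print("cost to hold all raw data in the warehouse:",
      (hadoop_size / TB) * warehouse_cost_per_tb)
print("cost to hold all raw data in Hadoop:",
      (hadoop_size / TB) * hadoop_cost_per_tb)
```

Under these made-up numbers, landing everything in the warehouse costs orders of magnitude more than landing it in Hadoop, which is exactly the CAPEX/OPEX conversation with the CIO described above.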
Of course, other factors can indicate that certain data might be of high value yet never make its way into the warehouse, or there’s a desire for it to make its way out of the warehouse into a lower cost platform; either way, you might need to cleanse some of that data in Hadoop, and IBM can do that (a key differentiator). 

For example, unstructured data can’t be easily stored in a warehouse.

Indeed, some warehouses are built with a predefined corpus of questions in mind. Although such a warehouse provides some degree of freedom for query and mining, it can be constrained by what is in the schema (most unstructured data isn't found there) and often by a performance envelope that acts as a functional/operational hard limit. Again, as we'll reiterate often in this book, we are not saying a Hadoop platform such as IBM InfoSphere BigInsights is a replacement for your warehouse; instead, it's a complement.
  • A Big Data platform lets you store all of the data in its native business object format and get value out of it through massive parallelism on readily available components. For your interactive navigational needs, you'll continue to pick and choose sources, cleanse that data, and keep it in warehouses. But you can get more value out of analyzing more data (even data that initially seems unrelated) to paint a more robust picture of the issue at hand.
Indeed, data might sit in Hadoop for a while, and once its value is discovered and proven sustainable, it can migrate into the warehouse. A minimal sketch of this store-raw-then-analyze pattern follows.
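As a concrete, though hypothetical, illustration of that pattern, the sketch below uses PySpark to read raw JSON records (tweets, say) straight off HDFS with no up-front warehouse schema and run a simple aggregation in parallel. The path, field names, and job are assumptions for illustration, not part of any product mentioned above.

```python
# Minimal PySpark sketch: schema-on-read over raw JSON landed in Hadoop.
# The HDFS path and field names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw-json-exploration").getOrCreate()

# Read the data exactly as it landed; Spark infers the structure,
# so nothing was lost to an up-front warehouse schema.
tweets = spark.read.json("hdfs:///landing/tweets/2023/*.json")

# A simple, massively parallel aggregation: activity per language.
summary = (
    tweets
    .where(F.col("lang").isNotNull())
    .groupBy("lang")
    .count()
    .orderBy(F.col("count").desc())
)

summary.show(10)

# If a slice of this data proves valuable, it becomes a candidate to be
# cleansed and promoted into the warehouse, as described above.
spark.stop()
```

The point of the sketch is the order of operations: store everything cheaply first, explore it in parallel, and only then decide which subset earns the cost of warehouse-grade cleansing.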
