How Hadoop is best suited for large legacy data

I have selected a good interview on legacy data. As you know, a lot of data still lives on legacy systems, and Hadoop is a mechanism you can use to process that data and extract real business insights.

How should we be thinking about migrating data from legacy systems?
Treat legacy data as you would any other complex data type. HDFS acts as an active archive, enabling you to store data cost-effectively, in any form, for as long as you like, and to access it whenever you wish to explore it. And with the latest generation of data-wrangling and ETL tools, you can transform, enrich, and blend that legacy data with other, newer data types to gain a unique perspective on what’s happening across your business.
Image: Hadoop and legacy data (source: Stockphotos.io)
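
To make the active-archive idea concrete, here is a minimal PySpark sketch of landing a legacy flat-file extract in HDFS and blending it with a newer data set. The paths, delimiter, and column names are hypothetical placeholders, not a prescription.

    # Minimal PySpark sketch: land a legacy flat-file extract in HDFS as an
    # active archive and blend it with a newer data set. Paths, delimiter,
    # and column names are hypothetical placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("legacy-archive").getOrCreate()

    # Raw legacy extract, e.g. a mainframe dump converted to pipe-delimited text
    legacy = (spark.read.option("header", "true").option("delimiter", "|")
              .csv("hdfs:///archive/legacy/orders_1998_2010.txt"))

    # A newer data set already living in Hadoop
    recent = spark.read.parquet("hdfs:///warehouse/orders_current")

    # Blend the two eras on a shared set of columns for cross-era analysis
    cols = ["order_id", "customer_id", "amount", "order_date"]
    combined = legacy.select(*cols).unionByName(recent.select(*cols))

    # Parquet keeps the archive cheap to store and fast to query later
    combined.write.mode("overwrite").parquet("hdfs:///archive/orders_all_years")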

What are your thoughts on getting combined insights from the existing data warehouse and Hadoop?
Typically, one of the starter use cases for moving relational data off a warehouse and into Hadoop is active archiving. This is the opportunity to take data that might otherwise have gone to archive and keep it available for historical analysis. The clear benefit is being able to analyze data over the kinds of extended time periods that would not be cost-effective (or even possible) in a traditional data warehouse. An example would be looking at sales not just in the current economic cycle, but going back three to five years or more, across multiple economic cycles.
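
As a hedged illustration of that active-archive pattern, the PySpark sketch below pulls aged sales rows out of the warehouse over JDBC, parks them in partitioned Parquet on HDFS, and then runs a multi-year revenue query. The JDBC URL, credentials, and table and column names are assumptions for illustration only.

    # Active-archive sketch in PySpark: offload aged rows from the warehouse
    # over JDBC, keep them queryable on HDFS, then run a multi-year query.
    # URL, credentials, table and column names are illustrative assumptions.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("active-archive").getOrCreate()

    aged = (spark.read.format("jdbc")
            .option("url", "jdbc:postgresql://warehouse-host:5432/dw")
            .option("dbtable",
                    "(SELECT * FROM sales WHERE sale_date < DATE '2020-01-01') AS t")
            .option("user", "etl_user").option("password", "<password>")
            .load())

    # Partition by year so cheap HDFS storage stays easy to scan selectively
    (aged.withColumn("year", F.year("sale_date"))
         .write.partitionBy("year").mode("append")
         .parquet("hdfs:///archive/sales"))

    # A query spanning multiple economic cycles is now routine
    archive = spark.read.parquet("hdfs:///archive/sales")
    archive.groupBy("year").agg(F.sum("amount").alias("revenue")).orderBy("year").show()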

You should look at Hadoop as a platform for data transformation and discovery: compute-intensive tasks that aren’t a fit for a warehouse. Then consider feeding some of the new data and insights back into the data warehouse to increase its value.
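
The reverse flow can be sketched the same way: do the heavy computation in Hadoop and push only the distilled summary back over JDBC. Again, the connection details, paths, and table names below are illustrative assumptions, not a documented pipeline.

    # Feed-insights-back sketch: heavy aggregation happens in Hadoop, and only
    # the distilled summary is written back to the warehouse over JDBC.
    # Connection details, paths, and table names are illustrative assumptions.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("feedback").getOrCreate()

    events = spark.read.parquet("hdfs:///archive/clickstream")  # assumed path

    # Compute-intensive discovery work runs on the Hadoop cluster...
    daily = (events.groupBy("customer_id", F.to_date("ts").alias("day"))
                   .agg(F.count("*").alias("events")))

    # ...and only the small result set lands back in the warehouse
    (daily.write.format("jdbc")
          .option("url", "jdbc:postgresql://warehouse-host:5432/dw")
          .option("dbtable", "analytics.daily_activity")
          .option("user", "etl_user").option("password", "<password>")
          .mode("append").save())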

What’s the value of putting Hadoop in the cloud?

The cloud presents a number of opportunities for Hadoop users:

- Faster time to benefit, through quicker deployment and the elimination of cluster infrastructure to maintain.
- A good environment for running proofs of concept and experimenting with Hadoop.
- Most Internet of Things data is born in the cloud, so running Hadoop there minimizes the movement of that data.
- The elasticity of the cloud enables you to rapidly scale your cluster to address new use cases or add more storage and compute (see the sketch after this list).
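One common way to run Hadoop in the cloud is Amazon EMR. The boto3 sketch below launches a small cluster and later resizes its core group, illustrating the elasticity point; the region, release label, instance types, and counts are assumptions for illustration.

    # Elasticity sketch with boto3: launch a small EMR cluster for a proof of
    # concept, then grow the CORE group when a new use case needs more compute.
    # Region, release label, instance types, and counts are assumptions.
    import boto3

    emr = boto3.client("emr", region_name="us-east-1")

    cluster = emr.run_job_flow(
        Name="hadoop-poc",
        ReleaseLabel="emr-6.15.0",
        Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}],
        Instances={
            "MasterInstanceType": "m5.xlarge",
            "SlaveInstanceType": "m5.xlarge",
            "InstanceCount": 3,  # start small; no hardware to rack or maintain
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )

    # Later, scale out to meet a new use case: find the CORE group and resize it
    groups = emr.list_instance_groups(ClusterId=cluster["JobFlowId"])
    core = next(g for g in groups["InstanceGroups"]
                if g["InstanceGroupType"] == "CORE")
    emr.modify_instance_groups(InstanceGroups=[{"InstanceGroupId": core["Id"],
                                                "InstanceCount": 8}])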
