5 Top Data Warehousing Skills in the Age of Big Data

A data warehouse is a home for "secondhand" data that originates either in other corporate applications, such as the one your company uses to fill customer orders for its products, or in some data source external to your company, such as a public database that contains sales information gathered from all your competitors.

What is Data Warehousing?

If your company's data warehouse were advertised as a used car, for example, it might be described this way: "Contains late-model, previously owned data, all of which has undergone a 25-point quality check and is offered to you with a brand-new warranty to guarantee hassle-free ownership."

Most organizations build a data warehouse in a relatively straightforward manner:
  • The data warehousing team selects a focus area, such as tracking and reporting the company's product sales activity against that of its competitors.
  • The team in charge of building the data warehouse enlists a group of business users and other key individuals within the company to serve as subject-matter experts. Together, these people compile a list of the different types of data that will enable them to use the data warehouse to help track sales activity (or whatever the focus of the project is).
  • The group then goes through the list of data, item by item, and figures out where it can obtain that particular piece of information. In most cases, the group can get it from at least one internal (within the company) database or file, such as the one the application uses to process orders by mail or the master database of all customers and their current addresses. In other cases, a piece of information is not available from within the company's computer applications but could be obtained by purchasing it from some other company. Although the credit ratings and total outstanding debt for all of a bank's customers, for example, aren't known internally, that information can be purchased from a credit bureau.
  • After working out where each piece of data comes from, the data warehousing team (usually computer analysts and programmers) creates extraction programs. These programs collect data from various internal databases and files, copy certain data to a staging area (a work area outside the data warehouse), ensure that the data has no errors, and then copy it all into the data warehouse. Extraction programs are created either by hand (custom-coded) or with specialized data warehousing products; a minimal sketch of such a program appears after this list.
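
To make the extraction step concrete, here is a minimal sketch of what such a program might look like in Python. It assumes a hypothetical SQLite source database with an orders table and a separate staging database; the table and column names are illustrative, not taken from any particular product.

    # Minimal sketch of an extraction program (hypothetical tables and columns).
    # It copies order rows from an operational database into a staging area,
    # setting aside rows that fail a basic quality check before the warehouse load.
    import sqlite3

    def extract_orders_to_staging(source_path="orders.db", staging_path="staging.db"):
        src = sqlite3.connect(source_path)
        stg = sqlite3.connect(staging_path)
        stg.execute("""CREATE TABLE IF NOT EXISTS stg_orders (
                           order_id INTEGER, customer_id INTEGER,
                           order_date TEXT, amount REAL)""")

        rows = src.execute(
            "SELECT order_id, customer_id, order_date, amount FROM orders"
        ).fetchall()

        clean, rejected = [], []
        for order_id, customer_id, order_date, amount in rows:
            # Simple error check: every order needs a customer and a non-negative amount.
            if customer_id is None or amount is None or amount < 0:
                rejected.append((order_id, customer_id, order_date, amount))
            else:
                clean.append((order_id, customer_id, order_date, amount))

        stg.executemany("INSERT INTO stg_orders VALUES (?, ?, ?, ?)", clean)
        stg.commit()
        src.close()
        stg.close()
        return len(clean), len(rejected)
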
Different Roles in Data Warehousing Projects:

Data modeling: Design and implementation of data models are required for both the integration and presentation repositories. Relational data models are distinctly different from dimensional data models, and each has unique properties. Moreover, relational data modelers may not have dimensional modeling expertise, and vice versa.
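
To make the distinction concrete, here is a hypothetical star schema for the sales-tracking example used earlier: a single fact table of measures surrounded by denormalized dimension tables, in contrast to the normalized tables a relational modeler would design for an order-entry system. The table and column names are illustrative only.

    # Hypothetical star schema for the sales-tracking focus area (illustrative names).
    # The fact table holds the measures; each dimension table is denormalized so that
    # analysts can slice sales by date, product, or customer with straightforward joins.
    import sqlite3

    ddl = """
    CREATE TABLE IF NOT EXISTS dim_date     (date_key INTEGER PRIMARY KEY, full_date TEXT, month TEXT, year INTEGER);
    CREATE TABLE IF NOT EXISTS dim_product  (product_key INTEGER PRIMARY KEY, product_name TEXT, category TEXT);
    CREATE TABLE IF NOT EXISTS dim_customer (customer_key INTEGER PRIMARY KEY, customer_name TEXT, region TEXT);

    CREATE TABLE IF NOT EXISTS fact_sales (
        date_key     INTEGER REFERENCES dim_date(date_key),
        product_key  INTEGER REFERENCES dim_product(product_key),
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        quantity     INTEGER,
        sales_amount REAL
    );
    """

    with sqlite3.connect("warehouse.db") as conn:
        conn.executescript(ddl)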

ETL development: ETL refers to extracting data from source systems into staging, transforming that data so it can be analyzed, and loading the transformed data into the presentation repository. ETL work includes defining the selection criteria for extracting data from source systems, performing any necessary transformations or derivations, and carrying out data quality audits and cleansing.
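
As a hedged sketch of the transform-and-load half, the staged order rows from the earlier extraction example can be recast to fit the hypothetical star schema above: natural keys from the source are swapped for surrogate keys looked up in (or added to) the dimension tables before the fact rows are inserted. All names continue the illustrative example and assume ISO-formatted dates.

    # Sketch of a transform-and-load step, continuing the hypothetical example above.
    # Each staged order is recast into a fact row: the order date is replaced with a
    # surrogate key from dim_date, and the row is written to fact_sales.
    import sqlite3

    def load_fact_sales(staging_path="staging.db", warehouse_path="warehouse.db"):
        stg = sqlite3.connect(staging_path)
        dw = sqlite3.connect(warehouse_path)

        for order_id, customer_id, order_date, amount in stg.execute(
            "SELECT order_id, customer_id, order_date, amount FROM stg_orders"
        ):
            # Look up (or create) the surrogate key for this date; assumes ISO dates.
            row = dw.execute(
                "SELECT date_key FROM dim_date WHERE full_date = ?", (order_date,)
            ).fetchone()
            if row is None:
                cur = dw.execute(
                    "INSERT INTO dim_date (full_date, month, year) VALUES (?, ?, ?)",
                    (order_date, order_date[:7], int(order_date[:4])),
                )
                date_key = cur.lastrowid
            else:
                date_key = row[0]

            # For brevity, the operational customer id doubles as the surrogate key,
            # the quantity is fixed at 1, and product_key is left unpopulated.
            dw.execute(
                "INSERT INTO fact_sales (date_key, customer_key, quantity, sales_amount) "
                "VALUES (?, ?, ?, ?)",
                (date_key, customer_id, 1, amount),
            )
        dw.commit()
        stg.close()
        dw.close()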

Data cleansing: Source data is typically not perfect. Furthermore, merging data from multiple sources can introduce new data quality issues. Data hygiene is an important aspect of data warehousing that requires specific skills and techniques.
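
A small, self-contained illustration of the kind of hygiene work involved, assuming customer records merged from two hypothetical sources: formatting is standardized first, then duplicate records on the natural key are dropped.

    # Basic data hygiene on customer records merged from two sources (made-up data):
    # standardize formatting, then keep only the first record for each customer id.
    records = [
        {"customer_id": 101, "name": "  Acme Corp ", "state": "ny"},
        {"customer_id": 101, "name": "ACME CORP",    "state": "NY"},  # duplicate from the second source
        {"customer_id": 102, "name": "Globex",       "state": "ca"},
    ]

    seen = set()
    clean = []
    for rec in records:
        standardized = {
            "customer_id": rec["customer_id"],
            "name": rec["name"].strip().title(),  # "  Acme Corp " and "ACME CORP" both become "Acme Corp"
            "state": rec["state"].upper(),
        }
        if standardized["customer_id"] not in seen:  # first occurrence wins
            seen.add(standardized["customer_id"])
            clean.append(standardized)

    print(clean)  # two cleaned records remain: Acme Corp (NY) and Globex (CA)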

OLAP design: Data warehouses typically support some variety of online analytical processing (HOLAP, MOLAP, or ROLAP). Each OLAP technique is different, but each requires special design skills to balance reporting requirements against performance constraints.
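
As an illustration of the ROLAP flavor, an analytical query against the hypothetical star schema sketched earlier joins the fact table to its dimensions and aggregates along the chosen axes; deciding which of these aggregates to precompute or index is where the trade-off between reporting needs and performance shows up.

    # ROLAP-style query against the hypothetical star schema: total sales and units
    # by year, month, and product category. Queries like this drive OLAP design choices.
    import sqlite3

    query = """
    SELECT d.year, d.month, p.category,
           SUM(f.sales_amount) AS total_sales,
           SUM(f.quantity)     AS units_sold
    FROM fact_sales f
    JOIN dim_date    d ON d.date_key    = f.date_key
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY d.year, d.month, p.category
    ORDER BY d.year, d.month, total_sales DESC;
    """

    with sqlite3.connect("warehouse.db") as conn:
        for row in conn.execute(query):
            print(row)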

Application development: Users commonly require an application interface into the data warehouse that provides an easy-to-use front end combined with comprehensive analytical capabilities, tailored to the way the users work. This often requires some degree of custom programming or customization of a commercial application.

Production automation: Data warehouses are generally designed for periodic automated updates when new and modified data is slurped into the warehouse so that users can view the most recent data available. These automated update processes must have built-in fail-over strategies and must ensure data consistency and correctness.
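
As a hedged sketch of how such an update might be automated, the function below wraps an extract step and a load step (for example, the hypothetical functions from the earlier sketches) in a simple retry policy, logging failures and leaving the previously loaded data untouched if every attempt fails; a real deployment would hang this off a scheduler such as cron or an orchestration tool.

    # Sketch of an automated periodic refresh with a basic retry policy. The extract
    # and load arguments are callables, e.g. the hypothetical functions sketched above.
    import logging
    import time

    logging.basicConfig(level=logging.INFO)

    def run_refresh(extract, load, max_attempts=3, backoff_seconds=60):
        for attempt in range(1, max_attempts + 1):
            try:
                staged, rejected = extract()
                load()
                logging.info("Refresh succeeded: %d rows staged, %d rejected", staged, rejected)
                return True
            except Exception:
                logging.exception("Refresh attempt %d of %d failed", attempt, max_attempts)
                if attempt < max_attempts:
                    time.sleep(backoff_seconds)
        logging.error("Refresh failed after %d attempts; previous data left intact", max_attempts)
        return False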

General systems and database administration: Data warehouse developers must have many of the same skills held by the typical network administrator and database administrator. They must understand the implications of efficiently moving possibly large volumes of data across the network, and the issues of effectively storing changing data.
