|Hadoop Real Process|
Hadoop comes into the picture to process large volumes of unstructured data; structured data is already well served by traditional databases.
Role of traditional databases
Traditional relational databases have been able to store massive data sets for a long time. An Oracle 10g database can store over 8 exabytes, while for many years DB2 databases have been capable of storing well over 500 petabytes. Of course, these are theoretical limits.
- No customer has an Oracle or DB2 database that comes anywhere close to those sizes. Why? Because the speed, or velocity, at which data can be loaded and queries can be executed degrades toward zero long before those limits are reached.
- Similarly, all traditional relational databases can store any variety of data as text or binary large objects, as the sketch below illustrates. The problem is that large volumes of unstructured data cannot be moved fast enough to enable rapid search and retrieval.
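For example, this is roughly what "storing unstructured data as a BLOB" looks like through plain JDBC. This is a minimal sketch, not a recipe: the documents table, the connection URL, and the credentials are illustrative assumptions, and the comment marks the limitation described above.

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BlobLoader {
        public static void main(String[] args) throws Exception {
            Path file = Path.of(args[0]); // e.g. a photo, video, or email archive

            try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/ORCL",  // hypothetical URL
                         "user", "password");                     // hypothetical credentials
                 InputStream in = Files.newInputStream(file);
                 PreparedStatement ps = conn.prepareStatement(
                         "INSERT INTO documents (name, content) VALUES (?, ?)")) {
                ps.setString(1, file.getFileName().toString());
                // The database will happily hold the raw bytes in a BLOB column,
                // but it cannot index or search inside them; retrieval means
                // pulling the whole object back out.
                ps.setBinaryStream(2, in, Files.size(file));
                ps.executeUpdate();
            }
        }
    }

The insert itself works fine; the pain starts when millions of such objects must be scanned for content, which is the gap Hadoop targets.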
Role of ETL in the age of Hadoop
- Unlike structured data, found within the tidy confines of records, spreadsheets and files, semi-structured and unstructured data is raw, complex, and pours in from multiple sources such as emails, text documents, videos, photos, social media posts, Twitter feeds, sensors and clickstreams.
- Hadoop and MapReduce enable organizations to distribute a search simultaneously across many machines, reducing the time needed to find relevant nuggets of information in large volumes of data in a scalable way; a minimal example of such a job is sketched below. That is why Hadoop is being adopted by bleeding-edge enterprises moving into the multi-petabyte club. There are already some environments that break the 100-petabyte level and can, in theory, continue to scale.
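To make the distribution concrete, here is a minimal sketch of a map-only "distributed grep" job written against Hadoop's standard MapReduce API (org.apache.hadoop.mapreduce). The class name DistributedGrep, the grep.pattern configuration key, and the command-line arguments are illustrative assumptions, not a fixed recipe.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class DistributedGrep {

        // Each mapper scans its own split of the input in parallel with all
        // the others, so search time shrinks as machines are added.
        public static class GrepMapper
                extends Mapper<LongWritable, Text, LongWritable, Text> {
            private String pattern;

            @Override
            protected void setup(Context context) {
                pattern = context.getConfiguration().get("grep.pattern");
            }

            @Override
            protected void map(LongWritable offset, Text line, Context context)
                    throws IOException, InterruptedException {
                if (line.toString().contains(pattern)) {
                    context.write(offset, line); // emit byte offset + matching line
                }
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("grep.pattern", args[2]); // search term from the command line

            Job job = Job.getInstance(conf, "distributed grep");
            job.setJarByClass(DistributedGrep.class);
            job.setMapperClass(GrepMapper.class);
            job.setNumReduceTasks(0); // map-only: no aggregation step needed
            job.setOutputKeyClass(LongWritable.class);
            job.setOutputValueClass(Text.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Launched with something like hadoop jar grep.jar DistributedGrep /data/logs /tmp/matches error (jar name and paths hypothetical), the framework splits the input directory across the cluster and runs one GrepMapper per split, which is exactly the simultaneous search across many machines described above.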