Big Data: IBM InfoSphere BigInsights Basics

In this post, I explain why you need IBM InfoSphere BigInsights. Before that, let's recap what the file system in Hadoop is.

Hadoop is a distributed file system and data processing engine designed to handle extremely high volumes of data in any structure. In simpler terms, imagine that you've got dozens, hundreds, or even thousands of individual computers racked and networked together. Each computer (often referred to as a node in Hadoop-speak) has its own processors and a dozen or so 2 TB or 3 TB hard disk drives.
All of these nodes run software that unifies them into a single cluster, so that instead of seeing individual computers, you see one extremely large volume in which you can store your data.

The beauty of this Hadoop system is that you can store anything in this space: millions of digital image scans of mortgage contracts, days and weeks of security camera footage, trillions of sensor-generated log records, or all of the operator transcription notes from a call center. This ingestion of data, without worrying about the data model, is actually a key tenet of the NoSQL movement.
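As a minimal sketch of what this schema-free ingestion looks like in practice, here is how you might copy a local file of any format into the cluster with the standard Hadoop Java API. The NameNode address and file paths are hypothetical placeholders; on Hadoop 1.x the default file system is set via the fs.default.name property.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsIngest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical NameNode address; replace with your cluster's.
        conf.set("fs.default.name", "hdfs://namenode.example.com:9000");

        FileSystem fs = FileSystem.get(conf);

        // HDFS does not care what is inside the file: a mortgage scan,
        // camera footage, or sensor logs are all just bytes to it.
        fs.copyFromLocalFile(new Path("/local/scans/mortgage-0001.tif"),
                             new Path("/data/raw/mortgage-0001.tif"));

        fs.close();
    }
}
```

No table definition, schema, or data model is declared anywhere; the structure of the data only matters later, when you process it.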

IBM InfoSphere BigInsights


BigInsights features Apache Hadoop and its related open source projects as its core components; this bundle is informally known as the IBM Distribution for Hadoop. IBM remains committed to the integrity of these open source projects and ensures 100 percent compatibility with them.

This fidelity to open source provides a number of benefits. Applications developed against other 100 percent open source-compatible distributions will also run on BigInsights, and vice versa. This compatibility has enabled IBM to amass over 100 partners, including dozens of software vendors, for BigInsights.

Simply put, if a software vendor's product uses the libraries and interfaces of open source Hadoop, it will work with BigInsights as well.
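To make the compatibility point concrete, here is the canonical word count job written purely against the stock org.apache.hadoop.mapreduce API. A jar built from code like this against the open source Hadoop 1.0.3 libraries should, per the claim above, run unchanged on BigInsights; the input and output paths passed as arguments are placeholders.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // Mapper: emit (word, 1) for every token in the input line.
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reducer: sum the counts for each word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        // Job(Configuration, String) is the standard constructor on Hadoop 1.x.
        Job job = new Job(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Nothing here is vendor-specific, which is exactly why the same jar is portable across 100 percent open source-compatible distributions.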

Components in IBM InfoSphere BigInsights


Each component below is listed with the version that ships in BigInsights:

Hadoop (common utilities, HDFS, and the MapReduce framework) - 1.0.3
Avro (data serialization) - 1.6.3
Chukwa (monitoring large clustered systems) - 0.5.0
Flume (data collection and aggregation) - 0.9.4
HBase (real-time read and write database) - 0.94.0
HCatalog (table and storage management) - 0.4.0
Hive (data summarization and querying) - 0.9.0
Lucene (text search) - 3.3.0
Oozie (workflow and job orchestration) - 3.2.0
Pig (programming and query language) - 0.10.1
Sqoop (data transfer between Hadoop and databases) - 1.4.1
ZooKeeper (process coordination)
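As a taste of one of these components, here is a minimal sketch of writing and reading a single row with the HBase 0.94-era Java client. The table name 'sensors' and column family 'readings' are illustrative assumptions; the table is assumed to already exist, and connection settings are taken from hbase-site.xml on the classpath.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseHello {
    public static void main(String[] args) throws Exception {
        // Reads hbase-site.xml from the classpath for the ZooKeeper quorum.
        Configuration conf = HBaseConfiguration.create();

        // Assumes a table 'sensors' with column family 'readings' exists.
        HTable table = new HTable(conf, "sensors");

        // Real-time write: one cell in row "device-42".
        Put put = new Put(Bytes.toBytes("device-42"));
        put.add(Bytes.toBytes("readings"), Bytes.toBytes("temp"),
                Bytes.toBytes("21.5"));
        table.put(put);

        // Real-time read of the same row.
        Result result = table.get(new Get(Bytes.toBytes("device-42")));
        byte[] temp = result.getValue(Bytes.toBytes("readings"),
                                      Bytes.toBytes("temp"));
        System.out.println("temp = " + Bytes.toString(temp));

        table.close();
    }
}
```

This is the "real-time read and write" role HBase plays in the stack, in contrast to the batch-oriented MapReduce job shown earlier.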
