15 Awesome Features That Should Be Present in a Big Data System

This post covers the key features of a big data system. Without the right features in place, you will miss the benefits that big data can deliver.

What do traditional BI tools miss?


Traditional tools can quickly become overwhelmed by the large volume of big data. Latency, the time it takes to access the data, is as important a consideration as volume.
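The point that latency matters independently of volume can be sketched with a small, hypothetical example: the same record lookup against an unindexed collection versus a pre-built index. The data set and field names here are invented for illustration.

```python
import time

# Hypothetical data set: 100,000 records with an "id" field.
records = [{"id": i, "value": i * 2} for i in range(100_000)]
index = {r["id"]: r for r in records}  # pre-built index over "id"

def scan_lookup(target_id):
    # Full scan: latency grows with the volume of data.
    for r in records:
        if r["id"] == target_id:
            return r
    return None

def indexed_lookup(target_id):
    # Indexed access: latency stays near-constant regardless of volume.
    return index.get(target_id)

start = time.perf_counter()
scan_lookup(99_999)
scan_time = time.perf_counter() - start

start = time.perf_counter()
indexed_lookup(99_999)
index_time = time.perf_counter() - start

print(f"scan: {scan_time:.6f}s, indexed: {index_time:.6f}s")
```

At big data scale the same principle applies, except the "index" becomes partitioning, columnar layouts, or distributed lookup structures.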

There is a subtle difference...

Suppose you need to run an ad hoc query or a predefined report against a large data set.

A large data storage system is not a data warehouse, however, and it may not respond to queries within a few seconds. Rather, it is the organization-wide repository that stores all of the organization's data and feeds the data warehouses used for management reporting.
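The ad hoc query versus predefined report distinction can be sketched with an in-memory SQLite database. The `sales` table and its contents are hypothetical, invented only to contrast the two access patterns.

```python
import sqlite3

# Hypothetical data set: a tiny "sales" table standing in for a large store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100.0), ("south", 250.0), ("north", 75.0)],
)

# Predefined report: a fixed query, typically run on a schedule
# against a warehouse tuned to answer it quickly.
def sales_by_region_report():
    return conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
    ).fetchall()

# Ad hoc query: written on demand, with no guarantee the storage
# layer is optimized for it.
adhoc = conn.execute("SELECT COUNT(*) FROM sales WHERE amount > 80").fetchone()[0]

print(sales_by_region_report())
print(adhoc)
```

A warehouse is laid out so the predefined report is fast; the repository must merely be able to answer the ad hoc query at all, even if slowly.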
Image: top components of big data (courtesy of Stockphotos.io)
Big data needs to be considered in terms of how the data will be manipulated. The size of the data set will impact data capture, movement, storage, processing, presentation, analytics, reporting, and latency.

Key features of a big data system
  1. A method of collecting and categorizing data
  2. A method of moving data into the system safely and without loss
  3. A storage system distributed across many servers
  4. Scalability to thousands of servers
  5. Data redundancy and backup
  6. Redundancy in case of hardware failure
  7. Cost-effectiveness
  8. A rich tool set and community support
  9. A method of distributed system configuration
  10. Parallel data processing
  11. System-monitoring tools
  12. Reporting tools: ETL-like tools (preferably with a graphical interface) that can build data-processing tasks and monitor their progress
  13. Scheduling tools to determine when tasks will run and to show task status
  14. The ability to monitor data trends in real time
  15. Local processing where the data is stored, to reduce network bandwidth usage
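Features 10 and 15 can be sketched together in a minimal, hypothetical word-count example: each worker processes the partition of data it "holds locally", and only the small partial counts travel back to be merged. A thread pool stands in for cluster nodes here; in a real system the partitions would live on separate machines.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def count_words(partition):
    # Runs on a worker: processes only its local partition of lines.
    counts = Counter()
    for line in partition:
        counts.update(line.split())
    return counts

def parallel_word_count(partitions):
    # One task per partition; only small Counter objects cross the "network".
    with ThreadPoolExecutor(max_workers=4) as pool:
        partials = pool.map(count_words, partitions)
    total = Counter()
    for partial in partials:
        total.update(partial)  # merge the partial results
    return total

partitions = [
    ["big data needs scale", "data moves to compute"],
    ["move compute to data", "big data scales out"],
]
print(parallel_word_count(partitions)["data"])  # 4
```

Moving the computation to the data, rather than the data to the computation, is what keeps network bandwidth usage low as the data set grows.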
Related Content: 13 Must Read Blogs in Data and Analytics
