Here's a Quick Guide on Hadoop Security

This post covers security concepts and tools in Hadoop: the security considerations everyone needs to take care of while working with a Hadoop cluster.


A quick guide on Hadoop security and its features, with top references.

Hadoop Security

Security

  • We live in a very insecure world. Everything from your home's front door key to your all-important virtual keys and passwords needs to be secured. Big Data systems process, transform, and store humongous amounts of data, so that data needs security as well.
  • Imagine your company spent a couple of million dollars installing a Hadoop cluster to gather and analyze your customers' spending habits for a product category using a Big Data solution. A lack of data security there leads to customer apprehension.

Security Concerns

  • Because that solution was not secure, your competitor got access to that data, and your sales dropped 20% for that product category.
  • How did the system allow unauthorized access to data? Wasn't there any authentication mechanism in place? Why were there no alerts?
  • This scenario should make you think about the importance of security, especially where sensitive data is involved.
  • Hadoop has inherent security concerns due to its distributed architecture. An installation with clearly defined user roles and multiple levels of authentication (and encryption) for sensitive data will not let unauthorized access go through; a minimal sketch follows this list.
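To make the "multiple levels of authentication" point concrete, here is a minimal sketch, assuming a Kerberos-enabled cluster, of how a client authenticates with a keytab and tightens permissions on a sensitive HDFS directory using Hadoop's standard Configuration, UserGroupInformation, and FileSystem APIs. The principal name, keytab path, and data path below are hypothetical placeholders.

    // Sketch: authenticate to a Kerberos-secured cluster, then restrict a
    // sensitive directory so only its owner can read it.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;
    import org.apache.hadoop.security.UserGroupInformation;

    public class SecureHdfsAccess {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Require Kerberos authentication and service-level authorization.
            conf.set("hadoop.security.authentication", "kerberos");
            conf.set("hadoop.security.authorization", "true");

            // Log in from a keytab instead of trusting the OS user name.
            UserGroupInformation.setConfiguration(conf);
            UserGroupInformation.loginUserFromKeytab(
                    "analyst@EXAMPLE.COM",                    // hypothetical principal
                    "/etc/security/keytabs/analyst.keytab");  // hypothetical keytab

            // Owner-only access (700) on a sensitive data directory.
            FileSystem fs = FileSystem.get(conf);
            fs.setPermission(new Path("/data/customer-spend"),  // hypothetical path
                    new FsPermission((short) 0700));
            fs.close();
        }
    }

Restricting permissions to 700 here is only an illustration of "clearly defined user roles"; a real deployment would typically combine Kerberos with HDFS ACLs, encryption zones, and tools such as Apache Ranger or Sentry for fine-grained authorization.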

Hadoop Security

  • When talking about Hadoop security, you have to consider how Hadoop was conceptualized. When Doug Cutting and Mike Cafarella started developing Hadoop, security was not the priority.
  • Hadoop was meant to process large amounts of web data in the public domain, and hence security was not the focus of development. That's why it lacked a security model and only provided basic authentication for HDFS, which was not very useful since it was easy to impersonate another user (the sketch after this list shows why).
  • Another issue is that Hadoop was not designed and developed as a cohesive system with pre-defined programs; rather, it grew as a collage of modules that either correspond to various open-source projects or to (proprietary) extensions developed by different vendors to supplement functionality lacking in the Hadoop ecosystem.
  • Therefore, Hadoop expects a secure environment for data processing. In practice, there are still some gaps in achieving secure processing; you can read more about them in the references.
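To illustrate how easy that impersonation was, here is a small sketch, assuming a cluster left on the default "simple" authentication mode: the HDFS client trusts whatever identity the caller supplies, so creating a UserGroupInformation for an arbitrary user name, with no password or ticket involved, is enough to act as that user. The NameNode address is a placeholder.

    // Sketch: with "simple" (default) authentication, the client-side identity
    // is taken on trust, so a caller can claim to be any user, e.g. "hdfs".
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.security.UserGroupInformation;

    import java.security.PrivilegedExceptionAction;

    public class ImpersonationSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020"); // placeholder

            // No credentials are checked: simply claim the superuser's name.
            UserGroupInformation fakeUser = UserGroupInformation.createRemoteUser("hdfs");
            fakeUser.doAs((PrivilegedExceptionAction<Void>) () -> {
                FileSystem fs = FileSystem.get(conf);
                for (FileStatus status : fs.listStatus(new Path("/"))) {
                    System.out.println(status.getPath());
                }
                fs.close();
                return null;
            });
        }
    }

The same effect can be achieved by setting the HADOOP_USER_NAME environment variable before running any HDFS client command, which is exactly why simple authentication is unsuitable for sensitive data and why Kerberos-based authentication was added later.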
