29 November 2015

The awesome points to learn from DB2 NoSQL GraphStore

One good way to frame it before digging into the details of the RDF format for the graph data model: if the graph data model is the model the semantic web uses to store data, RDF is the format in which it is written.


Summary of DB2 Graph Store:
  • DB2-RDF support is officially called "NoSQL Graph Support".  
  • The API extends the Jena API (Graph layer).  Developers familiar with Jena TDB will have the Model layer capabilities they are accustomed to.
  • Although the DB2-RDF functionality is being released with DB2 LUW 10.1, it is also compatible with DB2 9.7.
  • Full support for SPARQL 1.0 and a subset of SPARQL 1.1.  Full SPARQL 1.1 support (which is still a W3C working draft) will be forthcoming.
  • While RDBMS implementations of RDF graphs have typically been non-performant, that is not the case here.  Some very impressive and innovative work has gone into the optimization capabilities.  Out-of-the-box performance is comparable with native triple stores, and read/write performance in the optimized schema has been seen to surpass these speeds.
Related: Presentation on DB2 NoSQL Graph Store

What is the RDF data model? (ref: wiki)

The RDF data model is similar to classical conceptual modeling approaches such as entity–relationship or class diagrams, as it is based upon the idea of making statements about resources (in particular web resources) in the form of subject–predicate–object expressions.  


These expressions are known as triples in RDF terminology. The subject denotes the resource, and the predicate denotes traits or aspects of the resource and expresses a relationship between the subject and the object. For example, one way to represent the notion "The sky has the color blue" in RDF is as the triple: a subject denoting "the sky", a predicate denoting "has", and an object denoting "the color blue". Therefore, RDF swaps object for subject that would be used in the classical notation of an entity–attribute–value model within object-oriented design; Entity (sky), attribute (color) and value (blue). RDF is an abstract model with several serialization formats (i.e., file formats), and so the particular way in which a resource or triple is encoded varies from format to format. 
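Since the DB2 graph store exposes the Jena API (as noted in the summary above), here is a minimal sketch, in plain Apache Jena, of how the "sky has the color blue" triple could be built and serialized. It assumes Jena 3.x package names (older Jena 2.x releases used the com.hp.hpl.jena packages), and the http://example.org/ namespace and class name are illustrative assumptions rather than anything DB2-specific.

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class SkyTripleExample {
    public static void main(String[] args) {
        // An in-memory RDF graph (a Jena Model)
        Model model = ModelFactory.createDefaultModel();

        String ns = "http://example.org/";                        // illustrative namespace
        Resource sky = model.createResource(ns + "sky");          // subject: "the sky"
        Property hasColor = model.createProperty(ns, "hasColor"); // predicate: "has (the color)"
        sky.addProperty(hasColor, "blue");                        // object: the literal "blue"

        // The same abstract triple can be serialized in different RDF formats
        model.write(System.out, "N-TRIPLES");
        model.write(System.out, "TURTLE");
    }
}

Running it prints the same single triple twice, once per serialization format, which illustrates the point above that RDF is an abstract model with several serializations.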


This mechanism for describing resources is a major component in the W3C's Semantic Web activity: an evolutionary stage of the World Wide Web in which automated software can store, exchange, and use machine-readable information distributed throughout the Web, in turn enabling users to deal with the information with greater efficiency and certainty. 

RDF's simple data model and ability to model disparate, abstract concepts has also led to its increasing use in knowledge management applications unrelated to Semantic Web activity. 
A collection of RDF statements intrinsically represents a labeled, directed multi-graph. As such, an RDF-based data model is more naturally suited to certain kinds of knowledge representation than the relational model and other ontological models. However, in practice, RDF data is often persisted in relational databases or in native representations, also called triplestores (or quad stores if context, i.e. the named graph, is also persisted for each RDF triple).[3] ShEx, or Shape Expressions,[4] is a language for expressing constraints on RDF graphs. It includes the cardinality constraints from OSLC Resource Shapes and Dublin Core Description Set Profiles, as well as logical connectives for disjunction and polymorphism. As RDFS and OWL demonstrate, one can build additional ontology languages upon RDF.
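To make the triplestore idea concrete, the following is a small, hedged sketch of querying a tiny in-memory Jena model with SPARQL (the SPARQL support called out in the DB2 summary above applies to graphs stored in DB2 rather than in memory). The namespace, class name and query text are illustrative assumptions.

import org.apache.jena.query.Query;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class SparqlSelectExample {
    public static void main(String[] args) {
        // Build a one-triple graph to query
        Model model = ModelFactory.createDefaultModel();
        String ns = "http://example.org/";
        model.createResource(ns + "sky")
             .addProperty(model.createProperty(ns, "hasColor"), "blue");

        // Find every subject/object pair connected by the hasColor predicate
        String sparql = "SELECT ?s ?o WHERE { ?s <" + ns + "hasColor> ?o }";
        Query query = QueryFactory.create(sparql);
        try (QueryExecution qexec = QueryExecutionFactory.create(query, model)) {
            ResultSet results = qexec.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.get("s") + " hasColor " + row.get("o"));
            }
        }
    }
}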


28 November 2015

The Ultimate Cheat Sheet On Hadoop

Top 20 frequently asked questions to check your Hadoop knowledge are given in the cheat sheet below.

Question 1
You have written a MapReduce job that will process 500 million input records and generate 500 million key-value pairs. The data is not uniformly distributed. Your MapReduce job will create a significant amount of intermediate data that it needs to transfer between mappers and reducers which is a potential bottleneck. A custom implementation of which of the following interfaces is most likely to reduce the amount of intermediate data transferred across the network?
  • A. Writable
  • B. WritableComparable
  • C. InputFormat
  • D. OutputFormat
  • E. Combiner
  • F. Partitioner
Ans: e
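For context on why the combiner (answer E) helps: a combiner pre-aggregates map output on the map side, so far fewer key-value pairs are shuffled across the network. Below is a minimal word-count style sketch using the newer org.apache.hadoop.mapreduce API; the class names, the counting logic and the omitted input/output configuration are illustrative assumptions, not part of the question.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class CombinerSketch {

    public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                word.set(token);
                context.write(word, ONE); // one pair per token: lots of intermediate data
            }
        }
    }

    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();
        job.setJarByClass(CombinerSketch.class);
        job.setMapperClass(TokenMapper.class);
        // The combiner runs on the map side and pre-sums the counts per key,
        // so the shuffle moves far fewer key-value pairs to the reducers.
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // Input/output format and path configuration omitted in this sketch.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}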
Question 2
Where is Hive metastore stored by default ?
  • A. In HDFS
  • B. In client machine in the form of a flat file.
  • C. In client machine in a derby database
  • D. In lib directory of HADOOP_HOME, and requires HADOOP_CLASSPATH to be modified.
Ans: c
Question 3
The NameNode uses RAM for the following purpose:
  • A. To store the contents in HDFS.
  • B. To store the filenames, list of blocks and other meta information.
  • C. To store log that keeps track of changes in HDFS.
  • D. To manage distributed read and write locks on files in HDFS.
Ans: b
Question 4
What is true about reduce-side joining?
  • A. It requires a lot of in-memory processing.
  • B. The amount of data written to the local disk of the DataNode running the reduce task increases.
  • C. The reduce task generates more output data than input data.
  • D. It requires declaring a custom partitioner and group comparator in the JobConf object.
Ans: b
Question 5
Consider the below query:
INSERT OVERWRITE TABLE newTable
SELECT s.word, s.freq, k.freq FROM
shakespeare s JOIN kjv k ON
(s.word = k.word)
WHERE s.freq >= 5;
Is the output result stored in HDFS?
  • A. Yes, inside newTable
  • B. Yes, inside shakespeare.
  • C. No, not at all.
  • D. Maybe, depends on the permission given to the client
Ans: a
Question 6
One of the business analysts in your organization has very good expertise in C coding. He wants to clean and model the business data that is stored in HDFS. Which of the following is best suited for him?
  • A. HIVE
  • B. PIG
  • C. MAPREDUCE
  • D. OOZIE
  • E. HadoopStreaming
Ans: e
Question 7
Which process describes the life cycle of a mapper?
  • A. The JobTracker calls the TaskTracker’s configure() method, then its map() method and finally its close() method.
  • B. Task Tracker spawns a new mapper process to process all records of a single InputSplit.
  • C. Task Tracker spawns a new mapper process to process each key-value pair.
  • D. JobTracker spawns a new mapper process to process all records of single input file.
Ans: b
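The lifecycle in answer B corresponds to the template-method shape of the newer Mapper class: one mapper task runs per InputSplit, and within it setup() is called once, map() once per record, and cleanup() once at the end (the older mapred API uses configure() and close() instead). The sketch below is only an illustration of that shape; the class and field names are assumptions.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class LifecycleMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private long recordCount = 0;

    @Override
    protected void setup(Context context) {
        // Called once, before the first record of this task's InputSplit.
        recordCount = 0;
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Called once for every record in the InputSplit.
        recordCount++;
        context.write(new Text(value.toString()), new IntWritable(1));
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // Called once, after the last record of the InputSplit has been processed.
        context.write(new Text("records.seen"), new IntWritable((int) recordCount));
    }
}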
Question 8
How does the NameNode detect that a DataNode has failed?
A. The NameNode does not need to know that DataNode has failed.
B. When the NameNode fails to receive periodic heartbeats from the DataNode, it considers the DataNode as failed.
C. The NameNode pings the datanode. If the DataNode does not respond, the NameNode considers the DataNode failed.
D. When HDFS starts up, the NameNode tries to communicate with the DataNode and considers the DataNodes failed if it does not respond.
Ans: b
QUESTION 9
Two files need to be joined over a common column. Which technique is faster, and why?
A. The reduce-side joining is faster as it receives the records sorted by keys.
B. The reduce-side joining is faster as it uses secondary sort.
C. The map-side joining is faster as it caches the data from one file in-memory.
D. The map-side joining is faster as it writes the intermediate data on the local file system.
Ans: c
QUESTION 10
You want to run two different jobs which may use the same lookup data (for example, US state codes). While submitting the first job you used the distributed cache to copy the lookup data file to each data node. Both jobs have a mapper configure method where the distributed file is retrieved programmatically and the values are cached in a hash map. Both jobs use ToolRunner so that the file for the distributed cache can be provided at the command prompt. You run the first job with the data file passed to the distributed cache. When the job is complete you fire the second job without passing the lookup file to the distributed cache. What is the consequence? (Select one)
A. The first job runs but the second job fails. This is because the distributed cache is persistent only as long as the job is running. After the job is complete the distributed cache gets removed.
B. The first and second jobs complete without any problem, because once distributed cache files are set they are permanently copied.
C. The first and second jobs will complete successfully if the number of reducers is set to zero, because the distributed cache works only with map-only jobs.
D. Both jobs are successful if they are chained using ChainMapper or ChainReducer, because the distributed cache only works with ChainMapper or ChainReducer.
Ans: a
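To make answer A concrete: files registered with the distributed cache are localized on the worker nodes only for the lifetime of the job that registered them, so the second job must register the lookup file again. The sketch below shows the usual pattern with the newer mapreduce API (job.addCacheFile() plus a setup() method); the file name, class names and tab-separated lookup format are illustrative assumptions, and the question itself describes the older configure() style rather than this exact code.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;

public class LookupCacheSketch {

    public static class LookupMapper extends Mapper<LongWritable, Text, Text, Text> {
        private final Map<String, String> stateCodes = new HashMap<>();

        @Override
        protected void setup(Context context) throws IOException {
            // Each job must register the cache file itself; the localized copy
            // from a previous job is cleaned up once that job finishes.
            URI[] cacheFiles = context.getCacheFiles();
            if (cacheFiles == null || cacheFiles.length == 0) {
                throw new IOException("Lookup file was not added to the distributed cache");
            }
            Path local = new Path(cacheFiles[0].getPath());
            try (BufferedReader reader = new BufferedReader(new FileReader(local.getName()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] parts = line.split("\t");
                    stateCodes.put(parts[0], parts[1]); // e.g. "CA" -> "California"
                }
            }
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String code = value.toString().trim();
            context.write(new Text(code), new Text(stateCodes.getOrDefault(code, "UNKNOWN")));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();
        job.setJarByClass(LookupCacheSketch.class);
        job.setMapperClass(LookupMapper.class);
        // Register the lookup file for THIS job; a later job cannot rely on it.
        job.addCacheFile(new URI("/lookup/us_state_codes.txt"));
        // Remaining reducer, format and path configuration omitted in this sketch.
    }
}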
QUESTION 11
You want to run Hadoop jobs on your development workstation for testing before you submit them to your production cluster. Which mode of operation in Hadoop allows you to most closely simulate a production cluster while using a single machine?
A. Run all the nodes in your production cluster as virtual machines on your development workstation.
B. Run the hadoop command with the –jt local and the –fs file:/// options.
C. Run the DataNode, TaskTracker, JobTracker and NameNode daemons on a single machine.
D. Run simpldoop, Apache open source software for simulating Hadoop cluster.
Ans: c
Question 12
MapReduce is well-suited for all of the following EXCEPT? (Choose one)
A. Text mining on large collections of unstructured documents.
B. Analysis of large amounts of web logs (queries, clicks etc.).
C. Online transaction processing (OLTP) for an e-commerce Website.
D. Graph mining on a large social network (e.g. Facebook friend’s network).
Ans: c
Question 13
Your cluster has 10 Datanodes, each with a single 1 TB hard drive. You utilize all your disk capacity for HDFS, reserving none for MapReduce. You implement default replication settings. What is the storage capacity of your Hadoop cluster (assuming no compression)?
A. About 3 TB
B. About 5 TB
C. About 10TB
D. About 11TB
Ans: a (10 TB of raw disk divided by the default replication factor of 3 leaves roughly 3.3 TB of usable HDFS capacity)
Question 14
Combiners increase the efficiency of a MapReduce program because:
A. They provide a mechanism for different mappers to communicate with each other, thereby reducing synchronization overhead.
B. They provide an optimization and reduce the total number of computations that are needed to execute an algorithm by a factor of n, where n are the number of reducers.
C. They aggregate map output locally in each individual machine and therefore reduce the amount of data that needs to be shuffled across the network to the reducers.
D. They aggregate intermediate map output to a small number of nearby (i.e. rack local) machines and therefore reduce the amount of data that needs to be shuffled across the network to the reducers.
Ans: c
Question 15
When is the reduce method first called in a MapReduce Job?
A. Reduce methods and map methods all start at the beginning of a job, in order to provide optimal performance for map-only and reduce-only jobs.
B. Reducers start copying the intermediate key-value pairs from each Mapper as soon as it has completed. The reduce method is called as soon as the intermediate key-value pairs start to arrive.
C. Reducers start copying intermediate key-value pairs from each Mapper as soon as it has completed. The reduce method is called only after all intermediate data has been copied and sorted.
D. Reducers start copying intermediate key-value pairs from each Mapper as soon as it has completed. The programmer can configure in the job what percentage of the intermediate data should arrive before the reduce method begins.
Ans: c
Question 16
Your client application submits a MapReduce job to your Hadoop cluster. Identify the Hadoop daemon on which the Hadoop framework will look for an available slot to schedule a MapReduce operation.
A. TaskTracker
B. NameNode
C. DataNode
D. JobTracker
E. Secondary Namenode.
Question 17
What is the maximum limit for key-value pairs that a mapper can emit?
A. It is equivalent to the number of lines in the input files.
B. It is equivalent to the number of times the map() method is called in the mapper task.
C. There is no such restriction. It depends on the use case and logic.
D. 1000
Ans: c
Question 18
What is the disadvantage of using multiple reducers with the default HashPartitioner and distributing your workload across your cluster?
A. You will not be able to compress your intermediate data.
B. You will no longer be able to take advantage of a Combiner.
C. The output files may not be in global sorted order.
D. There is no problem.
Ans: c
Question 19
You are developing a combiner that takes as input Text keys, IntWritable values, and emits Text keys, IntWritable values. Which interface should your class implement?
A. Combiner<Text, IntWritable, Text, IntWritable>
B. Mapper<Text, IntWritable, Text, IntWritable>
C. Reducer<Text, Text, IntWritable, IntWritable>
D. Reducer<Text, IntWritable, Text, IntWritable>
E. Combiner<Text, Text, IntWritable, IntWritable>
Ans: d (a combiner is written as a Reducer whose input and output types match the map output types)
Question 20
During the standard sort and shuffle phase of MapReduce, keys and values are passed to reducers. Which of the following is true?
A. Keys are presented to a reducer in sorted order; values for a given key are not sorted.
B. Keys are presented to a reducer in sorted order; values for a given key are sorted in ascending order.
C. Keys are presented to a reducer in random order; values for a given key are not sorted.
D. Keys are presented to a reducer in random order; values for a given key are sorted in ascending order.
Ans: a

How to learn Tableau the best way with self-study tutorials


#Tips To Mastering Tableau Self Study Video Tutorials
The Tableau training website offers a multitude of resources for Tableau users, with many of the videos being brief and addressing specific topics. This self-study syllabus organizes those videos to help you find the training you need on specific topics quickly. See also: Tableau 9 for Data Science Engineers.

To learn any Software Tool, you need to follow these steps:
  1. Tutorials - Either on-line or class room
  2. Books - Read theory from scratch
  3. Hands-on Training - Just practice what you have learned
  4. Materials - Written by experienced developers
  5. Blogs/Websites/Forums - These give you many insights
The link below contains valuable video tutorials. You can learn Tableau quickly in just a few days.

Take Video Lessons Here

26 November 2015

The 12 best QlikView Interview Questions with answers

1) What is QlikView?
QlikView is a program that makes it possible to retrieve and assimilate data from different sources. Once loaded into the program, the data is presented in a way that is easy to understand and work with.
2) In how many flavors is QlikView available? QlikView comes in three flavors, called QlikView Enterprise, QlikView Professional and QlikView Analyzer. If you are running QlikView Enterprise, all parts of this tutorial will be relevant for you. If you are running QlikView Professional, only the first part, “Working with QlikView”, is relevant. For those running QlikView Analyzer, only the very first lesson may be relevant. Related: QlikView+Jobs+technical+Skills
3) How to start QlikView? You start QlikView by double-clicking the QlikView icon in the QlikView group (created during the installation procedure). You will also find QlikView on the Start menu, under Programs. It is also possible to start QlikView by double-clicking the icon of a QlikView file. After QlikView has started, the file will be opened.
4) How to OPEN a file in QlikView? Use the Open command on the File menu or the Open button on the toolbar to open an existing file. If the file was one of the latest QlikView documents used, you can also open it by choosing the file name from the File menu. Several files can be open simultaneously. If this is the case, you can activate another file by choosing it from the list on the Window menu, or by using the key combination CTRL+TAB.

5) How to SAVE a document in QlikView? Use the Save command on the File menu or the Save button on the toolbar to save an open document. When developing applications, you should save periodically so that you do not lose your work in the event of hardware or software problems or a power failure.

6) How to close the document in QlikView?
Each document appears in its own window. You can close a document at any time by using the Close command in the File menu. If you have made any changes, QlikView will display a message asking whether you want to save the changes or not. Selections are considered as changes. Choose the Yes button to save, the No button to close the document without saving, or the Cancel button to cancel the closing procedure.
7) What is QlikView help? QlikView Help is a conventional Help program. To find out how to use the Help program, choose Using Help from the Help menu. For specific help on QlikView, choose Contents from the Help menu and browse through the topics.
8) QlikView shortcut commands?
  • Ctrl+Shift+S – Shows all objects/tabs in the QVW, regardless of hide/show condition
  • Ctrl+Shift+D – Clear all, same as the clear button but it’s faster if you’re just typing
  • Ctrl+Shift+Q – Opens Document Support Information, which provides detailed information about the QVW and computer you’re using
  • Ctrl+Shift+B – Opens Bookmark Overview
  • Ctrl+Shift+O – Opens Connect to Server dialogue
  • Ctrl+Shift+L / Ctrl+Shift+U – Locks and unlocks whichever listbox is activated
  • Alt+Click – Holding Alt and hovering over any object allows you to move the object, which is particularly useful for tables so you don’t have to grab the caption
  • Alt+Ctrl+Click – Copies an object to another part of the sheet or another tab (NB: the Allow Move/Size checkbox must be enabled in Properties > Layout if you want to move to another tab)
  • Ctrl+Alt+E – Opens Expression Overview
  • Ctrl+Alt+V – Opens Variable Overview
  • Ctrl+Alt+D – Opens Document Properties (NB: this is a good way to view and change variables if your application has many variables. The Variable Overview will calculate each variable before opening, but the Variables tab in the Document Properties will not.)
  • Ctrl+Alt+S – Opens Sheet Properties
  • Ctrl+Q+Q – In the Script Editor, writes code that generates sample data
  • Ctrl+Q – Displays Current Selections
  • Ctrl+K+C / Ctrl+K+U – In the Script or Expression Editor, comments and uncomments the selected block of code
  • Ctrl+G – Activates Grid Mode. In addition to the obvious segmentation of the screen, Grid Mode also allows you to right-click inside one cell of a table and format that cell alone, using the Custom Format Cell option in the context menu.
9) Basic QlikView Terminology?
Read all
10) How will you make a query in QlikView? In QlikView, the main way of making queries is through the selection of field values. When you make a selection, the program instantaneously shows all the field values in the document that are related to the selected field value. To make a query, or a search, in the database, you just click on something you want to know more about.
11) What is Qlik Sense? Qlik Sense is basically a useful tool for end users who just want to analyze their spreadsheets in visual form. With the launch of this tool, QlikView has introduced self-service BI. Qlik Sense engages the end user with data visualization and also reduces the dependency on developers for small app development.
With this tool users can easily convert spreadsheets into interactive visualizations, and they no longer need to wait for weeks to see a dashboard developed by the IT team; they can do it themselves. This also eliminates the steps involved in getting the requirements documented, freezing the mock design and taking approval from stakeholders. In fact, it promotes agile development and the implementation of the innovative ideas that pop up during development. Qlik Sense today is only the first version of something that will evolve further and get more features and functions as time goes on.

12) Basic functionality of QlikView? One of QlikView’s primary differentiators is the associative user experience it delivers. QlikView is the leading Business Discovery platform. It enables users to explore data, make discoveries, and uncover insights that enable them to solve business problems in new ways. Business users conduct searches and interact with dynamic dashboards and analytics from any device.

21 November 2015

Scrum Vs Agile Methodology best explained with more details


Life cycle of scrum with more details
Scrum is part of the Agile movement. Agile is a response to the failure of the dominant software development project management paradigms (including waterfall) and borrows many principles from lean manufacturing. In 2001, 17 pioneers of similar methods met at the Snowbird Ski Resort in Utah and wrote the Agile Manifesto, a declaration of four values and twelve principles. 

These values and principles stand in stark contrast to the traditional Project Management Body of Knowledge (PMBOK). The Agile Manifesto placed a new emphasis on communication and collaboration, functioning software, team self-organization, and the flexibility to adapt to emerging business realities.


How Does Scrum Fit With Agile?
The Agile Manifesto doesn’t provide concrete steps. Organizations usually seek more specific methods within the Agile movement. These include Crystal Clear, Extreme Programming, Feature Driven Development, Dynamic Systems Development Method (DSDM), Scrum, and others. While I like all the Agile approaches, for my own team Scrum was the one that enabled our initial breakthroughs. Scrum’s simple definitions gave our team the autonomy we needed to do our best work while helping our boss (who became our Product Owner) get the business results he wanted. Scrum opened our door to other useful Agile practices such as test-driven development (TDD). Since then we’ve helped businesses around the world use Scrum to become more agile. A truly agile enterprise would not have a “business side” and a “technical side.” It would have teams working directly on delivering business value. We get the best results when we involve the whole business in this, so those are the types of engagements I’m personally the most interested in.

What’s The Philosophy Behind Scrum?
Scrum’s early advocates were inspired by empirical inspect and adapt feedback loops to cope with complexity and risk. Scrum emphasizes decision making from real-world results rather than speculation. Time is divided into short work cadences, known as sprints, typically one week or two weeks long. The product is kept in a potentially shippable (properly integrated and tested) state at all times. At the end of each sprint, stakeholders and team members meet to see a demonstrated potentially shippable product increment and plan its next steps.


Scrum is a simple set of roles, responsibilities, and meetings that never change. By removing unnecessary unpredictability, we’re better able to cope with the necessary unpredictability of continuous discovery and learning.

(Ref: Scrummethodology)

20 November 2015

Tableau top features useful for data analysis

Tableau is one of the most popular tools in data analysis. Learning Tableau gives you many options in a data analysis career.

You can download Tableau Software free version here. Get complete understanding document on how Tableau works here. Read this post for advancing in your Tableau Career.

Unique functionality in Tableau: Tableau Software was founded on the idea that analysis and visualization should not be isolated activities but must be synergistically integrated into a visual analysis process. Visual analysis means specifically:
Data Exploration. Visual analysis is designed to support analytical reasoning. The goal of visual analysis is to answer important questions using data and facts. In order to support analysis, it is not enough to only access and report on the data. Analysis requires computational support throughout the process. Typical steps in analysis include such operations as (1) filtering to focus on items of interest, (2) sorting to rank and prioritize, (3) grouping and aggregating to summarize, and (4) creating on-the-fly calculations to express numbers in useful ways. A visual analysis application exposes these exploratory operations to ordinary people through easy-to-use interfaces.

Next Steps: Tableau 9 Advanced Training
Data Visualization. Visual analysis means presenting information in ways that support visual thinking. Data is displayed using the best practices of information visualization. The right presentation makes it easy to organize and understand the information. For example, critical information may be quickly found, and features, trends, and outliers may be easily recognized. One powerful way to evaluate any analysis tool is to test its effectiveness in answering specific questions. At the most fundamental level, does the tool have the analytical power needed to answer the question? At another level, how long does it take to answer the question? A successful visual analysis application unites data exploration and data visualization in an easy-to-use application that anyone can use.

Daily use Tableau commands
  1. addusers (to group)
  2. creategroup
  3. createproject
  4. createsite
  5. createsiteusers
  6. createusers
  7. delete workbook-name or datasource-name
  8. deletegroup
  9. deleteproject
  10. deletesite
  11. deletesiteusers
  12. deleteusers
  13. editdomain
  14. editsite
  15. export
  16. get url
  17. listdomains
  18. listsites
  19. login
  20. logout
  21. publish
  22. refreshextracts
  23. removeusers
  24. runschedule
  25. set
  26. syncgroup
  27. version

YouTube tutorial for beginners:


Related: Tableau Job Opportunities, Career Options

18 November 2015

The big revolution of the Industrial Internet of Things

To understand why General Electric is plowing $1 billion into the idea of using software to transform industry, put yourself in the shoes of Jeff Immelt, its CEO. As recently as 2004, GE had reigned as the most valuable company on the planet. But these days, it’s not even the largest in America. Apple, Microsoft, and Google are all bigger. Software is king of the hill. And, as Immelt came to realize, GE is not that great at software. Internal surveys had discovered that GE sold $4 billion worth of industrial software a year—the kind used to run pumps or monitor wind turbines.

That’s as much as the total revenue of Salesforce.com. But these efforts were scattered and not always state-of-the-art. And that gap was turning dangerous. GE had always believed that since it knew the materials and the physics of its jet engines and medical scanners, no one could best it in understanding those machines. But companies that specialize in analytics, like IBM, were increasingly spooking GE by figuring out when big-ticket machines like a gas turbine might fail—just by studying raw feeds from gauges or vibration monitors. This was no small thing. GE sells $60 billion a year in industrial equipment. But its most lucrative business is servicing the machines.

Now software companies were looking to take a part of that pie, to get between GE and its largest source of profits. As Immelt would later say, “We cannot afford to concede how the data gathered in our industry is used by other companies.” In 2012, GE unveiled its answer to these threats, a campaign it calls the “industrial Internet.”

It included a new research lab across the bay from Silicon Valley, where it has hired 800 people,
many of them programmers and data scientists. “People have told companies like GE for years that they can’t be in the software business,” Immelt said last year. “We’re too slow. We’re big and dopey. But you know what? We are extremely dedicated to winning in the markets we’re in. And this is a to-the-death fight to remain relevant to our customers.”

Peter Evans, then a GE executive, was given the job of shaping what he calls the “meta-narrative” around GE’s big launch. Industrial companies, which prize reliability, aren’t nearly as quick to jump for new technology as consumers. So GE’s industrial-Internet pitch was structured around the huge economic gains even a 1 percent improvement in efficiency might bring to a number of industries if they used more analytics software. That number was fairly arbitrary—something safe, “just 1 percent,” recalls Evans. But here Immelt’s marketing skills came into play. “Not ‘just 1 percent’,” he said, flipping it around. GE’s slogan would be “The Power of 1 Percent.” In a stroke, GE had shifted the discussion about where the Internet was going next. Other companies had been talking about connecting cars and people and toasters. But manufacturing and industry account for a giant slice of global GDP. “All the appliances in your home could be wired up and monitored, but the kind of money you make in airlines or health care dwarfs that,” Immelt remarked.

There is another constituency for the campaign: engineers inside GE. To them, operational software isn’t anything new. Nor are control systems—even a steam locomotive has one. But here Immelt was betting they could reinvent these systems. “You do embedded systems? My God, how boring is that? It’s like, put a bullet in your head,” says Brian Courtney, a GE manager based in Lisle, Illinois. “Now it’s the hottest job around.” At the Lisle center, part of GE’s Intelligent Platforms division,
former field engineers sit in cubicles monitoring squiggles of data coming off turbines in Pakistan and oil rigs in onetime Soviet republics.

Call this version 1.0 of the industrial Internet. On the walls, staff hang pictures of fish; each represents a problem, like a cracked turbine blade, that was caught early. More and more, GE will be using data to anticipate maintenance needs, says Courtney. 

17 November 2015

Top Daily Use LINUX Commands for Programmers

1.1 What is a command shell?
A program that interprets commands. It allows a user to execute commands by typing them manually at a terminal, or automatically in programs called shell scripts. A shell is not an operating system. It is a way to interface with the operating system and run commands.

1.2 What is BASH?
BASH = Bourne Again SHell
Bash is a shell written as a free replacement to the standard Bourne Shell (/bin/sh) originally written by Steve Bourne for UNIX systems. It has all of the features of the original Bourne Shell, plus additions that make it easier to program with and use from the command line. Since it is Free Software, it has been adopted as the default shell on most Linux systems.

1.3 How is BASH different from the DOS command prompt?
Case Sensitivity: In Linux/UNIX, commands and filenames are case sensitive, meaning that typing “EXIT” instead of the proper “exit” is a mistake.
“\” vs. “/”: In DOS, the forward-slash “/” is the command argument delimiter, while the backslash “\” is a directory separator. In Linux/UNIX, the “/” is the directory separator, and the “\” is an escape character. More about these special characters in a minute!

Filenames: The DOS world uses the “eight dot three” filename convention, meaning that all files followed a format that allowed up to 8 characters in the filename, followed by a period (“dot”), followed by an optional extension, up to 3 characters long (e.g. FILENAME.TXT). In UNIX/Linux, there is no such thing as a file extension. Periods can be placed at any part of the filename, and “extensions” may be interpreted differently by all programs, or not at all.

1.4 Special Characters
Before we continue to learn about Linux shell commands, it is important to know that there are
many symbols and characters that the shell interprets in special ways. This means that certain
typed characters: a) cannot be used in certain situations, b) may be used to perform special
operations, or, c) must be “escaped” if you want to use them in a normal way.

Character Description
\ Escape character. If you want to reference a special character, you must “escape” it
with a backslash first.
Example: touch /tmp/filename\*
/ Directory separator, used to separate a string of directory names.
Example: /usr/src/linux
. Current directory. Can also “hide” files when it is the first character in a filename.
.. Parent directory
~ User's home directory
* Represents 0 or more characters in a filename, or by itself, all files in a directory.
Example: pic*2002 can represent the files pic2002, picJanuary2002,
picFeb292002, etc.
? Represents a single character in a filename.
Example: hello?.txt can represent hello1.txt, helloz.txt, but not
hello22.txt
[ ] Can be used to represent a range of values, e.g. [0-9], [A-Z], etc.
Example: hello[0-2].txt represents the names hello0.txt,
hello1.txt, and hello2.txt
| “Pipe”. Redirect the output of one command into another command.
Example: ls | more
> Redirect output of a command into a new file. If the file already exists, over-write it.
Example: ls > myfiles.txt
>> Redirect the output of a command onto the end of an existing file.
Example: echo “Mary 555-1234” >> phonenumbers.txt
< Redirect a file as input to a program.
Example: more < phonenumbers.txt
; Command separator. Allows you to execute multiple commands on a single line.
Example: cd /var/log ; less messages
&& Command separator as above, but only runs the second command if the first one
finished without errors.
Example: cd /var/logs && less messages
& Execute a command in the background, and immediately get your shell back.
Example: find / -name core > /tmp/corefiles.txt &

1.5 Executing Commands
The Command PATH:

Most common commands are located in your shell's “PATH”, meaning that you can just
type the name of the program to execute it.
Example: Typing “ls” will execute the “ls” command.
Your shell's “PATH” variable includes the most common program locations, such as
/bin, /usr/bin, /usr/X11R6/bin, and others.
To execute commands that are not in your current PATH, you have to give the complete
location of the command.
Examples: /home/bob/myprogram
./program (Execute a program in the current directory)
~/bin/program (Execute program from a personal bin directory)
Command Syntax
Commands can be run by themselves, or you can pass in additional arguments to make them do
different things. Typical command syntax can look something like this:
command [-argument] [-argument] [--argument] [file]
Examples: ls List files in current directory
ls -l Lists files in “long” format
ls -l --color As above, with colourized output
cat filename Show contents of a file
cat -n filename Show contents of a file, with line numbers

2.0 Getting Help
When you're stuck and need help with a Linux command, help is usually only a few keystrokes
away! Help on most Linux commands is typically built right into the commands themselves,
available through online help programs (“man pages” and “info pages”), and of course online.

2.1 Using a Command's Built-In Help
Many commands have simple “help” screens that can be invoked with special command flags.
These flags usually look like “-h” or “--help”.
Example: grep --help

2.2 Online Manuals: “Man Pages”
The best source of information for most commands can be found in the online manual pages,
known as “man pages” for short. To read a command's man page, type “man command”.
Examples: man ls Get help on the “ls” command.
man man A manual about how to use the manual!
To search for a particular word within a man page, type “/word”. To quit from a man page, just
type the “Q” key.
Sometimes, you might not remember the name of Linux command and you need to search for it.
For example, if you want to know how to change a file's permissions, you can search the man page
descriptions for the word “permission” like this:
man -k permission
If you look at the output of this command, you will find a line that looks something like:
chmod (1) - change file access permissions
Now you know that “chmod” is the command you were looking for. Typing “man chmod” will
show you the chmod command's manual page!

2.3 Info Pages
Some programs, particularly those released by the Free Software Foundation, use info pages as
their main source of online documentation. Info pages are similar to man pages, but instead of
being displayed on one long scrolling screen, they are presented in shorter segments with links to
other pieces of information. Info pages are accessed with the “info” command, or on some
Linux distributions, “pinfo” (a nicer info browser).
For example: info df Loads the “df” info page.

3.0 Navigating the Linux Filesystem
The Linux filesystem is a tree-like hierarchy of directories and files. At the base of the
filesystem is the “/” directory, otherwise known as the “root” (not to be confused with the root
user). Unlike DOS or Windows filesystems that have multiple “roots”, one for each disk drive, the
Linux filesystem mounts all disks somewhere underneath the / filesystem. The following table
describes many of the most common Linux directories.

3.1 The Linux Directory Layout
Directory Description
The nameless base of the filesystem. All other directories, files, drives, and
devices are attached to this root. Commonly (but incorrectly) referred to as
the “slash” or “/” directory. The “/” is just a directory separator, not a
directory itself.
/bin Essential command binaries (programs) are stored here (bash, ls, mount,
tar, etc.)
/boot Static files of the boot loader.
/dev Device files. In Linux, hardware devices are accessed just like other files, and
they are kept under this directory.
/etc Host-specific system configuration files.
/home Location of users' personal home directories (e.g. /home/susan).
/lib Essential shared libraries and kernel modules.
/proc Process information pseudo-filesystem. An interface to kernel data structures.
/root The root (superuser) home directory.
/sbin Essential system binaries (fdisk, fsck, init, etc).
/tmp Temporary files. All users have permission to place temporary files here.
/usr The base directory for most shareable, read-only data (programs, libraries,
documentation, and much more).
/usr/bin Most user programs are kept here (cc, find, du, etc.).
/usr/include Header files for compiling C programs.
/usr/lib Libraries for most binary programs.
/usr/local “Locally” installed files. This directory only really matters in environments
where files are stored on the network. Locally-installed files go in
/usr/local/bin, /usr/local/lib, etc.). Also often used for
software packages installed from source, or software not officially shipped
with the distribution.
/usr/sbin Non-vital system binaries (lpd, useradd, etc.)
/usr/share Architecture-independent data (icons, backgrounds, documentation, terminfo,
man pages, etc.).
/usr/src Program source code. E.g. The Linux Kernel, source RPMs, etc.
/usr/X11R6 The X Window System.
/var Variable data: mail and printer spools, log files, lock files, etc.

3.2 Commands for Navigating the Linux Filesystems
The first thing you usually want to do when learning about the Linux filesystem is take some time to look around and see what's there! These next few commands will:
a) Tell you where you are, b) take you somewhere else, and c) show you what's there. The following table describes the basic operation of the pwd, cd, and ls commands, and compares them to certain DOS commands that you might already be familiar with.

Linux Command DOS Command Description
pwd cd “Print Working Directory”. Shows the current location in the directory tree.
cd cd, chdir “Change Directory”. When typed all by itself, it returns you to your home directory.
cd directory cd directory Change into the specified directory name.

Example: cd /usr/src/linux
cd ~ “~” is an alias for your home directory. It can be used as a shortcut to your “home”, or other
directories relative to your home.

cd .. cd.. Move up one directory. For example, if you are in /home/vic and you type “cd ..”, you will end up in /home.
cd - Return to previous directory. An easy way to get back to your previous location!
ls dir /w List all files in the current directory, in column format.
ls directory dir directory List the files in the specified directory.
Example: ls /var/log
ls -l dir List files in “long” format, one file per line. This also shows you additional info about the file, such as ownership, permissions, date, and size.
ls -a dir /a List all files, including “hidden” files. Hidden files are those files that begin with a “.”, e.g. The .bash_history file in your home directory.
ls -ld directory
A “long” list of “directory”, but instead of showing the directory contents, show the directory's detailed information. For example, compare the output of the following two commands:
ls -l /usr/bin
ls -ld /usr/bin
ls /usr/bin/d* dir d*.* List all files whose names begin with the letter “d”
in the /usr/bin directory.

>> Read more

Limitations of Mobile Computing

What is Mobile Computing?
Mobile computing is the ability to wirelessly connect to and use centrally located information and/or application software, through the application of small, portable, wireless computing and communication devices that support voice, data and multimedia communication standards.

Limitations
  • Resource constraints: Battery
  • Interference: the quality of service (QoS)
  • Bandwidth: connection latency
  • Dynamic changes in communication environment: variations in signal power within a region, thus link delays and connection losses
  • Network Issues: discovery of the connection-service to destination and connection stability
  • Interoperability issues: the varying protocol standards
  • Security constraints: Protocols conserving privacy of communication 

16 November 2015

Benefits of having Certified SAS Base Programmer

Why certification is beneficial?

Professionals in data management, data warehousing or in a business intelligence role would find the certification ideal. In addition, recent college graduates with an inclination to solve problems logically who are seeking to enter the data analysis field will find the certification beneficial to kick-start their careers.
Certification in Base SAS
Base SAS Programmer

This course is also ideal if you are a working professional or a recent graduate who is
  • Aspiring to be in fast growing career
  • Looking for a more challenging position
  • Aiming to get into a more skillful role
  • Aspiring to be one of the coolest scientists of 21st century
What is Base SAS?

It's the foundation for all SAS software. Along with an easy-to-learn, flexible programming language, you get a web-based programming interface; ready-to-use programs for data manipulation, information storage and retrieval, descriptive statistics and reporting; a centralized metadata repository; and a macro facility that reduces programming time and maintenance headaches.

13 November 2015

IT Jobs on Internet of Things - Best Group


( Best on-line Training for IoT) 
All freshers and experienced software developers who wish to take their career toward the Internet of Things (IoT) can join this group.

IT JOBS on Internet of Things - Join Today to get benefit.

Imagine a world where billions of objects can sense, communicate and share information, all
interconnected over public or private Internet Protocol (IP) networks. These interconnected
objects have data regularly collected, analysed and used to initiate action, providing a
wealth of intelligence for planning, management and decision making. This is the world of
the Internet of Things (IOT).

The IOT concept was coined by a member of the Radio Frequency Identification (RFID)
development community in 1999, and it has recently become more relevant to the practical
world largely because of the growth of mobile devices, embedded and ubiquitous
communication, cloud computing and data analytics.

Best on-line Training for Internet of Things

Since then, many visionaries have seized on the phrase “Internet of Things” to refer to the
general idea of things, especially everyday objects, that are readable, recognisable,
locatable, addressable, and/or controllable via the Internet, irrespective of the
communication means (whether via RFID, wireless LAN, wide-area networks, or other
means). Everyday objects include not only the electronic devices we encounter or the
products of higher technological development such as vehicles and equipment but things
that we do not ordinarily think of as electronic at all - such as food and clothing. Examples of
“things” include:

-People;
-Location (of objects);
-Time Information (of objects);
-Condition (of objects).

These “things” of the real world shall seamlessly integrate into the virtual world, enabling
anytime, anywhere connectivity. In 2010, the number of everyday physical objects and
devices connected to the Internet was around 12.5 billion. Cisco forecasts that this figure is
expected to double to 25 billion in 2015 as the number of smart devices per person
increases, and to a further 50 billion by 2020.

12 November 2015

Top advantages of Agile Project Management vs Waterfall model

(Agile and Scrum Certification)
Agile Project Management has its roots in iterative project management. It is a highly flexible and interactive model where the requirements, and the resultant plan to meet those requirements, keep changing with inputs from stakeholders, suppliers and customers.

The traditional practice of project management, often referred to as “waterfall” project management, suffers from various drawbacks, especially when it comes to meeting the needs of complex projects where the requirements cannot be stated fully until a prototype is developed, or of wider projects where there are multiple facets of the product being produced. In addition, when planning happens much in advance, there are chances that requirements may change by the time the project comes to the closing phase, rendering the product ineffective or partially effective. Compare this to a project where one module is developed in a short period of time (maybe 2–3 weeks), is implemented, feedback is taken from users, any shortcomings are identified, and the feedback and identified shortcomings are built in as requirements into the next short phase of development. The benefits of Agile Project Management thus become apparent.

Another key differentiator between Agile and traditional project management is the focus on people, relationships and working software at the end vs. focus on processes and tools, documentation and following project plan.

Agile Project Management is the result of collaboration between APMG-International and The DSDM Consortium. DSDM (Dynamic Systems Development Method) is the longest-established Agile method, launched in 1995, and is the only Agile method to focus on the management of Agile projects.

Approaches to Agile
In the Agile world, there are a number of approaches available; the most common of these are DSDM Atern, eXtreme Programming (XP), SCRUM and Lean. To put these Agile approaches into context:

XP – focusing on I.T. development, XP provides developer techniques and practices such as Pair Programming, Continuous Integration etc. There is no concept of a Project in XP, and with the exception of planning, little guidance around management, since the primary purpose of XP is to provide Agile delivery techniques.

Simplilearn Certification Courses on Agile and Scrum

Typically where XP is to be used to deliver Agile Projects, it is often combined with other Agile approaches which add-on the Project and Management elements. Examples of this would be XP with DSDM Atern, XP with Scrum.

Scrum provides an excellent team based approach to allow work to be prioritized and delivered, using the concept of a constantly evolving “backlog” to provide the team’s workload. The strength of Scrum is its simplicity, and since it is so easy to describe and to start to use, this has driven its popularity to date.

However in Scrum, there is no concept of a project, simply a Product Backlog of work to be done. For those wishing to scale Scrum to work as a corporate-wide Agile approach, or to use it for management of projects and releases, there is usually significant extra work needed to overlay the project/release concept onto the basic Scrum process. Scrum does offer a very simple version of corporate-wide Scrum (referred to as “Scrum of Scrums”), but in the complex corporate world, there is little confidence in the successful practical application of this.

Scrum is also often combined with DSDM Atern, where Scrum is used at the development team level, and DSDM Atern sits above the team to position the work within a project and to provide the project management elements.

Lean – an approach which originated in the Toyota manufacturing environment in the 1940s. Lean drives work to be done in an efficient way through its main principle of “Eliminate Waste”. In practice, this means avoiding anything that does not produce value for the customer. Examples of Lean thinking are “don’t do all detailed analysis up front, because it will change/some will not progress to delivery” and “test throughout, then you don’t waste time working on things that do not fit the business”. A lean approach can be applied at development level, but it is also often used at the organizational level. Lean is often used in conjunction with other Agile approaches, since it is complementary to most of them, e.g. Lean and DSDM Atern, Lean and Scrum, Lean and XP.

Agile project management has been used for more than a decade, and it continues to grow in popularity. It is an effective methodology.

Agility is the ability to both create and respond to change in order to profit in a turbulent business environment - Jim Highsmith, Agile Project Management

The difference between Agile and traditional project management can also be illustrated by comparing a relay race, where each member passes on the baton for the next leg, with a football team, where the entire team assumes responsibility and tries to go the distance as a unit, passing the ball back and forth.

Agile Project Management enables organizations to gain the benefits of an agile approach without introducing unnecessary risks. This ensures ‘going agile’ becomes a measured and balanced change, keeping what is good in the current organization and retaining existing good practices around project management and delivery.

Further Reading: SrinimfJobs - Agile and Scrum

How to find the Right Career to Suit Your Attitude

(Join IoT Jobs Group in Facebook)
75% of the American workforce is actively looking for jobs at any given point in time, and 69% of those job seekers are currently employed.

Simplilearn Advanced certification courses in New Technologies

Tapping into your Network, Creating and Maximizing your Personal Brand and Researching Companies in your Field of Interest will help you in finding jobs that suit your interest.
Trying to randomly find a job will get you nowhere. PREPARE, PLAN and RESEARCH, and you will find a job that is meant for you.

If you have attended a get-together of friends or a social gathering of late, chances are that you came across at least half a dozen people talking about their jobs: how they are unhappy with them and looking for a change, or how difficult it seems to get one in the first place.

Searching for a job has become one of the most time-occupying tasks in recent times. Almost as difficult as getting one, a job search means researching, identifying and applying for jobs and going through the functions of interviews, discussions, negotiations and such. It can therefore get very frustrating, daunting and overwhelming for most of us.

Srinimfjobs recommends a lot of jobs in new technologies. A Job Seeker Survey has revealed that 75% of the American workforce comprises active job seekers, up from 69% in 2011. This means that while jobs are increasing and becoming more versatile and varied, the number of people looking for a job has also increased manifold.

So how does one go about searching for a job, and finding the most relevant one based on personal capabilities?

Below are a few important steps that can help give you a head-start in finding the job you have always wanted, whether just starting out, or jumping ship from your current one.

Tapping into your Network of Family, Friends and Past Employers:
When looking out for a job, the first thing you should do is look around you and get in touch with the people you know. A network has tremendous potential, because its outreach is unlimited. People know other people, and reaching out to them may just get you what you want, since they can put you forward for a job vacancy they have heard about.

If you haven’t been in touch with anyone of late, do it now. Identify the most influential people in your group, and find ways to strengthen your relationship with them in the long run.

Attend events; whether for Career Advancement or for Personal Growth:
Meeting new people is all about the experience, as well as developing relationships that will prove beneficial in the short and long term. It is therefore important to attend every possible networking event, career fest and get-together conducted by professional organizations to find the right job that fits your skill sets. You can join the IoT Jobs group to discuss the latest job opportunities.

Make it a habit to talk and introduce yourself to at least one new person at every event. Follow this up by calling or interacting with them again, or subtly ask them for someone else’s referral.
Create your Brand and Maximize its reach through Social Media.

Social Media has been considered an unseen vehicle with a highly impactful reach. Make the best use of it to advertise yourself to potential employers and let them find you. The first step towards this is creating a BRAND that represents you. Your identity in the online space will make a strong positive impression on any recruiter, and also help them find out more about you as a candidate.

Almost 88% of all job seekers in the Jobvite survey have at least one social networking profile. And almost 23% of them have been asked for their social media information during an interview. This is a strong fact that underlines the impact of an online presence.

Using social tools like LinkedIn, Facebook and Twitter will help you tailor your resume to fit the employer's requirements. You can even search your target areas of interest based on your preferences (industry, skill sets and qualifications) and reach out to people accordingly.

Contact Head hunters/ HR Professionals
It is a known fact that while most jobs either come referred by people or are obtained through social media contacts, mid and senior level professionals are almost always recruited through HR recommendations. These are the kind of jobs that are never advertised, because they need the kind of expertise that can only be sourced specifically by such head professionals.

Short-listing companies in your field of interest: Rather than randomly sending your resume to companies for any job requirement, what matters is targeting those that will be most suited to your interests.

Always take the time to research companies whose culture and mission match your work style and ethics. This will reap benefits in the long run and will ensure you are selected and remain with a company that you truly like working for.

To understand how to do this;

Hot IT Skills in 2016 which have Great Demand
  • Search online for Company listing and other resources based on the area of expertise
  • The local Chamber of Commerce is a good way to find out relevant companies locally
  • Professional career networking associations usually have a list of companies that can be contacted for your preferred requirement.
Finding a job in today’s market isn’t an easy task. The amount of preparation that goes into it, the research involved and the effort taken to apply for a position that is most suited to your area of interest will not just get you the job of your choice but also one that will keep you involved for years to come.

11 November 2015

How Netezza is a powerful appliance for Data Warehousing

(Jobs on NeteZZA)
The IBM Netezza data warehouse appliance is easy to use and dramatically accelerates the entire analytic process. The programming interfaces and parallelization options make it straightforward to move a majority of analytics inside the appliance, regardless of whether they are being performed using tools from such vendors as IBM SPSS, SAS, or Revolution Analytics, or written in languages such as Java, Lua, Perl, Python, R or Fortran. Additionally, IBM Netezza data warehouse appliances are delivered with a built-in library of parallelized analytic functions, purpose-built for large data volumes, to kick-start and accelerate any analytic application development and deployment.

The simplicity and ease of development is what truly sets IBM Netezza apart. It is the first appliance of its kind – packing the power and scalability of hundreds of processing cores in an architecture ideally suited for parallel analytics. Instead of a fragmented analytics infrastructure with multiple systems where data is replicated, IBM Netezza Analytics consolidates all analytics activity in a powerful appliance. 

It is easy to deploy and requires minimal ongoing administration, for an overall low total cost of ownership. Simplifying the process of exploring, calculating, modeling and scoring data are key drivers for successful adoption of analytics companywide. With IBM Netezza, business users
can run their own analytics in near real time, which helps analytics-backed, data-driven decisions to become pervasive throughout an enterprise.

What is Netezza (Ref: wiki)

Netezza (pronounced Ne-Tease-Ah) designs and markets high-performance data warehouse appliances and advanced analytics applications for uses including enterprise data warehousing, business intelligence, predictive analytics and business continuity planning.

Founded in 1999 by Foster Hinshaw, Netezza was purchased by IBM in 2010 for $1.7 billion. Netezza and Hinshaw are credited with creating the data warehouse appliance category to address consumer analytics efficiently by providing a modular, scalable, easy-to-manage database system that’s cost effective. This class of machine is necessary to manage the "data-intense" workloads of modern analytics and discovery that are not well handled with legacy technologies, most of which are designed around traditional "computer-centric" workloads.

Netezza's implementation is characterized by:

(a) data-intelligent shared-nothing architecture, where the entire query is executed on the nodes with emphasis on minimizing data movement; 
(b) use of commodity FPGAs to augment the CPUs and minimize network bus traffic; and 
(c) embedded analytics at the storage level.

IT Skills You need for SAS Data Analyst

(Read SAS Career Options)
Want to know what will happen in the future? Find the most lucrative opportunities? Get insights into impending outcomes? No problem. With our data mining software, you can:

Simplify data preparation. Interact with your data quickly and intuitively using dynamic charts and graphs to understand key relationships.

Quickly and easily create better models. Take the guesswork out of building models that are both stable and accurate using proven techniques and a drag-and-drop interface that's both easy-to-use and powerful.

Put your best models into service. Fast. Spend less time and effort scoring new data using automated, interactive processes that work in both batch and real-time environments.

The requirements vary from company to company. Here are the basic skills needed for a SAS data analyst:

- Experience in SAS or R analytics
- Scripting languages: Python/JavaScript/VBScript
- SQL and PL/SQL
- Database knowledge: Oracle, DB2, SQL Server
- Hadoop and Big Data knowledge

08 November 2015

Role of 'Information Architect' in an Enterprise

[Jobs and Career for Information Architect]
Gartner defines "enterprise information architecture" as that part of the enterprise architecture process that describes — through a set of requirements, principles and models — the current state, future state, and guidance necessary to flexibly share and exchange information assets to achieve effective enterprise change. It is also called EIA - Enterprise Information Architecture.

The transition from information that is isolated within applications to a flexible, comprehensive enterprise information architecture will require changes in technology, process, organizational structure and orientation.
In particular, EA practitioners comfortable with technical architecture must now devote time to understanding this emerging discipline. 

Gartner projects that EA teams will be forced by the business to spend as much time on information
architecture as they currently spend on technical architecture. Changes will also impact a range of disciplines across the organization and will require coordination to drive efficiencies and achieve objectives.

Roles that will participate in the organization's desire to maximize the value and effectiveness of
information assets include:
  1. Architects (including solution architects)
  2. Application designers
  3. Data modelers
  4. Database administrators
  5. Business intelligence specialists
  6. Master data management specialists
  7. Data quality specialists
  8. Data integration specialists
  9. Metadata management specialists
  10. Business analysts
  11. Content management specialists
  12. Professionals in security, compliance, privacy and related disciplines
Some of these roles will be done concurrently, depending on the size of the teams, the rules of
engagement (that is, "who does what at which point in the activity cycle"), the depth of domain
knowledge, and resource availability. Understanding the different roles impacted by EIA is a
critical first step.

The role of enterprise architects is to act as facilitator, planner, change agent, champion and coach during these activities (but never dictator). Their job is to advocate the adoption and assurance of all enterprise architecture deliverables. Enterprise architects should be well equipped  to handle this challenge, because of their strong relationships with the business and their strategic planning skills. However, some will need to retool their skills.

Those who step in, and step up, to develop an EIA will help their organizations deliver new enterprise capabilities.
