In short, the term Big Data applies to information that can’t be processed or analyzed using traditional processes or tools.
Organizations today face more and more Big Data challenges. They have access to a wealth of information, but they don’t know how to get value out of it because it is sitting in its most raw form or in a semi-structured or unstructured format; as a result, they don’t even know whether it’s worth keeping (or whether they are even able to keep it, for that matter).
Characteristics of Big Data:
Three characteristics define Big Data: volume, variety, and velocity (as shown in the figure below). Together, these characteristics define Big Data. They have created the need for a new class of capabilities to augment the way things are done today: to provide better line of sight into, and control over, our existing knowledge domains, and the ability to act on them.
The Volume of Data:
The volume of data being stored today is exploding. In the year 2000, 800,000 petabytes (PB) of data were stored in the world, and this figure is expected to reach 35 zettabytes (ZB) by 2020. Of course, a lot of the data being created today isn’t analyzed at all. Twitter alone generates more than 7 terabytes (TB) of data every day, Facebook 10 TB, and some enterprises generate terabytes of data every hour of every day of the year. It’s no longer unheard of for individual enterprises to have storage clusters holding petabytes of data.
As implied by the term “Big Data,” organizations are facing massive volumes of data. Organizations that don’t know how to manage this data are overwhelmed by it.
But the opportunity exists, with the right technology platform, to analyze almost all of the data (or at least more of it by identifying the data that’s useful to you) to gain a better understanding of your business, your customers, and the marketplace.
As the amount of data available to the enterprise rises, the percentage of that data it can process, understand, and analyze declines.
The Variety of Data:
With the explosion of sensors, and smart devices, as well as social collaboration technologies, data in an enterprise has become complex, because it includes not only traditional relational data, but also raw, semi-structured, and unstructured data from web pages, web log files (including click-stream data), search indexes, social media forums, E-Mail, documents, sensor data from active and passive systems, and so on.
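To make the distinction concrete, here is a minimal sketch (the `classify_record` function, the three-column schema, and the sample records are all hypothetical) that sorts incoming records into the three broad categories above: a relational row, a semi-structured JSON event, and free text.

```python
import csv
import io
import json

def classify_record(raw: str) -> str:
    """Toy heuristic: JSON that parses is semi-structured; a
    comma-delimited row matching a known 3-column relational schema
    is structured; anything else is treated as unstructured text."""
    try:
        json.loads(raw)
        return "semi-structured"
    except ValueError:
        pass
    row = next(csv.reader(io.StringIO(raw)))
    if len(row) == 3:  # assumed fixed relational schema
        return "structured"
    return "unstructured"

records = [
    "42,alice,2020-01-01",                       # relational row
    '{"user": "bob", "clicks": 7}',              # web log event (JSON)
    "Customer emailed to complain about delays", # free-text document
]
print([classify_record(r) for r in records])
# → ['structured', 'semi-structured', 'unstructured']
```

In practice a platform would not guess formats this crudely, but the point stands: an enterprise pipeline must accept all three shapes of data, not just the relational one.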
Velocity of Data:
A conventional understanding of velocity considers how quickly data arrives and is stored, along with its associated rates of retrieval. While managing all of that quickly is good, the volumes of data that we are looking at are a consequence of how quickly the data arrives.
To accommodate velocity, a new way of thinking about a problem must start at the inception point of the data. Rather than confining the idea of velocity to the growth rates associated with your data repositories, we suggest you apply this definition to data in motion: The speed at which the data is flowing.
After all, we’re in agreement that today’s enterprises are dealing with petabytes of data instead of terabytes, and the increase in RFID sensors and other information streams has led to a constant flow of data at a pace that has made it impossible for traditional systems to handle.
Dealing effectively with Big Data requires that you perform analytics against the volume and variety of data while it is still in motion, not just after it is at rest. Consider examples ranging from tracking neonatal health to monitoring financial markets; every one of them requires handling the volume and variety of data in new ways.
Big data analytics is clearly a game changer, enabling organizations to gain insights from new sources of data that haven’t been mined in the past. Here’s more about what big data analytics is and isn’t.
Shared Nothing Architecture:
The new shared nothing architecture can scale with the huge volume, variety, and speed requirements of big data by distributing the work across dozens, hundreds, or even thousands of commodity servers that process the data in parallel.
First implemented at scale by large community research projects such as SETI@home and by online services such as Google and Amazon, a shared nothing architecture makes each node independent and stateless, so the system scales easily: to handle a growing processing load, simply add another node.
Processing is pushed out to the nodes where the data resides. This is completely different from a traditional approach, which retrieves data for processing at a central point.
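The idea of pushing processing to the data can be sketched in a few lines. In this toy example (the partitions and the word-count task are hypothetical, and threads on one machine stand in for separate nodes), each "node" counts words over its own partition in parallel, and only the small per-partition results travel back to be merged, rather than shipping all the raw data to a central point.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Each "node" holds its own partition of the data.
partitions = [
    "big data needs new tools",
    "new tools process big data",
    "data at rest and data in motion",
]

def process_local(partition: str) -> Counter:
    """Runs where the data lives; returns only a small summary."""
    return Counter(partition.split())

# Process all partitions in parallel, then merge the summaries.
with ThreadPoolExecutor() as pool:
    local_counts = list(pool.map(process_local, partitions))
total = sum(local_counts, Counter())
print(total["data"])  # → 4
```

This is the same shape as the MapReduce pattern popularized by Google and Hadoop: a local map step per partition, followed by a small merge step over the summaries.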
Shared nothing architecture is possible because of the convergence of advances in hardware, data management, and analytic applications technologies.
Shared nothing is popular for web development because of its scalability. As Google has demonstrated, a pure SN system can scale almost infinitely simply by adding nodes in the form of inexpensive computers, since there is no single bottleneck to slow the system down. Google calls this sharding.
A SN system typically partitions its data among many nodes on different databases (assigning different computers to deal with different users or queries), or may require every node to maintain its own copy of the application’s data, using some kind of coordination protocol. This is often referred to as database sharding.
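A common way to assign users or queries to nodes is hash partitioning: hash the key and take the result modulo the number of shards. A minimal sketch, assuming three hypothetical shard servers and using Python’s standard `hashlib` in place of a real shard router:

```python
import hashlib

NODES = ["node-0", "node-1", "node-2"]  # hypothetical shard servers

def shard_for(key: str) -> str:
    """Route a key to a shard by hashing it, so each node owns a
    disjoint partition of the data and there is no central bottleneck."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# The same key always maps to the same node, so all reads and writes
# for a given user hit only that user's shard.
users = ["alice", "bob", "carol", "dave"]
placement = {u: shard_for(u) for u in users}
assert shard_for("alice") == placement["alice"]
```

One caveat worth noting: with plain modulo hashing, changing the number of nodes remaps most keys, which is why production systems often use consistent hashing or range-based sharding instead.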