Understanding Big Data

5/2/2016

Defining Big Data:


In short, the term Big Data applies to information that can't be processed or analyzed using traditional processes or tools.
Organizations today face more and more Big Data challenges. They have access to a wealth of information, but they don't know how to get value out of it because it sits in its rawest form or in a semi-structured or unstructured format. As a result, they don't even know whether it's worth keeping (or whether they are even able to keep it, for that matter).


Characteristics of Big Data:


Three characteristics define Big Data: volume, variety, and velocity (as shown in the figure below). Together, these three define what we mean by "Big Data." They have created the need for a new class of capabilities to augment the way things are done today, providing better line of sight and control over our existing knowledge domains and the ability to act on them.
[Figure: the three Vs of Big Data: volume, variety, and velocity]
The Volume of Data:


The volume of data being stored today is exploding. In the year 2000, roughly 800,000 petabytes (PB) of data were stored in the world, and we expect this number to reach 35 zettabytes (ZB) by 2020 (even though a lot of the data being created today isn't analyzed at all). Twitter alone generates more than 7 terabytes (TB) of data every day, Facebook 10 TB, and some enterprises generate terabytes of data every hour of every day of the year. It's no longer unheard of for individual enterprises to have storage clusters holding petabytes of data.
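To put those figures in perspective, here is a quick back-of-envelope check in Python (assuming decimal units, where 1 ZB = 1,000,000 PB). The numbers are simply the ones quoted above, not new measurements:

# Back-of-envelope check of the figures quoted above.
# Decimal units are assumed: 1 ZB = 1,000,000 PB.
PB_PER_ZB = 1_000_000

stored_2000_pb = 800_000        # ~800,000 PB stored worldwide in 2000
projected_2020_zb = 35          # ~35 ZB projected by 2020

stored_2000_zb = stored_2000_pb / PB_PER_ZB
growth_factor = projected_2020_zb / stored_2000_zb

print(f"2000: {stored_2000_zb:.1f} ZB stored")                   # 0.8 ZB
print(f"2020: {projected_2020_zb} ZB projected")
print(f"Growth: roughly {growth_factor:.0f}x over two decades")  # ~44x

In other words, the projection amounts to roughly a forty-four-fold increase in stored data over twenty years.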
As implied by the term “Big Data,” organizations are facing massive volumes of data. Organizations that don’t know how to manage this data are overwhelmed by it. 


But the opportunity exists, with the right technology platform, to analyze almost all of the data (or at least more of it by identifying the data that’s useful to you) to gain a better understanding of your business, your customers, and the marketplace. 
As the amount of data available to the enterprise is on the rise, the percentage of data it can process, understand, and analyze is on the decline.
The conversation about data volumes has changed from terabytes to petabytes with an inevitable shift to zettabytes, and all this data can’t be stored in your traditional systems.

The Variety of Data:

With the explosion of sensors and smart devices, as well as social collaboration technologies, data in an enterprise has become complex, because it includes not only traditional relational data, but also raw, semi-structured, and unstructured data from web pages, web log files (including click-stream data), search indexes, social media forums, email, documents, sensor data from active and passive systems, and so on.
Variety represents all types of data: a fundamental shift in analysis requirements from traditional structured data to include raw, semi-structured, and unstructured data as part of the decision-making and insight process. Traditional analytic platforms can't handle variety. However, an organization's success will rely on its ability to draw insights from the various kinds of data available to it, which include both traditional and non-traditional sources.
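As a rough illustration of what variety means in practice, the sketch below takes three hypothetical inputs (a relational row, a semi-structured JSON clickstream event, and an unstructured email snippet) and normalizes them into one common record shape before analysis. The field names and sources are invented for this example; a real pipeline would have its own schema and far messier data.

import json
import re

# Hypothetical inputs: one relational row, one semi-structured JSON
# clickstream event, and one unstructured email snippet.
relational_row = ("C1042", "2016-05-02", 129.99)   # (customer_id, date, amount)
json_clickstream = '{"customer": "C1042", "page": "/checkout", "ts": "2016-05-02T10:31:00"}'
raw_email = "From: C1042, please cancel my order placed on 2016-05-02."

def normalize(source, payload):
    """Map each kind of input into a common {customer, date, source} record."""
    if source == "relational":
        return {"customer": payload[0], "date": payload[1], "source": source}
    if source == "clickstream":
        doc = json.loads(payload)
        return {"customer": doc["customer"], "date": doc["ts"][:10], "source": source}
    if source == "email":
        customer = re.search(r"\bC\d+\b", payload).group(0)
        date = re.search(r"\d{4}-\d{2}-\d{2}", payload).group(0)
        return {"customer": customer, "date": date, "source": source}

records = [
    normalize("relational", relational_row),
    normalize("clickstream", json_clickstream),
    normalize("email", raw_email),
]
print(records)   # three very different sources, one analyzable shape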


Velocity of Data:
A conventional understanding of velocity typically considers how quickly the data is arriving and being stored, and its associated rates of retrieval. Managing all of that quickly is good, and the volumes of data we are looking at are a consequence of how quickly the data arrives, but the idea of velocity goes beyond these conventional definitions.
To accommodate velocity, a new way of thinking about a problem must start at the inception point of the data. Rather than confining the idea of velocity to the growth rates associated with your data repositories, we suggest you apply this definition to data in motion: The speed at which the data is flowing. 
After all, we’re in agreement that today’s enterprises are dealing with petabytes of data instead of terabytes, and the increase in RFID sensors and other information streams has led to a constant flow of data at a pace that has made it impossible for traditional systems to handle.
Dealing effectively with Big Data requires that you perform analytics against the volume and variety of data while it is still in motion, not just after it is at rest. Consider examples ranging from tracking neonatal health to financial markets; in every case, you need to handle the volume and variety of data in new ways.
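One way to picture analytics on data in motion is a rolling window over an unbounded stream: a result is produced as each reading arrives, instead of after everything has been stored. The sketch below uses a simulated sensor feed and is only an illustration of the idea, not a production streaming engine.

from collections import deque

# Keep only the last few readings of an unbounded stream and emit a
# result for each new arrival, rather than storing the stream first.
def rolling_average(stream, window_size=5):
    window = deque(maxlen=window_size)        # only the last N readings are kept
    for timestamp, value in stream:
        window.append(value)
        yield timestamp, sum(window) / len(window)   # result available immediately

# Simulated sensor/RFID readings arriving one at a time.
simulated_feed = ((f"t{i}", float(i % 7)) for i in range(20))

for timestamp, avg in rolling_average(simulated_feed, window_size=5):
    print(timestamp, round(avg, 2))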
Understanding Big Data Analytics:
Big data analytics is clearly a game changer, enabling organizations to gain insights from new sources of data that haven't been mined in the past. Here's more about what big data analytics is and isn't.
Understanding Shared Nothing Architecture for Big Data:

The new shared nothing architecture can scale with the huge volumes, variety, and speed requirements of big data by distributing the work across dozens, hundreds, or even thousands of commodity servers that process the data in parallel. 
Shared nothing architecture was first implemented by large community research projects such as SETI@home and by online services such as Google and Amazon. Each node is independent and stateless, so a shared nothing architecture scales easily: simply add another node, and the system can handle a growing processing load.
Processing is pushed out to the nodes where the data resides. This is completely different from a traditional approach, which retrieves data for processing at a central point. 
Ultimately, the data must be reintegrated to deliver meaningful results. Distributed processing software frameworks make the computing grid work by managing and pushing the data across machines, sending instructions to the networked servers to work in parallel, collecting individual results, and then re-assembling them for the payoff.
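The sketch below imitates that pattern on a single machine, with worker processes standing in for independent shared nothing nodes: each worker counts words over its own partition of a made-up log, and a coordinator merges the partial counts at the end. Real frameworks such as Hadoop MapReduce apply the same map-and-reassemble idea across networked servers.

from collections import Counter
from multiprocessing import Pool

def count_words(partition):
    """Each 'node' works only on its local slice of the data."""
    counts = Counter()
    for line in partition:
        counts.update(line.lower().split())
    return counts

def merge(partials):
    """The coordinator re-assembles per-node results into one answer."""
    total = Counter()
    for part in partials:
        total.update(part)
    return total

if __name__ == "__main__":
    log_lines = ["error disk full", "login ok", "error timeout", "login ok"] * 1000
    partitions = [log_lines[i::4] for i in range(4)]   # four 'nodes', no shared state
    with Pool(processes=4) as pool:
        partial_counts = pool.map(count_words, partitions)
    print(merge(partial_counts).most_common(3))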
Shared nothing architecture is possible because of the convergence of advances in hardware, data management, and analytic applications technologies.
Shared nothing is popular for web development because of its scalability. As Google has demonstrated, a pure shared nothing system can scale almost infinitely simply by adding nodes in the form of inexpensive computers, since there is no single bottleneck to slow the system down; Google calls this sharding.
A shared nothing system typically partitions its data among many nodes on different databases (assigning different computers to deal with different users or queries), or may require every node to maintain its own copy of the application's data, using some kind of coordination protocol. This is often referred to as database sharding.
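Here is a minimal sketch of hash-based sharding, assuming records are routed by a key such as a user id; the shard count and the keys are made up for the example.

import hashlib

NUM_SHARDS = 4

def shard_for(key: str) -> int:
    """Stable mapping from a record key (e.g. a user id) to a shard id."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

shards = {i: [] for i in range(NUM_SHARDS)}
for user_id in ["u1001", "u1002", "u1003", "u1004", "u1005"]:
    shards[shard_for(user_id)].append(user_id)

print(shards)   # each user lands on exactly one shard

Because the mapping is stable, any node can work out where a record lives without asking a central coordinator, which is part of what keeps a shared nothing design free of a single bottleneck.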


Evolution of Big Data

What Drives Big Data

Industry-Specific Big Data Use Cases