

CONFIGURING KERBEROS SECURITY IN HORTONWORKS DATA PLATFORM 2.0

5/3/2016

Hadoop was originally created without any external security in mind. It was meant to be used by trusted users in a secure environment, and the constraints that were put in place were intended to keep users from making mistakes, not to stop malicious actors from harming the system.


This lab will guide you through configuring Kerberos security.


Step 1: Configuring Kerberos in HDP 2.0
  1. Install the Kerberos server and client packages:
sudo yum install krb5-server krb5-workstation
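
If you want a quick sanity check that both packages landed (optional; not part of the lab itself), query the RPM database:
rpm -q krb5-server krb5-workstation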


  2. Modify /etc/krb5.conf with the correct realm and host names. Here is the one I used for a single Kerberos server that hosts both the Key Distribution Center (KDC) and the Kerberos admin service:


[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = WEBAGE.DEV.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 WEBAGE.DEV.COM = {
  kdc = vm-LINUX6-4-anastetsky
  admin_server = vm-LINUX6-4-anastetsky
 }

[domain_realm]
 vm-LINUX6-4-anastetsky = WEBAGE.DEV.COM


Replace WEBAGE.DEV.COM with the name of your Kerberos realm, and vm-LINUX6-4-anastetsky with the host name of your Kerberos server.


  3. Create the initial Kerberos database and supply a master password:


sudo kdb5_util create -s
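
The -s flag stashes the master key in a local file so the KDC can start without prompting for the master password. If you want to be explicit about which realm the database belongs to, kdb5_util also accepts the realm name (WEBAGE.DEV.COM here, matching the config above):
sudo kdb5_util create -r WEBAGE.DEV.COM -s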


  4. Update /var/kerberos/krb5kdc/kadm5.acl to list the principals that have administrative access to the Kerberos database (the trailing * grants them all permissions):
*/admin@WEBAGE.DEV.COM *


  5. Start the kadmin service:
sudo service kadmin start


  6. Use kadmin.local to create an admin principal (e.g. alex/admin):
addprinc alex/admin
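
addprinc is entered at the kadmin.local prompt and will ask you to choose a password for the new principal. Equivalently, you can run it non-interactively (same alex/admin principal as above):
sudo kadmin.local -q "addprinc alex/admin"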


  7. Start the Kerberos service (krb5kdc):
sudo service krb5kdc start
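
On RHEL/CentOS 6 you may also want both daemons to come back after a reboot (an optional step, not part of the original lab):
sudo chkconfig krb5kdc on
sudo chkconfig kadmin on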


  8. Make sure you open the right ports:
sudo iptables -I INPUT -p udp --dport 88 -j ACCEPT
sudo iptables -I INPUT -p tcp --dport 749 -j ACCEPT
sudo iptables -I INPUT -p udp --dport 464 -j ACCEPT
sudo service iptables save
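
Port 88 serves ticket requests to the KDC, 749 is kadmin, and 464 is the password-change service. To confirm the rules are in place, list the INPUT chain:
sudo iptables -L INPUT -n | grep -E 'dpt:(88|749|464)'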


Once Kerberos is configured, we will use Ambari to set up the required authentication.


  1. Log in to your Ambari web interface as an admin user.
  2. Go to Admin > Security, and click Enable Security.
  3. Click Next.
  4. Enter your realm name, e.g. WEBAGE.DEV.COM.
  5. Click Next.
  6. Click Download CSV (host-principal-keytab-list.csv). The sketch below illustrates what Ambari's security wizard does with the entries in this file.
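
Each row of the CSV names a host, a service principal, and the keytab file that service expects. As a rough illustration of what that amounts to (the nn/NameNode principal, the /etc/security/keytabs path, and the hdfs:hadoop ownership follow common HDP conventions but are assumptions here, not values taken from the lab), each principal is created with a random key and exported to a keytab:
sudo kadmin.local -q "addprinc -randkey nn/vm-LINUX6-4-anastetsky@WEBAGE.DEV.COM"
sudo kadmin.local -q "xst -k /etc/security/keytabs/nn.service.keytab nn/vm-LINUX6-4-anastetsky@WEBAGE.DEV.COM"
sudo chown hdfs:hadoop /etc/security/keytabs/nn.service.keytab
sudo chmod 400 /etc/security/keytabs/nn.service.keytab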

Understanding BigData

5/2/2016

Defining Big Data:


In short, the term Big Data applies to information that can't be processed or analyzed using traditional processes or tools.
Organizations today face more and more Big Data challenges. They have access to a wealth of information, but they don't know how to get value out of it because it is sitting in its rawest form or in a semi-structured or unstructured format; as a result, they don't even know whether it's worth keeping (or even whether they are able to keep it, for that matter).


Characteristics of Big Data:


Three characteristics define Big Data: volume, variety, and velocity (as shown in the figure below). Together, these characteristics define "Big Data." They have created the need for a new class of capabilities to augment the way things are done today, providing better line of sight and control over our existing knowledge domains and the ability to act on them.
[Figure: The three Vs of Big Data: volume, variety, and velocity]
The Volume of Data:


The volume of data being stored today is exploding. In the year 2000, 800,000 petabytes (PB) of data were stored in the world, and we expect this number to reach 35 zettabytes (ZB) by 2020; of course, a lot of the data being created today isn't analyzed at all. Twitter alone generates more than 7 terabytes (TB) of data every day, Facebook 10 TB, and some enterprises generate terabytes of data every hour of every day of the year. It's no longer unheard of for individual enterprises to have storage clusters holding petabytes of data.
As implied by the term “Big Data,” organizations are facing massive volumes of data. Organizations that don’t know how to manage this data are overwhelmed by it. 


But the opportunity exists, with the right technology platform, to analyze almost all of the data (or at least more of it by identifying the data that’s useful to you) to gain a better understanding of your business, your customers, and the marketplace. 
As the amount of data available to the enterprise rises, the percentage of data it can process, understand, and analyze is on the decline.
The conversation about data volumes has changed from terabytes to petabytes with an inevitable shift to zettabytes, and all this data can’t be stored in your traditional systems.

The Variety of Data:

With the explosion of sensors and smart devices, as well as social collaboration technologies, data in an enterprise has become complex, because it includes not only traditional relational data, but also raw, semi-structured, and unstructured data from web pages, web log files (including click-stream data), search indexes, social media forums, e-mail, documents, sensor data from active and passive systems, and so on.
Variety represents all types of data: a fundamental shift in analysis requirements from traditional structured data to include raw, semi-structured, and unstructured data as part of the decision-making and insight process. Traditional analytic platforms can't handle variety. However, an organization's success will rely on its ability to draw insights from the various kinds of data available to it, from both traditional and non-traditional sources.


Velocity of Data:
A conventional understanding of velocity typically considers how quickly data arrives and is stored, and its associated rates of retrieval. Managing all of that quickly is good, but the volumes of data we are looking at are a consequence of how quickly the data arrives.
To accommodate velocity, a new way of thinking about the problem must start at the inception point of the data. Rather than confining the idea of velocity to the growth rates associated with your data repositories, we suggest you apply this definition to data in motion: the speed at which the data is flowing.
After all, we can agree that today's enterprises are dealing with petabytes of data instead of terabytes, and the increase in RFID sensors and other information streams has led to a constant flow of data at a pace that traditional systems cannot handle.
Dealing effectively with Big Data requires that you perform analytics against the volume and variety of data while it is still in motion, not just after it is at rest. Consider examples ranging from tracking neonatal health to financial markets; in every case, they require handling the volume and variety of data in new ways.
Understanding Big Data Analytics:
Big data analytics is clearly a game changer, enabling organizations to gain insights from new sources of data that haven't been mined in the past. Here's more about what big data analytics is and isn't:
[Figure: What big data analytics is and isn't]
Understanding Shared Nothing Architecture for Big Data:

The new shared nothing architecture can scale with the huge volumes, variety, and speed requirements of big data by distributing the work across dozens, hundreds, or even thousands of commodity servers that process the data in parallel. 
First implemented by large community research projects such as SETI@home and by online services such as Google and Amazon, the shared nothing architecture keeps each node independent and stateless, so it scales easily: simply add another node, and the system can handle growing processing loads.
Processing is pushed out to the nodes where the data resides. This is completely different from a traditional approach, which retrieves data for processing at a central point. 
Ultimately, the data must be reintegrated to deliver meaningful results. Distributed processing software frameworks make the computing grid work by managing and pushing the data across machines, sending instructions to the networked servers to work in parallel, collecting individual results, and then re-assembling them for the payoff.
Shared nothing architecture is possible because of the convergence of advances in hardware, data management, and analytic applications technologies.
Shared nothing is popular for web development because of its scalability. As Google has demonstrated, a pure SN system can scale almost infinitely simply by adding nodes in the form of inexpensive computers, since there is no single bottleneck to slow the system down; Google calls this sharding.
A SN system typically partitions its data among many nodes on different databases (assigning different computers to deal with different users or queries), or may require every node to maintain its own copy of the application's data, using some kind of coordination protocol. This is often referred to as database sharding.
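
As a toy illustration of key-based partitioning (entirely hypothetical: four shards, MD5 standing in for the hash function), a record's key can be routed to a node deterministically:
# Toy sketch of hash-based sharding: 4 shards, MD5 as the hash.
key="user123"
# Take the first 8 hex digits of the key's MD5 and reduce them modulo the shard count.
shard=$(( 0x$(printf '%s' "$key" | md5sum | cut -c1-8) % 4 ))
echo "key $key is stored on shard $shard"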


[Figure: Evolution of BigData]
[Figure: What Drives BigData]
[Figure: Industry-specific Big Data use cases]

