## Deploying Hadoop Clusters using Ansible.
--------------------------------------------------
### Preface
The playbooks in this example deploy Hadoop clusters for users; these playbooks can be used to:
4) Verify the cluster by deploying MapReduce jobs
### Brief introduction to the different components of a Hadoop cluster.
The following diagram depicts a Hadoop cluster with HA and automated failover, as it would be deployed by the Ansible playbooks.
The two major categories of machine roles in a Hadoop cluster are Hadoop masters and Hadoop slaves.
The Hadoop masters consist of:
#### NameNode:
The NameNode is the centerpiece of an HDFS file system. It keeps the directory tree of all files in the file system and tracks where across the cluster the file data is kept; it does not store the data of these files itself. Client applications talk to the NameNode whenever they wish to locate a file, or when they want to add/copy/move/delete a file. The NameNode responds to successful requests by returning a list of relevant DataNode servers where the data lives.
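As a rough illustration of the NameNode's role (the HDFS path below is a placeholder and assumes a file already exists in HDFS), you can ask HDFS which DataNodes hold the blocks of a file; the block locations in the report come from the NameNode's metadata:

```
# Run as the hdfs user on a Hadoop master; /user/hdfs/somefile is a placeholder path.
hadoop fsck /user/hdfs/somefile -files -blocks -locations
```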
#### JobTracker:
The JobTracker is the service within Hadoop that hands out MapReduce tasks to specific nodes in the cluster. Applications submit jobs to the JobTracker, and the JobTracker talks to the NameNode to determine the location of the data; once located, the JobTracker submits the work to the chosen TaskTracker nodes.
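For example (a sketch, assuming the cluster is already up and you are logged in as the mapred user on a Hadoop master), the JobTracker can be queried for the jobs it is currently tracking:

```
# Lists the jobs currently known to the JobTracker.
hadoop job -list
# Shows the status of a specific job (the job id below is a placeholder).
hadoop job -status job_201301010000_0001
```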
The Hadoop slaves consist of:
#### DataNode:
A DataNode is responsible for storing data in the Hadoop file system. A functional HDFS filesystem has more than one DataNode, and data is replicated across them.
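A quick way to see which DataNodes have registered with the NameNode, and how much data each one holds, is the dfsadmin report (a sketch; run it as the hdfs user on a Hadoop master):

```
# Prints cluster capacity plus a per-DataNode breakdown of used/remaining space.
hadoop dfsadmin -report
```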
#### TaskTracker:
A TaskTracker is a node in the cluster that accepts tasks (Map, Reduce and Shuffle operations) from a JobTracker.
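Similarly, the JobTracker can list the TaskTrackers that are currently alive and accepting tasks (a sketch, again assuming the mapred user on a Hadoop master):

```
# Lists the active TaskTracker nodes known to the JobTracker.
hadoop job -list-active-trackers
```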
The Hadoop master processes do not have high availability built into them, unlike their counterparts (DataNode, TaskTracker). In order to provide HA for the NameNode and JobTracker, the following processes are used.
#### Quorum Journal Nodes:
The journal nodes are responsible for maintaining a journal of any modifications made to the HDFS namespace. The active NameNode logs any modifications to the journal nodes, and the standby NameNode reads the changes from the journal nodes and applies them to its local namespace. In a production environment the minimum recommended number of journal nodes is 3; these nodes can also be colocated with the NameNode/JobTracker.
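One simple sanity check (a sketch; packaging details vary by distribution) is to confirm that the JournalNode JVM is running on each journal node host. jps ships with the JDK and prints the main class of each running Java process:

```
# Run on each journal node host; a healthy node shows a JournalNode process.
jps | grep JournalNode
```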
#### Zookeeper Nodes:
The purpose of ZooKeeper is cluster management. Hadoop HA is an active/passive cluster, so the cluster requires facilities such as heartbeats, locks, leader election and quorum; these services are provided by ZooKeeper. The recommended number of ZooKeeper nodes for production use is 3.
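To check that a ZooKeeper server is up, the standard four-letter-word command can be used (the host name zookeeper1 is a placeholder, and 2181 is only ZooKeeper's default client port):

```
# A healthy ZooKeeper server answers "imok".
echo ruok | nc zookeeper1 2181
```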
#### zkfc NameNode:
zkfc (ZooKeeper failover controller) is a ZooKeeper client application that runs on each NameNode server. Its responsibilities include health monitoring, ZooKeeper session management, leader election, etc.; i.e. in case of a NameNode failure, the zkfc process running on that machine detects the failure and informs ZooKeeper, as a result of which re-election takes place and a new active NameNode is selected.
#### zkfc JobTracker:
The zkfc JobTracker performs the same function as the zkfc NameNode, the difference being that the process zkfc is responsible for is the JobTracker.
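As a rough check (process names can vary by Hadoop distribution and version), the HDFS failover controller normally shows up in jps as DFSZKFailoverController on each NameNode host; the JobTracker failover controller appears under a similar, distribution-specific name:

```
# Run on a NameNode host; confirms the zkfc process is alive.
jps | grep DFSZKFailoverController
```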
### Deploying a Hadoop Cluster with HA
#### Prerequisites
The playbooks have been tested with Ansible v1.2 and CentOS 6.x (64 bit).
Modify group_vars/all to choose the interface for hadoop communication.
To get the state of the JobTracker process, log in as the mapred user on any Hadoop master.
Once the active and the standby have been identified, kill the namenode/jobtracker process on the server listed as active and issue the same commands as above;
you should get a result where the standby has been promoted to the active state. Later you can start the killed process and see those processes listed as the passive processes.
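As a hedged illustration of checking which NameNode is currently active after a failover (the service IDs nn1 and nn2 are placeholders; use the IDs defined in the deployed hdfs-site.xml, and note the JobTracker has its own, distribution-specific equivalent):

```
# Run as the hdfs user on a Hadoop master; prints "active" or "standby" for each NameNode.
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```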
### Running a mapreduce job on the cluster.
To run a MapReduce job on the cluster, a sample playbook has been written; this playbook runs a job which counts the occurrences of the word 'hello' in an input file. A sample input file has been created at playbooks/inputfile; modify the file to match your testing.
To deploy the MapReduce job, run the following command (below, -e server=<any of your Hadoop master servers>):
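A sketch of the invocation, assuming the sample job playbook lives at playbooks/job.yml and that hadoop1 is one of your Hadoop masters (both names are assumptions; adjust them to your checkout and inventory):

```
# playbooks/job.yml and hadoop1 are placeholders; substitute your playbook path and master host.
ansible-playbook -i hosts playbooks/job.yml -e server=hadoop1
```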
To verify the result, read the file on your Ansible server located at /tmp/zhadoop1/tmp/outputfile/part-00000, which should give you the count.
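For example, on the Ansible server:

```
# Prints the word count produced by the sample job.
cat /tmp/zhadoop1/tmp/outputfile/part-00000
```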
### Scale the Cluster
When the Hadoop cluster reaches its maximum capacity, it can be scaled by adding nodes. This can be easily accomplished by adding the node entry in the inventory file (hosts) under the hadoop_slaves group and running the following command.
ansible-playbook -i hosts site.yml --tags=slaves
### Deploy a non HA Hadoop Cluster
The following diagram illustrates a standalone Hadoop cluster.