Merge pull request #54 from soulmachine/use-relative-path

use relative paths for images
Tim Gerla 10 years ago
commit 3bfcf1b0d3
  1. hadoop/README.md (14 changes)
  2. mongodb/README.md (12 changes)
  3. openshift/README.md (6 changes)
  4. play-webapp/README.md (2 changes)

@@ -27,7 +27,7 @@ Let's have a closer look at each of these components.
## HDFS
-![Alt text](/images/hdfs.png "HDFS")
+![Alt text](images/hdfs.png "HDFS")
The above diagram illustrates an HDFS filesystem. The cluster consists of three
DataNodes which are responsible for storing/replicating data, while the NameNode
@@ -89,7 +89,7 @@ to run the MapReduce job on "File1"
Let's have a closer look at the anatomy of a Map job.
-![Alt text](/images/map.png "Map job")
+![Alt text](images/map.png "Map job")
As the figure above shows, when the client instructs the JobTracker to run a
job on File1, the JobTracker first contacts the NameNode to determine where the
@@ -117,7 +117,7 @@ and Reduce process.
Let's have a closer look at the Shuffle-Reduce job.
-![Alt text](/images/reduce.png "Reduce job")
+![Alt text](images/reduce.png "Reduce job")
As the figure above demonstrates, the first thing that the JobTracker does is
spawn a Reducer job on the DataNode/Tasktracker nodes for each "key" in the job
@@ -133,7 +133,7 @@ is the Reducer/partition number.
## Hadoop Deployment
-![Alt text](/images/hadoop.png "Reduce job")
+![Alt text](images/hadoop.png "Reduce job")
The above diagram depicts a typical Hadoop deployment. The NameNode and
JobTracker usually reside on the same machine, though they can run on separate
@@ -162,7 +162,7 @@ metadata is handled by the Quorum Journal Manager.
### Quorum Journal Manager
-![Alt text](/images/qjm.png "QJM")
+![Alt text](images/qjm.png "QJM")
As the figure above shows, the Quorum Journal Manager consists of the journal
manager client and journal manager nodes. The journal manager clients reside
@@ -182,7 +182,7 @@ the secondary node to tell if the primary node is running properly, and if not
it has to take up the role of the primary. Zookeeper provides Hadoop with a
mechanism to coordinate in this way.
-![Alt text](/images/zookeeper.png "Zookeeper")
+![Alt text](images/zookeeper.png "Zookeeper")
As the figure above shows, the Zookeeper service is a client/server based
service. The server component itself is replicated over a set of machines that
@@ -204,7 +204,7 @@ fences the node/service and takes over the primary role.
## Hadoop HA Deployment
-![Alt text](/images/hadoopha.png "Hadoop_HA")
+![Alt text](images/hadoopha.png "Hadoop_HA")
The above diagram depicts a fully HA Hadoop Cluster with no single point of
failure and automated failover.
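For illustration, here is a minimal Python sketch of how one might confirm which NameNode is currently active in such an HA pair. The host names, the web UI port 50070, and the NameNodeStatus JMX bean are assumptions about a stock Hadoop 2.x deployment, not something configured by this playbook.

```python
# Illustrative only: ask each NameNode's JMX endpoint for its HA state.
# Hosts, port, and bean name are assumptions; adjust to the deployed cluster.
import json
from urllib.request import urlopen

NAMENODES = ["namenode1.example.com", "namenode2.example.com"]  # placeholders

for host in NAMENODES:
    url = "http://%s:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus" % host
    beans = json.load(urlopen(url)).get("beans", [])
    state = beans[0].get("State", "unknown") if beans else "unreachable"
    print("%s: %s" % (host, state))
```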

@@ -7,7 +7,7 @@
### A Primer
---------------------------------------------
-![Alt text](/images/nosql_primer.png "Primer NoSQL")
+![Alt text](images/nosql_primer.png "Primer NoSQL")
The above diagram shows how MongoDB differs from the traditional relational
database model. In an RDBMS, the data associated with 'user' is stored in a
@@ -23,7 +23,7 @@ additional field of 'last name'.
### Data Replication
------------------------------------
-![Alt text](/images/replica_set.png "Replica Set")
+![Alt text](images/replica_set.png "Replica Set")
Data backup is achieved in MongoDB via _replica sets_. As the figure above shows,
a single replication set consists of a replication master (active) and several
@@ -36,7 +36,7 @@ recommended number of slave servers is 3.
### Sharding (Horizontal Scaling)
------------------------------------------------
-![Alt text](/images/sharding.png "Sharding")
+![Alt text](images/sharding.png "Sharding")
Sharding works by partitioning the data into separate chunks and allocating
different ranges of chunks to different shard servers. The figure above shows a
@@ -69,7 +69,7 @@ collection is split and balanced across shards.
#### Deploy the Cluster
----------------------------
-![Alt text](/images/site.png "Site")
+![Alt text](images/site.png "Site")
The diagram above illustrates the deployment model for a MongoDB cluster deployed by
Ansible. This deployment model focuses on deploying three shard servers,
@@ -190,7 +190,7 @@ step.
-------------------------------------------------------------------------------------------------------------------------------------------------------------
-![Alt text](/images/check.png "check")
+![Alt text](images/check.png "check")
The above-mentioned steps can be tested with an automated playbook.
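For a manual spot-check of the same thing, the following minimal Python sketch uses pymongo to list the shards and count the chunks assigned to each one; the mongos host name is a placeholder, not part of this example's inventory.

```python
# Illustrative only: connect to a mongos router and show how data is spread.
from pymongo import MongoClient

client = MongoClient("mongos.example.com", 27017)  # placeholder host

# Shards registered with the cluster
print(client.admin.command("listShards"))

# Number of chunks currently assigned to each shard
pipeline = [{"$group": {"_id": "$shard", "chunks": {"$sum": 1}}}]
for doc in client.config.chunks.aggregate(pipeline):
    print(doc["_id"], doc["chunks"])
```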
@@ -227,7 +227,7 @@ the number of chunks spread across the shards.
### Scaling the Cluster
---------------------------------------
-![Alt text](/images/scale.png "scale")
+![Alt text](images/scale.png "scale")
To add a new node to the existing MongoDB Cluster, modify the inventory file as follows:

@@ -31,7 +31,7 @@ Here's a list and a brief overview of the different components used by OpenShift.
### An Overview of the application creation process in OpenShift.
-![Alt text](/images/app_deploy.png "App")
+![Alt text](images/app_deploy.png "App")
The above figure depicts an overview of the different steps involved in creating an application in OpenShift. If a developer wants to create or deploy a JBoss & MySQL application, they can request it through any of the available client tools: an Eclipse IDE, the command-line tool (RHC), or a web browser (the management console).
@@ -40,7 +40,7 @@ Once the user has instructed the client tool to deploy a JBoss & MySQL applicati
### Deployment Diagram of OpenShift via Ansible.
-![Alt text](/images/arch.png "App")
+![Alt text](images/arch.png "App")
The above diagram shows the Ansible playbooks deploying a highly available OpenShift PaaS environment. The deployment has two servers running LVS (Piranha) for load balancing, which provides HA for the Brokers. Two instances of the Broker also run for fault tolerance. Ansible also configures a DNS server, which provides name resolution for all the new apps created in the OpenShift environment.
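As an illustrative spot-check only, the sketch below probes the broker REST API through the load-balanced address; the host name, the /broker/rest/api path, and the unverified SSL context (for self-signed certificates) are assumptions about a typical OpenShift Origin/Enterprise 2 setup rather than something these playbooks expose.

```python
# Illustrative only: confirm a broker answers behind the LVS virtual address.
import ssl
from urllib.request import urlopen

url = "https://broker.example.com/broker/rest/api"  # placeholder address
ctx = ssl._create_unverified_context()  # test setups often use self-signed certs
print(urlopen(url, context=ctx).status)  # 200 means a broker responded
```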
@@ -165,7 +165,7 @@ Once the stack has been successfully deployed, we can check if the different compo
http://ec2-54-226-116-175.compute-1.amazonaws.com:8161/admin/network.jsp
-![Alt text](/images/mq.png "App")
+![Alt text](images/mq.png "App")
- Broker: To check whether the broker node has been installed/configured successfully, issue the following command on any broker node; a similar output should be displayed. Make sure there is a PASS at the end.

@@ -48,7 +48,7 @@ Once the playbooks complete, you can check the deployment by logging into the
server console at http://<server-ip>:9000/. You should get a page similar to the
image below.
-![Alt text](/images/play_webapp.png "webapp")
+![Alt text](images/play_webapp.png "webapp")
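For a quick sanity check from the command line, a minimal Python sketch such as the one below can confirm that the console answers on port 9000 before opening it in a browser; the IP address is a documentation placeholder.

```python
# Illustrative only: verify the deployed console responds on port 9000.
from urllib.request import urlopen

resp = urlopen("http://192.0.2.10:9000/", timeout=10)  # replace with the server IP
print(resp.status)  # expect 200 when the web app is up
```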
## Fetching Facts from Hosts