Copyright (C) 2013 AnsibleWorks, Inc.

This work is licensed under the Creative Commons Attribution 3.0 Unported License.
To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/deed.en_US.
# Deploying Hadoop Clusters using Ansible

## Preface

The playbooks in this example are designed to deploy a Hadoop cluster on a
CentOS 6 or RHEL 6 environment using Ansible. The playbooks can:

1) Deploy a fully functional Hadoop cluster with HA and automatic failover.

2) Deploy a fully functional Hadoop cluster with no HA.

3) Deploy additional nodes to scale the cluster.

These playbooks require Ansible 1.2 and CentOS 6 or RHEL 6 target machines, and they install
the open-source Cloudera Hadoop Distribution (CDH) version 4.

## Hadoop Components

Hadoop is a framework that allows the processing of large datasets across large
clusters. The two main components that make up a Hadoop cluster are the HDFS
filesystem and the MapReduce framework. Briefly, the HDFS filesystem is responsible
for storing data across the cluster nodes on their local disks, while MapReduce
jobs are the tasks that run on these nodes to produce a meaningful result
from the data stored on the HDFS filesystem.

Let's have a closer look at each of these components.

## HDFS

![Alt text](images/hdfs.png "HDFS")

The above diagram illustrates an HDFS filesystem. The cluster consists of three
DataNodes, which are responsible for storing and replicating data, while the NameNode
is a process responsible for storing the metadata for the entire
filesystem. As the example above illustrates, when a client wants to write a
file to the HDFS cluster it first contacts the NameNode and lets it know that
it wants to write a file. The NameNode then decides where and how the file
should be saved and notifies the client about its decision.

In the given example "File1" has a size of 128 MB and the block size of the HDFS
filesystem is 64 MB. Hence, the NameNode instructs the client to break down the
file into two different blocks and write the first block to DataNode 1 and the
second block to DataNode 2. Upon receiving the notification from the NameNode,
the client contacts DataNode 1 and DataNode 2 directly to write the data.

Once the data is received by the DataNodes, they replicate the block across the
other nodes. The number of nodes across which the data is replicated is
based on the HDFS configuration, the default value being 3. Meanwhile the
NameNode updates its metadata with the entry of the new file "File1" and the
locations where the parts are stored.
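
The block-splitting arithmetic described above can be sketched in a few lines of
Python (a hypothetical illustration, not part of these playbooks): given a file
size and the HDFS block size, compute the blocks the NameNode would allocate.

```python
def split_into_blocks(file_size_mb, block_size_mb=64):
    """Return the sizes (in MB) of the HDFS blocks a file is split into."""
    blocks = []
    remaining = file_size_mb
    while remaining > 0:
        # Each block is at most one full block size; the last may be smaller.
        blocks.append(min(block_size_mb, remaining))
        remaining -= block_size_mb
    return blocks

# "File1" is 128 MB and the block size is 64 MB, so two 64 MB blocks result.
print(split_into_blocks(128))  # [64, 64]
```

Note that only the final block can be partial; a 130 MB file would yield blocks
of 64, 64 and 2 MB.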

## MapReduce

MapReduce is a Java application that utilizes the data stored in the
HDFS filesystem to produce a useful and meaningful result. The whole job is
split into two parts: the "Map" job and the "Reduce" job.

Let's consider an example. In the previous step we uploaded "File1"
into the HDFS filesystem and the file was broken down into two different
blocks. Let's assume that the first block had the data "black sheep" in it and
the second block had the data "white sheep" in it. Now let's assume a client
wants to get a count of all the words occurring in "File1". In order to get the
count, the first thing the client would have to do is write a "Map" program and
then a "Reduce" program.

Here's pseudo code of how the Map and Reduce jobs might look:

    mapper (File1, file-contents):
        for each word in file-contents:
            emit (word, 1)

    reducer (word, values):
        sum = 0
        for each value in values:
            sum = sum + value
        emit (word, sum)

The work of the Map job is to go through all the words in the file and emit
a key/value pair. In this case the key is the word itself and the value is always
1.

The Reduce job is quite simple: it adds each value it receives to a running
sum, and emits the total for its key.
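
The pseudo code above can be expressed as a runnable local simulation in Python
(illustrative only; it mimics the Map, shuffle and Reduce phases in-process
rather than using any Hadoop API):

```python
from collections import defaultdict

def mapper(file_contents):
    """Emit a (word, 1) pair for every word in the input block."""
    return [(word, 1) for word in file_contents.split()]

def reducer(word, values):
    """Sum all the values emitted for a single word."""
    return (word, sum(values))

# Block 1 held "black sheep" and block 2 held "white sheep".
pairs = mapper("black sheep") + mapper("white sheep")

# Group the emitted pairs by key, as the shuffle phase would.
grouped = defaultdict(list)
for word, count in pairs:
    grouped[word].append(count)

counts = dict(reducer(w, v) for w, v in grouped.items())
print(counts)  # {'black': 1, 'sheep': 2, 'white': 1}
```

In a real cluster the mapper runs once per block on the node holding that
block, and each reducer receives only the values for its own keys.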

Once the Map and Reduce jobs are ready, the client would instruct the
"JobTracker" (the process responsible for scheduling the jobs on the cluster)
to run the MapReduce job on "File1".

Let's have a closer look at the anatomy of a Map job.

![Alt text](images/map.png "Map job")

As the figure above shows, when the client instructs the JobTracker to run a
job on File1, the JobTracker first contacts the NameNode to determine where the
blocks of File1 are. Then the JobTracker sends the Map job's JAR file down
to the nodes having the blocks, and the TaskTracker processes on those nodes run
the application.

In the above example, DataNode 1 and DataNode 2 have the blocks, so the
TaskTrackers on those nodes run the Map jobs. Once the jobs are completed the
two nodes would have key/value results as below:

    MapJob Results:

      TaskTracker1:
        "Black: 1"
        "Sheep: 1"

      TaskTracker2:
        "White: 1"
        "Sheep: 1"

Once the Map phase is completed the JobTracker process initiates the Shuffle
and Reduce process.

Let's have a closer look at the Shuffle-Reduce job.

![Alt text](images/reduce.png "Reduce job")

As the figure above demonstrates, the first thing that the JobTracker does is
spawn a Reducer job on the DataNode/TaskTracker nodes for each "key" in the job
result. In this case we have three keys: "black", "white" and "sheep", so
three Reducers are spawned: one for each key. The Map jobs shuffle their keys
out to the respective Reduce jobs. Then the Reduce job code runs and the sum is
calculated, and the result is written into the HDFS filesystem in a common
directory. In the above example the output directory is specified as
"/home/ben/output", so all the Reducers will write their results into this
directory under different filenames; the file names being "part-00xx", where xx
is the Reducer/partition number.
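
The way results land in "part-00xx" files can be sketched as follows (a toy
model matching the one-reducer-per-key setup described above; real Hadoop
assigns keys to reducers with a hash partitioner):

```python
def partition_files(keys):
    """Map each key to the "part-00xx" file its reducer writes.

    Toy model: one reducer per distinct key, numbered in sorted key order.
    """
    return {key: f"part-{i:05d}" for i, key in enumerate(sorted(keys))}

print(partition_files(["black", "white", "sheep"]))
# {'black': 'part-00000', 'sheep': 'part-00001', 'white': 'part-00002'}
```

With three reducers, the output directory thus ends up holding part-00000
through part-00002, one file per reducer.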

## Hadoop Deployment

![Alt text](images/hadoop.png "Hadoop deployment")

The above diagram depicts a typical Hadoop deployment. The NameNode and
JobTracker usually reside on the same machine, though they can run on separate
machines. The DataNodes and TaskTrackers run on the same nodes. The size of the
cluster can be scaled to thousands of nodes with petabytes of storage.

The above deployment model provides redundancy for data, as the HDFS filesystem
takes care of the data replication. The only single points of failure are the
NameNode and the JobTracker; if either of these components fails, the cluster
will not be usable.

## Making Hadoop HA

To make the Hadoop cluster highly available we would have to add another set of
JobTracker/NameNode processes, and make sure that any data updated on the
primary is also reflected on the secondary. In case of failure of the primary
node, the secondary node takes over that role.

The first thing that has to be dealt with is the data held by the NameNode. As
we recall, the NameNode holds all of the metadata about the filesystem, so any
update to the metadata should also be reflected on the secondary NameNode's
metadata copy. The synchronization of the primary and secondary NameNode
metadata is handled by the Quorum Journal Manager.

### Quorum Journal Manager

![Alt text](images/qjm.png "QJM")

As the figure above shows, the Quorum Journal Manager consists of the journal
manager clients and the journal manager nodes. The journal manager clients
reside on the same nodes as the NameNodes; the client on the primary node
collects all the edit logs happening on the NameNode and sends them out to the
journal nodes. The journal manager client residing on the secondary NameNode
regularly contacts the journal nodes and updates its local metadata to be
consistent with the master node. In case of primary node failure the secondary
NameNode updates itself to the latest edit logs and takes over as the primary
NameNode.

### Zookeeper

Apart from data consistency, a distributed cluster system also needs a
mechanism for centralized coordination. For example, there should be a way for
the secondary node to tell if the primary node is running properly, and if not,
to take up the role of the primary. Zookeeper provides Hadoop with a
mechanism to coordinate in this way.

![Alt text](images/zookeeper.png "Zookeeper")

As the figure above shows, the Zookeeper service is a client/server based
service. The server component itself is replicated over a set of machines that
comprise the service. In short, high availability is built into the Zookeeper
servers.

For Hadoop, two types of Zookeeper clients (ZKFC, the Zookeeper Failover
Controller) have been built: one for the NameNode and one for the JobTracker.
These clients run on the same machines as the NameNodes/JobTrackers themselves.

When a ZKFC client is started, it establishes a connection with one of the
Zookeeper nodes and obtains a session ID. The client then keeps a health check
on the NameNode/JobTracker and sends heartbeats to Zookeeper.

If the ZKFC client detects a failure of the NameNode/JobTracker, it removes
itself from the Zookeeper active/standby election, and the other ZKFC client
fences the failed node/service and takes over the primary role.
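
The failover behaviour just described can be sketched as a toy simulation
(illustrative only; the real ZKFC uses Zookeeper ephemeral nodes, sessions and
fencing, not this simplified model):

```python
class ToyElection:
    """A toy active/standby election: the first registered healthy member is active."""

    def __init__(self):
        self.members = []  # registration order decides who is active

    def register(self, name):
        self.members.append(name)

    def active(self):
        # The member at the head of the list holds the active role.
        return self.members[0] if self.members else None

    def fail(self, name):
        # A failed member leaves the election; the next in line takes over.
        self.members.remove(name)

election = ToyElection()
election.register("zkfc-namenode1")
election.register("zkfc-namenode2")
print(election.active())  # zkfc-namenode1

election.fail("zkfc-namenode1")
print(election.active())  # zkfc-namenode2
```

The key point the model captures: promotion of the standby happens as a side
effect of the failed member dropping out of the election, with no manual step.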

## Hadoop HA Deployment

![Alt text](images/hadoopha.png "Hadoop_HA")

The above diagram depicts a fully HA Hadoop cluster with no single point of
failure and automated failover.

## Deploying Hadoop Clusters with Ansible

Setting up a Hadoop cluster without HA can itself be a challenging and
time-consuming task, and with HA, things become even more difficult.

Ansible can automate the whole process of deploying a Hadoop cluster, with or
without HA, with the same playbook, in a matter of minutes. This can be used
for quick environment rebuilds, and in case of disasters or node failures,
recovery time can be greatly reduced with Ansible automation.

Let's have a look at how this is done.

## Deploying a Hadoop cluster with HA

### Prerequisites

These playbooks have been tested using Ansible v1.2 and CentOS 6.x (64 bit).

Modify group_vars/all to choose the network interface for Hadoop communication.

Optionally, you can change Hadoop-specific parameters like ports or directories
by editing the group_vars/all file.

Before launching the deployment playbook make sure the inventory file (hosts)
is set up properly. Here's a sample:

    [hadoop_master_primary]
    zhadoop1

    [hadoop_master_secondary]
    zhadoop2

    [hadoop_masters:children]
    hadoop_master_primary
    hadoop_master_secondary

    [hadoop_slaves]
    hadoop1
    hadoop2
    hadoop3

    [qjournal_servers]
    zhadoop1
    zhadoop2
    zhadoop3

    [zookeeper_servers]
    zhadoop1 zoo_id=1
    zhadoop2 zoo_id=2
    zhadoop3 zoo_id=3

Once the inventory is set up, the Hadoop cluster can be set up using the
following command:

    ansible-playbook -i hosts site.yml

Once deployed, we can check the cluster sanity in different ways. To check the
status of the HDFS filesystem and a report on all the DataNodes, log in as the
'hdfs' user on any Hadoop master server, and issue the following command to get
the report:

    hadoop dfsadmin -report

To check the sanity of HA, first log in as the 'hdfs' user on any Hadoop master
server and get the current active/standby NameNode servers this way:

    -bash-4.1$ hdfs haadmin -getServiceState zhadoop1
    active
    -bash-4.1$ hdfs haadmin -getServiceState zhadoop2
    standby

To get the state of the JobTracker process, log in as the 'mapred' user on any
Hadoop master server and issue the following command:

    -bash-4.1$ hadoop mrhaadmin -getServiceState hadoop1
    standby
    -bash-4.1$ hadoop mrhaadmin -getServiceState hadoop2
    active

Once you have determined which server is active and which is standby, you can
kill the NameNode/JobTracker process on the server listed as active and issue
the same commands again, and you should see that the standby has been promoted
to the active state. Later, you can restart the killed process and see that
node listed as standby.

### Running a MapReduce Job

To deploy the MapReduce job, run the following script from any of the Hadoop
master nodes as user 'hdfs'. The job counts the number of occurrences of the
word 'hello' in the given input file. E.g.: su - hdfs -c "/tmp/job.sh"

    #!/bin/bash
    cat > /tmp/inputfile << EOF
    hello
    sf
    sdf
    hello
    sdf
    sdf
    EOF
    hadoop fs -put /tmp/inputfile /inputfile
    hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar grep /inputfile /outputfile 'hello'
    hadoop fs -get /outputfile /tmp/outputfile/

To verify the result, read the file on the server located at
/tmp/outputfile/part-00000, which should give you the count.

## Scale the Cluster

When the Hadoop cluster reaches its maximum capacity, it can be scaled by
adding nodes. This can be easily accomplished by adding the node hostname to
the Ansible inventory under the hadoop_slaves group, and running the following
command:

    ansible-playbook -i hosts site.yml --tags=slaves

## Deploy a non-HA Hadoop Cluster

The following diagram illustrates a standalone Hadoop cluster.

To deploy this cluster, fill in the inventory file as follows:

    [hadoop_all:children]
    hadoop_masters
    hadoop_slaves

    [hadoop_master_primary]
    zhadoop1

    [hadoop_master_secondary]

    [hadoop_masters:children]
    hadoop_master_primary
    hadoop_master_secondary

    [hadoop_slaves]
    hadoop1
    hadoop2
    hadoop3

Edit the group_vars/all file to disable HA:

    ha_enabled: False

And run the following command:

    ansible-playbook -i hosts site.yml

The validity of the cluster can be checked by running the same MapReduce job
that was documented above for an HA Hadoop cluster.

# Defaults to the first ethernet interface. Change this to:
#
#   iface: eth1
#
# ...to override.
#
iface: '{{ ansible_default_ipv4.interface }}'

ha_enabled: False

hadoop:

  # Variables for core-site.xml - common

  fs_default_FS_port: 8020
  nameservice_id: mycluster4

  # Variables for hdfs-site.xml

  dfs_permissions_superusergroup: hdfs
  dfs_namenode_name_dir:
    - /namedir1/
    - /namedir2/
  dfs_replication: 3
  dfs_namenode_handler_count: 50
  dfs_blocksize: 67108864
  dfs_datanode_data_dir:
    - /datadir1/
    - /datadir2/
  dfs_datanode_address_port: 50010
  dfs_datanode_http_address_port: 50075
  dfs_datanode_ipc_address_port: 50020
  dfs_namenode_http_address_port: 50070
  dfs_ha_zkfc_port: 8019
  qjournal_port: 8485
  qjournal_http_port: 8480
  dfs_journalnode_edits_dir: /journaldir/
  zookeeper_clientport: 2181
  zookeeper_leader_port: 2888
  zookeeper_election_port: 3888

  # Variables for mapred-site.xml - common
  mapred_job_tracker_ha_servicename: myjt4
  mapred_job_tracker_http_address_port: 50030
  mapred_task_tracker_http_address_port: 50060
  mapred_job_tracker_port: 8021
  mapred_ha_jobtracker_rpc-address_port: 8023
  mapred_ha_zkfc_port: 8018
  mapred_job_tracker_persist_jobstatus_dir: /jobdir/
  mapred_local_dir:
    - /mapred1/
    - /mapred2/
[hadoop_all:children]
hadoop_masters
hadoop_slaves
qjournal_servers
zookeeper_servers

[hadoop_master_primary]
hadoop1

[hadoop_master_secondary]
hadoop2

[hadoop_masters:children]
hadoop_master_primary
hadoop_master_secondary

[hadoop_slaves]
hadoop1
hadoop2
hadoop3

[qjournal_servers]
hadoop1
hadoop2
hadoop3

[zookeeper_servers]
hadoop1 zoo_id=1
hadoop2 zoo_id=2
hadoop3 zoo_id=3

asdf
sdf
sdf
sd
f
sf
sdf
sd
fsd
hello
asf
sf
sd
fsd
f
sdf
sd
hello

---
# Launch a job to count the occurrences of a word.

- hosts: $server
  user: root
  tasks:
    - name: Copy the input file
      copy: src=inputfile dest=/tmp/inputfile

    - name: Upload the file to HDFS
      shell: su - hdfs -c "hadoop fs -put /tmp/inputfile /inputfile"

    - name: Run the MapReduce job to count the occurrences of the word hello
      shell: su - hdfs -c "hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar grep /inputfile /outputfile 'hello'"

    - name: Fetch the output file to the local tmp dir
      shell: su - hdfs -c "hadoop fs -get /outputfile /tmp/outputfile"

    - name: Get the output file to the Ansible server
      fetch: dest=/tmp src=/tmp/outputfile/part-00000

[cloudera-cdh4]
name=Cloudera's Distribution for Hadoop, Version 4
baseurl=http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/4/
gpgkey=http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
gpgcheck=1
- name: restart iptables
  service: name=iptables state=restarted
@ -1,28 +0,0 @@ |
||||
--- |
||||
# The playbook for common tasks |
||||
|
||||
- name: Deploy the Cloudera Repository |
||||
copy: src=etc/cloudera-CDH4.repo dest=/etc/yum.repos.d/cloudera-CDH4.repo |
||||
|
||||
- name: Install the libselinux-python package |
||||
yum: name=libselinux-python state=installed |
||||
|
||||
- name: Install the openjdk package |
||||
yum: name=java-1.6.0-openjdk state=installed |
||||
|
||||
- name: Create a directory for java |
||||
file: state=directory path=/usr/java/ |
||||
|
||||
- name: Create a link for java |
||||
file: src=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre state=link path=/usr/java/default |
||||
|
||||
- name: Create the hosts file for all machines |
||||
template: src=etc/hosts.j2 dest=/etc/hosts |
||||
|
||||
- name: Disable SELinux in conf file |
||||
selinux: state=disabled |
||||
|
||||
- name: Create the iptables file for all machines |
||||
template: src=iptables.j2 dest=/etc/sysconfig/iptables |
||||
notify: restart iptables |
||||
|
---
# The playbook for slave nodes

- include: common.yml tags=slaves

@ -1,5 +0,0 @@ |
||||
127.0.0.1 localhost |
||||
{% for host in groups.all %} |
||||
{{ hostvars[host]['ansible_' + iface].ipv4.address }} {{ host }} |
||||
{% endfor %} |
||||
|
@ -1,25 +0,0 @@ |
||||
<?xml version="1.0"?> |
||||
<!-- |
||||
Licensed to the Apache Software Foundation (ASF) under one or more |
||||
contributor license agreements. See the NOTICE file distributed with |
||||
this work for additional information regarding copyright ownership. |
||||
The ASF licenses this file to You under the Apache License, Version 2.0 |
||||
(the "License"); you may not use this file except in compliance with |
||||
the License. You may obtain a copy of the License at |
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0 |
||||
|
||||
Unless required by applicable law or agreed to in writing, software |
||||
distributed under the License is distributed on an "AS IS" BASIS, |
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
||||
See the License for the specific language governing permissions and |
||||
limitations under the License. |
||||
--> |
||||
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?> |
||||
|
||||
<configuration> |
||||
<property> |
||||
<name>fs.defaultFS</name> |
||||
<value>hdfs://{{ hostvars[groups['hadoop_masters'][0]]['ansible_hostname'] + ':' ~ hadoop['fs_default_FS_port'] }}/</value> |
||||
</property> |
||||
</configuration> |
# Configuration of the "dfs" context for null
dfs.class=org.apache.hadoop.metrics.spi.NullContext

# Configuration of the "dfs" context for file
#dfs.class=org.apache.hadoop.metrics.file.FileContext
#dfs.period=10
#dfs.fileName=/tmp/dfsmetrics.log

# Configuration of the "dfs" context for ganglia
# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
# dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
# dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
# dfs.period=10
# dfs.servers=localhost:8649


# Configuration of the "mapred" context for null
mapred.class=org.apache.hadoop.metrics.spi.NullContext

# Configuration of the "mapred" context for file
#mapred.class=org.apache.hadoop.metrics.file.FileContext
#mapred.period=10
#mapred.fileName=/tmp/mrmetrics.log

# Configuration of the "mapred" context for ganglia
# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
# mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext
# mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
# mapred.period=10
# mapred.servers=localhost:8649


# Configuration of the "jvm" context for null
#jvm.class=org.apache.hadoop.metrics.spi.NullContext

# Configuration of the "jvm" context for file
#jvm.class=org.apache.hadoop.metrics.file.FileContext
#jvm.period=10
#jvm.fileName=/tmp/jvmmetrics.log

# Configuration of the "jvm" context for ganglia
# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
# jvm.period=10
# jvm.servers=localhost:8649

# Configuration of the "rpc" context for null
rpc.class=org.apache.hadoop.metrics.spi.NullContext

# Configuration of the "rpc" context for file
#rpc.class=org.apache.hadoop.metrics.file.FileContext
#rpc.period=10
#rpc.fileName=/tmp/rpcmetrics.log

# Configuration of the "rpc" context for ganglia
# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext
# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
# rpc.period=10
# rpc.servers=localhost:8649


# Configuration of the "ugi" context for null
ugi.class=org.apache.hadoop.metrics.spi.NullContext

# Configuration of the "ugi" context for file
#ugi.class=org.apache.hadoop.metrics.file.FileContext
#ugi.period=10
#ugi.fileName=/tmp/ugimetrics.log

# Configuration of the "ugi" context for ganglia
# ugi.class=org.apache.hadoop.metrics.ganglia.GangliaContext
# ugi.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
# ugi.period=10
# ugi.servers=localhost:8649

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# syntax: [prefix].[source|sink].[instance].[options]
# See javadoc of package-info.java for org.apache.hadoop.metrics2 for details

*.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
# default sampling period, in seconds
*.period=10

# The namenode-metrics.out will contain metrics from all context
#namenode.sink.file.filename=namenode-metrics.out
# Specifying a special sampling period for namenode:
#namenode.sink.*.period=8

#datanode.sink.file.filename=datanode-metrics.out

# the following example split metrics of different
# context to different sinks (in this case files)
#jobtracker.sink.file_jvm.context=jvm
#jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
#jobtracker.sink.file_mapred.context=mapred
#jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out

#tasktracker.sink.file.filename=tasktracker-metrics.out

#maptask.sink.file.filename=maptask-metrics.out

#reducetask.sink.file.filename=reducetask-metrics.out

<?xml version="1.0"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>dfs.blocksize</name>
    <value>{{ hadoop['dfs_blocksize'] }}</value>
  </property>
  <property>
    <name>dfs.permissions.superusergroup</name>
    <value>{{ hadoop['dfs_permissions_superusergroup'] }}</value>
  </property>
  <property>
    <name>dfs.namenode.http.address</name>
    <value>0.0.0.0:{{ hadoop['dfs_namenode_http_address_port'] }}</value>
  </property>
  <property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:{{ hadoop['dfs_datanode_address_port'] }}</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>0.0.0.0:{{ hadoop['dfs_datanode_http_address_port'] }}</value>
  </property>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>0.0.0.0:{{ hadoop['dfs_datanode_ipc_address_port'] }}</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>{{ hadoop['dfs_replication'] }}</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>{{ hadoop['dfs_namenode_name_dir'] | join(',') }}</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>{{ hadoop['dfs_datanode_data_dir'] | join(',') }}</value>
  </property>
</configuration>
# Copyright 2011 The Apache Software Foundation
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Define some default values that can be overridden by system properties
hadoop.root.logger=INFO,console
hadoop.log.dir=.
hadoop.log.file=hadoop.log

# Define the root logger to the system property "hadoop.root.logger".
log4j.rootLogger=${hadoop.root.logger}, EventCounter

# Logging Threshold
log4j.threshold=ALL

# Null Appender
log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender

#
# Rolling File Appender - cap space usage at 5gb.
#
hadoop.log.maxfilesize=256MB
hadoop.log.maxbackupindex=20
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}

log4j.appender.RFA.MaxFileSize=${hadoop.log.maxfilesize}
log4j.appender.RFA.MaxBackupIndex=${hadoop.log.maxbackupindex}

log4j.appender.RFA.layout=org.apache.log4j.PatternLayout

# Pattern format: Date LogLevel LoggerName LogMessage
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Debugging Pattern format
#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n


#
# Daily Rolling File Appender
#

log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}

# Rollover at midnight
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd

# 30-day backup
#log4j.appender.DRFA.MaxBackupIndex=30
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout

# Pattern format: Date LogLevel LoggerName LogMessage
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Debugging Pattern format
#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n


#
# console
# Add "console" to rootlogger above if you want to use this
#

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n

#
# TaskLog Appender
#

#Default values
hadoop.tasklog.taskid=null
hadoop.tasklog.iscleanup=false
hadoop.tasklog.noKeepSplits=4
hadoop.tasklog.totalLogFileSize=100
hadoop.tasklog.purgeLogSplits=true
hadoop.tasklog.logsRetainHours=12

log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender
log4j.appender.TLA.taskId=${hadoop.tasklog.taskid}
log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}
log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}

log4j.appender.TLA.layout=org.apache.log4j.PatternLayout
log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

#
# HDFS block state change log from block manager
#
# Uncomment the following to suppress normal block state change
# messages from BlockManager in NameNode.
#log4j.logger.BlockStateChange=WARN

#
# Security appender
#
hadoop.security.logger=INFO,NullAppender
hadoop.security.log.maxfilesize=256MB
hadoop.security.log.maxbackupindex=20
||||
log4j.category.SecurityLogger=${hadoop.security.logger} |
||||
hadoop.security.log.file=SecurityAuth-${user.name}.audit |
||||
log4j.appender.RFAS=org.apache.log4j.RollingFileAppender |
||||
log4j.appender.RFAS.File=${hadoop.log.dir}/${hadoop.security.log.file} |
||||
log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout |
||||
log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n |
||||
log4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize} |
||||
log4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex} |
||||
|
||||
# |
||||
# Daily Rolling Security appender |
||||
# |
||||
log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender |
||||
log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file} |
||||
log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout |
||||
log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n |
||||
log4j.appender.DRFAS.DatePattern=.yyyy-MM-dd |
||||
|
||||
# |
||||
# hdfs audit logging |
||||
# |
||||
hdfs.audit.logger=INFO,NullAppender |
||||
hdfs.audit.log.maxfilesize=256MB |
||||
hdfs.audit.log.maxbackupindex=20 |
||||
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger} |
||||
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false |
||||
log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender |
||||
log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log |
||||
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout |
||||
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n |
||||
log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize} |
||||
log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex} |
||||
|
||||
# |
||||
# mapred audit logging |
||||
# |
||||
mapred.audit.logger=INFO,NullAppender |
||||
mapred.audit.log.maxfilesize=256MB |
||||
mapred.audit.log.maxbackupindex=20 |
||||
log4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger} |
||||
log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false |
||||
log4j.appender.MRAUDIT=org.apache.log4j.RollingFileAppender |
||||
log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log |
||||
log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout |
||||
log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n |
||||
log4j.appender.MRAUDIT.MaxFileSize=${mapred.audit.log.maxfilesize} |
||||
log4j.appender.MRAUDIT.MaxBackupIndex=${mapred.audit.log.maxbackupindex} |
||||
|
||||
# Custom Logging levels |
||||
|
||||
#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG |
||||
#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG |
||||
#log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=DEBUG |
||||
|
||||
# Jets3t library |
||||
log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR |
||||
|
||||
# |
||||
# Event Counter Appender |
||||
# Sends counts of logging messages at different severity levels to Hadoop Metrics. |
||||
# |
||||
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter |
||||
|
||||
# |
||||
# Job Summary Appender |
||||
# |
||||
# Use following logger to send summary to separate file defined by |
||||
# hadoop.mapreduce.jobsummary.log.file : |
||||
# hadoop.mapreduce.jobsummary.logger=INFO,JSA |
||||
# |
||||
hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger} |
||||
hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log |
||||
hadoop.mapreduce.jobsummary.log.maxfilesize=256MB |
||||
hadoop.mapreduce.jobsummary.log.maxbackupindex=20 |
||||
log4j.appender.JSA=org.apache.log4j.RollingFileAppender |
||||
log4j.appender.JSA.File=${hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file} |
||||
log4j.appender.JSA.MaxFileSize=${hadoop.mapreduce.jobsummary.log.maxfilesize} |
||||
log4j.appender.JSA.MaxBackupIndex=${hadoop.mapreduce.jobsummary.log.maxbackupindex} |
||||
log4j.appender.JSA.layout=org.apache.log4j.PatternLayout |
||||
log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n |
||||
log4j.logger.org.apache.hadoop.mapred.JobInProgress$JobSummary=${hadoop.mapreduce.jobsummary.logger} |
||||
log4j.additivity.org.apache.hadoop.mapred.JobInProgress$JobSummary=false |
||||
|
||||
# |
||||
# Yarn ResourceManager Application Summary Log |
||||
# |
||||
# Set the ResourceManager summary log filename |
||||
#yarn.server.resourcemanager.appsummary.log.file=rm-appsummary.log |
||||
# Set the ResourceManager summary log level and appender |
||||
#yarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY |
||||
|
||||
# Appender for ResourceManager Application Summary Log |
||||
# Requires the following properties to be set |
||||
# - hadoop.log.dir (Hadoop Log directory) |
||||
# - yarn.server.resourcemanager.appsummary.log.file (resource manager app summary log filename) |
||||
# - yarn.server.resourcemanager.appsummary.logger (resource manager app summary log level and appender) |
||||
|
||||
#log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger} |
||||
#log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false |
||||
#log4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender |
||||
#log4j.appender.RMSUMMARY.File=${hadoop.log.dir}/${yarn.server.resourcemanager.appsummary.log.file} |
||||
#log4j.appender.RMSUMMARY.MaxFileSize=256MB |
||||
#log4j.appender.RMSUMMARY.MaxBackupIndex=20 |
||||
#log4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout |
||||
#log4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n |
<configuration>

  <property>
    <name>mapred.job.tracker</name>
    <value>{{ hostvars[groups['hadoop_masters'][0]]['ansible_hostname'] }}:{{ hadoop['mapred_job_tracker_port'] }}</value>
  </property>

  <property>
    <name>mapred.local.dir</name>
    <value>{{ hadoop["mapred_local_dir"] | join(',') }}</value>
  </property>

  <property>
    <name>mapred.task.tracker.http.address</name>
    <value>0.0.0.0:{{ hadoop['mapred_task_tracker_http_address_port'] }}</value>
  </property>

  <property>
    <name>mapred.job.tracker.http.address</name>
    <value>0.0.0.0:{{ hadoop['mapred_job_tracker_http_address_port'] }}</value>
  </property>

</configuration>
{% for host in groups['hadoop_slaves'] %}
{{ host }}
{% endfor %}
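The slaves template above simply emits each host in the `hadoop_slaves` inventory group, one per line. A minimal Python sketch of the rendered result, using hypothetical hostnames in place of a real inventory:

```python
# Hypothetical inventory group; real hostnames come from the
# Ansible inventory's [hadoop_slaves] section.
hadoop_slaves = ["hslave1", "hslave2", "hslave3"]

# The Jinja2 for-loop renders to one hostname per line.
rendered = "\n".join(hadoop_slaves) + "\n"
print(rendered, end="")
# → hslave1
#   hslave2
#   hslave3
```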
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements. See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License. You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<configuration>

  <property>
    <name>ssl.client.truststore.location</name>
    <value></value>
    <description>Truststore to be used by clients like distcp. Must be
      specified.
    </description>
  </property>

  <property>
    <name>ssl.client.truststore.password</name>
    <value></value>
    <description>Optional. Default value is "".
    </description>
  </property>

  <property>
    <name>ssl.client.truststore.type</name>
    <value>jks</value>
    <description>Optional. The keystore file format, default value is "jks".
    </description>
  </property>

  <property>
    <name>ssl.client.truststore.reload.interval</name>
    <value>10000</value>
    <description>Truststore reload check interval, in milliseconds.
      Default value is 10000 (10 seconds).
    </description>
  </property>

  <property>
    <name>ssl.client.keystore.location</name>
    <value></value>
    <description>Keystore to be used by clients like distcp. Must be
      specified.
    </description>
  </property>

  <property>
    <name>ssl.client.keystore.password</name>
    <value></value>
    <description>Optional. Default value is "".
    </description>
  </property>

  <property>
    <name>ssl.client.keystore.keypassword</name>
    <value></value>
    <description>Optional. Default value is "".
    </description>
  </property>

  <property>
    <name>ssl.client.keystore.type</name>
    <value>jks</value>
    <description>Optional. The keystore file format, default value is "jks".
    </description>
  </property>

</configuration>
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements. See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License. You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<configuration>

  <property>
    <name>ssl.server.truststore.location</name>
    <value></value>
    <description>Truststore to be used by NN and DN. Must be specified.
    </description>
  </property>

  <property>
    <name>ssl.server.truststore.password</name>
    <value></value>
    <description>Optional. Default value is "".
    </description>
  </property>

  <property>
    <name>ssl.server.truststore.type</name>
    <value>jks</value>
    <description>Optional. The keystore file format, default value is "jks".
    </description>
  </property>

  <property>
    <name>ssl.server.truststore.reload.interval</name>
    <value>10000</value>
    <description>Truststore reload check interval, in milliseconds.
      Default value is 10000 (10 seconds).
    </description>
  </property>

  <property>
    <name>ssl.server.keystore.location</name>
    <value></value>
    <description>Keystore to be used by NN and DN. Must be specified.
    </description>
  </property>

  <property>
    <name>ssl.server.keystore.password</name>
    <value></value>
    <description>Must be specified.
    </description>
  </property>

  <property>
    <name>ssl.server.keystore.keypassword</name>
    <value></value>
    <description>Must be specified.
    </description>
  </property>

  <property>
    <name>ssl.server.keystore.type</name>
    <value>jks</value>
    <description>Optional. The keystore file format, default value is "jks".
    </description>
  </property>

</configuration>
<?xml version="1.0"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements. See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License. You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://{{ hadoop['nameservice_id'] }}/</value>
  </property>
</configuration>
# Configuration of the "dfs" context for null
dfs.class=org.apache.hadoop.metrics.spi.NullContext

# Configuration of the "dfs" context for file
#dfs.class=org.apache.hadoop.metrics.file.FileContext
#dfs.period=10
#dfs.fileName=/tmp/dfsmetrics.log

# Configuration of the "dfs" context for ganglia
# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
# dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
# dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
# dfs.period=10
# dfs.servers=localhost:8649


# Configuration of the "mapred" context for null
mapred.class=org.apache.hadoop.metrics.spi.NullContext

# Configuration of the "mapred" context for file
#mapred.class=org.apache.hadoop.metrics.file.FileContext
#mapred.period=10
#mapred.fileName=/tmp/mrmetrics.log

# Configuration of the "mapred" context for ganglia
# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
# mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext
# mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
# mapred.period=10
# mapred.servers=localhost:8649


# Configuration of the "jvm" context for null
#jvm.class=org.apache.hadoop.metrics.spi.NullContext

# Configuration of the "jvm" context for file
#jvm.class=org.apache.hadoop.metrics.file.FileContext
#jvm.period=10
#jvm.fileName=/tmp/jvmmetrics.log

# Configuration of the "jvm" context for ganglia
# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
# jvm.period=10
# jvm.servers=localhost:8649

# Configuration of the "rpc" context for null
rpc.class=org.apache.hadoop.metrics.spi.NullContext

# Configuration of the "rpc" context for file
#rpc.class=org.apache.hadoop.metrics.file.FileContext
#rpc.period=10
#rpc.fileName=/tmp/rpcmetrics.log

# Configuration of the "rpc" context for ganglia
# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext
# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
# rpc.period=10
# rpc.servers=localhost:8649


# Configuration of the "ugi" context for null
ugi.class=org.apache.hadoop.metrics.spi.NullContext

# Configuration of the "ugi" context for file
#ugi.class=org.apache.hadoop.metrics.file.FileContext
#ugi.period=10
#ugi.fileName=/tmp/ugimetrics.log

# Configuration of the "ugi" context for ganglia
# ugi.class=org.apache.hadoop.metrics.ganglia.GangliaContext
# ugi.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
# ugi.period=10
# ugi.servers=localhost:8649
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# syntax: [prefix].[source|sink].[instance].[options]
# See javadoc of package-info.java for org.apache.hadoop.metrics2 for details

*.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
# default sampling period, in seconds
*.period=10

# The namenode-metrics.out will contain metrics from all contexts
#namenode.sink.file.filename=namenode-metrics.out
# Specifying a special sampling period for namenode:
#namenode.sink.*.period=8

#datanode.sink.file.filename=datanode-metrics.out

# The following example splits metrics of different
# contexts to different sinks (in this case files)
#jobtracker.sink.file_jvm.context=jvm
#jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out
#jobtracker.sink.file_mapred.context=mapred
#jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out

#tasktracker.sink.file.filename=tasktracker-metrics.out

#maptask.sink.file.filename=maptask-metrics.out

#reducetask.sink.file.filename=reducetask-metrics.out
<?xml version="1.0"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements. See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License. You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>{{ hadoop['nameservice_id'] }}</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.{{ hadoop['nameservice_id'] }}</name>
    <value>{{ groups.hadoop_masters | join(',') }}</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>{{ hadoop['dfs_blocksize'] }}</value>
  </property>
  <property>
    <name>dfs.permissions.superusergroup</name>
    <value>{{ hadoop['dfs_permissions_superusergroup'] }}</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>{{ groups.zookeeper_servers | join(':' ~ hadoop['zookeeper_clientport'] + ',') }}:{{ hadoop['zookeeper_clientport'] }}</value>
  </property>

{% for host in groups['hadoop_masters'] %}
  <property>
    <name>dfs.namenode.rpc-address.{{ hadoop['nameservice_id'] }}.{{ host }}</name>
    <value>{{ host }}:{{ hadoop['fs_default_FS_port'] }}</value>
  </property>
{% endfor %}
{% for host in groups['hadoop_masters'] %}
  <property>
    <name>dfs.namenode.http-address.{{ hadoop['nameservice_id'] }}.{{ host }}</name>
    <value>{{ host }}:{{ hadoop['dfs_namenode_http_address_port'] }}</value>
  </property>
{% endfor %}
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://{{ groups.qjournal_servers | join(':' ~ hadoop['qjournal_port'] + ';') }}:{{ hadoop['qjournal_port'] }}/{{ hadoop['nameservice_id'] }}</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>{{ hadoop['dfs_journalnode_edits_dir'] }}</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.{{ hadoop['nameservice_id'] }}</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>shell(/bin/true)</value>
  </property>

  <property>
    <name>dfs.ha.zkfc.port</name>
    <value>{{ hadoop['dfs_ha_zkfc_port'] }}</value>
  </property>

  <property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:{{ hadoop['dfs_datanode_address_port'] }}</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>0.0.0.0:{{ hadoop['dfs_datanode_http_address_port'] }}</value>
  </property>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>0.0.0.0:{{ hadoop['dfs_datanode_ipc_address_port'] }}</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>{{ hadoop['dfs_replication'] }}</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>{{ hadoop['dfs_namenode_name_dir'] | join(',') }}</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>{{ hadoop['dfs_datanode_data_dir'] | join(',') }}</value>
  </property>
</configuration>
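The `ha.zookeeper.quorum` and `dfs.namenode.shared.edits.dir` values above rely on the same Jinja2 trick: `join()` places `:PORT,` (or `:PORT;`) between hosts, and the template appends a final `:PORT` for the last host. A Python sketch of that expansion, with hypothetical hostnames:

```python
# Mirrors {{ groups.zookeeper_servers | join(':' ~ port + ',') }}:{{ port }}:
# the separator carries the port for every host except the last,
# and the trailing ":port" covers the final host.
def quorum(hosts, port):
    sep = ":%s," % port
    return sep.join(hosts) + ":%s" % port

# Hypothetical ZooKeeper hosts and the default client port.
print(quorum(["zk1", "zk2", "zk3"], 2181))
# → zk1:2181,zk2:2181,zk3:2181
```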
||||
# - yarn.server.resourcemanager.appsummary.logger (resource manager app summary log level and appender) |
||||
|
||||
#log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger} |
||||
#log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false |
||||
#log4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender |
||||
#log4j.appender.RMSUMMARY.File=${hadoop.log.dir}/${yarn.server.resourcemanager.appsummary.log.file} |
||||
#log4j.appender.RMSUMMARY.MaxFileSize=256MB |
||||
#log4j.appender.RMSUMMARY.MaxBackupIndex=20 |
||||
#log4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout |
||||
#log4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n |
<configuration>

  <property>
    <name>mapred.job.tracker</name>
    <value>{{ hadoop['mapred_job_tracker_ha_servicename'] }}</value>
  </property>

  <property>
    <name>mapred.jobtrackers.{{ hadoop['mapred_job_tracker_ha_servicename'] }}</name>
    <value>{{ groups['hadoop_masters'] | join(',') }}</value>
    <description>Comma-separated list of JobTracker IDs.</description>
  </property>

  <property>
    <name>mapred.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>

  <property>
    <name>mapred.ha.zkfc.port</name>
    <value>{{ hadoop['mapred_ha_zkfc_port'] }}</value>
  </property>

  <property>
    <name>mapred.ha.fencing.methods</name>
    <value>shell(/bin/true)</value>
  </property>

  <property>
    <name>ha.zookeeper.quorum</name>
    <value>{{ groups.zookeeper_servers | join(':' ~ hadoop['zookeeper_clientport'] + ',') }}:{{ hadoop['zookeeper_clientport'] }}</value>
  </property>

{% for host in groups['hadoop_masters'] %}
  <property>
    <name>mapred.jobtracker.rpc-address.{{ hadoop['mapred_job_tracker_ha_servicename'] }}.{{ host }}</name>
    <value>{{ host }}:{{ hadoop['mapred_job_tracker_port'] }}</value>
  </property>
{% endfor %}

{% for host in groups['hadoop_masters'] %}
  <property>
    <name>mapred.job.tracker.http.address.{{ hadoop['mapred_job_tracker_ha_servicename'] }}.{{ host }}</name>
    <value>0.0.0.0:{{ hadoop['mapred_job_tracker_http_address_port'] }}</value>
  </property>
{% endfor %}

{% for host in groups['hadoop_masters'] %}
  <property>
    <name>mapred.ha.jobtracker.rpc-address.{{ hadoop['mapred_job_tracker_ha_servicename'] }}.{{ host }}</name>
    <value>{{ host }}:{{ hadoop['mapred_ha_jobtracker_rpc-address_port'] }}</value>
  </property>
{% endfor %}

{% for host in groups['hadoop_masters'] %}
  <property>
    <name>mapred.ha.jobtracker.http-redirect-address.{{ hadoop['mapred_job_tracker_ha_servicename'] }}.{{ host }}</name>
    <value>{{ host }}:{{ hadoop['mapred_job_tracker_http_address_port'] }}</value>
  </property>
{% endfor %}

  <property>
    <name>mapred.jobtracker.restart.recover</name>
    <value>true</value>
  </property>

  <property>
    <name>mapred.job.tracker.persist.jobstatus.active</name>
    <value>true</value>
  </property>

  <property>
    <name>mapred.job.tracker.persist.jobstatus.hours</name>
    <value>1</value>
  </property>

  <property>
    <name>mapred.job.tracker.persist.jobstatus.dir</name>
    <value>{{ hadoop['mapred_job_tracker_persist_jobstatus_dir'] }}</value>
  </property>

  <property>
    <name>mapred.client.failover.proxy.provider.{{ hadoop['mapred_job_tracker_ha_servicename'] }}</name>
    <value>org.apache.hadoop.mapred.ConfiguredFailoverProxyProvider</value>
  </property>

  <property>
    <name>mapred.client.failover.max.attempts</name>
    <value>15</value>
  </property>

  <property>
    <name>mapred.client.failover.sleep.base.millis</name>
    <value>500</value>
  </property>

  <property>
    <name>mapred.client.failover.sleep.max.millis</name>
    <value>1500</value>
  </property>

  <property>
    <name>mapred.client.failover.connection.retries</name>
    <value>0</value>
  </property>

  <property>
    <name>mapred.client.failover.connection.retries.on.timeouts</name>
    <value>0</value>
  </property>

  <property>
    <name>mapred.local.dir</name>
    <value>{{ hadoop["mapred_local_dir"] | join(',') }}</value>
  </property>

  <property>
    <name>mapred.task.tracker.http.address</name>
    <value>0.0.0.0:{{ hadoop['mapred_task_tracker_http_address_port'] }}</value>
  </property>

</configuration>
{% for host in groups['hadoop_slaves'] %}
{{ host }}
{% endfor %}
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements. See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License. You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<configuration>

  <property>
    <name>ssl.client.truststore.location</name>
    <value></value>
    <description>Truststore to be used by clients like distcp. Must be
      specified.
    </description>
  </property>

  <property>
    <name>ssl.client.truststore.password</name>
    <value></value>
    <description>Optional. Default value is "".
    </description>
  </property>

  <property>
    <name>ssl.client.truststore.type</name>
    <value>jks</value>
    <description>Optional. The keystore file format, default value is "jks".
    </description>
  </property>

  <property>
    <name>ssl.client.truststore.reload.interval</name>
    <value>10000</value>
    <description>Truststore reload check interval, in milliseconds.
      Default value is 10000 (10 seconds).
    </description>
  </property>

  <property>
    <name>ssl.client.keystore.location</name>
    <value></value>
    <description>Keystore to be used by clients like distcp. Must be
      specified.
    </description>
  </property>

  <property>
    <name>ssl.client.keystore.password</name>
    <value></value>
    <description>Optional. Default value is "".
    </description>
  </property>

  <property>
    <name>ssl.client.keystore.keypassword</name>
    <value></value>
    <description>Optional. Default value is "".
    </description>
  </property>

  <property>
    <name>ssl.client.keystore.type</name>
    <value>jks</value>
    <description>Optional. The keystore file format, default value is "jks".
    </description>
  </property>

</configuration>
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements. See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License. You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<configuration>

  <property>
    <name>ssl.server.truststore.location</name>
    <value></value>
    <description>Truststore to be used by NN and DN. Must be specified.
    </description>
  </property>

  <property>
    <name>ssl.server.truststore.password</name>
    <value></value>
    <description>Optional. Default value is "".
    </description>
  </property>

  <property>
    <name>ssl.server.truststore.type</name>
    <value>jks</value>
    <description>Optional. The keystore file format, default value is "jks".
    </description>
  </property>

  <property>
    <name>ssl.server.truststore.reload.interval</name>
    <value>10000</value>
    <description>Truststore reload check interval, in milliseconds.
      Default value is 10000 (10 seconds).
    </description>
  </property>

  <property>
    <name>ssl.server.keystore.location</name>
    <value></value>
    <description>Keystore to be used by NN and DN. Must be specified.
    </description>
  </property>

  <property>
    <name>ssl.server.keystore.password</name>
    <value></value>
    <description>Must be specified.
    </description>
  </property>

  <property>
    <name>ssl.server.keystore.keypassword</name>
    <value></value>
    <description>Must be specified.
    </description>
  </property>

  <property>
    <name>ssl.server.keystore.type</name>
    <value>jks</value>
    <description>Optional. The keystore file format, default value is "jks".
    </description>
  </property>

</configuration>
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
{% if 'hadoop_masters' in group_names %}
-A INPUT -p tcp --dport {{ hadoop['fs_default_FS_port'] }} -j ACCEPT
-A INPUT -p tcp --dport {{ hadoop['dfs_namenode_http_address_port'] }} -j ACCEPT
-A INPUT -p tcp --dport {{ hadoop['mapred_job_tracker_port'] }} -j ACCEPT
-A INPUT -p tcp --dport {{ hadoop['mapred_job_tracker_http_address_port'] }} -j ACCEPT
-A INPUT -p tcp --dport {{ hadoop['mapred_ha_jobtracker_rpc-address_port'] }} -j ACCEPT
-A INPUT -p tcp --dport {{ hadoop['mapred_ha_zkfc_port'] }} -j ACCEPT
-A INPUT -p tcp --dport {{ hadoop['dfs_ha_zkfc_port'] }} -j ACCEPT
{% endif %}

{% if 'hadoop_slaves' in group_names %}
-A INPUT -p tcp --dport {{ hadoop['dfs_datanode_address_port'] }} -j ACCEPT
-A INPUT -p tcp --dport {{ hadoop['dfs_datanode_http_address_port'] }} -j ACCEPT
-A INPUT -p tcp --dport {{ hadoop['dfs_datanode_ipc_address_port'] }} -j ACCEPT
-A INPUT -p tcp --dport {{ hadoop['mapred_task_tracker_http_address_port'] }} -j ACCEPT
{% endif %}

{% if 'qjournal_servers' in group_names %}
-A INPUT -p tcp --dport {{ hadoop['qjournal_port'] }} -j ACCEPT
-A INPUT -p tcp --dport {{ hadoop['qjournal_http_port'] }} -j ACCEPT
{% endif %}

{% if 'zookeeper_servers' in group_names %}
-A INPUT -p tcp --dport {{ hadoop['zookeeper_clientport'] }} -j ACCEPT
-A INPUT -p tcp --dport {{ hadoop['zookeeper_leader_port'] }} -j ACCEPT
-A INPUT -p tcp --dport {{ hadoop['zookeeper_election_port'] }} -j ACCEPT
{% endif %}
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
---
# Handlers for the hadoop master services

- name: restart hadoop master services
  service: name={{ item }} state=restarted
  with_items:
    - hadoop-0.20-mapreduce-jobtracker
    - hadoop-hdfs-namenode

- name: restart hadoopha master services
  service: name={{ item }} state=restarted
  with_items:
    - hadoop-0.20-mapreduce-jobtrackerha
    - hadoop-hdfs-namenode
---
# Playbook for Hadoop master servers

- name: Install the namenode and jobtracker packages
  yum: name={{ item }} state=installed
  with_items:
    - hadoop-0.20-mapreduce-jobtrackerha
    - hadoop-hdfs-namenode
    - hadoop-hdfs-zkfc
    - hadoop-0.20-mapreduce-zkfc

- name: Copy the hadoop configuration files
  template: src=roles/common/templates/hadoop_ha_conf/{{ item }}.j2 dest=/etc/hadoop/conf/{{ item }}
  with_items:
    - core-site.xml
    - hadoop-metrics.properties
    - hadoop-metrics2.properties
    - hdfs-site.xml
    - log4j.properties
    - mapred-site.xml
    - slaves
    - ssl-client.xml.example
    - ssl-server.xml.example
  notify: restart hadoopha master services

- name: Create the data directory for the namenode metadata
  file: path={{ item }} owner=hdfs group=hdfs state=directory
  with_items: hadoop.dfs_namenode_name_dir

- name: Create the data directory for the jobtracker ha
  file: path={{ item }} owner=mapred group=mapred state=directory
  with_items: hadoop.mapred_job_tracker_persist_jobstatus_dir

- name: Format the namenode
  shell: creates=/usr/lib/hadoop/namenode.formatted su - hdfs -c "hadoop namenode -format" && touch /usr/lib/hadoop/namenode.formatted

- name: start hadoop namenode services
  service: name=hadoop-hdfs-namenode state=started
---
# Playbook for Hadoop master servers

- name: Install the namenode and jobtracker packages
  yum: name={{ item }} state=installed
  with_items:
    - hadoop-0.20-mapreduce-jobtracker
    - hadoop-hdfs-namenode

- name: Copy the hadoop configuration files for no ha
  template: src=roles/common/templates/hadoop_conf/{{ item }}.j2 dest=/etc/hadoop/conf/{{ item }}
  with_items:
    - core-site.xml
    - hadoop-metrics.properties
    - hadoop-metrics2.properties
    - hdfs-site.xml
    - log4j.properties
    - mapred-site.xml
    - slaves
    - ssl-client.xml.example
    - ssl-server.xml.example
  notify: restart hadoop master services

- name: Create the data directory for the namenode metadata
  file: path={{ item }} owner=hdfs group=hdfs state=directory
  with_items: hadoop.dfs_namenode_name_dir

- name: Format the namenode
  shell: creates=/usr/lib/hadoop/namenode.formatted su - hdfs -c "hadoop namenode -format" && touch /usr/lib/hadoop/namenode.formatted

- name: start hadoop namenode services
  service: name=hadoop-hdfs-namenode state=started

- name: Give permissions for mapred users
  shell: creates=/usr/lib/hadoop/namenode.initialized su - hdfs -c "hadoop fs -chown hdfs:hadoop /"; su - hdfs -c "hadoop fs -chmod 0775 /" && touch /usr/lib/hadoop/namenode.initialized

- name: start hadoop jobtracker services
  service: name=hadoop-0.20-mapreduce-jobtracker state=started
---
# Playbook for Hadoop master primary servers

- include: hadoop_master.yml
  when: ha_enabled

- include: hadoop_master_no_ha.yml
  when: not ha_enabled
---
# Handlers for the hadoop master services

- name: restart hadoop master services
  service: name={{ item }} state=restarted
  with_items:
    - hadoop-0.20-mapreduce-jobtracker
    - hadoop-hdfs-namenode

- name: restart hadoopha master services
  service: name={{ item }} state=restarted
  with_items:
    - hadoop-0.20-mapreduce-jobtrackerha
    - hadoop-hdfs-namenode
---
# Playbook for Hadoop master secondary server

- name: Install the namenode and jobtracker packages
  yum: name={{ item }} state=installed
  with_items:
    - hadoop-0.20-mapreduce-jobtrackerha
    - hadoop-hdfs-namenode
    - hadoop-hdfs-zkfc
    - hadoop-0.20-mapreduce-zkfc

- name: Copy the hadoop configuration files
  template: src=roles/common/templates/hadoop_ha_conf/{{ item }}.j2 dest=/etc/hadoop/conf/{{ item }}
  with_items:
    - core-site.xml
    - hadoop-metrics.properties
    - hadoop-metrics2.properties
    - hdfs-site.xml
    - log4j.properties
    - mapred-site.xml
    - slaves
    - ssl-client.xml.example
    - ssl-server.xml.example
  notify: restart hadoopha master services

- name: Create the data directory for the namenode metadata
  file: path={{ item }} owner=hdfs group=hdfs state=directory
  with_items: hadoop.dfs_namenode_name_dir

- name: Create the data directory for the jobtracker ha
  file: path={{ item }} owner=mapred group=mapred state=directory
  with_items: hadoop.mapred_job_tracker_persist_jobstatus_dir

- name: Initialize the secondary namenode
  shell: creates=/usr/lib/hadoop/namenode.formatted su - hdfs -c "hadoop namenode -bootstrapStandby" && touch /usr/lib/hadoop/namenode.formatted

- name: start hadoop namenode services
  service: name=hadoop-hdfs-namenode state=started

- name: Initialize the zkfc for namenode
  shell: creates=/usr/lib/hadoop/zkfc.formatted su - hdfs -c "hdfs zkfc -formatZK" && touch /usr/lib/hadoop/zkfc.formatted

- name: start zkfc for namenodes
  service: name=hadoop-hdfs-zkfc state=started
  delegate_to: '{{ item }}'
  with_items: groups.hadoop_masters

- name: Give permissions for mapred users
  shell: creates=/usr/lib/hadoop/fs.initialized su - hdfs -c "hadoop fs -chown hdfs:hadoop /"; su - hdfs -c "hadoop fs -chmod 0774 /" && touch /usr/lib/hadoop/fs.initialized

- name: Initialize the zkfc for jobtracker
  shell: creates=/usr/lib/hadoop/zkfcjob.formatted su - mapred -c "hadoop mrzkfc -formatZK" && touch /usr/lib/hadoop/zkfcjob.formatted

- name: start zkfc for jobtracker
  service: name=hadoop-0.20-mapreduce-zkfc state=started
  delegate_to: '{{ item }}'
  with_items: groups.hadoop_masters

- name: start hadoop Jobtracker services
  service: name=hadoop-0.20-mapreduce-jobtrackerha state=started
  delegate_to: '{{ item }}'
  with_items: groups.hadoop_masters
---
# Handlers for the hadoop slave services

- name: restart hadoop slave services
  service: name={{ item }} state=restarted
  with_items:
    - hadoop-0.20-mapreduce-tasktracker
    - hadoop-hdfs-datanode
---
# Playbook for Hadoop slave servers

- include: slaves.yml tags=slaves
---
# Playbook for Hadoop slave servers

- name: Install the datanode and tasktracker packages
  yum: name={{ item }} state=installed
  with_items:
    - hadoop-0.20-mapreduce-tasktracker
    - hadoop-hdfs-datanode

- name: Copy the hadoop configuration files
  template: src=roles/common/templates/hadoop_ha_conf/{{ item }}.j2 dest=/etc/hadoop/conf/{{ item }}
  with_items:
    - core-site.xml
    - hadoop-metrics.properties
    - hadoop-metrics2.properties
    - hdfs-site.xml
    - log4j.properties
    - mapred-site.xml
    - slaves
    - ssl-client.xml.example
    - ssl-server.xml.example
  when: ha_enabled
  notify: restart hadoop slave services

- name: Copy the hadoop configuration files for non ha
  template: src=roles/common/templates/hadoop_conf/{{ item }}.j2 dest=/etc/hadoop/conf/{{ item }}
  with_items:
    - core-site.xml
    - hadoop-metrics.properties
    - hadoop-metrics2.properties
    - hdfs-site.xml
    - log4j.properties
    - mapred-site.xml
    - slaves
    - ssl-client.xml.example
    - ssl-server.xml.example
  when: not ha_enabled
  notify: restart hadoop slave services

- name: Create the data directory for the slave nodes to store the data
  file: path={{ item }} owner=hdfs group=hdfs state=directory
  with_items: hadoop.dfs_datanode_data_dir

- name: Create the data directory for the slave nodes for mapreduce
  file: path={{ item }} owner=mapred group=mapred state=directory
  with_items: hadoop.mapred_local_dir

- name: start hadoop slave services
  service: name={{ item }} state=started
  with_items:
    - hadoop-0.20-mapreduce-tasktracker
    - hadoop-hdfs-datanode
---
# The journal node handlers

- name: restart qjournal services
  service: name=hadoop-hdfs-journalnode state=restarted
---
# Playbook for the qjournal nodes

- name: Install the qjournal package
  yum: name=hadoop-hdfs-journalnode state=installed

- name: Create folder for Journaling
  file: path={{ hadoop.dfs_journalnode_edits_dir }} state=directory owner=hdfs group=hdfs

- name: Copy the hadoop configuration files
  template: src=roles/common/templates/hadoop_ha_conf/{{ item }}.j2 dest=/etc/hadoop/conf/{{ item }}
  with_items:
    - core-site.xml
    - hadoop-metrics.properties
    - hadoop-metrics2.properties
    - hdfs-site.xml
    - log4j.properties
    - mapred-site.xml
    - slaves
    - ssl-client.xml.example
    - ssl-server.xml.example
  notify: restart qjournal services
---
# Handler for the zookeeper services

- name: restart zookeeper
  service: name=zookeeper-server state=restarted
---
# The plays for zookeeper daemons

- name: Install the zookeeper files
  yum: name=zookeeper-server state=installed

- name: Copy the configuration file for zookeeper
  template: src=zoo.cfg.j2 dest=/etc/zookeeper/conf/zoo.cfg
  notify: restart zookeeper

- name: initialize the zookeeper
  shell: creates=/var/lib/zookeeper/myid service zookeeper-server init --myid={{ zoo_id }}
tickTime=2000
dataDir=/var/lib/zookeeper/
clientPort={{ hadoop['zookeeper_clientport'] }}
initLimit=5
syncLimit=2
{% for host in groups['zookeeper_servers'] %}
server.{{ hostvars[host].zoo_id }}={{ host }}:{{ hadoop['zookeeper_leader_port'] }}:{{ hadoop['zookeeper_election_port'] }}
{% endfor %}
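For reference, here is what this template renders to with three zookeeper servers. The hostnames (zk1, zk2, zk3 with `zoo_id` values 1, 2 and 3) and the port values (2181 for `zookeeper_clientport`, 2888 for `zookeeper_leader_port`, 3888 for `zookeeper_election_port`) are assumptions for the example; the real values come from the inventory group `zookeeper_servers` and the `hadoop` variable dictionary.

```
tickTime=2000
dataDir=/var/lib/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888
```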
---
# The main playbook to deploy the site

- hosts: hadoop_all
  roles:
    - common

- hosts: zookeeper_servers
  roles:
    - { role: zookeeper_servers, when: ha_enabled }

- hosts: qjournal_servers
  roles:
    - { role: qjournal_servers, when: ha_enabled }

- hosts: hadoop_master_primary
  roles:
    - { role: hadoop_primary }

- hosts: hadoop_master_secondary
  roles:
    - { role: hadoop_secondary, when: ha_enabled }

- hosts: hadoop_slaves
  roles:
    - { role: hadoop_slaves }
Copyright (C) 2013 AnsibleWorks, Inc.

This work is licensed under the Creative Commons Attribution 3.0 Unported License.
To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/deed.en_US.
# Deploying a Highly Available, Production-Ready OpenShift Deployment

- Requires Ansible 1.3
- Expects CentOS/RHEL 6 hosts (64 bit)
- RHEL 6 requires the rhel-x86_64-server-optional-6 channel to be enabled

## A Primer into OpenShift Architecture

### OpenShift Overview

OpenShift Origin is the next-generation application hosting platform that enables users to create, deploy, and manage applications within their cloud; in other words, it provides a PaaS (Platform as a Service). This frees developers from time-consuming chores like machine provisioning and application deployment. OpenShift provides disk space, CPU resources, memory, network connectivity, and various application platforms such as JBoss, Python, and MySQL, so developers can spend their time coding and testing new applications rather than figuring out how to acquire and configure these resources.

### OpenShift Components

Here's a list and a brief overview of the different components used by OpenShift.

- Broker: the single point of contact for all application management activities. It is responsible for managing user logins, DNS, application state, and general orchestration of the application. Customers don't contact the broker directly; instead they use the web console, CLI tools, or JBoss tools to interact with the broker over a REST-based API.

- Cartridges: provide the actual functionality necessary to run the user application. OpenShift currently supports many language cartridges (JBoss, PHP, Ruby, etc.) as well as many database cartridges (Postgres, MySQL, MongoDB, etc.). If a user needs to deploy a PHP application with MySQL as a backend, they can simply ask the broker to deploy a PHP and a MySQL cartridge on separate "Gears".

- Gear: Gears provide a resource-constrained container to run one or more cartridges. They limit the amount of RAM and disk space available to a cartridge. For simplicity we can think of a Gear as a separate VM or Linux container running an application for a specific tenant, but in reality they are containers created with SELinux contexts and PAM namespacing.

- Node: the physical machines where Gears are allocated. Gears are generally over-allocated on nodes, since not all applications are active at the same time.

- BSN (Broker Support Nodes): the nodes which run the supporting applications that OpenShift itself needs. For example, OpenShift uses MongoDB to store various user/app details, and it uses ActiveMQ to communicate with the application nodes via MCollective. The nodes which host these supporting applications are called Broker Support Nodes.

- Districts: resource pools which can be used to separate the application nodes based on performance or environment. For example, in a production deployment we can have two districts of nodes, one with lower memory/CPU/disk resources and another for high-performance applications.

### An Overview of the Application Creation Process in OpenShift

![Alt text](images/app_deploy.png "App")

The above figure depicts the steps involved in creating an application in OpenShift. If a developer wants to create or deploy a JBoss & MySQL application, they can request it from any of the available client tools: the Eclipse IDE, the command line tool (RHC), or even a web browser (the management console).

Once the user has instructed the client tool to deploy a JBoss & MySQL application, the client tool makes a web service request to the broker to provision the resources. The broker in turn queries the nodes for Gear and cartridge availability; if the resources are available, two Gears are created and the JBoss and MySQL cartridges are deployed on them. The user is then notified and can access the Gears via SSH to start deploying code.
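The provisioning flow just described (client requests, broker places cartridges on gears across nodes) can be sketched as a toy model. This is purely illustrative: `Broker`, `Node`, and `provision` below are invented names for the example, not actual OpenShift APIs.

```python
# Toy model of the app-creation flow: the broker places each requested
# cartridge on its own gear, on any node that still has capacity.
class Node:
    def __init__(self, capacity):
        self.capacity = capacity  # how many more gears this node can host
        self.gears = []

    def create_gear(self, cartridge):
        self.capacity -= 1
        self.gears.append(cartridge)
        return cartridge

class Broker:
    def __init__(self, nodes):
        self.nodes = nodes

    def provision(self, cartridges):
        """Place each cartridge on its own gear; fail if no node has room."""
        placed = []
        for cart in cartridges:
            node = next((n for n in self.nodes if n.capacity > 0), None)
            if node is None:
                raise RuntimeError("no gear capacity available")
            placed.append(node.create_gear(cart))
        return placed

broker = Broker([Node(capacity=2)])
print(broker.provision(["jbosseap", "mysql"]))  # two gears, one per cartridge
```

In the real system the "query the nodes" step happens over MCollective/ActiveMQ rather than an in-process loop, but the placement logic follows the same shape.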
||||
|
||||
|
||||
### Deployment Diagram of OpenShift via Ansible. |
||||
|
||||
![Alt text](images/arch.png "App") |
||||
|
||||
The above diagram shows the Ansible playbooks deploying a highly-available Openshift PaaS environment. The deployment has two servers running LVS (Piranha) for load balancing and provides HA for the Brokers. Two instances of Brokers also run for fault tolerance. Ansible also configures a DNS server which provides name resolution for all the new apps created in the OpenShift environment. |
||||
|
||||
Three BSN (Broker Support Node) nodes provide a replicated MongoDB deployment and the same nodes run three instances of a highly-available ActiveMQ cluster. There is no limitation on the number of application nodes you can deploy–the user just needs to add the hostnames of the OpenShift nodes to the Ansible inventory and Ansible will configure all of them. |

Note: As a best practice, if the deployment is in an actual production environment it is recommended to integrate with the infrastructure's internal DNS server for name resolution, and to use LDAP or an existing Active Directory for user authentication.

## Deployment Steps for OpenShift via Ansible

The first step is to set up Ansible on the management host. Assuming the host is a RHEL variant, install the EPEL package:

    yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Once the EPEL repository is installed, Ansible can be installed via the following command:

    yum install ansible

It is recommended to use separate machines for the different components of OpenShift, but if you are just testing it out you can combine the services. At least four nodes are mandatory, however, as MongoDB and ActiveMQ each need at least three members for their clusters to work properly.
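As a hypothetical minimal test layout (hostnames are placeholders, and this packing is untested), four machines could be arranged so the first three carry the clustered services while the fourth acts as the application node:

```ini
[dns]
host1.example.com

[mongo_servers]
host1.example.com
host2.example.com
host3.example.com

[mq]
host1.example.com
host2.example.com
host3.example.com

[broker]
host1.example.com
host2.example.com

[lvs]
host2.example.com
host3.example.com

[nodes]
host4.example.com
```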

Next, check out this repository onto the Ansible management host and set up the inventory (hosts) file as follows:

    git clone https://github.com/ansible/ansible-examples.git

    [dns]
    ec2-54-226-116-175.compute-1.amazonaws.com

    [mongo_servers]
    ec2-54-226-116-175.compute-1.amazonaws.com
    ec2-54-227-131-56.compute-1.amazonaws.com
    ec2-54-227-169-137.compute-1.amazonaws.com

    [mq]
    ec2-54-226-116-175.compute-1.amazonaws.com
    ec2-54-227-131-56.compute-1.amazonaws.com
    ec2-54-227-169-137.compute-1.amazonaws.com

    [broker]
    ec2-54-227-63-48.compute-1.amazonaws.com
    ec2-54-227-171-2.compute-1.amazonaws.com

    [nodes]
    ec2-54-227-146-187.compute-1.amazonaws.com

    [lvs]
    ec2-54-227-176-123.compute-1.amazonaws.com
    ec2-54-227-177-87.compute-1.amazonaws.com

Once the inventory is set up with the hosts in your environment, the OpenShift stack can be deployed by issuing the following command:

    ansible-playbook -i hosts site.yml
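Before a full run, it can help to validate the playbooks first; both flags below are standard ansible-playbook options (note that check mode may report failures for tasks that depend on the results of earlier, skipped tasks):

```shell
# Parse the playbooks without contacting any host
ansible-playbook -i hosts site.yml --syntax-check

# Dry-run: report what would change without changing it
ansible-playbook -i hosts site.yml --check
```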

### Verifying the Installation

Once the stack has been successfully deployed, we can check whether the different components were deployed correctly.

- MongoDB: Log in to any BSN node running MongoDB and issue the following command. Output similar to the following should be displayed, showing that the Mongo cluster is up with a primary node and two secondary nodes.

    [root@ip-10-165-33-186 ~]# mongo 127.0.0.1:2700/admin -u admin -p passme
    MongoDB shell version: 2.2.3
    connecting to: 127.0.0.1:2700/admin
    openshift:PRIMARY> rs.status()
    {
        "set" : "openshift",
        "date" : ISODate("2013-07-21T18:56:27Z"),
        "myState" : 1,
        "members" : [
            {
                "_id" : 0,
                "name" : "ip-10-165-33-186:2700",
                "health" : 1,
                "state" : 1,
                "stateStr" : "PRIMARY",
                "uptime" : 804,
                "optime" : {
                    "t" : 1374432940000,
                    "i" : 1
                },
                "optimeDate" : ISODate("2013-07-21T18:55:40Z"),
                "self" : true
            },
            {
                "_id" : 1,
                "name" : "ec2-54-227-131-56.compute-1.amazonaws.com:2700",
                "health" : 1,
                "state" : 2,
                "stateStr" : "SECONDARY",
                "uptime" : 431,
                "optime" : {
                    "t" : 1374432940000,
                    "i" : 1
                },
                "optimeDate" : ISODate("2013-07-21T18:55:40Z"),
                "lastHeartbeat" : ISODate("2013-07-21T18:56:26Z"),
                "pingMs" : 0
            },
            {
                "_id" : 2,
                "name" : "ec2-54-227-169-137.compute-1.amazonaws.com:2700",
                "health" : 1,
                "state" : 2,
                "stateStr" : "SECONDARY",
                "uptime" : 423,
                "optime" : {
                    "t" : 1374432940000,
                    "i" : 1
                },
                "optimeDate" : ISODate("2013-07-21T18:55:40Z"),
                "lastHeartbeat" : ISODate("2013-07-21T18:56:26Z"),
                "pingMs" : 0
            }
        ],
        "ok" : 1
    }
    openshift:PRIMARY>
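The same check can be scripted non-interactively with the mongo shell's `--eval` option (the port and credentials here are the sample values from group_vars/all):

```shell
# Print each replica set member and its current state
mongo 127.0.0.1:2700/admin -u admin -p passme --eval \
  'rs.status().members.forEach(function(m) { print(m.name + " " + m.stateStr); })'
```

A healthy cluster prints one PRIMARY and two SECONDARY lines.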

- ActiveMQ: To verify the status of the ActiveMQ cluster, browse to the following URL on any one of the mq nodes, providing the user admin and the password specified in the group_vars/all file. The browser should bring up a page similar to the one shown below, listing the other two mq nodes in the cluster that this node has joined.

    http://ec2-54-226-116-175.compute-1.amazonaws.com:8161/admin/network.jsp
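The same page can be fetched from a script with curl (the password here is the sample value from group_vars/all; substitute your own mq hostname):

```shell
# Fetch the network-connector page; the peer mq hostnames should appear in the output
curl -s -u admin:passme \
  http://ec2-54-226-116-175.compute-1.amazonaws.com:8161/admin/network.jsp \
  | grep compute-1.amazonaws.com
```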

![Alt text](images/mq.png "App")

- Broker: To check whether a broker node was installed and configured successfully, issue the following command on the broker node. Output similar to the following should be displayed; make sure there is a PASS at the end.

    [root@ip-10-118-127-30 ~]# oo-accept-broker -v
    INFO: Broker package is: openshift-origin-broker
    INFO: checking packages
    INFO: checking package ruby
    INFO: checking package rubygem-openshift-origin-common
    INFO: checking package rubygem-openshift-origin-controller
    INFO: checking package openshift-origin-broker
    INFO: checking package ruby193-rubygem-rails
    INFO: checking package ruby193-rubygem-passenger
    INFO: checking package ruby193-rubygems
    INFO: checking ruby requirements
    INFO: checking ruby requirements for openshift-origin-controller
    INFO: checking ruby requirements for config/application
    INFO: checking that selinux modules are loaded
    NOTICE: SELinux is Enforcing
    NOTICE: SELinux is Enforcing
    INFO: SELinux boolean httpd_unified is enabled
    INFO: SELinux boolean httpd_can_network_connect is enabled
    INFO: SELinux boolean httpd_can_network_relay is enabled
    INFO: SELinux boolean httpd_run_stickshift is enabled
    INFO: SELinux boolean allow_ypbind is enabled
    INFO: checking firewall settings
    INFO: checking mongo datastore configuration
    INFO: Datastore Host: ec2-54-226-116-175.compute-1.amazonaws.com
    INFO: Datastore Port: 2700
    INFO: Datastore User: admin
    INFO: Datastore SSL: false
    INFO: Datastore Password has been set to non-default
    INFO: Datastore DB Name: admin
    INFO: Datastore: mongo db service is remote
    INFO: checking mongo db login access
    INFO: mongo db login successful: ec2-54-226-116-175.compute-1.amazonaws.com:2700/admin --username admin
    INFO: checking services
    INFO: checking cloud user authentication
    INFO: auth plugin = OpenShift::RemoteUserAuthService
    INFO: auth plugin: OpenShift::RemoteUserAuthService
    INFO: checking remote-user auth configuration
    INFO: Auth trusted header: REMOTE_USER
    INFO: Auth passthrough is enabled for OpenShift services
    INFO: Got HTTP 200 response from https://localhost/broker/rest/api
    INFO: Got HTTP 200 response from https://localhost/broker/rest/cartridges
    INFO: Got HTTP 401 response from https://localhost/broker/rest/user
    INFO: Got HTTP 401 response from https://localhost/broker/rest/domains
    INFO: checking dynamic dns plugin
    INFO: dynamic dns plugin = OpenShift::BindPlugin
    INFO: checking bind dns plugin configuration
    INFO: DNS Server: 10.165.33.186
    INFO: DNS Port: 53
    INFO: DNS Zone: example.com
    INFO: DNS Domain Suffix: example.com
    INFO: DNS Update Auth: key
    INFO: DNS Key Name: example.com
    INFO: DNS Key Value: *****
    INFO: adding txt record named testrecord.example.com to server 10.165.33.186: key0
    INFO: txt record successfully added
    INFO: deleteing txt record named testrecord.example.com to server 10.165.33.186: key0
    INFO: txt record successfully deleted
    INFO: checking messaging configuration
    INFO: messaging plugin = OpenShift::MCollectiveApplicationContainerProxy
    PASS

- Node: To verify that a node was installed and configured successfully, issue the following command and check for output similar to that shown below.

    [root@ip-10-152-154-18 ~]# oo-accept-node -v
    INFO: using default accept-node extensions
    INFO: loading node configuration file /etc/openshift/node.conf
    INFO: loading resource limit file /etc/openshift/resource_limits.conf
    INFO: finding external network device
    INFO: checking node public hostname resolution
    INFO: checking selinux status
    INFO: checking selinux openshift-origin policy
    INFO: checking selinux booleans
    INFO: checking package list
    INFO: checking services
    INFO: checking kernel semaphores >= 512
    INFO: checking cgroups configuration
    INFO: checking cgroups processes
    INFO: checking filesystem quotas
    INFO: checking quota db file selinux label
    INFO: checking 0 user accounts
    INFO: checking application dirs
    INFO: checking system httpd configs
    INFO: checking cartridge repository
    PASS
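Rather than logging in to each node, the same check can be run across the whole `[nodes]` group from the management host with an ad-hoc Ansible command:

```shell
# Run the acceptance check on every host in the [nodes] inventory group
ansible nodes -i hosts -u root -m command -a "oo-accept-node -v"
```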

- LVS (load balancer): To check the load balancer, log in to the active LVS machine and issue the following command. The output shows the two brokers between which the load balancer is distributing traffic.

    [root@ip-10-145-204-43 ~]# ipvsadm
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  ip-192-168-1-1.ec2.internal: rr
      -> ec2-54-227-63-48.compute-1.a Route   1      0          0
      -> ec2-54-227-171-2.compute-1.a Route   2      0          0

## Creating an App in OpenShift

To create an application in OpenShift, access the management console via any browser; either the VIP specified in group_vars/all or the IP address of any broker node can be used.

    https://<ip-of-broker-or-vip>/

The page will prompt for a login; use demo/passme. Once logged in, follow the on-screen instructions to create your first application.

Note: The Python 2.6 cartridge is installed by default by the playbooks, so choose Python 2.6 as the cartridge.

## Deploying OpenShift in EC2

The repository also has playbooks that deploy the highly available OpenShift environment in EC2. The playbooks should also be able to deploy the cluster in any EC2-API-compatible cloud, such as Eucalyptus.

Before deploying, please make sure:

- A security group is created which allows SSH and HTTP/HTTPS traffic.
- The access/secret keys are entered in group_vars/all.
- The number of application nodes required for the cluster is specified in group_vars/all via the "count" variable.

Once that is done, the cluster can be deployed simply by issuing the command:

    ansible-playbook -i ec2hosts ec2.yml -e id=openshift
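Because 'id' namespaces the instances, several independent clusters can coexist; extra vars passed with -e also override the "count" from group_vars/all, so the node count can be set per deployment (the ids and counts below are illustrative):

```shell
# A small development cluster with two application nodes
ansible-playbook -i ec2hosts ec2.yml -e "id=openshift-dev count=2"

# A larger cluster with six application nodes
ansible-playbook -i ec2hosts ec2.yml -e "id=openshift-prod count=6"
```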

Note: 'id' is a unique identifier for the cluster; if you are deploying multiple clusters, please make sure the value given is different for each deployment. The role of each created instance can be found by checking the Tags tab in the EC2 console.

### Removing the cluster from EC2

To remove a deployed OpenShift cluster from EC2, run the following command. The id parameter should be the same as the one given when creating the instances (it can also be found in the Tags tab of the EC2 console).

    ansible-playbook -i ec2hosts ec2_remove.yml -e id=openshift5

## HA Tests

A few tests that can be performed to verify high availability:

- Shut down any broker and try to create a new application.
- Shut down any one mongo/mq node and try to create a new application.
- Shut down either load-balancing machine; the management application should remain available via the virtual IP.
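The failure scenarios above can be triggered from the management host with ad-hoc commands; the subscript host pattern picks a single member of a group. Shutting a machine down is destructive, so only do this in a test environment:

```shell
# Stop the first broker in the inventory, then try creating an application
ansible "broker[0]" -i hosts -u root -m command -a "shutdown -h now"

# Stop one of the mongo/mq hosts and repeat the test
ansible "mongo_servers[0]" -i hosts -u root -m command -a "shutdown -h now"
```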
# config file for ansible -- http://ansibleworks.com/
# ==================================================

# nearly all parameters can be overridden in ansible-playbook
# or with command line flags. ansible will read ~/.ansible.cfg,
# ansible.cfg in the current working directory or
# /etc/ansible/ansible.cfg, whichever it finds first

[defaults]

# some basic default values...

hostfile = /etc/ansible/hosts
library = /usr/share/ansible
remote_tmp = $HOME/.ansible/tmp
pattern = *
forks = 5
poll_interval = 15
sudo_user = root
#ask_sudo_pass = True
#ask_pass = True
transport = smart
remote_port = 22

# uncomment this to disable SSH key host checking
host_key_checking = False

# change this for alternative sudo implementations
sudo_exe = sudo

# what flags to pass to sudo
#sudo_flags = -H

# SSH timeout
timeout = 10

# default user to use for playbooks if user is not specified
# (/usr/bin/ansible will use current user as default)
#remote_user = root

# logging is off by default unless this path is defined
# if so defined, consider logrotate
#log_path = /var/log/ansible.log

# default module name for /usr/bin/ansible
#module_name = command

# use this shell for commands executed under sudo
# you may need to change this to bin/bash in rare instances
# if sudo is constrained
#executable = /bin/sh

# if inventory variables overlap, does the higher precedence one win
# or are hash values merged together? The default is 'replace' but
# this can also be set to 'merge'.
#hash_behaviour = replace

# How to handle variable replacement - as of 1.2, Jinja2 variable syntax is
# preferred, but we still support the old $variable replacement too.
# Turn off ${old_style} variables here if you like.
#legacy_playbook_variables = yes

# list any Jinja2 extensions to enable here:
#jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n

# if set, always use this private key file for authentication, same as
# if passing --private-key to ansible or ansible-playbook
#private_key_file = /path/to/file

# format of string {{ ansible_managed }} available within Jinja2
# templates indicates to users editing templates files will be replaced.
# replacing {file}, {host} and {uid} and strftime codes with proper values.
ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}

# by default (as of 1.3), Ansible will raise errors when attempting to dereference
# Jinja2 variables that are not set in templates or action lines. Uncomment this line
# to revert the behavior to pre-1.3.
#error_on_undefined_vars = False

# set plugin path directories here, separate with colons
action_plugins     = /usr/share/ansible_plugins/action_plugins
callback_plugins   = /usr/share/ansible_plugins/callback_plugins
connection_plugins = /usr/share/ansible_plugins/connection_plugins
lookup_plugins     = /usr/share/ansible_plugins/lookup_plugins
vars_plugins       = /usr/share/ansible_plugins/vars_plugins
filter_plugins     = /usr/share/ansible_plugins/filter_plugins

# don't like cows? that's unfortunate.
# set to 1 if you don't want cowsay support or export ANSIBLE_NOCOWS=1
#nocows = 1

# don't like colors either?
# set to 1 if you don't want colors, or export ANSIBLE_NOCOLOR=1
#nocolor = 1

[paramiko_connection]

# uncomment this line to cause the paramiko connection plugin to not record new host
# keys encountered. Increases performance on new host additions. Setting works independently of the
# host key checking setting above.

#record_host_keys=False

[ssh_connection]

# ssh arguments to use
# Leaving off ControlPersist will result in poor performance, so use
# paramiko on older platforms rather than removing it
#ssh_args = -o ControlMaster=auto -o ControlPersist=60s

# if True, make ansible use scp if the connection type is ssh
# (default is sftp)
#scp_if_ssh = True

- hosts: localhost
  connection: local
  pre_tasks:
    - fail: msg="Please make sure the variable id is specified and unique in the command line -e id=uniquedev1"
      when: id is not defined

  roles:
    - role: ec2
      type: dns
      ncount: 1

    - role: ec2
      type: mq
      ncount: 3

    - role: ec2
      type: broker
      ncount: 2

    - role: ec2
      type: nodes
      ncount: "{{ count }}"

  post_tasks:
    - name: Wait for the instances to come up
      wait_for: delay=10 host={{ item.public_dns_name }} port=22 state=started timeout=360
      with_items: ec2.instances

    - debug: msg="{{ groups }}"

- hosts: all:!localhost
  user: root
  roles:
    - role: common

- hosts: dns
  user: root
  roles:
    - role: dns

- hosts: mongo_servers
  user: root
  roles:
    - role: mongodb

- hosts: mq
  user: root
  roles:
    - role: mq

- hosts: broker
  user: root
  roles:
    - role: broker

- hosts: nodes
  user: root
  roles:
    - role: nodes
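After the initial run, an individual tier can be reconfigured without touching the rest by limiting the run to one inventory group, e.g.:

```shell
# Re-run only the plays that match hosts in the broker group
ansible-playbook -i hosts site.yml --limit broker
```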
- hosts: localhost
  connection: local
  pre_tasks:
    - fail: msg="Please make sure the variable id is specified and unique in the command line -e id=uniquedev1"
      when: id is not defined

  roles:
    - role: ec2_remove
      type: dns
      ncount: 1

    - role: ec2_remove
      type: mq
      ncount: 3

    - role: ec2_remove
      type: broker
      ncount: 2

    - role: ec2_remove
      type: nodes
      ncount: "{{ count }}"

localhost

---
# Global Vars for OpenShift

# EC2-specific variables
ec2_access_key: "AKIUFDNXQ"
ec2_secret_key: "RyhTz1wzZ3kmtMEu"
keypair: "axialkey"
instance_type: "m1.small"
image: "ami-bf5021d6"
group: "default"
count: 2
ec2_elbs: oselb
region: "us-east-1"
zone: "us-east-1a"

iface: '{{ ansible_default_ipv4.interface }}'

domain_name: example.com
dns_port: 53
rndc_port: 953
dns_key: "YG70pT2h9xmn9DviT+E6H8MNlJ9wc7Xa9qpCOtuonj3oLJGBBA8udXUsJnoGdMSIIw2pk9lw9QL4rv8XQNBRLQ=="

mongodb_datadir_prefix: /data/
mongod_port: 2700
mongo_admin_pass: passme

mcollective_pass: passme
admin_pass: passme
amquser_pass: passme

vip: 192.168.2.15
vip_netmask: 255.255.255.0
[dns]
vm1

[mongo_servers]
vm1
vm2
vm3

[mq]
vm1
vm2
vm3

[broker]
vm6
vm7

[nodes]
vm4

[lvs]
vm5
vm3
@ -1,284 +0,0 @@ |
||||
# Deploying a Highly Available production ready OpenShift Deployment |
||||
|
||||
- Requires Ansible 1.2 |
||||
- Expects CentOS/RHEL 6 hosts (64 bit) |
||||
|
||||
|
||||
## A Primer into OpenShift Architecture |
||||
|
||||
###OpenShift Overview |
||||
|
||||
OpenShift Origin enables the users to create, deploy and manage applications within the cloud, or in other words it provies a PaaS service (Platform as a service). This aleviates the developers from time consuming processes like machine provisioning and neccesary appliaction deployments. OpenShift provides disk space, CPU resources, memory, network connectivity, and various application like JBoss, python, MySQL etc... So that the developer can spent time on coding, testing his/her new application rather than spending time on figuring out how to get/configure those resources. |
||||
|
||||
###OpenShift Components |
||||
|
||||
Here's a list and a brief overview of the diffrent components used by OpenShift. |
||||
|
||||
- Broker: is the single point of contact for all application management activities. It is responsible for managing user logins, DNS, application state, and general orchestration of the application. Customers don't contact the broker directly; instead they use the Web console, CLI tools, or JBoss tools to interact with Broker over a REST based API. |
||||
|
||||
- Cartridges: provide the actual functionality necessary to run the user application. Openshift currently supports many language cartridges like JBoss, PHP, Ruby, etc., as well as many DB cartridges such as Postgres, Mysql, Mongo, etc. So incase a user need to deploy or create an php application with mysql as backend, he/she can just ask the broker to deploy a php and an mysql cartridgeon seperate gears. |
||||
|
||||
- Gear: Gears provide a resource-constrained container to run one or more cartridges. They limit the amount of RAM and disk space available to a cartridge. For simplicity we can consider this as a seperate vm or linux container for running application for a specific tenant, but in reality they are containers created by selinux contexts and pam namespacing. |
||||
|
||||
- Node: are the physical machines where gears are allocated. Gears are generally over-allocated on nodes since not all applications are active at the same time. |
||||
|
||||
- BSN (Broker support Nodes): are the nodes which run applications for OpenShift management. for example OpenShift uses mongodb to |
||||
store various user/app details, it also uses ActiveMQ for communincating with different application nodes via Mcollective. These nodes which hosts this supporing applications are called as broker support nodes. |
||||
|
||||
- Districts: are resource pools which can be used to seperate the application nodes based on performance or environments. so for example in a production deployment we can have two districts of nodes one which has resources with lower memory/cpu/disk requirements and another for high performance applications. |
||||
|
||||
### An Overview of application creation process in OpenShift. |
||||
|
||||
![Alt text](/images/app_deploy.png "App") |
||||
|
||||
|
||||
The above figure depicts an overview of diffrent steps invovled in creating an application in OpenShift. So if a developer wants to create or deploy a JBoss & Myql application the user can request the same from diffrent client tools that are available, the choice can be an Eclipse IDE or command line tool (rhc) or even a web browser. |
||||
|
||||
Once the user has instructed the client it makes a web service request to the Broker, the broker inturn check for available resources in the nodes and checks for gear and cartridge availability and if the resources are available two gears are created and JBoss and Mysql cartridges are deployed on them. The user is then notified and the user can then access the gears via ssh and start deploying the code. |
||||
|
||||
|
||||
### Deployment Diagram of OpenShift via Ansible. |
||||
|
||||
![Alt text](/images/arch.png "App") |
||||
|
||||
As the above diagram shows the Ansible playbooks deploys a highly available Openshift Paas environment. The deployment has two servers running lvs (piranha) for loadbalancing and ha for the brokers. Two instances of brokers also run for fault tolerence. Ansible also configures a dns server which provides name resolution for all the new apps created in the Openshift environment. |
||||
|
||||
Three bsn(broker support nodes) nodes provide a replicated mongodb deployment and the same nodes three instances of higly available activeMQ cluster. There is no limitation on the number of application nodes you can add, just add the hostnames of the application nodes in the ansible inventory and ansible will configure all of them for you. |
||||
|
||||
Note: As a best practise if you are deploying an actual production environemnt it is recommended to integrate with your internal DNS server for name resolution and use LDAP or integrate with an existing Active Directory for user authentication. |
||||
|
||||
## Deployment Steps for OpenShift via Ansible |
||||
|
||||
As a first step probably you may want to setup ansible, Assuming the Ansible host is Rhel variant install the EPEL package. |
||||
|
||||
yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm |
||||
|
||||
Once the epel repo is installed ansible can be installed via the following command. |
||||
|
||||
http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm |
||||
|
||||
It is recommended to use seperate machines for the different components of Openshift, but if you are testing it out you could combine the services but atleast four nodes are mandatory as the mongodb and activemq cluster needs atleast three for the cluster to work properly. |
||||
|
||||
As a first step checkout this repository onto you ansible management host and setup the inventory(hosts) as follows. |
||||
|
||||
git checkout https://github.com/ansible/ansible-examples.git |
||||
|
||||
[dns] |
||||
ec2-54-226-116-175.compute-1.amazonaws.com |
||||
|
||||
[mongo_servers] |
||||
ec2-54-226-116-175.compute-1.amazonaws.com |
||||
ec2-54-227-131-56.compute-1.amazonaws.com |
||||
ec2-54-227-169-137.compute-1.amazonaws.com |
||||
|
||||
[mq] |
||||
ec2-54-226-116-175.compute-1.amazonaws.com |
||||
ec2-54-227-131-56.compute-1.amazonaws.com |
||||
ec2-54-227-169-137.compute-1.amazonaws.com |
||||
|
||||
[broker] |
||||
ec2-54-227-63-48.compute-1.amazonaws.com |
||||
ec2-54-227-171-2.compute-1.amazonaws.com |
||||
|
||||
[nodes] |
||||
ec2-54-227-146-187.compute-1.amazonaws.com |
||||
|
||||
[lvs] |
||||
ec2-54-227-176-123.compute-1.amazonaws.com |
||||
ec2-54-227-177-87.compute-1.amazonaws.com |
||||
|
||||
Once the inventroy is setup with hosts in your environment the Openshift stack can be deployed easily by issuing the following command. |
||||
|
||||
ansible-playbook -i hosts site.yml |
||||
|
||||
|
||||
|
||||
### Verifying the Installation |
||||
|
||||
Once the stack has been succesfully deployed, we can check if the diffrent components has been deployed correctly. |
||||
|
||||
- Mongodb: Login to any bsn node running mongodb and issue the following command and a similar output should be displayed. Which displays that the mongo cluster is up with a primary node and two secondary nodes. |
||||
|
||||
|
||||
[root@ip-10-165-33-186 ~]# mongo 127.0.0.1:2700/admin -u admin -p passme |
||||
MongoDB shell version: 2.2.3 |
||||
connecting to: 127.0.0.1:2700/admin |
||||
openshift:PRIMARY> rs.status() |
||||
{ |
||||
"set" : "openshift", |
||||
"date" : ISODate("2013-07-21T18:56:27Z"), |
||||
"myState" : 1, |
||||
"members" : [ |
||||
{ |
||||
"_id" : 0, |
||||
"name" : "ip-10-165-33-186:2700", |
||||
"health" : 1, |
||||
"state" : 1, |
||||
"stateStr" : "PRIMARY", |
||||
"uptime" : 804, |
||||
"optime" : { |
||||
"t" : 1374432940000, |
||||
"i" : 1 |
||||
}, |
||||
"optimeDate" : ISODate("2013-07-21T18:55:40Z"), |
||||
"self" : true |
||||
}, |
||||
{ |
||||
"_id" : 1, |
||||
"name" : "ec2-54-227-131-56.compute-1.amazonaws.com:2700", |
||||
"health" : 1, |
||||
"state" : 2, |
||||
"stateStr" : "SECONDARY", |
||||
"uptime" : 431, |
||||
"optime" : { |
||||
"t" : 1374432940000, |
||||
"i" : 1 |
||||
}, |
||||
"optimeDate" : ISODate("2013-07-21T18:55:40Z"), |
||||
"lastHeartbeat" : ISODate("2013-07-21T18:56:26Z"), |
||||
"pingMs" : 0 |
||||
}, |
||||
{ |
||||
"_id" : 2, |
||||
"name" : "ec2-54-227-169-137.compute-1.amazonaws.com:2700", |
||||
"health" : 1, |
||||
"state" : 2, |
||||
"stateStr" : "SECONDARY", |
||||
"uptime" : 423, |
||||
"optime" : { |
||||
"t" : 1374432940000, |
||||
"i" : 1 |
||||
}, |
||||
"optimeDate" : ISODate("2013-07-21T18:55:40Z"), |
||||
"lastHeartbeat" : ISODate("2013-07-21T18:56:26Z"), |
||||
"pingMs" : 0 |
||||
} |
||||
], |
||||
"ok" : 1 |
||||
} |
||||
openshift:PRIMARY> |
||||
|
||||
- ActiveMQ: To verify the cluster status of activeMQ browse to the following url pointing to any one of the mq nodes and provide the credentials as user admin and password as specified in the group_vars/all file. The browser should bring up a page similar to shown below, which shows the other two mq nodes in the cluster to which this node as joined. |
||||
|
||||
http://ec2-54-226-116-175.compute-1.amazonaws.com:8161/admin/network.jsp |
||||
|
||||
|
||||
![Alt text](/images/mq.png "App") |
||||
|
||||
- Broker: To check if the broker node is installed/configured succesfully, issue the following command on any broker node and a similar output should be displayed. Make sure there is a PASS at the end. |
||||
|
||||
[root@ip-10-118-127-30 ~]# oo-accept-broker -v |
||||
INFO: Broker package is: openshift-origin-broker |
||||
INFO: checking packages |
||||
INFO: checking package ruby |
||||
INFO: checking package rubygem-openshift-origin-common |
||||
INFO: checking package rubygem-openshift-origin-controller |
||||
INFO: checking package openshift-origin-broker |
||||
INFO: checking package ruby193-rubygem-rails |
||||
INFO: checking package ruby193-rubygem-passenger |
||||
INFO: checking package ruby193-rubygems |
||||
INFO: checking ruby requirements |
||||
INFO: checking ruby requirements for openshift-origin-controller |
||||
INFO: checking ruby requirements for config/application |
||||
INFO: checking that selinux modules are loaded |
||||
NOTICE: SELinux is Enforcing |
||||
NOTICE: SELinux is Enforcing |
||||
INFO: SELinux boolean httpd_unified is enabled |
||||
INFO: SELinux boolean httpd_can_network_connect is enabled |
||||
    INFO: SELinux boolean httpd_can_network_relay is enabled
    INFO: SELinux boolean httpd_run_stickshift is enabled
    INFO: SELinux boolean allow_ypbind is enabled
    INFO: checking firewall settings
    INFO: checking mongo datastore configuration
    INFO: Datastore Host: ec2-54-226-116-175.compute-1.amazonaws.com
    INFO: Datastore Port: 2700
    INFO: Datastore User: admin
    INFO: Datastore SSL: false
    INFO: Datastore Password has been set to non-default
    INFO: Datastore DB Name: admin
    INFO: Datastore: mongo db service is remote
    INFO: checking mongo db login access
    INFO: mongo db login successful: ec2-54-226-116-175.compute-1.amazonaws.com:2700/admin --username admin
    INFO: checking services
    INFO: checking cloud user authentication
    INFO: auth plugin = OpenShift::RemoteUserAuthService
    INFO: auth plugin: OpenShift::RemoteUserAuthService
    INFO: checking remote-user auth configuration
    INFO: Auth trusted header: REMOTE_USER
    INFO: Auth passthrough is enabled for OpenShift services
    INFO: Got HTTP 200 response from https://localhost/broker/rest/api
    INFO: Got HTTP 200 response from https://localhost/broker/rest/cartridges
    INFO: Got HTTP 401 response from https://localhost/broker/rest/user
    INFO: Got HTTP 401 response from https://localhost/broker/rest/domains
    INFO: checking dynamic dns plugin
    INFO: dynamic dns plugin = OpenShift::BindPlugin
    INFO: checking bind dns plugin configuration
    INFO: DNS Server: 10.165.33.186
    INFO: DNS Port: 53
    INFO: DNS Zone: example.com
    INFO: DNS Domain Suffix: example.com
    INFO: DNS Update Auth: key
    INFO: DNS Key Name: example.com
    INFO: DNS Key Value: *****
    INFO: adding txt record named testrecord.example.com to server 10.165.33.186: key0
    INFO: txt record successfully added
    INFO: deleteing txt record named testrecord.example.com to server 10.165.33.186: key0
    INFO: txt record successfully deleted
    INFO: checking messaging configuration
    INFO: messaging plugin = OpenShift::MCollectiveApplicationContainerProxy
    PASS

- Node: To verify that the node installation/configuration was successful, issue the following command and check for output similar to what is shown below.

    [root@ip-10-152-154-18 ~]# oo-accept-node -v
    INFO: using default accept-node extensions
    INFO: loading node configuration file /etc/openshift/node.conf
    INFO: loading resource limit file /etc/openshift/resource_limits.conf
    INFO: finding external network device
    INFO: checking node public hostname resolution
    INFO: checking selinux status
    INFO: checking selinux openshift-origin policy
    INFO: checking selinux booleans
    INFO: checking package list
    INFO: checking services
    INFO: checking kernel semaphores >= 512
    INFO: checking cgroups configuration
    INFO: checking cgroups processes
    INFO: checking filesystem quotas
    INFO: checking quota db file selinux label
    INFO: checking 0 user accounts
    INFO: checking application dirs
    INFO: checking system httpd configs
    INFO: checking cartridge repository
    PASS

- LVS (LoadBalancer): To check the load balancer, log in to the active load balancer machine and issue the following command. The output shows the two brokers across which the load balancer is distributing traffic.

    [root@ip-10-145-204-43 ~]# ipvsadm
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  ip-192-168-1-1.ec2.internal: rr
      -> ec2-54-227-63-48.compute-1.a Route   1      0          0
      -> ec2-54-227-171-2.compute-1.a Route   2      0          0


## Creating an App in OpenShift

To create an app in OpenShift, access the management console from any browser. The VIP specified in group_vars/all can be used, or the IP address of any broker node.

    https://<ip-of-broker-or-vip>/

The page prompts for a login; use demo/passme. Once logged in, follow the on-screen instructions to create your first application.

Note: the Python 2.6 cartridge is installed by default by the playbooks, so choose Python 2.6 as the cartridge.


## HA Tests

A few tests that can be performed to verify high availability:

- Shut down any broker and try to create a new application.
- Shut down any one mongo/mq node and try to create a new application.
- Shut down any load-balancing machine; the management application should remain available via the virtual IP.

@ -1,115 +0,0 @@
# config file for ansible -- http://ansibleworks.com/
# ==================================================

# nearly all parameters can be overridden in ansible-playbook
# or with command line flags. ansible will read ~/.ansible.cfg,
# ansible.cfg in the current working directory or
# /etc/ansible/ansible.cfg, whichever it finds first

[defaults]

# some basic default values...

hostfile = /etc/ansible/hosts
library = /usr/share/ansible
remote_tmp = $HOME/.ansible/tmp
pattern = *
forks = 5
poll_interval = 15
sudo_user = root
#ask_sudo_pass = True
#ask_pass = True
transport = smart
remote_port = 22

# uncomment this to disable SSH key host checking
host_key_checking = False

# change this for alternative sudo implementations
sudo_exe = sudo

# what flags to pass to sudo
#sudo_flags = -H

# SSH timeout
timeout = 10

# default user to use for playbooks if user is not specified
# (/usr/bin/ansible will use current user as default)
#remote_user = root

# logging is off by default unless this path is defined
# if so defined, consider logrotate
#log_path = /var/log/ansible.log

# default module name for /usr/bin/ansible
#module_name = command

# use this shell for commands executed under sudo
# you may need to change this to bin/bash in rare instances
# if sudo is constrained
#executable = /bin/sh

# if inventory variables overlap, does the higher precedence one win
# or are hash values merged together? The default is 'replace' but
# this can also be set to 'merge'.
#hash_behaviour = replace

# How to handle variable replacement - as of 1.2, Jinja2 variable syntax is
# preferred, but we still support the old $variable replacement too.
# Turn off ${old_style} variables here if you like.
#legacy_playbook_variables = yes

# list any Jinja2 extensions to enable here:
#jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n

# if set, always use this private key file for authentication, same as
# if passing --private-key to ansible or ansible-playbook
#private_key_file = /path/to/file

# format of the string {{ ansible_managed }} available within Jinja2
# templates; indicates to users editing templates that files will be replaced,
# replacing {file}, {host} and {uid} and strftime codes with proper values.
ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}

# by default (as of 1.3), Ansible will raise errors when attempting to dereference
# Jinja2 variables that are not set in templates or action lines. Uncomment this line
# to revert the behavior to pre-1.3.
#error_on_undefined_vars = False

# set plugin path directories here, separate with colons
action_plugins = /usr/share/ansible_plugins/action_plugins
callback_plugins = /usr/share/ansible_plugins/callback_plugins
connection_plugins = /usr/share/ansible_plugins/connection_plugins
lookup_plugins = /usr/share/ansible_plugins/lookup_plugins
vars_plugins = /usr/share/ansible_plugins/vars_plugins
filter_plugins = /usr/share/ansible_plugins/filter_plugins

# don't like cows? that's unfortunate.
# set to 1 if you don't want cowsay support or export ANSIBLE_NOCOWS=1
#nocows = 1

# don't like colors either?
# set to 1 if you don't want colors, or export ANSIBLE_NOCOLOR=1
#nocolor = 1

[paramiko_connection]

# uncomment this line to cause the paramiko connection plugin to not record new host
# keys encountered. Increases performance on new host additions. Setting works independently of the
# host key checking setting above.

#record_host_keys=False

[ssh_connection]

# ssh arguments to use
# Leaving off ControlPersist will result in poor performance, so use
# paramiko on older platforms rather than removing it
#ssh_args = -o ControlMaster=auto -o ControlPersist=60s

# if True, make ansible use scp if the connection type is ssh
# (default is sftp)
#scp_if_ssh = True

@ -1,61 +0,0 @@
- hosts: localhost
  connection: local
  pre_tasks:
    - fail: msg="Please make sure the variable id is specified and unique on the command line, e.g. -e id=uniquedev1"
      when: id is not defined

  roles:
    - role: ec2
      type: dns
      ncount: 1

    - role: ec2
      type: mq
      ncount: 3

    - role: ec2
      type: broker
      ncount: 2

    - role: ec2
      type: nodes
      ncount: "{{ count }}"

  post_tasks:
    - name: Wait for the instances to come up
      wait_for: delay=10 host={{ item.public_dns_name }} port=22 state=started timeout=360
      with_items: ec2.instances

    - debug: msg="{{ groups }}"

- hosts: all:!localhost
  user: root
  roles:
    - role: common

- hosts: dns
  user: root
  roles:
    - role: dns

- hosts: mongo_servers
  user: root
  roles:
    - role: mongodb

- hosts: mq
  user: root
  roles:
    - role: mq

- hosts: broker
  user: root
  roles:
    - role: broker

- hosts: nodes
  user: root
  roles:
    - role: nodes

@ -1,23 +0,0 @@
- hosts: localhost
  connection: local
  pre_tasks:
    - fail: msg="Please make sure the variable id is specified and unique on the command line, e.g. -e id=uniquedev1"
      when: id is not defined

  roles:
    - role: ec2_remove
      type: dns
      ncount: 1

    - role: ec2_remove
      type: mq
      ncount: 3

    - role: ec2_remove
      type: broker
      ncount: 2

    - role: ec2_remove
      type: nodes
      ncount: "{{ count }}"

@ -1 +0,0 @@
localhost
@ -1,32 +0,0 @@
---
# Global vars for OpenShift

# EC2-specific variables
ec2_access_key: "xxx"
ec2_secret_key: "xxx"
keypair: "axialkey"
instance_type: "m1.small"
image: "ami-bf5021d6"
group: "default"
count: 2
ec2_elbs: oselb
region: "us-east-1"
zone: "us-east-1a"

iface: '{{ ansible_default_ipv4.interface }}'

domain_name: example.com
dns_port: 53
rndc_port: 953
dns_key: "YG70pT2h9xmn9DviT+E6H8MNlJ9wc7Xa9qpCOtuonj3oLJGBBA8udXUsJnoGdMSIIw2pk9lw9QL4rv8XQNBRLQ=="

mongodb_datadir_prefix: /data/
mongod_port: 2700
mongo_admin_pass: passme

mcollective_pass: passme
admin_pass: passme
amquser_pass: passme

vip: 192.168.2.15
vip_netmask: 255.255.255.0
@ -1,23 +0,0 @@
[dns]
vm1

[mongo_servers]
vm1
vm2
vm3

[mq]
vm1
vm2
vm3

[broker]
vm1
vm2

[nodes]
vm4

[lvs]
vm5
vm3

@ -1,2 +0,0 @@
#!/bin/bash
/usr/bin/scl enable ruby193 "gem install rspec --version 1.3.0 --no-rdoc --no-ri" ; /usr/bin/scl enable ruby193 "gem install fakefs --no-rdoc --no-ri" ; /usr/bin/scl enable ruby193 "gem install httpclient --version 2.3.2 --no-rdoc --no-ri" ; touch /opt/gem.init
@ -1 +0,0 @@
demo:k2WsPcYIRAaXs
@ -1,25 +0,0 @@
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authz_user_module modules/mod_authz_user.so

# Turn the authenticated remote-user into an Apache environment variable for the console security controller
RewriteEngine On
RewriteCond %{LA-U:REMOTE_USER} (.+)
RewriteRule . - [E=RU:%1]
RequestHeader set X-Remote-User "%{RU}e" env=RU

<Location /console>
    AuthName "OpenShift Developer Console"
    AuthType Basic
    AuthUserFile /etc/openshift/htpasswd
    require valid-user

    # The node->broker auth is handled in the Ruby code
    BrowserMatch Openshift passthrough
    Allow from env=passthrough

    Order Deny,Allow
    Deny from all
    Satisfy any
</Location>

@ -1,39 +0,0 @@
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authz_user_module modules/mod_authz_user.so

<Location /broker>
    AuthName "OpenShift broker API"
    AuthType Basic
    AuthUserFile /etc/openshift/htpasswd
    require valid-user

    SetEnvIfNoCase Authorization Bearer passthrough

    # The node->broker auth is handled in the Ruby code
    BrowserMatchNoCase ^OpenShift passthrough
    Allow from env=passthrough

    # Console traffic will hit the local port. mod_proxy will set this header automatically.
    SetEnvIf X-Forwarded-For "^$" local_traffic=1
    # Turn the Console output header into the Apache environment variable for the broker remote-user plugin
    SetEnvIf X-Remote-User "(..*)" REMOTE_USER=$1
    Allow from env=local_traffic

    Order Deny,Allow
    Deny from all
    Satisfy any
</Location>

# The following APIs do not require auth:
<Location /broker/rest/cartridges*>
    Allow from all
</Location>

<Location /broker/rest/api*>
    Allow from all
</Location>

<Location /broker/rest/environment*>
    Allow from all
</Location>
@ -1,27 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAyWM85VFDBOdWz16oC7j8Q7uHHbs3UVzRhHhHkSg8avK6ETMH
piXtevCU7KbiX7B2b0dedwYpvHQaKPCtfNm4blZHcDO5T1I//MyjwVNfqAQV4xin
qRj1oRyvvcTmn5H5yd9FgILqhRGjNEnBYadpL0vZrzXAJREEhh/G7021q010CF+E
KTTlSrbctGsoiUQKH1KfogsWsj8ygL1xVDgbCdvx+DnTw9E/YY+07/lDPOiXQFZm
7hXA8Q51ecjtFy0VmWDwjq3t7pP33tyjQkMc1BMXzHUiDVehNZ+I8ffzFltNNUL0
Jw3AGwyCmE3Q9ml1tHIxpuvZExMCTALN6va0bwIDAQABAoIBAQDJPXpvqLlw3/92
bx87v5mN0YneYuOPUVIorszNN8jQEkduwnCFTec2b8xRgx45AqwG3Ol/xM/V+qrd
eEvUs/fBgkQW0gj+Q7GfW5rTqA2xZou8iDmaF0/0tCbFWkoe8I8MdCkOl0Pkv1A4
Au/UNqc8VO5tUCf2oj/EC2MOZLgCOTaerePnc+SFIf4TkerixPA9I4KYWwJQ2eXG
esSfR2f2EsUGfwOqKLEQU1JTMFkttbSAp42p+xpRaUh1FuyLHDlf3EeFmq5BPaFL
UnpzPDJTZtXjnyBrM9fb1ewiFW8x+EBmsdGooY7ptrWWhGzvxAsK9C0L2di3FBAy
gscM/rPBAoGBAPpt0xXtVWJu2ezoBfjNuqwMqGKFsOF3hi5ncOHW9nd6iZABD5Xt
KamrszxItkqiJpEacBCabgfo0FSLEOo+KqfTBK/r4dIoMwgcfhJOz+HvEC6+557n
GEFaL+evdLrxNrU41wvvfCzPK7pWaQGR1nrGohTyX5ZH4uA0Kmreof+PAoGBAM3e
IFPNrXuzhgShqFibWqJ8JdsSfMroV62aCqdJlB92lxx8JJ2lEiAMPfHmAtF1g01r
oBUcJcPfuBZ0bC1KxIvtz9d5m1f2geNGH/uwVULU3skhPBwqAs2s607/Z1S+/WRr
Af1rAs2KTJ7BDCQo8g2TPUO+sDrUzR6joxOy/Y0hAoGAbWaI7m1N/cBbZ4k9AqIt
SHgHH3M0AGtMrPz3bVGRPkTDz6sG+gIvTzX5CP7i09veaUlZZ4dvRflI+YX/D7W0
wLgItimf70UsdgCseqb/Xb4oHaO8X8io6fPSNa6KmhhCRAzetRIb9x9SBQc2vD7P
qbcYm3n+lBI3ZKalWSaFMrUCgYEAsV0xfuISGCRIT48zafuWr6zENKUN7QcWGxQ/
H3eN7TmP4VO3fDZukjvZ1qHzRaC32ijih61zf/ksMfRmCvOCuIfP7HXx92wC5dtR
zNdT7btWofRHRICRX8AeDzaOQP43c5+Z3Eqo5IrFjnUFz9WTDU0QmGAeluEmQ8J5
yowIVOECgYB97fGLuEBSlKJCvmWp6cTyY+mXbiQjYYGBbYAiJWnwaK9U3bt71we/
MQNzBHAe0mPCReVHSr68BfoWY/crV+7RKSBgrDpR0Y0DI1yn0LXXZfd3NNrTVaAb
rScbJ8Xe3qcLi3QZ3BxaWfub08Wm57wjDBBqGZyExYjjlGSpjBpVJQ==
-----END RSA PRIVATE KEY-----
@ -1,9 +0,0 @@
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyWM85VFDBOdWz16oC7j8
Q7uHHbs3UVzRhHhHkSg8avK6ETMHpiXtevCU7KbiX7B2b0dedwYpvHQaKPCtfNm4
blZHcDO5T1I//MyjwVNfqAQV4xinqRj1oRyvvcTmn5H5yd9FgILqhRGjNEnBYadp
L0vZrzXAJREEhh/G7021q010CF+EKTTlSrbctGsoiUQKH1KfogsWsj8ygL1xVDgb
Cdvx+DnTw9E/YY+07/lDPOiXQFZm7hXA8Q51ecjtFy0VmWDwjq3t7pP33tyjQkMc
1BMXzHUiDVehNZ+I8ffzFltNNUL0Jw3AGwyCmE3Q9ml1tHIxpuvZExMCTALN6va0
bwIDAQAB
-----END PUBLIC KEY-----
@ -1,74 +0,0 @@
#
# This is the Apache server configuration file providing SSL support.
# It contains the configuration directives to instruct the server how to
# serve pages over an https connection. For detailing information about these
# directives see <URL:http://httpd.apache.org/docs/2.2/mod/mod_ssl.html>
#
# Do NOT simply read the instructions in here without understanding
# what they do. They're here only as hints or reminders. If you are unsure
# consult the online docs. You have been warned.
#

LoadModule ssl_module modules/mod_ssl.so

#
# When we also provide SSL we have to listen to the
# the HTTPS port in addition.
#
Listen 443

##
## SSL Global Context
##
## All SSL configuration in this context applies both to
## the main server and all SSL-enabled virtual hosts.
##

# Pass Phrase Dialog:
# Configure the pass phrase gathering process.
# The filtering dialog program (`builtin' is a internal
# terminal dialog) has to provide the pass phrase on stdout.
SSLPassPhraseDialog builtin

# Inter-Process Session Cache:
# Configure the SSL Session Cache: First the mechanism
# to use and second the expiring timeout (in seconds).
SSLSessionCache shmcb:/var/cache/mod_ssl/scache(512000)
SSLSessionCacheTimeout 300

# Semaphore:
# Configure the path to the mutual exclusion semaphore the
# SSL engine uses internally for inter-process synchronization.
SSLMutex default

# Pseudo Random Number Generator (PRNG):
# Configure one or more sources to seed the PRNG of the
# SSL library. The seed data should be of good random quality.
# WARNING! On some platforms /dev/random blocks if not enough entropy
# is available. This means you then cannot use the /dev/random device
# because it would lead to very long connection times (as long as
# it requires to make more entropy available). But usually those
# platforms additionally provide a /dev/urandom device which doesn't
# block. So, if available, use this one instead. Read the mod_ssl User
# Manual for more details.
SSLRandomSeed startup file:/dev/urandom 256
SSLRandomSeed connect builtin
#SSLRandomSeed startup file:/dev/random 512
#SSLRandomSeed connect file:/dev/random 512
#SSLRandomSeed connect file:/dev/urandom 512

#
# Use "SSLCryptoDevice" to enable any supported hardware
# accelerators. Use "openssl engine -v" to list supported
# engine names. NOTE: If you enable an accelerator and the
# server does not start, consult the error logs and ensure
# your accelerator is functioning properly.
#
SSLCryptoDevice builtin
#SSLCryptoDevice ubsec

##
## SSL Virtual Host Context
##

@ -1,9 +0,0 @@
---
# handlers for broker

- name: restart broker
  service: name=openshift-broker state=restarted

- name: restart console
  service: name=openshift-console state=restarted

@ -1,104 +0,0 @@
---
# Tasks for the OpenShift broker installation

- name: Install mcollective
  yum: name=mcollective-client

- name: Copy the mcollective configuration file
  template: src=client.cfg.j2 dest=/etc/mcollective/client.cfg

- name: Install the broker components
  yum: name="{{ item }}" state=installed
  with_items: "{{ broker_packages }}"

- name: Copy the rhc client configuration file
  template: src=express.conf.j2 dest=/etc/openshift/express.conf
  register: last_run

- name: Install the gems for rhc
  script: gem.sh
  when: last_run.changed

- name: Create the file for mcollective logging
  copy: content="" dest=/var/log/mcollective-client.log owner=apache group=root

- name: SELinux - configure sebooleans
  seboolean: name="{{ item }}" state=true persistent=yes
  with_items:
    - httpd_unified
    - httpd_execmem
    - httpd_can_network_connect
    - httpd_can_network_relay
    - httpd_run_stickshift
    - named_write_master_zones
    - httpd_verify_dns
    - allow_ypbind

- name: Copy the auth keyfiles
  copy: src="{{ item }}" dest="/etc/openshift/{{ item }}"
  with_items:
    - server_priv.pem
    - server_pub.pem
    - htpasswd

- name: Copy the local ssh keys
  copy: src="~/.ssh/{{ item }}" dest="~/.ssh/{{ item }}"
  with_items:
    - id_rsa.pub
    - id_rsa

- name: Copy the local ssh keys to the openshift dir
  copy: src="~/.ssh/{{ item }}" dest="/etc/openshift/rsync_{{ item }}"
  with_items:
    - id_rsa.pub
    - id_rsa

- name: Copy the broker configuration file
  template: src=broker.conf.j2 dest=/etc/openshift/broker.conf
  notify: restart broker

- name: Copy the console configuration file
  template: src=console.conf.j2 dest=/etc/openshift/console.conf
  notify: restart console

- name: Create the file for ssl.conf
  copy: src=ssl.conf dest=/etc/httpd/conf.d/ssl.conf owner=apache group=root

- name: Copy the configuration files for the openshift plugins
  template: src="{{ item }}" dest="/etc/openshift/plugins.d/{{ item }}"
  with_items:
    - openshift-origin-auth-remote-user.conf
    - openshift-origin-dns-bind.conf
    - openshift-origin-msg-broker-mcollective.conf

- name: Bundle the ruby gems
  shell: chdir=/var/www/openshift/broker/ /usr/bin/scl enable ruby193 "bundle show"; touch bundle.init
         creates=/var/www/openshift/broker/bundle.init

- name: Copy the httpd configuration file
  copy: src=openshift-origin-auth-remote-user.conf dest=/var/www/openshift/broker/httpd/conf.d/openshift-origin-auth-remote-user.conf
  notify: restart broker

- name: Copy the httpd configuration file for the console
  copy: src=openshift-origin-auth-remote-basic-user.conf dest=/var/www/openshift/console/httpd/conf.d/openshift-origin-auth-remote-basic-user.conf
  notify: restart console

- name: Fix the selinux contexts on several files
  shell: fixfiles -R rubygem-passenger restore; fixfiles -R mod_passenger restore; restorecon -rv /var/run; restorecon -rv /usr/share/rubygems/gems/passenger-*; touch /opt/context.fixed creates=/opt/context.fixed

- name: Start the httpd and broker services
  service: name="{{ item }}" state=started enabled=yes
  with_items:
    - httpd
    - openshift-broker

- name: Install the rhc client
  gem: name={{ item }} state=latest
  with_items:
    - rdoc
    - rhc
  ignore_errors: yes

- name: Copy resolv.conf
  template: src=resolv.conf.j2 dest=/etc/resolv.conf

@ -1,47 +0,0 @@
# Domain suffix to use for applications (Must match node config)
CLOUD_DOMAIN="{{ domain_name }}"
# Comma-separated list of valid gear sizes
VALID_GEAR_SIZES="small,medium"

# Default number of gears to assign to a new user
DEFAULT_MAX_GEARS="100"
# Default gear size for a new gear
DEFAULT_GEAR_SIZE="small"

# Broker datastore configuration
MONGO_REPLICA_SETS=true
# Replica set example: "<host-1>:<port-1> <host-2>:<port-2> ..."
{% set hosts = '' %}
{% for host in groups['mongo_servers'] %}
{% if loop.last %}
{% set hosts = hosts + host + ':' ~ mongod_port + ' ' %}
MONGO_HOST_PORT="{{ hosts }}"
{% else %}
{% set hosts = hosts + host + ':' ~ mongod_port + ', ' %}
{% endif %}
{% endfor %}

MONGO_USER="admin"
MONGO_PASSWORD="{{ mongo_admin_pass }}"
MONGO_DB="admin"

# Enables gear/filesystem resource usage tracking
ENABLE_USAGE_TRACKING_DATASTORE="false"
# Log resource usage information to syslog
ENABLE_USAGE_TRACKING_SYSLOG="false"

# Enable all broker analytics
ENABLE_ANALYTICS="false"

# Enables logging of REST API operations and success/failure
ENABLE_USER_ACTION_LOG="true"
USER_ACTION_LOG_FILE="/var/log/openshift/broker/user_action.log"

AUTH_SALT="{{ auth_salt }}"
AUTH_PRIVKEYFILE="/etc/openshift/server_priv.pem"
AUTH_PRIVKEYPASS=""
AUTH_PUBKEYFILE="/etc/openshift/server_pub.pem"
AUTH_RSYNC_KEY_FILE="/etc/openshift/rsync_id_rsa"
SESSION_SECRET="{{ session_secret }}"
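The Jinja loop in the template above assembles MONGO_HOST_PORT from the mongo_servers inventory group, appending a host:mongod_port pair per host, separated by ", " and with the final entry followed by a space. A short Python sketch of the string the loop is building (the vm* hostnames are placeholders for whatever is in the inventory):

```python
# Placeholder inventory group and port, standing in for
# groups['mongo_servers'] and mongod_port from group_vars/all.
mongo_servers = ["vm1", "vm2", "vm3"]
mongod_port = 2700

# Each non-final host gets ", " appended and the final one a trailing space,
# mirroring the template's string building.
host_port = ", ".join("%s:%d" % (h, mongod_port) for h in mongo_servers) + " "
print('MONGO_HOST_PORT="%s"' % host_port)
```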
@ -1,25 +0,0 @@
topicprefix = /topic/
main_collective = mcollective
collectives = mcollective
libdir = /opt/rh/ruby193/root/usr/libexec/mcollective
logfile = /var/log/mcollective-client.log
loglevel = debug
direct_addressing = 1

# Plugins
securityprovider = psk
plugin.psk = unset

connector = stomp
plugin.stomp.pool.size = {{ groups['mq']|length() }}
{% for host in groups['mq'] %}

plugin.stomp.pool.host{{ loop.index }} = {{ hostvars[host].ansible_hostname }}
plugin.stomp.pool.port{{ loop.index }} = 61613
plugin.stomp.pool.user{{ loop.index }} = mcollective
plugin.stomp.pool.password{{ loop.index }} = {{ mcollective_pass }}

{% endfor %}

@ -1,8 +0,0 @@
BROKER_URL=http://localhost:8080/broker/rest

CONSOLE_SECURITY=remote_user

REMOTE_USER_HEADER=REMOTE_USER

REMOTE_USER_COPY_HEADERS=X-Remote-User
SESSION_SECRET="{{ session_secret }}"
@ -1,8 +0,0 @@
# Remote API server
libra_server = '{{ ansible_hostname }}'

# Logging
debug = 'false'

# Timeout
#timeout = '10'
@ -1,4 +0,0 @@
# Settings related to the Remote-User variant of an OpenShift auth plugin

# The name of the header containing the trusted username
TRUSTED_HEADER="REMOTE_USER"
@ -1,16 +0,0 @@
# Settings related to the bind variant of an OpenShift DNS plugin

# The DNS server
BIND_SERVER="{{ hostvars[groups['dns'][0]].ansible_default_ipv4.address }}"

# The DNS server's port
BIND_PORT=53

# The key name for your zone
BIND_KEYNAME="{{ domain_name }}"

# base64-encoded key, most likely from /var/named/example.com.key.
BIND_KEYVALUE="{{ dns_key }}"

# The base zone for the DNS server
BIND_ZONE="{{ domain_name }}"
@ -1,25 +0,0 @@
# Some settings to configure how mcollective handles gear placement on nodes:

# Use districts when placing gears and moving them between hosts. Should be
# true except for particular dev/test situations.
DISTRICTS_ENABLED=true

# Require new gears to be placed in a district; when true, placement will fail
# if there isn't a district with capacity and the right gear profile.
DISTRICTS_REQUIRE_FOR_APP_CREATE=false

# Used as the default max gear capacity when creating a district.
DISTRICTS_MAX_CAPACITY=6000

# It is unlikely these will need to be changed
DISTRICTS_FIRST_UID=1000
MCOLLECTIVE_DISCTIMEOUT=5
MCOLLECTIVE_TIMEOUT=180
MCOLLECTIVE_VERBOSE=false
MCOLLECTIVE_PROGRESS_BAR=0
MCOLLECTIVE_CONFIG="/etc/mcollective/client.cfg"

# Place gears on nodes with the requested profile; should be true, as
# a false value means gear profiles are ignored and gears are placed arbitrarily.
NODE_PROFILE_ENABLED=true

@ -1,2 +0,0 @@
search {{ domain_name }}
nameserver {{ hostvars[groups['dns'][0]].ansible_default_ipv4.address }}
@ -1,82 +0,0 @@
---
# variables for broker

broker_packages:
  - mongodb-devel
  - openshift-origin-broker
  - openshift-origin-broker-util
  - rubygem-openshift-origin-dns-nsupdate
  - rubygem-openshift-origin-auth-mongo
  - rubygem-openshift-origin-auth-remote-user
  - rubygem-openshift-origin-controller
  - rubygem-openshift-origin-msg-broker-mcollective
  - rubygem-openshift-origin-dns-bind
  - rubygem-passenger
  - ruby193-mod_passenger
  - mysql-devel
  - openshift-origin-console
  - ruby193-rubygem-actionmailer
  - ruby193-rubygem-actionpack
  - ruby193-rubygem-activemodel
  - ruby193-rubygem-activerecord
  - ruby193-rubygem-activeresource
  - ruby193-rubygem-activesupport
  - ruby193-rubygem-arel
  - ruby193-rubygem-bigdecimal
  - ruby193-rubygem-net-ssh
  - ruby193-rubygem-commander
  - ruby193-rubygem-archive-tar-minitar
  - ruby193-rubygem-bson
  - ruby193-rubygem-bson_ext
  - ruby193-rubygem-builder
  - ruby193-rubygem-bundler
  - ruby193-rubygem-cucumber
  - ruby193-rubygem-diff-lcs
  - ruby193-rubygem-dnsruby
  - ruby193-rubygem-erubis
  - ruby193-rubygem-gherkin
  - ruby193-rubygem-hike
  - ruby193-rubygem-i18n
  - ruby193-rubygem-journey
  - ruby193-rubygem-json
  - ruby193-rubygem-mail
  - ruby193-rubygem-metaclass
  - ruby193-rubygem-mime-types
  - ruby193-rubygem-minitest
  - ruby193-rubygem-mocha
  - ruby193-rubygem-mongo
  - ruby193-rubygem-mongoid
  - ruby193-rubygem-moped
  - ruby193-rubygem-multi_json
  - ruby193-rubygem-open4
  - ruby193-rubygem-origin
  - ruby193-rubygem-parseconfig
  - ruby193-rubygem-polyglot
  - ruby193-rubygem-rack
  - ruby193-rubygem-rack-cache
  - ruby193-rubygem-rack-ssl
  - ruby193-rubygem-rack-test
  - ruby193-rubygem-rails
  - ruby193-rubygem-railties
  - ruby193-rubygem-rake
  - ruby193-rubygem-rdoc
  - ruby193-rubygem-regin
  - ruby193-rubygem-rest-client
  - ruby193-rubygem-simplecov
  - ruby193-rubygem-simplecov-html
  - ruby193-rubygem-sprockets
  - ruby193-rubygem-state_machine
  - ruby193-rubygem-stomp
  - ruby193-rubygem-systemu
  - ruby193-rubygem-term-ansicolor
  - ruby193-rubygem-thor
  - ruby193-rubygem-tilt
  - ruby193-rubygem-treetop
  - ruby193-rubygem-tzinfo
  - ruby193-rubygem-xml-simple

auth_salt: "ceFm8El0mTLu7VLGpBFSFfmxeID+UoNfsQrAKs8dhKSQ/uAGwjWiz3VdyuB1fW/WR+R1q7yXW+sxSm9wkmuqVA=="
session_secret: "25905ebdb06d8705025531bb5cb45335c53c4f36ee534719ffffd0fe28808395d80449c6c69bc079e2ac14c8ff66639bba1513332ef9ad5ed42cc0bb21e07134"

@ -1,29 +0,0 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.5 (GNU/Linux)

mQINBEvSKUIBEADLGnUj24ZVKW7liFN/JA5CgtzlNnKs7sBg7fVbNWryiE3URbn1
JXvrdwHtkKyY96/ifZ1Ld3lE2gOF61bGZ2CWwJNee76Sp9Z+isP8RQXbG5jwj/4B
M9HK7phktqFVJ8VbY2jfTjcfxRvGM8YBwXF8hx0CDZURAjvf1xRSQJ7iAo58qcHn
XtxOAvQmAbR9z6Q/h/D+Y/PhoIJp1OV4VNHCbCs9M7HUVBpgC53PDcTUQuwcgeY6
pQgo9eT1eLNSZVrJ5Bctivl1UcD6P6CIGkkeT2gNhqindRPngUXGXW7Qzoefe+fV
QqJSm7Tq2q9oqVZ46J964waCRItRySpuW5dxZO34WM6wsw2BP2MlACbH4l3luqtp
Xo3Bvfnk+HAFH3HcMuwdaulxv7zYKXCfNoSfgrpEfo2Ex4Im/I3WdtwME/Gbnwdq
3VJzgAxLVFhczDHwNkjmIdPAlNJ9/ixRjip4dgZtW8VcBCrNoL+LhDrIfjvnLdRu
vBHy9P3sCF7FZycaHlMWP6RiLtHnEMGcbZ8QpQHi2dReU1wyr9QgguGU+jqSXYar
1yEcsdRGasppNIZ8+Qawbm/a4doT10TEtPArhSoHlwbvqTDYjtfV92lC/2iwgO6g
YgG9XrO4V8dV39Ffm7oLFfvTbg5mv4Q/E6AWo/gkjmtxkculbyAvjFtYAQARAQAB
tCFFUEVMICg2KSA8ZXBlbEBmZWRvcmFwcm9qZWN0Lm9yZz6JAjYEEwECACAFAkvS
KUICGw8GCwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAKCRA7Sd8qBgi4lR/GD/wLGPv9
qO39eyb9NlrwfKdUEo1tHxKdrhNz+XYrO4yVDTBZRPSuvL2yaoeSIhQOKhNPfEgT
9mdsbsgcfmoHxmGVcn+lbheWsSvcgrXuz0gLt8TGGKGGROAoLXpuUsb1HNtKEOwP
Q4z1uQ2nOz5hLRyDOV0I2LwYV8BjGIjBKUMFEUxFTsL7XOZkrAg/WbTH2PW3hrfS
WtcRA7EYonI3B80d39ffws7SmyKbS5PmZjqOPuTvV2F0tMhKIhncBwoojWZPExft
HpKhzKVh8fdDO/3P1y1Fk3Cin8UbCO9MWMFNR27fVzCANlEPljsHA+3Ez4F7uboF
p0OOEov4Yyi4BEbgqZnthTG4ub9nyiupIZ3ckPHr3nVcDUGcL6lQD/nkmNVIeLYP
x1uHPOSlWfuojAYgzRH6LL7Idg4FHHBA0to7FW8dQXFIOyNiJFAOT2j8P5+tVdq8
wB0PDSH8yRpn4HdJ9RYquau4OkjluxOWf0uRaS//SUcCZh+1/KBEOmcvBHYRZA5J
l/nakCgxGb2paQOzqqpOcHKvlyLuzO5uybMXaipLExTGJXBlXrbbASfXa/yGYSAG
iVrGz9CE6676dMlm8F+s3XXE13QZrXmjloc6jwOljnfAkjTGXjiB7OULESed96MR
XtfLk0W5Ab9pd7tKDR6QHI7rgHXfCopRnZ2VVQ==
=V/6I
-----END PGP PUBLIC KEY BLOCK-----
@ -1,26 +0,0 @@
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 6 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1
@ -1,13 +0,0 @@
[openshift_support]
name=Extra Packages for OpenShift - $basearch
baseurl=https://mirror.openshift.com/pub/openshift/release/2/rhel-6/dependencies/x86_64/
failovermethod=priority
enabled=1
gpgcheck=0

[openshift]
name=Packages for OpenShift - $basearch
baseurl=https://mirror.openshift.com/pub/openshift/release/2/rhel-6/packages/x86_64/
failovermethod=priority
enabled=1
gpgcheck=0
@ -1,10 +0,0 @@
# Set up PATH, LD_LIBRARY_PATH and MANPATH for ruby-1.9
ruby19_dir=$(dirname "$(scl enable ruby193 'which ruby')")
export PATH=$ruby19_dir:$PATH

ruby19_ld_libs=$(scl enable ruby193 "printenv LD_LIBRARY_PATH")
export LD_LIBRARY_PATH=$ruby19_ld_libs:$LD_LIBRARY_PATH

ruby19_manpath=$(scl enable ruby193 "printenv MANPATH")
export MANPATH=$ruby19_manpath:$MANPATH
@ -1,5 +0,0 @@
---
# Handler for the mongod role

- name: restart iptables
  service: name=iptables state=restarted
@ -1,42 +0,0 @@
---
# Common tasks across nodes

- name: Install common packages
  yum: name={{ item }} state=installed
  with_items:
    - libselinux-python
    - policycoreutils
    - policycoreutils-python
    - ntp
    - ruby-devel

- name: make sure we have the right time
  shell: ntpdate -u 0.centos.pool.ntp.org

- name: start the ntp service
  service: name=ntpd state=started enabled=yes

- name: Create the hosts file for all machines
  template: src=hosts.j2 dest=/etc/hosts

- name: Create the EPEL repository
  copy: src=epel.repo.j2 dest=/etc/yum.repos.d/epel.repo

- name: Create the OpenShift repository
  copy: src=openshift.repo dest=/etc/yum.repos.d/openshift.repo

- name: Create the GPG key for EPEL
  copy: src=RPM-GPG-KEY-EPEL-6 dest=/etc/pki/rpm-gpg

- name: SELinux Enforcing (Targeted)
  selinux: policy=targeted state=enforcing

- name: copy the file for the ruby193 profile
  copy: src=scl193.sh dest=/etc/profile.d/scl193.sh mode=0755

- name: copy the file for the mcollective profile
  copy: src=scl193.sh dest=/etc/sysconfig/mcollective mode=0755

- name: Create the iptables file
  template: src=iptables.j2 dest=/etc/sysconfig/iptables
  notify: restart iptables
@ -1,4 +0,0 @@
127.0.0.1 localhost
{% for host in groups['all'] %}
{{ hostvars[host]['ansible_' + iface].ipv4.address }} {{ host }} {{ hostvars[host].ansible_hostname }}
{% endfor %}
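A minimal sketch of what the hosts.j2 template above renders to. The inventory data below is illustrative (hypothetical hosts and addresses); in a real run Ansible supplies `groups`, `hostvars`, and the `iface` variable, and this uses plain Jinja2 only to show the expansion.

```python
# Render hosts.j2 with mock inventory facts (illustrative values only).
import jinja2

template_text = (
    "127.0.0.1 localhost\n"
    "{% for host in groups['all'] %}\n"
    "{{ hostvars[host]['ansible_' + iface].ipv4.address }} "
    "{{ host }} {{ hostvars[host].ansible_hostname }}\n"
    "{% endfor %}"
)

# Stand-ins for the facts Ansible gathers per host; Jinja2 attribute
# access (e.g. .ipv4.address) falls back to dict item lookup.
hostvars = {
    "broker1": {"ansible_eth0": {"ipv4": {"address": "192.168.1.10"}},
                "ansible_hostname": "broker1"},
    "node1": {"ansible_eth0": {"ipv4": {"address": "192.168.1.20"}},
              "ansible_hostname": "node1"},
}

# trim_blocks drops the newline after {% ... %} tags, so the rendered
# /etc/hosts has no blank lines between entries.
env = jinja2.Environment(trim_blocks=True)
rendered = env.from_string(template_text).render(
    groups={"all": ["broker1", "node1"]},
    hostvars=hostvars,
    iface="eth0",
)
print(rendered)
```

Each host in the inventory thus gets one line mapping its interface address to both its inventory name and its gathered hostname.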
@ -1,52 +0,0 @@
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.

{% if 'broker' in group_names %}
*nat
-A PREROUTING -d {{ vip }}/32 -p tcp -m tcp --dport 443 -j REDIRECT
COMMIT
{% endif %}

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
{% if 'mongo_servers' in group_names %}
-A INPUT -p tcp --dport {{ mongod_port }} -j ACCEPT
{% endif %}
{% if 'mq' in group_names %}
-A INPUT -p tcp --dport 61613 -j ACCEPT
-A INPUT -p tcp --dport 61616 -j ACCEPT
-A INPUT -p tcp --dport 8161 -j ACCEPT
{% endif %}
{% if 'broker' in group_names %}
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
{% endif %}
{% if 'lvs' in group_names %}
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
{% endif %}
{% if 'nodes' in group_names %}
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 35531:65535 -j ACCEPT
{% endif %}
{% if 'dns' in group_names %}
-A INPUT -p tcp --dport {{ dns_port }} -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
-A INPUT -p udp --dport {{ dns_port }} -j ACCEPT
-A INPUT -p udp --dport {{ rndc_port }} -j ACCEPT
-A INPUT -p tcp --dport {{ rndc_port }} -j ACCEPT
{% endif %}
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
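The iptables template above opens ports conditionally on the host's group membership. A minimal sketch of how one such conditional expands for a host in the mongo_servers group; the port value is illustrative, and `group_names` and `mongod_port` are normally supplied by Ansible.

```python
# Expand the mongo_servers block of the iptables template in isolation.
import jinja2

snippet = (
    "{% if 'mongo_servers' in group_names %}\n"
    "-A INPUT -p tcp --dport {{ mongod_port }} -j ACCEPT\n"
    "{% endif %}"
)

env = jinja2.Environment(trim_blocks=True)

# A host in the mongo_servers group gets the ACCEPT rule for mongod.
rule = env.from_string(snippet).render(
    group_names=["mongo_servers", "nodes"],
    mongod_port=27017,
)

# A host outside the group gets nothing from this block.
empty = env.from_string(snippet).render(group_names=["broker"], mongod_port=27017)
print(rule)
```

Because every block is guarded this way, a single template file serves all host roles: each machine's /etc/sysconfig/iptables contains only the rules for the groups it belongs to.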
@ -1,6 +0,0 @@
---
# Handler for dns

- name: restart named
  service: name=named state=restarted enabled=yes