diff --git a/hadoop/LICENSE.md b/hadoop/LICENSE.md deleted file mode 100644 index 2b437ec..0000000 --- a/hadoop/LICENSE.md +++ /dev/null @@ -1,4 +0,0 @@
-Copyright (C) 2013 AnsibleWorks, Inc.
-
-This work is licensed under the Creative Commons Attribution 3.0 Unported License.
-To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/deed.en_US.
diff --git a/hadoop/README.md b/hadoop/README.md deleted file mode 100644 index 8553d1c..0000000 --- a/hadoop/README.md +++ /dev/null @@ -1,363 +0,0 @@
-# Deploying Hadoop Clusters using Ansible
-
-## Preface
-
-The playbooks in this example are designed to deploy a Hadoop cluster on a CentOS 6 or RHEL 6 environment using Ansible. The playbooks can:
-
-1) Deploy a fully functional Hadoop cluster with HA and automatic failover.
-
-2) Deploy a fully functional Hadoop cluster with no HA.
-
-3) Deploy additional nodes to scale the cluster.
-
-These playbooks require Ansible 1.2, CentOS 6 or RHEL 6 target machines, and install the open-source Cloudera Hadoop Distribution (CDH) version 4.
-
-## Hadoop Components
-
-Hadoop is a framework that allows the processing of large datasets across large clusters. The two main components that make up a Hadoop cluster are the HDFS filesystem and the MapReduce framework. Briefly, the HDFS filesystem is responsible for storing data across the cluster nodes on its local disks. The MapReduce jobs are the tasks that run on these nodes to produce a meaningful result from the data stored on the HDFS filesystem.
-
-Let's have a closer look at each of these components.
-
-## HDFS
-
-![Alt text](images/hdfs.png "HDFS")
-
-The above diagram illustrates an HDFS filesystem. The cluster consists of three DataNodes, which are responsible for storing/replicating data, while the NameNode is a process responsible for storing the metadata for the entire filesystem. As the example above illustrates, when a client wants to write a file to the HDFS cluster it first contacts the NameNode and lets it know that it wants to write a file. The NameNode then decides where and how the file should be saved and notifies the client about its decision.
-
-In the given example "File1" has a size of 128 MB and the block size of the HDFS filesystem is 64 MB. Hence, the NameNode instructs the client to break down the file into two different blocks and write the first block to DataNode 1 and the second block to DataNode 2. Upon receiving the notification from the NameNode, the client contacts DataNode 1 and DataNode 2 directly to write the data.
-
-Once the data is received by the DataNodes, they replicate the blocks across the other nodes. The number of nodes across which the data is replicated is based on the HDFS configuration, the default value being 3. Meanwhile the NameNode updates its metadata with the entry of the new file "File1" and the locations where the parts are stored.
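This block placement can be observed on a live cluster. The following is a minimal sketch, not part of the original playbooks, assuming a running CDH4 cluster, a local file /tmp/File1, and the 'hdfs' user:

    # Upload a file and let the NameNode decide block placement
    su - hdfs -c "hadoop fs -put /tmp/File1 /File1"
    # List the file's blocks and the DataNodes holding each replica
    su - hdfs -c "hadoop fsck /File1 -files -blocks -locations"
    # Summarize DataNode capacity and replication health across the cluster
    su - hdfs -c "hadoop dfsadmin -report"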
-## MapReduce
-
-MapReduce is a Java application that utilizes the data stored in the HDFS filesystem to produce some useful and meaningful result. The whole job is split into two parts: the "Map" job and the "Reduce" job.
-
-Let's consider an example. In the previous step we uploaded "File1" into the HDFS filesystem and the file was broken down into two different blocks. Let's assume that the first block had the data "black sheep" in it and the second block has the data "white sheep" in it. Now let's assume a client wants to get a count of all the words occurring in "File1". In order to get the count, the first thing the client would have to do is write a "Map" program and then a "Reduce" program.
-
-Here's pseudo code of how the Map and Reduce jobs might look:
-
-    mapper (File1, file-contents):
-        for each word in file-contents:
-            emit (word, 1)
-
-    reducer (word, values):
-        sum = 0
-        for each value in values:
-            sum = sum + value
-        emit (word, sum)
-
-The work of the Map job is to go through all the words in the file and emit a key/value pair. In this case the key is the word itself and the value is always 1.
-
-The Reduce job is quite simple: for each value it receives, it adds that value to the running sum, and finally emits the word along with its total count.
-
-Once the Map and Reduce jobs are ready, the client would instruct the "JobTracker" (the process responsible for scheduling the jobs on the cluster) to run the MapReduce job on "File1".
-
-Let's have a closer look at the anatomy of a Map job.
-
-![Alt text](images/map.png "Map job")
-
-As the figure above shows, when the client instructs the JobTracker to run a job on File1, the JobTracker first contacts the NameNode to determine where the blocks of File1 are. Then the JobTracker sends the Map job's JAR file down to the nodes having the blocks, and the TaskTracker processes on those nodes run the application.
-
-In the above example, DataNode 1 and DataNode 2 have the blocks, so the TaskTrackers on those nodes run the Map jobs. Once the jobs are completed the two nodes would have key/value results as below:
-
-MapJob Results:
-
-    TaskTracker1:
-    "Black: 1"
-    "Sheep: 1"
-
-    TaskTracker2:
-    "White: 1"
-    "Sheep: 1"
-
-Once the Map phase is completed the JobTracker process initiates the Shuffle and Reduce process.
-
-Let's have a closer look at the Shuffle-Reduce job.
-
-![Alt text](images/reduce.png "Reduce job")
-
-As the figure above demonstrates, the first thing that the JobTracker does is spawn a Reducer job on the DataNode/TaskTracker nodes for each "key" in the job result. In this case we have three keys, "black", "white" and "sheep", in our result, so three Reducers are spawned: one for each key. The Map jobs shuffle the keys out to the respective Reduce jobs. Then the Reduce job code runs and the sum is calculated, and the result is written into the HDFS filesystem in a common directory. In the above example the output directory is specified as "/home/ben/output", so all the Reducers will write their results into this directory under different filenames; the file names being "part-00xx", where xx is the Reducer/partition number.
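The pseudo code above is essentially the classic word count example, and CDH4 ships a runnable version of it. The sketch below is not part of the original playbooks; the jar path is the one used later in this README, while the output directory name is an assumption:

    # Run the bundled word count job over File1 as the hdfs user
    su - hdfs -c "hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar wordcount /File1 /wordcount-out"
    # Each Reducer writes its share of the counts to a part-00xx file
    su - hdfs -c "hadoop fs -cat /wordcount-out/part-*"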
-## Hadoop Deployment
-
-![Alt text](images/hadoop.png "Hadoop deployment")
-
-The above diagram depicts a typical Hadoop deployment. The NameNode and JobTracker usually reside on the same machine, though they can run on separate machines. The DataNodes and TaskTrackers run on the same nodes. The size of the cluster can be scaled to thousands of nodes with petabytes of storage.
-
-The above deployment model provides redundancy for data, as the HDFS filesystem takes care of the data replication. The only single points of failure are the NameNode and the JobTracker; if either of these components fails, the cluster will not be usable.
-
-## Making Hadoop HA
-
-To make the Hadoop cluster highly available we would have to add another set of JobTracker/NameNodes, and make sure that the data updated by the primary is also kept in sync on the secondary. In case of failure of the primary node, the secondary node takes over that role.
-
-The first thing that has to be dealt with is the data held by the NameNode. As we recall, the NameNode holds all of the metadata about the filesystem, so any update to the metadata should also be reflected on the secondary NameNode's metadata copy. The synchronization of the primary and secondary NameNode metadata is handled by the Quorum Journal Manager.
-
-### Quorum Journal Manager
-
-![Alt text](images/qjm.png "QJM")
-
-As the figure above shows, the Quorum Journal Manager consists of the journal manager clients and the journal nodes. The journal manager clients reside on the same nodes as the NameNodes; the client on the primary node collects the edit logs as they happen on the NameNode and sends them out to the journal nodes. The journal manager client residing on the secondary NameNode regularly contacts the journal nodes and updates its local metadata to be consistent with the primary node. In case of primary node failure, the secondary NameNode applies the latest edit logs and takes over as the primary NameNode.
-
-### Zookeeper
-
-Apart from data consistency, a distributed cluster system also needs a mechanism for centralized coordination. For example, there should be a way for the secondary node to tell if the primary node is running properly, and if not it has to take up the role of the primary. Zookeeper provides Hadoop with a mechanism to coordinate in this way.
-
-![Alt text](images/zookeeper.png "Zookeeper")
-
-As the figure above shows, Zookeeper is a client/server-based service. The server component itself is replicated over a set of machines that comprise the service. In short, high availability is built into the Zookeeper servers.
-
-For Hadoop, two Zookeeper clients have been built: ZKFC (Zookeeper Failover Controller), one for the NameNode and one for the JobTracker. These clients run on the same machines as the NameNodes/JobTrackers themselves.
-
-When a ZKFC client is started, it establishes a connection with one of the Zookeeper nodes and obtains a session ID. The client then keeps a health check on the NameNode/JobTracker and sends heartbeats to ZooKeeper.
-
-If the ZKFC client detects a failure of the NameNode/JobTracker, it removes itself from the ZooKeeper active/standby election, and the other ZKFC client fences the node/service and takes over the primary role.
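The health of the ZooKeeper quorum itself can be checked from any shell. This is a sketch, not part of the original playbooks; it assumes the zhadoop* host names from the sample inventory below and the client port 2181 configured in group_vars/all:

    # "ruok" returns "imok" when a ZooKeeper server is healthy
    for zk in zhadoop1 zhadoop2 zhadoop3; do
        echo ruok | nc $zk 2181; echo
    done
    # "stat" reports whether a given server is the leader or a follower
    echo stat | nc zhadoop1 2181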
-## Hadoop HA Deployment
-
-![Alt text](images/hadoopha.png "Hadoop_HA")
-
-The above diagram depicts a fully HA Hadoop cluster with no single point of failure and automated failover.
-
-## Deploying Hadoop Clusters with Ansible
-
-Setting up a Hadoop cluster without HA can itself be a challenging and time-consuming task, and with HA, things become even more difficult.
-
-Ansible can automate the whole process of deploying a Hadoop cluster, with or without HA, with the same playbook, in a matter of minutes. This can be used for quick environment rebuilds; in case of disaster or node failures, recovery time can be greatly reduced with Ansible automation.
-
-Let's have a look to see how this is done.
-
-## Deploying a Hadoop cluster with HA
-
-### Prerequisites
-
-These playbooks have been tested using Ansible v1.2 and CentOS 6.x (64 bit).
-
-Modify group_vars/all to choose the network interface for Hadoop communication.
-
-Optionally, you can change Hadoop-specific parameters like ports or directories by editing the group_vars/all file.
-
-Before launching the deployment playbook, make sure the inventory file (hosts) is set up properly. Here's a sample:
-
-    [hadoop_master_primary]
-    zhadoop1
-
-    [hadoop_master_secondary]
-    zhadoop2
-
-    [hadoop_masters:children]
-    hadoop_master_primary
-    hadoop_master_secondary
-
-    [hadoop_slaves]
-    hadoop1
-    hadoop2
-    hadoop3
-
-    [qjournal_servers]
-    zhadoop1
-    zhadoop2
-    zhadoop3
-
-    [zookeeper_servers]
-    zhadoop1 zoo_id=1
-    zhadoop2 zoo_id=2
-    zhadoop3 zoo_id=3
-
-Once the inventory is set up, the Hadoop cluster can be set up using the following command:
-
-    ansible-playbook -i hosts site.yml
-
-Once deployed, we can check the cluster's sanity in different ways. To check the status of the HDFS filesystem and get a report on all the DataNodes, log in as the 'hdfs' user on any Hadoop master server, and issue the following command to get the report:
-
-    hadoop dfsadmin -report
-
-To check the sanity of HA, first log in as the 'hdfs' user on any Hadoop master server and get the current active/standby NameNode servers this way:
-
-    -bash-4.1$ hdfs haadmin -getServiceState zhadoop1
-    active
-    -bash-4.1$ hdfs haadmin -getServiceState zhadoop2
-    standby
-
-To get the state of the JobTracker process, log in as the 'mapred' user on any Hadoop master server and issue the following command:
-
-    -bash-4.1$ hadoop mrhaadmin -getServiceState zhadoop1
-    standby
-    -bash-4.1$ hadoop mrhaadmin -getServiceState zhadoop2
-    active
-
-Once you have determined which server is active and which is standby, you can kill the NameNode/JobTracker process on the server listed as active and issue the same commands again, and you should see that the standby has been promoted to the active state. Later, you can restart the killed process and see that node listed as standby.
-
-### Running a MapReduce Job
-
-To deploy the MapReduce job, run the following script from any of the Hadoop master nodes as the user 'hdfs', e.g. su - hdfs -c "/tmp/job.sh". The job counts the number of occurrences of the word 'hello' in the given input file.
-
-    #!/bin/bash
-    cat > /tmp/inputfile << EOF
-    hello
-    sf
-    sdf
-    hello
-    sdf
-    sdf
-    EOF
-    hadoop fs -put /tmp/inputfile /inputfile
-    hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar grep /inputfile /outputfile 'hello'
-    hadoop fs -get /outputfile /tmp/outputfile/
-
-To verify the result, read the file on the server located at /tmp/outputfile/part-00000, which should give you the count.
-
-## Scale the Cluster
-
-When the Hadoop cluster reaches its maximum capacity, it can be scaled by adding nodes. This can be easily accomplished by adding the node hostname to the Ansible inventory under the hadoop_slaves group, and running the following command:
-
-    ansible-playbook -i hosts site.yml --tags=slaves
-
-## Deploy a non-HA Hadoop Cluster
-
-The following inventory describes a standalone (non-HA) Hadoop cluster. To deploy this cluster, fill in the inventory file as follows:
-
-    [hadoop_all:children]
-    hadoop_masters
-    hadoop_slaves
-
-    [hadoop_master_primary]
-    zhadoop1
-
-    [hadoop_master_secondary]
-
-    [hadoop_masters:children]
-    hadoop_master_primary
-    hadoop_master_secondary
-
-    [hadoop_slaves]
-    hadoop1
-    hadoop2
-    hadoop3
-
-Edit the group_vars/all file to disable HA:
-
-    ha_enabled: False
-
-And run the following command:
-
-    ansible-playbook -i hosts site.yml
-
-The validity of the cluster can be checked by running the same MapReduce job that was documented above for an HA Hadoop cluster.
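As a final check (sketched here, not part of the original README), the same validation used for the HA cluster applies to the non-HA variant; paths and user names are the ones introduced earlier in this document:

    # Confirm all three DataNodes have registered with the NameNode
    su - hdfs -c "hadoop dfsadmin -report"
    # Run the word count script and read back the result
    su - hdfs -c "/tmp/job.sh"
    cat /tmp/outputfile/part-00000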
-
diff --git a/hadoop/group_vars/all b/hadoop/group_vars/all deleted file mode 100644 index 019dc13..0000000 --- a/hadoop/group_vars/all +++ /dev/null @@ -1,56 +0,0 @@
-# Defaults to the first ethernet interface. Change this to:
-#
-# iface: eth1
-#
-# ...to override.
-#
-iface: '{{ ansible_default_ipv4.interface }}'
-
-ha_enabled: False
-
-hadoop:
-
-# Variables for common
-
-  fs_default_FS_port: 8020
-  nameservice_id: mycluster4
-
-# Variables for HDFS
-
-  dfs_permissions_superusergroup: hdfs
-  dfs_namenode_name_dir:
-    - /namedir1/
-    - /namedir2/
-  dfs_replication: 3
-  dfs_namenode_handler_count: 50
-  dfs_blocksize: 67108864
-  dfs_datanode_data_dir:
-    - /datadir1/
-    - /datadir2/
-  dfs_datanode_address_port: 50010
-  dfs_datanode_http_address_port: 50075
-  dfs_datanode_ipc_address_port: 50020
-  dfs_namenode_http_address_port: 50070
-  dfs_ha_zkfc_port: 8019
-  qjournal_port: 8485
-  qjournal_http_port: 8480
-  dfs_journalnode_edits_dir: /journaldir/
-  zookeeper_clientport: 2181
-  zookeeper_leader_port: 2888
-  zookeeper_election_port: 3888
-
-# Variables for MapReduce
-  mapred_job_tracker_ha_servicename: myjt4
-  mapred_job_tracker_http_address_port: 50030
-  mapred_task_tracker_http_address_port: 50060
-  mapred_job_tracker_port: 8021
-  mapred_ha_jobtracker_rpc-address_port: 8023
-  mapred_ha_zkfc_port: 8018
-  mapred_job_tracker_persist_jobstatus_dir: /jobdir/
-  mapred_local_dir:
-    - /mapred1/
-    - /mapred2/
-
-
-
diff --git a/hadoop/hosts b/hadoop/hosts deleted file mode 100644 index 06cc524..0000000 --- a/hadoop/hosts +++ /dev/null @@ -1,31 +0,0 @@
-[hadoop_all:children]
-hadoop_masters
-hadoop_slaves
-qjournal_servers
-zookeeper_servers
-
-[hadoop_master_primary]
-hadoop1
-
-[hadoop_master_secondary]
-hadoop2
-
-[hadoop_masters:children]
-hadoop_master_primary
-hadoop_master_secondary
-
-[hadoop_slaves]
-hadoop1
-hadoop2
-hadoop3
-
-[qjournal_servers]
-hadoop1
-hadoop2
-hadoop3
-
-[zookeeper_servers]
-hadoop1 zoo_id=1
-hadoop2 zoo_id=2
-hadoop3 zoo_id=3
-
diff --git a/hadoop/images/hadoop.png b/hadoop/images/hadoop.png deleted file mode 100644 index 37b09e2..0000000 Binary files a/hadoop/images/hadoop.png and /dev/null differ diff --git a/hadoop/images/hadoopha.png b/hadoop/images/hadoopha.png deleted file mode 100644 index 895b2aa..0000000 Binary files a/hadoop/images/hadoopha.png and /dev/null differ diff --git a/hadoop/images/hdfs.png b/hadoop/images/hdfs.png deleted file mode 100644 index 475b483..0000000 Binary files a/hadoop/images/hdfs.png and /dev/null differ diff --git a/hadoop/images/map.png b/hadoop/images/map.png deleted file mode 100644 index 448c73d..0000000 Binary files a/hadoop/images/map.png and /dev/null differ diff --git a/hadoop/images/qjm.png b/hadoop/images/qjm.png deleted file mode 100644 index d2c7896..0000000 Binary files a/hadoop/images/qjm.png and /dev/null differ diff --git a/hadoop/images/reduce.png b/hadoop/images/reduce.png deleted file mode 100644 index 9a7d1c5..0000000 Binary files a/hadoop/images/reduce.png and /dev/null differ diff --git a/hadoop/images/zookeeper.png b/hadoop/images/zookeeper.png deleted file mode 100644 index e9cda52..0000000 Binary files a/hadoop/images/zookeeper.png and /dev/null differ diff --git a/hadoop/playbooks/inputfile b/hadoop/playbooks/inputfile deleted file mode 100644 index 00dbe3f..0000000 --- a/hadoop/playbooks/inputfile +++ /dev/null @@ -1,19 +0,0 @@
-asdf
-sdf
-sdf
-sd
-f
-sf
-sdf
-sd
-fsd
-hello
-asf
-sf
-sd
-fsd
-f
-sdf
-sd
-hello
-
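The job.yml playbook that follows automates the same word count flow end to end against a chosen host. A hedged invocation sketch; the relative paths and the value passed for $server are assumptions based on this repository's layout:

    # From the hadoop/ directory, run the job against one master (Ansible 1.2)
    ansible-playbook -i hosts playbooks/job.yml --extra-vars "server=hadoop1"
    # The fetch task stores the result under dest/<hostname>/<remote path>
    cat /tmp/hadoop1/tmp/outputfile/part-00000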
diff --git a/hadoop/playbooks/job.yml b/hadoop/playbooks/job.yml deleted file mode 100644 index a31e95e..0000000 --- a/hadoop/playbooks/job.yml +++ /dev/null @@ -1,21 +0,0 @@
----
-# Launch a job to count occurrences of a word.
-
-- hosts: $server
-  user: root
-  tasks:
-    - name: copy the file
-      copy: src=inputfile dest=/tmp/inputfile
-
-    - name: upload the file
-      shell: su - hdfs -c "hadoop fs -put /tmp/inputfile /inputfile"
-
-    - name: Run the MapReduce job to count the occurrences of the word hello
-      shell: su - hdfs -c "hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar grep /inputfile /outputfile 'hello'"
-
-    - name: Fetch the outputfile to local tmp dir
-      shell: su - hdfs -c "hadoop fs -get /outputfile /tmp/outputfile"
-
-    - name: Get the outputfile to ansible server
-      fetch: dest=/tmp src=/tmp/outputfile/part-00000
-
diff --git a/hadoop/roles/common/files/etc/cloudera-CDH4.repo b/hadoop/roles/common/files/etc/cloudera-CDH4.repo deleted file mode 100644 index 249f664..0000000 --- a/hadoop/roles/common/files/etc/cloudera-CDH4.repo +++ /dev/null @@ -1,5 +0,0 @@
-[cloudera-cdh4]
-name=Cloudera's Distribution for Hadoop, Version 4
-baseurl=http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/4/
-gpgkey = http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
-gpgcheck = 1
diff --git a/hadoop/roles/common/handlers/main.yml b/hadoop/roles/common/handlers/main.yml deleted file mode 100644 index 8860ce2..0000000 --- a/hadoop/roles/common/handlers/main.yml +++ /dev/null @@ -1,2 +0,0 @@
-- name: restart iptables
-  service: name=iptables state=restarted
diff --git a/hadoop/roles/common/tasks/common.yml b/hadoop/roles/common/tasks/common.yml deleted file mode 100644 index 34485b8..0000000 --- a/hadoop/roles/common/tasks/common.yml +++ /dev/null @@ -1,28 +0,0 @@
----
-# The playbook for common tasks
-
-- name: Deploy the Cloudera Repository
-  copy: src=etc/cloudera-CDH4.repo dest=/etc/yum.repos.d/cloudera-CDH4.repo
-
-- name: Install the libselinux-python package
-  yum: name=libselinux-python state=installed
-
-- name: Install the openjdk package
-  yum: name=java-1.6.0-openjdk state=installed
-
-- name: Create a directory for java
-  file: state=directory path=/usr/java/
-
-- name: Create a link for java
-  file: src=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre state=link path=/usr/java/default
-
-- name: Create the hosts file for all machines
-  template: src=etc/hosts.j2 dest=/etc/hosts
-
-- name: Disable SELinux in conf file
-  selinux: state=disabled
-
-- name: Create the iptables file for all machines
-  template: src=iptables.j2 dest=/etc/sysconfig/iptables
-  notify: restart iptables
-
diff --git a/hadoop/roles/common/tasks/main.yml b/hadoop/roles/common/tasks/main.yml deleted file mode 100644 index ef6677f..0000000 --- a/hadoop/roles/common/tasks/main.yml +++ /dev/null @@ -1,5 +0,0 @@
----
-# The playbook for common tasks
-
-- include: common.yml tags=slaves
-
diff --git a/hadoop/roles/common/templates/etc/hosts.j2 b/hadoop/roles/common/templates/etc/hosts.j2 deleted file mode 100644 index df5ed34..0000000 --- a/hadoop/roles/common/templates/etc/hosts.j2 +++ /dev/null @@ -1,5 +0,0 @@
-127.0.0.1 localhost
-{% for host in groups.all %}
-{{ hostvars[host]['ansible_' + iface].ipv4.address }} {{ host }}
-{% endfor %}
-
diff --git a/hadoop/roles/common/templates/hadoop_conf/core-site.xml.j2 b/hadoop/roles/common/templates/hadoop_conf/core-site.xml.j2 deleted file mode 100644 index 75ac8f6..0000000 --- a/hadoop/roles/common/templates/hadoop_conf/core-site.xml.j2 +++ /dev/null @@ -1,25 +0,0 @@ - - - - - - - fs.defaultFS - hdfs://{{
hostvars[groups['hadoop_masters'][0]]['ansible_hostname'] + ':' ~ hadoop['fs_default_FS_port'] }}/ - - diff --git a/hadoop/roles/common/templates/hadoop_conf/hadoop-metrics.properties.j2 b/hadoop/roles/common/templates/hadoop_conf/hadoop-metrics.properties.j2 deleted file mode 100644 index c1b2eb7..0000000 --- a/hadoop/roles/common/templates/hadoop_conf/hadoop-metrics.properties.j2 +++ /dev/null @@ -1,75 +0,0 @@ -# Configuration of the "dfs" context for null -dfs.class=org.apache.hadoop.metrics.spi.NullContext - -# Configuration of the "dfs" context for file -#dfs.class=org.apache.hadoop.metrics.file.FileContext -#dfs.period=10 -#dfs.fileName=/tmp/dfsmetrics.log - -# Configuration of the "dfs" context for ganglia -# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter) -# dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext -# dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31 -# dfs.period=10 -# dfs.servers=localhost:8649 - - -# Configuration of the "mapred" context for null -mapred.class=org.apache.hadoop.metrics.spi.NullContext - -# Configuration of the "mapred" context for file -#mapred.class=org.apache.hadoop.metrics.file.FileContext -#mapred.period=10 -#mapred.fileName=/tmp/mrmetrics.log - -# Configuration of the "mapred" context for ganglia -# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter) -# mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext -# mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31 -# mapred.period=10 -# mapred.servers=localhost:8649 - - -# Configuration of the "jvm" context for null -#jvm.class=org.apache.hadoop.metrics.spi.NullContext - -# Configuration of the "jvm" context for file -#jvm.class=org.apache.hadoop.metrics.file.FileContext -#jvm.period=10 -#jvm.fileName=/tmp/jvmmetrics.log - -# Configuration of the "jvm" context for ganglia -# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext -# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31 -# jvm.period=10 -# jvm.servers=localhost:8649 - -# Configuration of the "rpc" context for null -rpc.class=org.apache.hadoop.metrics.spi.NullContext - -# Configuration of the "rpc" context for file -#rpc.class=org.apache.hadoop.metrics.file.FileContext -#rpc.period=10 -#rpc.fileName=/tmp/rpcmetrics.log - -# Configuration of the "rpc" context for ganglia -# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext -# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31 -# rpc.period=10 -# rpc.servers=localhost:8649 - - -# Configuration of the "ugi" context for null -ugi.class=org.apache.hadoop.metrics.spi.NullContext - -# Configuration of the "ugi" context for file -#ugi.class=org.apache.hadoop.metrics.file.FileContext -#ugi.period=10 -#ugi.fileName=/tmp/ugimetrics.log - -# Configuration of the "ugi" context for ganglia -# ugi.class=org.apache.hadoop.metrics.ganglia.GangliaContext -# ugi.class=org.apache.hadoop.metrics.ganglia.GangliaContext31 -# ugi.period=10 -# ugi.servers=localhost:8649 - diff --git a/hadoop/roles/common/templates/hadoop_conf/hadoop-metrics2.properties.j2 b/hadoop/roles/common/templates/hadoop_conf/hadoop-metrics2.properties.j2 deleted file mode 100644 index c3ffe31..0000000 --- a/hadoop/roles/common/templates/hadoop_conf/hadoop-metrics2.properties.j2 +++ /dev/null @@ -1,44 +0,0 @@ -# -# Licensed to the Apache Software Foundation (ASF) under one or more -# contributor license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright ownership. 
-# The ASF licenses this file to You under the Apache License, Version 2.0 -# (the "License"); you may not use this file except in compliance with -# the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# - -# syntax: [prefix].[source|sink].[instance].[options] -# See javadoc of package-info.java for org.apache.hadoop.metrics2 for details - -*.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink -# default sampling period, in seconds -*.period=10 - -# The namenode-metrics.out will contain metrics from all context -#namenode.sink.file.filename=namenode-metrics.out -# Specifying a special sampling period for namenode: -#namenode.sink.*.period=8 - -#datanode.sink.file.filename=datanode-metrics.out - -# the following example split metrics of different -# context to different sinks (in this case files) -#jobtracker.sink.file_jvm.context=jvm -#jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out -#jobtracker.sink.file_mapred.context=mapred -#jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out - -#tasktracker.sink.file.filename=tasktracker-metrics.out - -#maptask.sink.file.filename=maptask-metrics.out - -#reducetask.sink.file.filename=reducetask-metrics.out - diff --git a/hadoop/roles/common/templates/hadoop_conf/hdfs-site.xml.j2 b/hadoop/roles/common/templates/hadoop_conf/hdfs-site.xml.j2 deleted file mode 100644 index 0c537fc..0000000 --- a/hadoop/roles/common/templates/hadoop_conf/hdfs-site.xml.j2 +++ /dev/null @@ -1,57 +0,0 @@ - - - - - - - dfs.blocksize - {{ hadoop['dfs_blocksize'] }} - - - dfs.permissions.superusergroup - {{ hadoop['dfs_permissions_superusergroup'] }} - - - dfs.namenode.http.address - 0.0.0.0:{{ hadoop['dfs_namenode_http_address_port'] }} - - - dfs.datanode.address - 0.0.0.0:{{ hadoop['dfs_datanode_address_port'] }} - - - dfs.datanode.http.address - 0.0.0.0:{{ hadoop['dfs_datanode_http_address_port'] }} - - - dfs.datanode.ipc.address - 0.0.0.0:{{ hadoop['dfs_datanode_ipc_address_port'] }} - - - dfs.replication - {{ hadoop['dfs_replication'] }} - - - dfs.namenode.name.dir - {{ hadoop['dfs_namenode_name_dir'] | join(',') }} - - - dfs.datanode.data.dir - {{ hadoop['dfs_datanode_data_dir'] | join(',') }} - - diff --git a/hadoop/roles/common/templates/hadoop_conf/log4j.properties.j2 b/hadoop/roles/common/templates/hadoop_conf/log4j.properties.j2 deleted file mode 100644 index b92ad27..0000000 --- a/hadoop/roles/common/templates/hadoop_conf/log4j.properties.j2 +++ /dev/null @@ -1,219 +0,0 @@ -# Copyright 2011 The Apache Software Foundation -# -# Licensed to the Apache Software Foundation (ASF) under one -# or more contributor license agreements. See the NOTICE file -# distributed with this work for additional information -# regarding copyright ownership. The ASF licenses this file -# to you under the Apache License, Version 2.0 (the -# "License"); you may not use this file except in compliance -# with the License. 
You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# Define some default values that can be overridden by system properties -hadoop.root.logger=INFO,console -hadoop.log.dir=. -hadoop.log.file=hadoop.log - -# Define the root logger to the system property "hadoop.root.logger". -log4j.rootLogger=${hadoop.root.logger}, EventCounter - -# Logging Threshold -log4j.threshold=ALL - -# Null Appender -log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender - -# -# Rolling File Appender - cap space usage at 5gb. -# -hadoop.log.maxfilesize=256MB -hadoop.log.maxbackupindex=20 -log4j.appender.RFA=org.apache.log4j.RollingFileAppender -log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file} - -log4j.appender.RFA.MaxFileSize=${hadoop.log.maxfilesize} -log4j.appender.RFA.MaxBackupIndex=${hadoop.log.maxbackupindex} - -log4j.appender.RFA.layout=org.apache.log4j.PatternLayout - -# Pattern format: Date LogLevel LoggerName LogMessage -log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n -# Debugging Pattern format -#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n - - -# -# Daily Rolling File Appender -# - -log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender -log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file} - -# Rollver at midnight -log4j.appender.DRFA.DatePattern=.yyyy-MM-dd - -# 30-day backup -#log4j.appender.DRFA.MaxBackupIndex=30 -log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout - -# Pattern format: Date LogLevel LoggerName LogMessage -log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n -# Debugging Pattern format -#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n - - -# -# console -# Add "console" to rootlogger above if you want to use this -# - -log4j.appender.console=org.apache.log4j.ConsoleAppender -log4j.appender.console.target=System.err -log4j.appender.console.layout=org.apache.log4j.PatternLayout -log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n - -# -# TaskLog Appender -# - -#Default values -hadoop.tasklog.taskid=null -hadoop.tasklog.iscleanup=false -hadoop.tasklog.noKeepSplits=4 -hadoop.tasklog.totalLogFileSize=100 -hadoop.tasklog.purgeLogSplits=true -hadoop.tasklog.logsRetainHours=12 - -log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender -log4j.appender.TLA.taskId=${hadoop.tasklog.taskid} -log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup} -log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize} - -log4j.appender.TLA.layout=org.apache.log4j.PatternLayout -log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n - -# -# HDFS block state change log from block manager -# -# Uncomment the following to suppress normal block state change -# messages from BlockManager in NameNode. 
-#log4j.logger.BlockStateChange=WARN - -# -#Security appender -# -hadoop.security.logger=INFO,NullAppender -hadoop.security.log.maxfilesize=256MB -hadoop.security.log.maxbackupindex=20 -log4j.category.SecurityLogger=${hadoop.security.logger} -hadoop.security.log.file=SecurityAuth-${user.name}.audit -log4j.appender.RFAS=org.apache.log4j.RollingFileAppender -log4j.appender.RFAS.File=${hadoop.log.dir}/${hadoop.security.log.file} -log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout -log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n -log4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize} -log4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex} - -# -# Daily Rolling Security appender -# -log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender -log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file} -log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout -log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n -log4j.appender.DRFAS.DatePattern=.yyyy-MM-dd - -# -# hdfs audit logging -# -hdfs.audit.logger=INFO,NullAppender -hdfs.audit.log.maxfilesize=256MB -hdfs.audit.log.maxbackupindex=20 -log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger} -log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false -log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender -log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log -log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout -log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n -log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize} -log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex} - -# -# mapred audit logging -# -mapred.audit.logger=INFO,NullAppender -mapred.audit.log.maxfilesize=256MB -mapred.audit.log.maxbackupindex=20 -log4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger} -log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false -log4j.appender.MRAUDIT=org.apache.log4j.RollingFileAppender -log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log -log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout -log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n -log4j.appender.MRAUDIT.MaxFileSize=${mapred.audit.log.maxfilesize} -log4j.appender.MRAUDIT.MaxBackupIndex=${mapred.audit.log.maxbackupindex} - -# Custom Logging levels - -#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG -#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG -#log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=DEBUG - -# Jets3t library -log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR - -# -# Event Counter Appender -# Sends counts of logging messages at different severity levels to Hadoop Metrics. 
-# -log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter - -# -# Job Summary Appender -# -# Use following logger to send summary to separate file defined by -# hadoop.mapreduce.jobsummary.log.file : -# hadoop.mapreduce.jobsummary.logger=INFO,JSA -# -hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger} -hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log -hadoop.mapreduce.jobsummary.log.maxfilesize=256MB -hadoop.mapreduce.jobsummary.log.maxbackupindex=20 -log4j.appender.JSA=org.apache.log4j.RollingFileAppender -log4j.appender.JSA.File=${hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file} -log4j.appender.JSA.MaxFileSize=${hadoop.mapreduce.jobsummary.log.maxfilesize} -log4j.appender.JSA.MaxBackupIndex=${hadoop.mapreduce.jobsummary.log.maxbackupindex} -log4j.appender.JSA.layout=org.apache.log4j.PatternLayout -log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n -log4j.logger.org.apache.hadoop.mapred.JobInProgress$JobSummary=${hadoop.mapreduce.jobsummary.logger} -log4j.additivity.org.apache.hadoop.mapred.JobInProgress$JobSummary=false - -# -# Yarn ResourceManager Application Summary Log -# -# Set the ResourceManager summary log filename -#yarn.server.resourcemanager.appsummary.log.file=rm-appsummary.log -# Set the ResourceManager summary log level and appender -#yarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY - -# Appender for ResourceManager Application Summary Log -# Requires the following properties to be set -# - hadoop.log.dir (Hadoop Log directory) -# - yarn.server.resourcemanager.appsummary.log.file (resource manager app summary log filename) -# - yarn.server.resourcemanager.appsummary.logger (resource manager app summary log level and appender) - -#log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger} -#log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false -#log4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender -#log4j.appender.RMSUMMARY.File=${hadoop.log.dir}/${yarn.server.resourcemanager.appsummary.log.file} -#log4j.appender.RMSUMMARY.MaxFileSize=256MB -#log4j.appender.RMSUMMARY.MaxBackupIndex=20 -#log4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout -#log4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n diff --git a/hadoop/roles/common/templates/hadoop_conf/mapred-site.xml.j2 b/hadoop/roles/common/templates/hadoop_conf/mapred-site.xml.j2 deleted file mode 100644 index 0941698..0000000 --- a/hadoop/roles/common/templates/hadoop_conf/mapred-site.xml.j2 +++ /dev/null @@ -1,22 +0,0 @@ - - - - mapred.job.tracker - {{ hostvars[groups['hadoop_masters'][0]]['ansible_hostname'] }}:{{ hadoop['mapred_job_tracker_port'] }} - - - - mapred.local.dir - {{ hadoop["mapred_local_dir"] | join(',') }} - - - - mapred.task.tracker.http.address - 0.0.0.0:{{ hadoop['mapred_task_tracker_http_address_port'] }} - - - mapred.job.tracker.http.address - 0.0.0.0:{{ hadoop['mapred_job_tracker_http_address_port'] }} - - - diff --git a/hadoop/roles/common/templates/hadoop_conf/slaves.j2 b/hadoop/roles/common/templates/hadoop_conf/slaves.j2 deleted file mode 100644 index 44f97e2..0000000 --- a/hadoop/roles/common/templates/hadoop_conf/slaves.j2 +++ /dev/null @@ -1,3 +0,0 @@ -{% for host in groups['hadoop_slaves'] %} -{{ host }} -{% endfor %} diff --git a/hadoop/roles/common/templates/hadoop_conf/ssl-client.xml.example.j2 
b/hadoop/roles/common/templates/hadoop_conf/ssl-client.xml.example.j2 deleted file mode 100644 index a50dce4..0000000 --- a/hadoop/roles/common/templates/hadoop_conf/ssl-client.xml.example.j2 +++ /dev/null @@ -1,80 +0,0 @@ - - - - - - - ssl.client.truststore.location - - Truststore to be used by clients like distcp. Must be - specified. - - - - - ssl.client.truststore.password - - Optional. Default value is "". - - - - - ssl.client.truststore.type - jks - Optional. The keystore file format, default value is "jks". - - - - - ssl.client.truststore.reload.interval - 10000 - Truststore reload check interval, in milliseconds. - Default value is 10000 (10 seconds). - - - - - ssl.client.keystore.location - - Keystore to be used by clients like distcp. Must be - specified. - - - - - ssl.client.keystore.password - - Optional. Default value is "". - - - - - ssl.client.keystore.keypassword - - Optional. Default value is "". - - - - - ssl.client.keystore.type - jks - Optional. The keystore file format, default value is "jks". - - - - diff --git a/hadoop/roles/common/templates/hadoop_conf/ssl-server.xml.example.j2 b/hadoop/roles/common/templates/hadoop_conf/ssl-server.xml.example.j2 deleted file mode 100644 index 4b363ff..0000000 --- a/hadoop/roles/common/templates/hadoop_conf/ssl-server.xml.example.j2 +++ /dev/null @@ -1,77 +0,0 @@ - - - - - - - ssl.server.truststore.location - - Truststore to be used by NN and DN. Must be specified. - - - - - ssl.server.truststore.password - - Optional. Default value is "". - - - - - ssl.server.truststore.type - jks - Optional. The keystore file format, default value is "jks". - - - - - ssl.server.truststore.reload.interval - 10000 - Truststore reload check interval, in milliseconds. - Default value is 10000 (10 seconds). - - - - ssl.server.keystore.location - - Keystore to be used by NN and DN. Must be specified. - - - - - ssl.server.keystore.password - - Must be specified. - - - - - ssl.server.keystore.keypassword - - Must be specified. - - - - - ssl.server.keystore.type - jks - Optional. The keystore file format, default value is "jks". 
- - - - diff --git a/hadoop/roles/common/templates/hadoop_ha_conf/core-site.xml.j2 b/hadoop/roles/common/templates/hadoop_ha_conf/core-site.xml.j2 deleted file mode 100644 index 62f355d..0000000 --- a/hadoop/roles/common/templates/hadoop_ha_conf/core-site.xml.j2 +++ /dev/null @@ -1,25 +0,0 @@ - - - - - - - fs.defaultFS - hdfs://{{ hadoop['nameservice_id'] }}/ - - diff --git a/hadoop/roles/common/templates/hadoop_ha_conf/hadoop-metrics.properties.j2 b/hadoop/roles/common/templates/hadoop_ha_conf/hadoop-metrics.properties.j2 deleted file mode 100644 index c1b2eb7..0000000 --- a/hadoop/roles/common/templates/hadoop_ha_conf/hadoop-metrics.properties.j2 +++ /dev/null @@ -1,75 +0,0 @@ -# Configuration of the "dfs" context for null -dfs.class=org.apache.hadoop.metrics.spi.NullContext - -# Configuration of the "dfs" context for file -#dfs.class=org.apache.hadoop.metrics.file.FileContext -#dfs.period=10 -#dfs.fileName=/tmp/dfsmetrics.log - -# Configuration of the "dfs" context for ganglia -# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter) -# dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext -# dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31 -# dfs.period=10 -# dfs.servers=localhost:8649 - - -# Configuration of the "mapred" context for null -mapred.class=org.apache.hadoop.metrics.spi.NullContext - -# Configuration of the "mapred" context for file -#mapred.class=org.apache.hadoop.metrics.file.FileContext -#mapred.period=10 -#mapred.fileName=/tmp/mrmetrics.log - -# Configuration of the "mapred" context for ganglia -# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter) -# mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext -# mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31 -# mapred.period=10 -# mapred.servers=localhost:8649 - - -# Configuration of the "jvm" context for null -#jvm.class=org.apache.hadoop.metrics.spi.NullContext - -# Configuration of the "jvm" context for file -#jvm.class=org.apache.hadoop.metrics.file.FileContext -#jvm.period=10 -#jvm.fileName=/tmp/jvmmetrics.log - -# Configuration of the "jvm" context for ganglia -# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext -# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31 -# jvm.period=10 -# jvm.servers=localhost:8649 - -# Configuration of the "rpc" context for null -rpc.class=org.apache.hadoop.metrics.spi.NullContext - -# Configuration of the "rpc" context for file -#rpc.class=org.apache.hadoop.metrics.file.FileContext -#rpc.period=10 -#rpc.fileName=/tmp/rpcmetrics.log - -# Configuration of the "rpc" context for ganglia -# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext -# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31 -# rpc.period=10 -# rpc.servers=localhost:8649 - - -# Configuration of the "ugi" context for null -ugi.class=org.apache.hadoop.metrics.spi.NullContext - -# Configuration of the "ugi" context for file -#ugi.class=org.apache.hadoop.metrics.file.FileContext -#ugi.period=10 -#ugi.fileName=/tmp/ugimetrics.log - -# Configuration of the "ugi" context for ganglia -# ugi.class=org.apache.hadoop.metrics.ganglia.GangliaContext -# ugi.class=org.apache.hadoop.metrics.ganglia.GangliaContext31 -# ugi.period=10 -# ugi.servers=localhost:8649 - diff --git a/hadoop/roles/common/templates/hadoop_ha_conf/hadoop-metrics2.properties.j2 b/hadoop/roles/common/templates/hadoop_ha_conf/hadoop-metrics2.properties.j2 deleted file mode 100644 index c3ffe31..0000000 --- a/hadoop/roles/common/templates/hadoop_ha_conf/hadoop-metrics2.properties.j2 +++ 
/dev/null @@ -1,44 +0,0 @@ -# -# Licensed to the Apache Software Foundation (ASF) under one or more -# contributor license agreements. See the NOTICE file distributed with -# this work for additional information regarding copyright ownership. -# The ASF licenses this file to You under the Apache License, Version 2.0 -# (the "License"); you may not use this file except in compliance with -# the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# - -# syntax: [prefix].[source|sink].[instance].[options] -# See javadoc of package-info.java for org.apache.hadoop.metrics2 for details - -*.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink -# default sampling period, in seconds -*.period=10 - -# The namenode-metrics.out will contain metrics from all context -#namenode.sink.file.filename=namenode-metrics.out -# Specifying a special sampling period for namenode: -#namenode.sink.*.period=8 - -#datanode.sink.file.filename=datanode-metrics.out - -# the following example split metrics of different -# context to different sinks (in this case files) -#jobtracker.sink.file_jvm.context=jvm -#jobtracker.sink.file_jvm.filename=jobtracker-jvm-metrics.out -#jobtracker.sink.file_mapred.context=mapred -#jobtracker.sink.file_mapred.filename=jobtracker-mapred-metrics.out - -#tasktracker.sink.file.filename=tasktracker-metrics.out - -#maptask.sink.file.filename=maptask-metrics.out - -#reducetask.sink.file.filename=reducetask-metrics.out - diff --git a/hadoop/roles/common/templates/hadoop_ha_conf/hdfs-site.xml.j2 b/hadoop/roles/common/templates/hadoop_ha_conf/hdfs-site.xml.j2 deleted file mode 100644 index 7dadd91..0000000 --- a/hadoop/roles/common/templates/hadoop_ha_conf/hdfs-site.xml.j2 +++ /dev/null @@ -1,103 +0,0 @@ - - - - - - dfs.nameservices - {{ hadoop['nameservice_id'] }} - - - dfs.ha.namenodes.{{ hadoop['nameservice_id'] }} - {{ groups.hadoop_masters | join(',') }} - - - dfs.blocksize - {{ hadoop['dfs_blocksize'] }} - - - dfs.permissions.superusergroup - {{ hadoop['dfs_permissions_superusergroup'] }} - - - dfs.ha.automatic-failover.enabled - true - - - ha.zookeeper.quorum - {{ groups.zookeeper_servers | join(':' ~ hadoop['zookeeper_clientport'] + ',') }}:{{ hadoop['zookeeper_clientport'] }} - - -{% for host in groups['hadoop_masters'] %} - - dfs.namenode.rpc-address.{{ hadoop['nameservice_id'] }}.{{ host }} - {{ host }}:{{ hadoop['fs_default_FS_port'] }} - -{% endfor %} -{% for host in groups['hadoop_masters'] %} - - dfs.namenode.http-address.{{ hadoop['nameservice_id'] }}.{{ host }} - {{ host }}:{{ hadoop['dfs_namenode_http_address_port'] }} - -{% endfor %} - - dfs.namenode.shared.edits.dir - qjournal://{{ groups.qjournal_servers | join(':' ~ hadoop['qjournal_port'] + ';') }}:{{ hadoop['qjournal_port'] }}/{{ hadoop['nameservice_id'] }} - - - dfs.journalnode.edits.dir - {{ hadoop['dfs_journalnode_edits_dir'] }} - - - dfs.client.failover.proxy.provider.{{ hadoop['nameservice_id'] }} - org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider - - - dfs.ha.fencing.methods - shell(/bin/true ) - - - - dfs.ha.zkfc.port - {{ hadoop['dfs_ha_zkfc_port'] }} - - - - dfs.datanode.address - 0.0.0.0:{{ 
hadoop['dfs_datanode_address_port'] }} - - - dfs.datanode.http.address - 0.0.0.0:{{ hadoop['dfs_datanode_http_address_port'] }} - - - dfs.datanode.ipc.address - 0.0.0.0:{{ hadoop['dfs_datanode_ipc_address_port'] }} - - - dfs.replication - {{ hadoop['dfs_replication'] }} - - - dfs.namenode.name.dir - {{ hadoop['dfs_namenode_name_dir'] | join(',') }} - - - dfs.datanode.data.dir - {{ hadoop['dfs_datanode_data_dir'] | join(',') }} - - diff --git a/hadoop/roles/common/templates/hadoop_ha_conf/log4j.properties.j2 b/hadoop/roles/common/templates/hadoop_ha_conf/log4j.properties.j2 deleted file mode 100644 index b92ad27..0000000 --- a/hadoop/roles/common/templates/hadoop_ha_conf/log4j.properties.j2 +++ /dev/null @@ -1,219 +0,0 @@ -# Copyright 2011 The Apache Software Foundation -# -# Licensed to the Apache Software Foundation (ASF) under one -# or more contributor license agreements. See the NOTICE file -# distributed with this work for additional information -# regarding copyright ownership. The ASF licenses this file -# to you under the Apache License, Version 2.0 (the -# "License"); you may not use this file except in compliance -# with the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# Define some default values that can be overridden by system properties -hadoop.root.logger=INFO,console -hadoop.log.dir=. -hadoop.log.file=hadoop.log - -# Define the root logger to the system property "hadoop.root.logger". -log4j.rootLogger=${hadoop.root.logger}, EventCounter - -# Logging Threshold -log4j.threshold=ALL - -# Null Appender -log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender - -# -# Rolling File Appender - cap space usage at 5gb. 
-# -hadoop.log.maxfilesize=256MB -hadoop.log.maxbackupindex=20 -log4j.appender.RFA=org.apache.log4j.RollingFileAppender -log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file} - -log4j.appender.RFA.MaxFileSize=${hadoop.log.maxfilesize} -log4j.appender.RFA.MaxBackupIndex=${hadoop.log.maxbackupindex} - -log4j.appender.RFA.layout=org.apache.log4j.PatternLayout - -# Pattern format: Date LogLevel LoggerName LogMessage -log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n -# Debugging Pattern format -#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n - - -# -# Daily Rolling File Appender -# - -log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender -log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file} - -# Rollver at midnight -log4j.appender.DRFA.DatePattern=.yyyy-MM-dd - -# 30-day backup -#log4j.appender.DRFA.MaxBackupIndex=30 -log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout - -# Pattern format: Date LogLevel LoggerName LogMessage -log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n -# Debugging Pattern format -#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n - - -# -# console -# Add "console" to rootlogger above if you want to use this -# - -log4j.appender.console=org.apache.log4j.ConsoleAppender -log4j.appender.console.target=System.err -log4j.appender.console.layout=org.apache.log4j.PatternLayout -log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n - -# -# TaskLog Appender -# - -#Default values -hadoop.tasklog.taskid=null -hadoop.tasklog.iscleanup=false -hadoop.tasklog.noKeepSplits=4 -hadoop.tasklog.totalLogFileSize=100 -hadoop.tasklog.purgeLogSplits=true -hadoop.tasklog.logsRetainHours=12 - -log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender -log4j.appender.TLA.taskId=${hadoop.tasklog.taskid} -log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup} -log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize} - -log4j.appender.TLA.layout=org.apache.log4j.PatternLayout -log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n - -# -# HDFS block state change log from block manager -# -# Uncomment the following to suppress normal block state change -# messages from BlockManager in NameNode. 
-#log4j.logger.BlockStateChange=WARN - -# -#Security appender -# -hadoop.security.logger=INFO,NullAppender -hadoop.security.log.maxfilesize=256MB -hadoop.security.log.maxbackupindex=20 -log4j.category.SecurityLogger=${hadoop.security.logger} -hadoop.security.log.file=SecurityAuth-${user.name}.audit -log4j.appender.RFAS=org.apache.log4j.RollingFileAppender -log4j.appender.RFAS.File=${hadoop.log.dir}/${hadoop.security.log.file} -log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout -log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n -log4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize} -log4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex} - -# -# Daily Rolling Security appender -# -log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender -log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file} -log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout -log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n -log4j.appender.DRFAS.DatePattern=.yyyy-MM-dd - -# -# hdfs audit logging -# -hdfs.audit.logger=INFO,NullAppender -hdfs.audit.log.maxfilesize=256MB -hdfs.audit.log.maxbackupindex=20 -log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger} -log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false -log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender -log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log -log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout -log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n -log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize} -log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex} - -# -# mapred audit logging -# -mapred.audit.logger=INFO,NullAppender -mapred.audit.log.maxfilesize=256MB -mapred.audit.log.maxbackupindex=20 -log4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger} -log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false -log4j.appender.MRAUDIT=org.apache.log4j.RollingFileAppender -log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log -log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout -log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n -log4j.appender.MRAUDIT.MaxFileSize=${mapred.audit.log.maxfilesize} -log4j.appender.MRAUDIT.MaxBackupIndex=${mapred.audit.log.maxbackupindex} - -# Custom Logging levels - -#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG -#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG -#log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=DEBUG - -# Jets3t library -log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR - -# -# Event Counter Appender -# Sends counts of logging messages at different severity levels to Hadoop Metrics. 
-# -log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter - -# -# Job Summary Appender -# -# Use following logger to send summary to separate file defined by -# hadoop.mapreduce.jobsummary.log.file : -# hadoop.mapreduce.jobsummary.logger=INFO,JSA -# -hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger} -hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log -hadoop.mapreduce.jobsummary.log.maxfilesize=256MB -hadoop.mapreduce.jobsummary.log.maxbackupindex=20 -log4j.appender.JSA=org.apache.log4j.RollingFileAppender -log4j.appender.JSA.File=${hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file} -log4j.appender.JSA.MaxFileSize=${hadoop.mapreduce.jobsummary.log.maxfilesize} -log4j.appender.JSA.MaxBackupIndex=${hadoop.mapreduce.jobsummary.log.maxbackupindex} -log4j.appender.JSA.layout=org.apache.log4j.PatternLayout -log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n -log4j.logger.org.apache.hadoop.mapred.JobInProgress$JobSummary=${hadoop.mapreduce.jobsummary.logger} -log4j.additivity.org.apache.hadoop.mapred.JobInProgress$JobSummary=false - -# -# Yarn ResourceManager Application Summary Log -# -# Set the ResourceManager summary log filename -#yarn.server.resourcemanager.appsummary.log.file=rm-appsummary.log -# Set the ResourceManager summary log level and appender -#yarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY - -# Appender for ResourceManager Application Summary Log -# Requires the following properties to be set -# - hadoop.log.dir (Hadoop Log directory) -# - yarn.server.resourcemanager.appsummary.log.file (resource manager app summary log filename) -# - yarn.server.resourcemanager.appsummary.logger (resource manager app summary log level and appender) - -#log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger} -#log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false -#log4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender -#log4j.appender.RMSUMMARY.File=${hadoop.log.dir}/${yarn.server.resourcemanager.appsummary.log.file} -#log4j.appender.RMSUMMARY.MaxFileSize=256MB -#log4j.appender.RMSUMMARY.MaxBackupIndex=20 -#log4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout -#log4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n diff --git a/hadoop/roles/common/templates/hadoop_ha_conf/mapred-site.xml.j2 b/hadoop/roles/common/templates/hadoop_ha_conf/mapred-site.xml.j2 deleted file mode 100644 index 4a839c9..0000000 --- a/hadoop/roles/common/templates/hadoop_ha_conf/mapred-site.xml.j2 +++ /dev/null @@ -1,120 +0,0 @@ - - - - mapred.job.tracker - {{ hadoop['mapred_job_tracker_ha_servicename'] }} - - - - mapred.jobtrackers.{{ hadoop['mapred_job_tracker_ha_servicename'] }} - {{ groups['hadoop_masters'] | join(',') }} - Comma-separated list of JobTracker IDs. 
- - - - mapred.ha.automatic-failover.enabled - true - - - - mapred.ha.zkfc.port - {{ hadoop['mapred_ha_zkfc_port'] }} - - - - mapred.ha.fencing.methods - shell(/bin/true) - - - - ha.zookeeper.quorum - {{ groups.zookeeper_servers | join(':' ~ hadoop['zookeeper_clientport'] + ',') }}:{{ hadoop['zookeeper_clientport'] }} - - -{% for host in groups['hadoop_masters'] %} - - mapred.jobtracker.rpc-address.{{ hadoop['mapred_job_tracker_ha_servicename'] }}.{{ host }} - {{ host }}:{{ hadoop['mapred_job_tracker_port'] }} - -{% endfor %} -{% for host in groups['hadoop_masters'] %} - - mapred.job.tracker.http.address.{{ hadoop['mapred_job_tracker_ha_servicename'] }}.{{ host }} - 0.0.0.0:{{ hadoop['mapred_job_tracker_http_address_port'] }} - -{% endfor %} -{% for host in groups['hadoop_masters'] %} - - mapred.ha.jobtracker.rpc-address.{{ hadoop['mapred_job_tracker_ha_servicename'] }}.{{ host }} - {{ host }}:{{ hadoop['mapred_ha_jobtracker_rpc-address_port'] }} - -{% endfor %} -{% for host in groups['hadoop_masters'] %} - - mapred.ha.jobtracker.http-redirect-address.{{ hadoop['mapred_job_tracker_ha_servicename'] }}.{{ host }} - {{ host }}:{{ hadoop['mapred_job_tracker_http_address_port'] }} - -{% endfor %} - - - mapred.jobtracker.restart.recover - true - - - - mapred.job.tracker.persist.jobstatus.active - true - - - - mapred.job.tracker.persist.jobstatus.hours - 1 - - - - mapred.job.tracker.persist.jobstatus.dir - {{ hadoop['mapred_job_tracker_persist_jobstatus_dir'] }} - - - - mapred.client.failover.proxy.provider.{{ hadoop['mapred_job_tracker_ha_servicename'] }} - org.apache.hadoop.mapred.ConfiguredFailoverProxyProvider - - - - mapred.client.failover.max.attempts - 15 - - - - mapred.client.failover.sleep.base.millis - 500 - - - - mapred.client.failover.sleep.max.millis - 1500 - - - - mapred.client.failover.connection.retries - 0 - - - - mapred.client.failover.connection.retries.on.timeouts - 0 - - - - - mapred.local.dir - {{ hadoop["mapred_local_dir"] | join(',') }} - - - - mapred.task.tracker.http.address - 0.0.0.0:{{ hadoop['mapred_task_tracker_http_address_port'] }} - - - diff --git a/hadoop/roles/common/templates/hadoop_ha_conf/slaves.j2 b/hadoop/roles/common/templates/hadoop_ha_conf/slaves.j2 deleted file mode 100644 index 44f97e2..0000000 --- a/hadoop/roles/common/templates/hadoop_ha_conf/slaves.j2 +++ /dev/null @@ -1,3 +0,0 @@ -{% for host in groups['hadoop_slaves'] %} -{{ host }} -{% endfor %} diff --git a/hadoop/roles/common/templates/hadoop_ha_conf/ssl-client.xml.example.j2 b/hadoop/roles/common/templates/hadoop_ha_conf/ssl-client.xml.example.j2 deleted file mode 100644 index a50dce4..0000000 --- a/hadoop/roles/common/templates/hadoop_ha_conf/ssl-client.xml.example.j2 +++ /dev/null @@ -1,80 +0,0 @@ - - - - - - - ssl.client.truststore.location - - Truststore to be used by clients like distcp. Must be - specified. - - - - - ssl.client.truststore.password - - Optional. Default value is "". - - - - - ssl.client.truststore.type - jks - Optional. The keystore file format, default value is "jks". - - - - - ssl.client.truststore.reload.interval - 10000 - Truststore reload check interval, in milliseconds. - Default value is 10000 (10 seconds). - - - - - ssl.client.keystore.location - - Keystore to be used by clients like distcp. Must be - specified. - - - - - ssl.client.keystore.password - - Optional. Default value is "". - - - - - ssl.client.keystore.keypassword - - Optional. Default value is "". - - - - - ssl.client.keystore.type - jks - Optional. The keystore file format, default value is "jks". 
- - - - diff --git a/hadoop/roles/common/templates/hadoop_ha_conf/ssl-server.xml.example.j2 b/hadoop/roles/common/templates/hadoop_ha_conf/ssl-server.xml.example.j2 deleted file mode 100644 index 4b363ff..0000000 --- a/hadoop/roles/common/templates/hadoop_ha_conf/ssl-server.xml.example.j2 +++ /dev/null @@ -1,77 +0,0 @@ - - - - - - - ssl.server.truststore.location - - Truststore to be used by NN and DN. Must be specified. - - - - - ssl.server.truststore.password - - Optional. Default value is "". - - - - - ssl.server.truststore.type - jks - Optional. The keystore file format, default value is "jks". - - - - - ssl.server.truststore.reload.interval - 10000 - Truststore reload check interval, in milliseconds. - Default value is 10000 (10 seconds). - - - - ssl.server.keystore.location - - Keystore to be used by NN and DN. Must be specified. - - - - - ssl.server.keystore.password - - Must be specified. - - - - - ssl.server.keystore.keypassword - - Must be specified. - - - - - ssl.server.keystore.type - jks - Optional. The keystore file format, default value is "jks". - - - - diff --git a/hadoop/roles/common/templates/iptables.j2 b/hadoop/roles/common/templates/iptables.j2 deleted file mode 100644 index 7e80bf6..0000000 --- a/hadoop/roles/common/templates/iptables.j2 +++ /dev/null @@ -1,40 +0,0 @@ -# Firewall configuration written by system-config-firewall -# Manual customization of this file is not recommended_ -*filter -:INPUT ACCEPT [0:0] -:FORWARD ACCEPT [0:0] -:OUTPUT ACCEPT [0:0] -{% if 'hadoop_masters' in group_names %} --A INPUT -p tcp --dport {{ hadoop['fs_default_FS_port'] }} -j ACCEPT --A INPUT -p tcp --dport {{ hadoop['dfs_namenode_http_address_port'] }} -j ACCEPT --A INPUT -p tcp --dport {{ hadoop['mapred_job_tracker_port'] }} -j ACCEPT --A INPUT -p tcp --dport {{ hadoop['mapred_job_tracker_http_address_port'] }} -j ACCEPT --A INPUT -p tcp --dport {{ hadoop['mapred_ha_jobtracker_rpc-address_port'] }} -j ACCEPT --A INPUT -p tcp --dport {{ hadoop['mapred_ha_zkfc_port'] }} -j ACCEPT --A INPUT -p tcp --dport {{ hadoop['dfs_ha_zkfc_port'] }} -j ACCEPT -{% endif %} - -{% if 'hadoop_slaves' in group_names %} --A INPUT -p tcp --dport {{ hadoop['dfs_datanode_address_port'] }} -j ACCEPT --A INPUT -p tcp --dport {{ hadoop['dfs_datanode_http_address_port'] }} -j ACCEPT --A INPUT -p tcp --dport {{ hadoop['dfs_datanode_ipc_address_port'] }} -j ACCEPT --A INPUT -p tcp --dport {{ hadoop['mapred_task_tracker_http_address_port'] }} -j ACCEPT -{% endif %} - -{% if 'qjournal_servers' in group_names %} --A INPUT -p tcp --dport {{ hadoop['qjournal_port'] }} -j ACCEPT --A INPUT -p tcp --dport {{ hadoop['qjournal_http_port'] }} -j ACCEPT -{% endif %} - -{% if 'zookeeper_servers' in group_names %} --A INPUT -p tcp --dport {{ hadoop['zookeeper_clientport'] }} -j ACCEPT --A INPUT -p tcp --dport {{ hadoop['zookeeper_leader_port'] }} -j ACCEPT --A INPUT -p tcp --dport {{ hadoop['zookeeper_election_port'] }} -j ACCEPT -{% endif %} --A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT --A INPUT -p icmp -j ACCEPT --A INPUT -i lo -j ACCEPT --A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT --A INPUT -j REJECT --reject-with icmp-host-prohibited --A FORWARD -j REJECT --reject-with icmp-host-prohibited -COMMIT diff --git a/hadoop/roles/hadoop_primary/handlers/main.yml b/hadoop/roles/hadoop_primary/handlers/main.yml deleted file mode 100644 index c493318..0000000 --- a/hadoop/roles/hadoop_primary/handlers/main.yml +++ /dev/null @@ -1,14 +0,0 @@ ---- -# Handlers for the hadoop master services - -- name: 
restart hadoop master services - service: name=${item} state=restarted - with_items: - - hadoop-0.20-mapreduce-jobtracker - - hadoop-hdfs-namenode - -- name: restart hadoopha master services - service: name=${item} state=restarted - with_items: - - hadoop-0.20-mapreduce-jobtrackerha - - hadoop-hdfs-namenode diff --git a/hadoop/roles/hadoop_primary/tasks/hadoop_master.yml b/hadoop/roles/hadoop_primary/tasks/hadoop_master.yml deleted file mode 100644 index 23a66ae..0000000 --- a/hadoop/roles/hadoop_primary/tasks/hadoop_master.yml +++ /dev/null @@ -1,38 +0,0 @@ ---- -# Playbook for Hadoop master servers - -- name: Install the namenode and jobtracker packages - yum: name={{ item }} state=installed - with_items: - - hadoop-0.20-mapreduce-jobtrackerha - - hadoop-hdfs-namenode - - hadoop-hdfs-zkfc - - hadoop-0.20-mapreduce-zkfc - -- name: Copy the hadoop configuration files - template: src=roles/common/templates/hadoop_ha_conf/{{ item }}.j2 dest=/etc/hadoop/conf/{{ item }} - with_items: - - core-site.xml - - hadoop-metrics.properties - - hadoop-metrics2.properties - - hdfs-site.xml - - log4j.properties - - mapred-site.xml - - slaves - - ssl-client.xml.example - - ssl-server.xml.example - notify: restart hadoopha master services - -- name: Create the data directory for the namenode metadata - file: path={{ item }} owner=hdfs group=hdfs state=directory - with_items: hadoop.dfs_namenode_name_dir - -- name: Create the data directory for the jobtracker ha - file: path={{ item }} owner=mapred group=mapred state=directory - with_items: hadoop.mapred_job_tracker_persist_jobstatus_dir - -- name: Format the namenode - shell: creates=/usr/lib/hadoop/namenode.formatted su - hdfs -c "hadoop namenode -format" && touch /usr/lib/hadoop/namenode.formatted - -- name: start hadoop namenode services - service: name=hadoop-hdfs-namenode state=started diff --git a/hadoop/roles/hadoop_primary/tasks/hadoop_master_no_ha.yml b/hadoop/roles/hadoop_primary/tasks/hadoop_master_no_ha.yml deleted file mode 100644 index fa8bb83..0000000 --- a/hadoop/roles/hadoop_primary/tasks/hadoop_master_no_ha.yml +++ /dev/null @@ -1,38 +0,0 @@ ---- -# Playbook for Hadoop master servers - -- name: Install the namenode and jobtracker packages - yum: name={{ item }} state=installed - with_items: - - hadoop-0.20-mapreduce-jobtracker - - hadoop-hdfs-namenode - -- name: Copy the hadoop configuration files for no ha - template: src=roles/common/templates/hadoop_conf/{{ item }}.j2 dest=/etc/hadoop/conf/{{ item }} - with_items: - - core-site.xml - - hadoop-metrics.properties - - hadoop-metrics2.properties - - hdfs-site.xml - - log4j.properties - - mapred-site.xml - - slaves - - ssl-client.xml.example - - ssl-server.xml.example - notify: restart hadoop master services - -- name: Create the data directory for the namenode metadata - file: path={{ item }} owner=hdfs group=hdfs state=directory - with_items: hadoop.dfs_namenode_name_dir - -- name: Format the namenode - shell: creates=/usr/lib/hadoop/namenode.formatted su - hdfs -c "hadoop namenode -format" && touch /usr/lib/hadoop/namenode.formatted - -- name: start hadoop namenode services - service: name=hadoop-hdfs-namenode state=started - -- name: Give permissions for mapred users - shell: creates=/usr/lib/hadoop/namenode.initialized su - hdfs -c "hadoop fs -chown hdfs:hadoop /"; su - hdfs -c "hadoop fs -chmod 0775 /" && touch /usr/lib/hadoop/namenode.initialized - -- name: start hadoop jobtracker services - service: name=hadoop-0.20-mapreduce-jobtracker state=started diff --git 
a/hadoop/roles/hadoop_primary/tasks/main.yml b/hadoop/roles/hadoop_primary/tasks/main.yml deleted file mode 100644 index 9693414..0000000 --- a/hadoop/roles/hadoop_primary/tasks/main.yml +++ /dev/null @@ -1,9 +0,0 @@
---
# Playbook for Hadoop master primary servers

- include: hadoop_master.yml
  when: ha_enabled

- include: hadoop_master_no_ha.yml
  when: not ha_enabled

diff --git a/hadoop/roles/hadoop_secondary/handlers/main.yml b/hadoop/roles/hadoop_secondary/handlers/main.yml deleted file mode 100644 index c493318..0000000 --- a/hadoop/roles/hadoop_secondary/handlers/main.yml +++ /dev/null @@ -1,14 +0,0 @@
---
# Handlers for the hadoop master services

- name: restart hadoop master services
  service: name=${item} state=restarted
  with_items:
    - hadoop-0.20-mapreduce-jobtracker
    - hadoop-hdfs-namenode

- name: restart hadoopha master services
  service: name=${item} state=restarted
  with_items:
    - hadoop-0.20-mapreduce-jobtrackerha
    - hadoop-hdfs-namenode

diff --git a/hadoop/roles/hadoop_secondary/tasks/main.yml b/hadoop/roles/hadoop_secondary/tasks/main.yml deleted file mode 100644 index 8bd1d8b..0000000 --- a/hadoop/roles/hadoop_secondary/tasks/main.yml +++ /dev/null @@ -1,64 +0,0 @@
---
# Playbook for the Hadoop master secondary server

- name: Install the namenode and jobtracker packages
  yum: name=${item} state=installed
  with_items:
    - hadoop-0.20-mapreduce-jobtrackerha
    - hadoop-hdfs-namenode
    - hadoop-hdfs-zkfc
    - hadoop-0.20-mapreduce-zkfc

- name: Copy the hadoop configuration files
  template: src=roles/common/templates/hadoop_ha_conf/{{ item }}.j2 dest=/etc/hadoop/conf/{{ item }}
  with_items:
    - core-site.xml
    - hadoop-metrics.properties
    - hadoop-metrics2.properties
    - hdfs-site.xml
    - log4j.properties
    - mapred-site.xml
    - slaves
    - ssl-client.xml.example
    - ssl-server.xml.example
  notify: restart hadoopha master services

- name: Create the data directory for the namenode metadata
  file: path={{ item }} owner=hdfs group=hdfs state=directory
  with_items: hadoop.dfs_namenode_name_dir

- name: Create the data directory for the jobtracker ha
  file: path={{ item }} owner=mapred group=mapred state=directory
  with_items: hadoop.mapred_job_tracker_persist_jobstatus_dir

- name: Initialize the secondary namenode
  shell: creates=/usr/lib/hadoop/namenode.formatted su - hdfs -c "hadoop namenode -bootstrapStandby" && touch /usr/lib/hadoop/namenode.formatted

- name: start hadoop namenode services
  service: name=hadoop-hdfs-namenode state=started

- name: Initialize the zkfc for namenode
  shell: creates=/usr/lib/hadoop/zkfc.formatted su - hdfs -c "hdfs zkfc -formatZK" && touch /usr/lib/hadoop/zkfc.formatted

- name: start zkfc for namenodes
  service: name=hadoop-hdfs-zkfc state=started
  delegate_to: ${item}
  with_items: groups.hadoop_masters

# The file touched below must match the path in 'creates' so this task stays idempotent
- name: Give permissions for mapred users
  shell: creates=/usr/lib/hadoop/fs.initialized su - hdfs -c "hadoop fs -chown hdfs:hadoop /"; su - hdfs -c "hadoop fs -chmod 0774 /" && touch /usr/lib/hadoop/fs.initialized

- name: Initialize the zkfc for jobtracker
  shell: creates=/usr/lib/hadoop/zkfcjob.formatted su - mapred -c "hadoop mrzkfc -formatZK" && touch /usr/lib/hadoop/zkfcjob.formatted

- name: start zkfc for jobtracker
  service: name=hadoop-0.20-mapreduce-zkfc state=started
  delegate_to: '{{ item }}'
  with_items: groups.hadoop_masters

- name: start hadoop Jobtracker services
  service: name=hadoop-0.20-mapreduce-jobtrackerha state=started
  delegate_to: '{{
item }}' - with_items: groups.hadoop_masters diff --git a/hadoop/roles/hadoop_slaves/handlers/main.yml b/hadoop/roles/hadoop_slaves/handlers/main.yml deleted file mode 100644 index 82ccc5c..0000000 --- a/hadoop/roles/hadoop_slaves/handlers/main.yml +++ /dev/null @@ -1,8 +0,0 @@ ---- -# Handlers for the hadoop slave services - -- name: restart hadoop slave services - service: name=${item} state=restarted - with_items: - - hadoop-0.20-mapreduce-tasktracker - - hadoop-hdfs-datanode diff --git a/hadoop/roles/hadoop_slaves/tasks/main.yml b/hadoop/roles/hadoop_slaves/tasks/main.yml deleted file mode 100644 index 9056ea2..0000000 --- a/hadoop/roles/hadoop_slaves/tasks/main.yml +++ /dev/null @@ -1,4 +0,0 @@ ---- -# Playbook for Hadoop slave servers - -- include: slaves.yml tags=slaves diff --git a/hadoop/roles/hadoop_slaves/tasks/slaves.yml b/hadoop/roles/hadoop_slaves/tasks/slaves.yml deleted file mode 100644 index 2807bac..0000000 --- a/hadoop/roles/hadoop_slaves/tasks/slaves.yml +++ /dev/null @@ -1,53 +0,0 @@ ---- -# Playbook for Hadoop slave servers - -- name: Install the datanode and tasktracker packages - yum: name=${item} state=installed - with_items: - - hadoop-0.20-mapreduce-tasktracker - - hadoop-hdfs-datanode - -- name: Copy the hadoop configuration files - template: src=roles/common/templates/hadoop_ha_conf/${item}.j2 dest=/etc/hadoop/conf/${item} - with_items: - - core-site.xml - - hadoop-metrics.properties - - hadoop-metrics2.properties - - hdfs-site.xml - - log4j.properties - - mapred-site.xml - - slaves - - ssl-client.xml.example - - ssl-server.xml.example - when: ha_enabled - notify: restart hadoop slave services - -- name: Copy the hadoop configuration files for non ha - template: src=roles/common/templates/hadoop_conf/${item}.j2 dest=/etc/hadoop/conf/${item} - with_items: - - core-site.xml - - hadoop-metrics.properties - - hadoop-metrics2.properties - - hdfs-site.xml - - log4j.properties - - mapred-site.xml - - slaves - - ssl-client.xml.example - - ssl-server.xml.example - when: not ha_enabled - notify: restart hadoop slave services - -- name: Create the data directory for the slave nodes to store the data - file: path={{ item }} owner=hdfs group=hdfs state=directory - with_items: hadoop.dfs_datanode_data_dir - -- name: Create the data directory for the slave nodes for mapreduce - file: path={{ item }} owner=mapred group=mapred state=directory - with_items: hadoop.mapred_local_dir - -- name: start hadoop slave services - service: name={{ item }} state=started - with_items: - - hadoop-0.20-mapreduce-tasktracker - - hadoop-hdfs-datanode - diff --git a/hadoop/roles/qjournal_servers/handlers/main.yml b/hadoop/roles/qjournal_servers/handlers/main.yml deleted file mode 100644 index a466737..0000000 --- a/hadoop/roles/qjournal_servers/handlers/main.yml +++ /dev/null @@ -1,5 +0,0 @@ ---- -# The journal node handlers - -- name: restart qjournal services - service: name=hadoop-hdfs-journalnode state=restarted diff --git a/hadoop/roles/qjournal_servers/tasks/main.yml b/hadoop/roles/qjournal_servers/tasks/main.yml deleted file mode 100644 index 86fa9e3..0000000 --- a/hadoop/roles/qjournal_servers/tasks/main.yml +++ /dev/null @@ -1,22 +0,0 @@ ---- -# Playbook for the qjournal nodes - -- name: Install the qjournal package - yum: name=hadoop-hdfs-journalnode state=installed - -- name: Create folder for Journaling - file: path={{ hadoop.dfs_journalnode_edits_dir }} state=directory owner=hdfs group=hdfs - -- name: Copy the hadoop configuration files - template: 
src=roles/common/templates/hadoop_ha_conf/{{ item }}.j2 dest=/etc/hadoop/conf/{{ item }} - with_items: - - core-site.xml - - hadoop-metrics.properties - - hadoop-metrics2.properties - - hdfs-site.xml - - log4j.properties - - mapred-site.xml - - slaves - - ssl-client.xml.example - - ssl-server.xml.example - notify: restart qjournal services diff --git a/hadoop/roles/zookeeper_servers/handlers/main.yml b/hadoop/roles/zookeeper_servers/handlers/main.yml deleted file mode 100644 index b0a9cc1..0000000 --- a/hadoop/roles/zookeeper_servers/handlers/main.yml +++ /dev/null @@ -1,5 +0,0 @@ ---- -# Handler for the zookeeper services - -- name: restart zookeeper - service: name=zookeeper-server state=restarted diff --git a/hadoop/roles/zookeeper_servers/tasks/main.yml b/hadoop/roles/zookeeper_servers/tasks/main.yml deleted file mode 100644 index c791a39..0000000 --- a/hadoop/roles/zookeeper_servers/tasks/main.yml +++ /dev/null @@ -1,13 +0,0 @@ ---- -# The plays for zookeper daemons - -- name: Install the zookeeper files - yum: name=zookeeper-server state=installed - -- name: Copy the configuration file for zookeeper - template: src=zoo.cfg.j2 dest=/etc/zookeeper/conf/zoo.cfg - notify: restart zookeeper - -- name: initialize the zookeper - shell: creates=/var/lib/zookeeper/myid service zookeeper-server init --myid=${zoo_id} - diff --git a/hadoop/roles/zookeeper_servers/templates/zoo.cfg.j2 b/hadoop/roles/zookeeper_servers/templates/zoo.cfg.j2 deleted file mode 100644 index a5e3f9f..0000000 --- a/hadoop/roles/zookeeper_servers/templates/zoo.cfg.j2 +++ /dev/null @@ -1,9 +0,0 @@ -tickTime=2000 -dataDir=/var/lib/zookeeper/ -clientPort={{ hadoop['zookeeper_clientport'] }} -initLimit=5 -syncLimit=2 -{% for host in groups['zookeeper_servers'] %} -server.{{ hostvars[host].zoo_id }}={{ host }}:{{ hadoop['zookeeper_leader_port'] }}:{{ hadoop['zookeeper_election_port'] }} -{% endfor %} - diff --git a/hadoop/site.yml b/hadoop/site.yml deleted file mode 100644 index cbe33a9..0000000 --- a/hadoop/site.yml +++ /dev/null @@ -1,28 +0,0 @@ ---- -# The main playbook to deploy the site - -- hosts: hadoop_all - roles: - - common - -- hosts: zookeeper_servers - roles: - - { role: zookeeper_servers, when: ha_enabled } - -- hosts: qjournal_servers - roles: - - { role: qjournal_servers, when: ha_enabled } - -- hosts: hadoop_master_primary - roles: - - { role: hadoop_primary } - -- hosts: hadoop_master_secondary - roles: - - { role: hadoop_secondary, when: ha_enabled } - -- hosts: hadoop_slaves - roles: - - { role: hadoop_slaves } - - diff --git a/openshift/LICENSE.md b/openshift/LICENSE.md deleted file mode 100644 index 2b437ec..0000000 --- a/openshift/LICENSE.md +++ /dev/null @@ -1,4 +0,0 @@ -Copyright (C) 2013 AnsibleWorks, Inc. - -This work is licensed under the Creative Commons Attribution 3.0 Unported License. -To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/deed.en_US. diff --git a/openshift/README.md b/openshift/README.md deleted file mode 100644 index 97a7b00..0000000 --- a/openshift/README.md +++ /dev/null @@ -1,311 +0,0 @@ -# Deploying a Highly Available production ready OpenShift Deployment - -- Requires Ansible 1.3 -- Expects CentOS/RHEL 6 hosts (64 bit) -- RHEL 6 requires rhel-x86_64-server-optional-6 in enabled channels - - -## A Primer into OpenShift Architecture - -###OpenShift Overview - -OpenShift Origin is the next generation application hosting platform which enables the users to create, deploy and manage applications within their cloud. 
In other words, it provides a PaaS (Platform as a Service) offering. This relieves developers of time-consuming processes like machine provisioning and routine application deployments. OpenShift provides disk space, CPU resources, memory, network connectivity, and various application deployment platforms like JBoss, Python, MySQL, etc., so developers can spend their time coding and testing new applications rather than figuring out how to acquire and configure these resources.

###OpenShift Components

Here's a list and a brief overview of the different components used by OpenShift.

- Broker: the single point of contact for all application management activities. It is responsible for managing user logins, DNS, application state, and general orchestration of the application. Customers don't contact the broker directly; instead they use the Web console, CLI tools, or JBoss tools to interact with the Broker over a REST-based API.

- Cartridges: provide the actual functionality necessary to run the user application. OpenShift currently supports many language Cartridges like JBoss, PHP, Ruby, etc., as well as many database Cartridges such as Postgres, MySQL, MongoDB, etc. If a user needs to deploy a PHP application with MySQL as a backend, they can simply ask the broker to deploy a PHP and a MySQL Cartridge on separate "Gears".

- Gear: Gears provide a resource-constrained container to run one or more Cartridges. They limit the amount of RAM and disk space available to a Cartridge. For simplicity we can think of a Gear as a separate VM or Linux container running an application for a specific tenant, but in reality they are containers created with SELinux contexts and PAM namespacing.

- Node: the physical machines where Gears are allocated. Gears are generally over-allocated on Nodes since not all applications are active at the same time.

- BSN (Broker Support Nodes): the nodes which run applications for OpenShift management. For example, OpenShift uses MongoDB to store various user/app details, and it also uses ActiveMQ to communicate with the different application nodes via MCollective. The nodes which host these supporting applications are called Broker Support Nodes.

- Districts: resource pools which can be used to separate the application nodes based on performance or environment. For example, in a production deployment we can have two Districts of Nodes, one with lower memory/CPU/disk resources and another for high-performance applications.

### An Overview of the application creation process in OpenShift.

![Alt text](images/app_deploy.png "App")

The above figure depicts an overview of the different steps involved in creating an application in OpenShift. If a developer wants to create or deploy a JBoss & MySQL application, they can request it from any of the available client tools: an Eclipse IDE, the command line tool (RHC), or even a web browser (the management console).

Once the user has instructed the client tool to deploy a JBoss & MySQL application, the client tool makes a web service request to the broker to provision the resources. The broker in turn queries the Nodes for Gear and Cartridge availability, and if the resources are available, two Gears are created and the JBoss and MySQL Cartridges are deployed on them. The user is then notified and can access the Gears via SSH and start deploying the code.
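To make this flow concrete, here is a minimal client-side sketch using the `rhc` command line tool. The application name, server hostname, and cartridge versions are illustrative assumptions, not values from these playbooks; the cartridges actually available depend on what is installed on the Nodes (these playbooks install the Python 2.6 cartridge by default):

    # Configure the client against a broker (the VIP or any broker host) and upload an SSH key
    rhc setup --server broker.example.com

    # Ask the broker to create an application; it allocates a gear and deploys the cartridge
    rhc app create myapp python-2.6

    # Add a database cartridge; the broker places it on a gear as described above
    rhc cartridge add mysql-5.1 -a myapp

The broker replies with the application's git repository URL; cloning it, committing code, and running `git push` is how code actually gets deployed onto the Gear.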
### Deployment Diagram of OpenShift via Ansible.

![Alt text](images/arch.png "App")

The above diagram shows the Ansible playbooks deploying a highly available OpenShift PaaS environment. The deployment has two servers running LVS (Piranha) for load balancing, providing HA for the Brokers. Two instances of the Broker also run for fault tolerance. Ansible also configures a DNS server which provides name resolution for all the new apps created in the OpenShift environment.

Three BSN (Broker Support Node) machines provide a replicated MongoDB deployment, and the same nodes run three instances of a highly available ActiveMQ cluster. There is no limitation on the number of application nodes you can deploy; the user just needs to add the hostnames of the OpenShift nodes to the Ansible inventory and Ansible will configure all of them.

Note: As a best practice, if the deployment is an actual production environment it is recommended to integrate with the infrastructure's internal DNS server for name resolution and to use LDAP or an existing Active Directory for user authentication.

## Deployment Steps for OpenShift via Ansible

As a first step you will want to set up Ansible. Assuming the Ansible host is a RHEL variant, install the EPEL package:

    yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Once the EPEL repo is installed, Ansible can be installed via the following command:

    yum install ansible

It is recommended to use separate machines for the different components of OpenShift. If you are just testing things out you can combine the services, but at least four nodes are mandatory, as the MongoDB and ActiveMQ clusters need at least three members to work properly.

As a first step, clone this repository onto your Ansible management host and set up the inventory (hosts) as follows:

    git clone https://github.com/ansible/ansible-examples.git

    [dns]
    ec2-54-226-116-175.compute-1.amazonaws.com

    [mongo_servers]
    ec2-54-226-116-175.compute-1.amazonaws.com
    ec2-54-227-131-56.compute-1.amazonaws.com
    ec2-54-227-169-137.compute-1.amazonaws.com

    [mq]
    ec2-54-226-116-175.compute-1.amazonaws.com
    ec2-54-227-131-56.compute-1.amazonaws.com
    ec2-54-227-169-137.compute-1.amazonaws.com

    [broker]
    ec2-54-227-63-48.compute-1.amazonaws.com
    ec2-54-227-171-2.compute-1.amazonaws.com

    [nodes]
    ec2-54-227-146-187.compute-1.amazonaws.com

    [lvs]
    ec2-54-227-176-123.compute-1.amazonaws.com
    ec2-54-227-177-87.compute-1.amazonaws.com

Once the inventory is set up with hosts in your environment, the OpenShift stack can be deployed easily by issuing the following command:

    ansible-playbook -i hosts site.yml

### Verifying the Installation

Once the stack has been successfully deployed, we can check whether the different components have been deployed correctly.

- MongoDB: Log in to any BSN node running MongoDB and issue the following command. Output similar to the one below should be displayed, showing that the Mongo cluster is up with a primary node and two secondary nodes.
- - - [root@ip-10-165-33-186 ~]# mongo 127.0.0.1:2700/admin -u admin -p passme - MongoDB shell version: 2.2.3 - connecting to: 127.0.0.1:2700/admin - openshift:PRIMARY> rs.status() - { - "set" : "openshift", - "date" : ISODate("2013-07-21T18:56:27Z"), - "myState" : 1, - "members" : [ - { - "_id" : 0, - "name" : "ip-10-165-33-186:2700", - "health" : 1, - "state" : 1, - "stateStr" : "PRIMARY", - "uptime" : 804, - "optime" : { - "t" : 1374432940000, - "i" : 1 - }, - "optimeDate" : ISODate("2013-07-21T18:55:40Z"), - "self" : true - }, - { - "_id" : 1, - "name" : "ec2-54-227-131-56.compute-1.amazonaws.com:2700", - "health" : 1, - "state" : 2, - "stateStr" : "SECONDARY", - "uptime" : 431, - "optime" : { - "t" : 1374432940000, - "i" : 1 - }, - "optimeDate" : ISODate("2013-07-21T18:55:40Z"), - "lastHeartbeat" : ISODate("2013-07-21T18:56:26Z"), - "pingMs" : 0 - }, - { - "_id" : 2, - "name" : "ec2-54-227-169-137.compute-1.amazonaws.com:2700", - "health" : 1, - "state" : 2, - "stateStr" : "SECONDARY", - "uptime" : 423, - "optime" : { - "t" : 1374432940000, - "i" : 1 - }, - "optimeDate" : ISODate("2013-07-21T18:55:40Z"), - "lastHeartbeat" : ISODate("2013-07-21T18:56:26Z"), - "pingMs" : 0 - } - ], - "ok" : 1 - } - openshift:PRIMARY> - -- ActiveMQ: To verify the cluster status of activeMQ browse to the following url pointing to any one of the mq nodes and provide the credentials as user admin and password as specified in the group_vars/all file. The browser should bring up a page similar to shown below, which shows the other two mq nodes in the cluster to which this node as joined. - - http://ec2-54-226-116-175.compute-1.amazonaws.com:8161/admin/network.jsp - - -![Alt text](images/mq.png "App") - -- Broker: To check if the broker node is installed/configured succesfully, issue the following command on any broker node and a similar output should be displayed. Make sure there is a PASS at the end. 
- - [root@ip-10-118-127-30 ~]# oo-accept-broker -v - INFO: Broker package is: openshift-origin-broker - INFO: checking packages - INFO: checking package ruby - INFO: checking package rubygem-openshift-origin-common - INFO: checking package rubygem-openshift-origin-controller - INFO: checking package openshift-origin-broker - INFO: checking package ruby193-rubygem-rails - INFO: checking package ruby193-rubygem-passenger - INFO: checking package ruby193-rubygems - INFO: checking ruby requirements - INFO: checking ruby requirements for openshift-origin-controller - INFO: checking ruby requirements for config/application - INFO: checking that selinux modules are loaded - NOTICE: SELinux is Enforcing - NOTICE: SELinux is Enforcing - INFO: SELinux boolean httpd_unified is enabled - INFO: SELinux boolean httpd_can_network_connect is enabled - INFO: SELinux boolean httpd_can_network_relay is enabled - INFO: SELinux boolean httpd_run_stickshift is enabled - INFO: SELinux boolean allow_ypbind is enabled - INFO: checking firewall settings - INFO: checking mongo datastore configuration - INFO: Datastore Host: ec2-54-226-116-175.compute-1.amazonaws.com - INFO: Datastore Port: 2700 - INFO: Datastore User: admin - INFO: Datastore SSL: false - INFO: Datastore Password has been set to non-default - INFO: Datastore DB Name: admin - INFO: Datastore: mongo db service is remote - INFO: checking mongo db login access - INFO: mongo db login successful: ec2-54-226-116-175.compute-1.amazonaws.com:2700/admin --username admin - INFO: checking services - INFO: checking cloud user authentication - INFO: auth plugin = OpenShift::RemoteUserAuthService - INFO: auth plugin: OpenShift::RemoteUserAuthService - INFO: checking remote-user auth configuration - INFO: Auth trusted header: REMOTE_USER - INFO: Auth passthrough is enabled for OpenShift services - INFO: Got HTTP 200 response from https://localhost/broker/rest/api - INFO: Got HTTP 200 response from https://localhost/broker/rest/cartridges - INFO: Got HTTP 401 response from https://localhost/broker/rest/user - INFO: Got HTTP 401 response from https://localhost/broker/rest/domains - INFO: checking dynamic dns plugin - INFO: dynamic dns plugin = OpenShift::BindPlugin - INFO: checking bind dns plugin configuration - INFO: DNS Server: 10.165.33.186 - INFO: DNS Port: 53 - INFO: DNS Zone: example.com - INFO: DNS Domain Suffix: example.com - INFO: DNS Update Auth: key - INFO: DNS Key Name: example.com - INFO: DNS Key Value: ***** - INFO: adding txt record named testrecord.example.com to server 10.165.33.186: key0 - INFO: txt record successfully added - INFO: deleteing txt record named testrecord.example.com to server 10.165.33.186: key0 - INFO: txt record successfully deleted - INFO: checking messaging configuration - INFO: messaging plugin = OpenShift::MCollectiveApplicationContainerProxy - PASS - -- Node: To verify if the node installation/configuration has been successfull, issue the follwoing command and check for a similar output as shown below. 
    [root@ip-10-152-154-18 ~]# oo-accept-node -v
    INFO: using default accept-node extensions
    INFO: loading node configuration file /etc/openshift/node.conf
    INFO: loading resource limit file /etc/openshift/resource_limits.conf
    INFO: finding external network device
    INFO: checking node public hostname resolution
    INFO: checking selinux status
    INFO: checking selinux openshift-origin policy
    INFO: checking selinux booleans
    INFO: checking package list
    INFO: checking services
    INFO: checking kernel semaphores >= 512
    INFO: checking cgroups configuration
    INFO: checking cgroups processes
    INFO: checking filesystem quotas
    INFO: checking quota db file selinux label
    INFO: checking 0 user accounts
    INFO: checking application dirs
    INFO: checking system httpd configs
    INFO: checking cartridge repository
    PASS

- LVS (LoadBalancer): To check the load balancer, log in to the active load balancer machine and issue the following command. The output shows the two brokers across which the load balancer is distributing traffic.

    [root@ip-10-145-204-43 ~]# ipvsadm
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  ip-192-168-1-1.ec2.internal: rr
      -> ec2-54-227-63-48.compute-1.a Route   1      0          0
      -> ec2-54-227-171-2.compute-1.a Route   2      0          0

## Creating an App in OpenShift

To create an app in OpenShift, access the management console via any browser; either the VIP specified in group_vars/all or the IP address of any broker node can be used.

    https://<VIP>/

The page will prompt for a login; use demo/passme. Once logged in, follow the on-screen instructions to create your first application.
Note: the Python 2.6 cartridge is installed by default by the playbooks, so choose python-2.6 as the cartridge.

## Deploying OpenShift in EC2

The repo also has playbooks that deploy the highly available OpenShift environment in EC2. The playbooks should also be able to deploy the cluster in any EC2 API-compatible cloud, such as Eucalyptus.

Before deploying, please make sure:

 - A security group is created which allows SSH and HTTP/HTTPS traffic.
 - The access/secret keys are entered in group_vars/all.
 - The number of nodes required for the cluster is specified in group_vars/all via the variable "count".

Once that is done, the cluster can be deployed simply by issuing the command:

    ansible-playbook -i ec2hosts ec2.yml -e id=openshift

Note: 'id' is a unique identifier for the cluster; if you are deploying multiple clusters, please make sure the value given is distinct for each deployment. The role of each created instance can be figured out by checking the Tags tab in the EC2 console.

###Remove the cluster from EC2.

To remove the deployed OpenShift cluster in EC2, just run the following command. The id parameter should be the same one that was given when the instances were created.

Note: The id can be figured out by checking the Tags tab in the EC2 console.

    ansible-playbook -i ec2hosts ec2_remove.yml -e id=openshift5

## HA Tests

A few tests that can be performed to verify high availability (a scripted sketch of the first test follows this list):

- Shut down any broker and try to create a new application.
- Shut down any one Mongo/MQ node and try to create a new application.
- Shut down either load-balancing machine; the management application should remain available via the virtual IP.
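The first check can be scripted end to end; below is a rough sketch, assuming root SSH access to the hosts and an `rhc` client already configured against the VIP (all hostnames here are illustrative placeholders, not values from these playbooks):

    # Take one broker out of service; LVS should stop routing traffic to it
    ssh root@broker1.example.com poweroff

    # Application creation should still succeed via the surviving broker
    rhc app create hatest python-2.6

    # On the active LVS node, traffic should now flow only to the remaining broker
    ssh root@lvs1.example.com ipvsadm -L -n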
- - - diff --git a/openshift/ansible.cfg b/openshift/ansible.cfg deleted file mode 100644 index b7d69cd..0000000 --- a/openshift/ansible.cfg +++ /dev/null @@ -1,115 +0,0 @@ -# config file for ansible -- http://ansibleworks.com/ -# ================================================== - -# nearly all parameters can be overridden in ansible-playbook -# or with command line flags. ansible will read ~/.ansible.cfg, -# ansible.cfg in the current working directory or -# /etc/ansible/ansible.cfg, whichever it finds first - -[defaults] - -# some basic default values... - -hostfile = /etc/ansible/hosts -library = /usr/share/ansible -remote_tmp = $HOME/.ansible/tmp -pattern = * -forks = 5 -poll_interval = 15 -sudo_user = root -#ask_sudo_pass = True -#ask_pass = True -transport = smart -remote_port = 22 - -# uncomment this to disable SSH key host checking -host_key_checking = False - -# change this for alternative sudo implementations -sudo_exe = sudo - -# what flags to pass to sudo -#sudo_flags = -H - -# SSH timeout -timeout = 10 - -# default user to use for playbooks if user is not specified -# (/usr/bin/ansible will use current user as default) -#remote_user = root - -# logging is off by default unless this path is defined -# if so defined, consider logrotate -#log_path = /var/log/ansible.log - -# default module name for /usr/bin/ansible -#module_name = command - -# use this shell for commands executed under sudo -# you may need to change this to bin/bash in rare instances -# if sudo is constrained -#executable = /bin/sh - -# if inventory variables overlap, does the higher precedence one win -# or are hash values merged together? The default is 'replace' but -# this can also be set to 'merge'. -#hash_behaviour = replace - -# How to handle variable replacement - as of 1.2, Jinja2 variable syntax is -# preferred, but we still support the old $variable replacement too. -# Turn off ${old_style} variables here if you like. -#legacy_playbook_variables = yes - -# list any Jinja2 extensions to enable here: -#jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n - -# if set, always use this private key file for authentication, same as -# if passing --private-key to ansible or ansible-playbook -#private_key_file = /path/to/file - -# format of string {{ ansible_managed }} available within Jinja2 -# templates indicates to users editing templates files will be replaced. -# replacing {file}, {host} and {uid} and strftime codes with proper values. -ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host} - -# by default (as of 1.3), Ansible will raise errors when attempting to dereference -# Jinja2 variables that are not set in templates or action lines. Uncomment this line -# to revert the behavior to pre-1.3. -#error_on_undefined_vars = False - -# set plugin path directories here, seperate with colons -action_plugins = /usr/share/ansible_plugins/action_plugins -callback_plugins = /usr/share/ansible_plugins/callback_plugins -connection_plugins = /usr/share/ansible_plugins/connection_plugins -lookup_plugins = /usr/share/ansible_plugins/lookup_plugins -vars_plugins = /usr/share/ansible_plugins/vars_plugins -filter_plugins = /usr/share/ansible_plugins/filter_plugins - -# don't like cows? that's unfortunate. -# set to 1 if you don't want cowsay support or export ANSIBLE_NOCOWS=1 -#nocows = 1 - -# don't like colors either? 
-# set to 1 if you don't want colors, or export ANSIBLE_NOCOLOR=1 -#nocolor = 1 - -[paramiko_connection] - -# uncomment this line to cause the paramiko connection plugin to not record new host -# keys encountered. Increases performance on new host additions. Setting works independently of the -# host key checking setting above. - -#record_host_keys=False - -[ssh_connection] - -# ssh arguments to use -# Leaving off ControlPersist will result in poor performance, so use -# paramiko on older platforms rather than removing it -#ssh_args = -o ControlMaster=auto -o ControlPersist=60s - -# if True, make ansible use scp if the connection type is ssh -# (default is sftp) -#scp_if_ssh = True - - diff --git a/openshift/ec2.yml b/openshift/ec2.yml deleted file mode 100644 index 67a8e1e..0000000 --- a/openshift/ec2.yml +++ /dev/null @@ -1,61 +0,0 @@ -- hosts: localhost - connection: local - pre_tasks: - - fail: msg=" Please make sure the variables id is specified and unique in the command line -e id=uniquedev1" - when: id is not defined - - roles: - - role: ec2 - type: dns - ncount: 1 - - - role: ec2 - type: mq - ncount: 3 - - - role: ec2 - type: broker - ncount: 2 - - - role: ec2 - type: nodes - ncount: "{{ count }}" - - post_tasks: - - name: Wait for the instance to come up - wait_for: delay=10 host={{ item.public_dns_name }} port=22 state=started timeout=360 - with_items: ec2.instances - - - debug: msg="{{ groups }}" - -- hosts: all:!localhost - user: root - roles: - - role: common - -- hosts: dns - user: root - roles: - - role: dns - -- hosts: mongo_servers - user: root - roles: - - role: mongodb - -- hosts: mq - user: root - roles: - - role: mq - -- hosts: broker - user: root - roles: - - role: broker - -- hosts: nodes - user: root - roles: - - role: nodes - - diff --git a/openshift/ec2_remove.yml b/openshift/ec2_remove.yml deleted file mode 100644 index c667dd3..0000000 --- a/openshift/ec2_remove.yml +++ /dev/null @@ -1,23 +0,0 @@ -- hosts: localhost - connection: local - pre_tasks: - - fail: msg=" Please make sure the variables id is specified and unique in the command line -e id=uniquedev1" - when: id is not defined - - roles: - - role: ec2_remove - type: dns - ncount: 1 - - - role: ec2_remove - type: mq - ncount: 3 - - - role: ec2_remove - type: broker - ncount: 2 - - - role: ec2_remove - type: nodes - ncount: "{{ count }}" - diff --git a/openshift/ec2hosts b/openshift/ec2hosts deleted file mode 100644 index 2fbb50c..0000000 --- a/openshift/ec2hosts +++ /dev/null @@ -1 +0,0 @@ -localhost diff --git a/openshift/group_vars/all b/openshift/group_vars/all deleted file mode 100644 index e8ba384..0000000 --- a/openshift/group_vars/all +++ /dev/null @@ -1,32 +0,0 @@ ---- -# Global Vars for OpenShift - -#EC2 specific varibles -ec2_access_key: "AKIUFDNXQ" -ec2_secret_key: "RyhTz1wzZ3kmtMEu" -keypair: "axialkey" -instance_type: "m1.small" -image: "ami-bf5021d6" -group: "default" -count: 2 -ec2_elbs: oselb -region: "us-east-1" -zone: "us-east-1a" - -iface: '{{ ansible_default_ipv4.interface }}' - -domain_name: example.com -dns_port: 53 -rndc_port: 953 -dns_key: "YG70pT2h9xmn9DviT+E6H8MNlJ9wc7Xa9qpCOtuonj3oLJGBBA8udXUsJnoGdMSIIw2pk9lw9QL4rv8XQNBRLQ==" - -mongodb_datadir_prefix: /data/ -mongod_port: 2700 -mongo_admin_pass: passme - -mcollective_pass: passme -admin_pass: passme -amquser_pass: passme - -vip: 192.168.2.15 -vip_netmask: 255.255.255.0 diff --git a/openshift/hosts b/openshift/hosts deleted file mode 100644 index 934e5be..0000000 --- a/openshift/hosts +++ /dev/null @@ -1,24 +0,0 @@ -[dns] -vm1 - 
[mongo_servers]
vm1
vm2
vm3

[mq]
vm1
vm2
vm3

[broker]
vm6
vm7

[nodes]
vm4

[lvs]
vm5
vm3

diff --git a/openshift/images/app_deploy.png b/openshift/images/app_deploy.png deleted file mode 100644 index 6f9fe7d..0000000 Binary files a/openshift/images/app_deploy.png and /dev/null differ
diff --git a/openshift/images/arch.png b/openshift/images/arch.png deleted file mode 100644 index c8bd18d..0000000 Binary files a/openshift/images/arch.png and /dev/null differ
diff --git a/openshift/images/mq.png b/openshift/images/mq.png deleted file mode 100644 index e1372bf..0000000 Binary files a/openshift/images/mq.png and /dev/null differ
diff --git a/openshift/openshift_ec2/README.md b/openshift/openshift_ec2/README.md deleted file mode 100644 index bf17d2d..0000000 --- a/openshift/openshift_ec2/README.md +++ /dev/null @@ -1,284 +0,0 @@

# Deploying a Highly Available production ready OpenShift Deployment

- Requires Ansible 1.2
- Expects CentOS/RHEL 6 hosts (64 bit)

## A Primer into OpenShift Architecture

###OpenShift Overview

OpenShift Origin enables users to create, deploy and manage applications within the cloud; in other words, it provides a PaaS (Platform as a Service) offering. This relieves developers of time-consuming processes like machine provisioning and routine application deployments. OpenShift provides disk space, CPU resources, memory, network connectivity, and various applications like JBoss, Python, MySQL, etc., so that developers can spend time coding and testing new applications rather than figuring out how to obtain and configure those resources.

###OpenShift Components

Here's a list and a brief overview of the different components used by OpenShift.

- Broker: the single point of contact for all application management activities. It is responsible for managing user logins, DNS, application state, and general orchestration of the application. Customers don't contact the broker directly; instead they use the Web console, CLI tools, or JBoss tools to interact with the Broker over a REST-based API.

- Cartridges: provide the actual functionality necessary to run the user application. OpenShift currently supports many language cartridges like JBoss, PHP, Ruby, etc., as well as many DB cartridges such as Postgres, MySQL, MongoDB, etc. If a user needs to deploy a PHP application with MySQL as a backend, they can just ask the broker to deploy a PHP and a MySQL cartridge on separate gears.

- Gear: Gears provide a resource-constrained container to run one or more cartridges. They limit the amount of RAM and disk space available to a cartridge. For simplicity we can think of a gear as a separate VM or Linux container running an application for a specific tenant, but in reality they are containers created with SELinux contexts and PAM namespacing.

- Node: the physical machines where gears are allocated. Gears are generally over-allocated on nodes since not all applications are active at the same time.

- BSN (Broker Support Nodes): the nodes which run applications for OpenShift management. For example, OpenShift uses MongoDB to store various user/app details, and it also uses ActiveMQ for communicating with the different application nodes via MCollective. The nodes which host these supporting applications are called Broker Support Nodes.

- Districts: resource pools which can be used to separate the application nodes based on performance or environment.
For example, in a production deployment we can have two districts of nodes, one with resources with lower memory/CPU/disk requirements and another for high-performance applications.

### An Overview of the application creation process in OpenShift.

![Alt text](/images/app_deploy.png "App")

The above figure depicts an overview of the different steps involved in creating an application in OpenShift. If a developer wants to create or deploy a JBoss & MySQL application, they can request it from the different client tools that are available: an Eclipse IDE, the command line tool (rhc), or even a web browser.

Once the user has instructed the client, it makes a web service request to the Broker. The broker in turn checks the nodes for gear and cartridge availability, and if the resources are available, two gears are created and the JBoss and MySQL cartridges are deployed on them. The user is then notified and can then access the gears via SSH and start deploying the code.

### Deployment Diagram of OpenShift via Ansible.

![Alt text](/images/arch.png "App")

As the above diagram shows, the Ansible playbooks deploy a highly available OpenShift PaaS environment. The deployment has two servers running LVS (Piranha) for load balancing, providing HA for the brokers. Two instances of the broker also run for fault tolerance. Ansible also configures a DNS server which provides name resolution for all the new apps created in the OpenShift environment.

Three BSN (broker support node) machines provide a replicated MongoDB deployment, and the same nodes run three instances of a highly available ActiveMQ cluster. There is no limitation on the number of application nodes you can add; just add the hostnames of the application nodes to the Ansible inventory and Ansible will configure all of them for you.

Note: As a best practice, if you are deploying an actual production environment it is recommended to integrate with your internal DNS server for name resolution and to use LDAP or integrate with an existing Active Directory for user authentication.

## Deployment Steps for OpenShift via Ansible

As a first step you will want to set up Ansible. Assuming the Ansible host is a RHEL variant, install the EPEL package:

    yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Once the EPEL repo is installed, Ansible can be installed via the following command:

    yum install ansible

It is recommended to use separate machines for the different components of OpenShift. If you are just testing things out you can combine the services, but at least four nodes are mandatory, as the MongoDB and ActiveMQ clusters need at least three members to work properly.

As a first step, clone this repository onto your Ansible management host and set up the inventory (hosts) as follows:
    git clone https://github.com/ansible/ansible-examples.git

    [dns]
    ec2-54-226-116-175.compute-1.amazonaws.com

    [mongo_servers]
    ec2-54-226-116-175.compute-1.amazonaws.com
    ec2-54-227-131-56.compute-1.amazonaws.com
    ec2-54-227-169-137.compute-1.amazonaws.com

    [mq]
    ec2-54-226-116-175.compute-1.amazonaws.com
    ec2-54-227-131-56.compute-1.amazonaws.com
    ec2-54-227-169-137.compute-1.amazonaws.com

    [broker]
    ec2-54-227-63-48.compute-1.amazonaws.com
    ec2-54-227-171-2.compute-1.amazonaws.com

    [nodes]
    ec2-54-227-146-187.compute-1.amazonaws.com

    [lvs]
    ec2-54-227-176-123.compute-1.amazonaws.com
    ec2-54-227-177-87.compute-1.amazonaws.com

Once the inventory is set up with hosts in your environment, the OpenShift stack can be deployed easily by issuing the following command:

    ansible-playbook -i hosts site.yml

### Verifying the Installation

Once the stack has been successfully deployed, we can check whether the different components have been deployed correctly.

- MongoDB: Log in to any BSN node running MongoDB and issue the following command. Output similar to the one below should be displayed, showing that the Mongo cluster is up with a primary node and two secondary nodes.

    [root@ip-10-165-33-186 ~]# mongo 127.0.0.1:2700/admin -u admin -p passme
    MongoDB shell version: 2.2.3
    connecting to: 127.0.0.1:2700/admin
    openshift:PRIMARY> rs.status()
    {
        "set" : "openshift",
        "date" : ISODate("2013-07-21T18:56:27Z"),
        "myState" : 1,
        "members" : [
            {
                "_id" : 0,
                "name" : "ip-10-165-33-186:2700",
                "health" : 1,
                "state" : 1,
                "stateStr" : "PRIMARY",
                "uptime" : 804,
                "optime" : {
                    "t" : 1374432940000,
                    "i" : 1
                },
                "optimeDate" : ISODate("2013-07-21T18:55:40Z"),
                "self" : true
            },
            {
                "_id" : 1,
                "name" : "ec2-54-227-131-56.compute-1.amazonaws.com:2700",
                "health" : 1,
                "state" : 2,
                "stateStr" : "SECONDARY",
                "uptime" : 431,
                "optime" : {
                    "t" : 1374432940000,
                    "i" : 1
                },
                "optimeDate" : ISODate("2013-07-21T18:55:40Z"),
                "lastHeartbeat" : ISODate("2013-07-21T18:56:26Z"),
                "pingMs" : 0
            },
            {
                "_id" : 2,
                "name" : "ec2-54-227-169-137.compute-1.amazonaws.com:2700",
                "health" : 1,
                "state" : 2,
                "stateStr" : "SECONDARY",
                "uptime" : 423,
                "optime" : {
                    "t" : 1374432940000,
                    "i" : 1
                },
                "optimeDate" : ISODate("2013-07-21T18:55:40Z"),
                "lastHeartbeat" : ISODate("2013-07-21T18:56:26Z"),
                "pingMs" : 0
            }
        ],
        "ok" : 1
    }
    openshift:PRIMARY>

- ActiveMQ: To verify the cluster status of ActiveMQ, browse to the following URL on any one of the MQ nodes, providing the user admin and the password specified in the group_vars/all file. The browser should bring up a page similar to the one shown below, listing the other two MQ nodes in the cluster that this node has joined.

    http://ec2-54-226-116-175.compute-1.amazonaws.com:8161/admin/network.jsp

![Alt text](/images/mq.png "App")

- Broker: To check whether the broker node is installed/configured successfully, issue the following command on any broker node; output similar to the one below should be displayed. Make sure there is a PASS at the end.
- - [root@ip-10-118-127-30 ~]# oo-accept-broker -v - INFO: Broker package is: openshift-origin-broker - INFO: checking packages - INFO: checking package ruby - INFO: checking package rubygem-openshift-origin-common - INFO: checking package rubygem-openshift-origin-controller - INFO: checking package openshift-origin-broker - INFO: checking package ruby193-rubygem-rails - INFO: checking package ruby193-rubygem-passenger - INFO: checking package ruby193-rubygems - INFO: checking ruby requirements - INFO: checking ruby requirements for openshift-origin-controller - INFO: checking ruby requirements for config/application - INFO: checking that selinux modules are loaded - NOTICE: SELinux is Enforcing - NOTICE: SELinux is Enforcing - INFO: SELinux boolean httpd_unified is enabled - INFO: SELinux boolean httpd_can_network_connect is enabled - INFO: SELinux boolean httpd_can_network_relay is enabled - INFO: SELinux boolean httpd_run_stickshift is enabled - INFO: SELinux boolean allow_ypbind is enabled - INFO: checking firewall settings - INFO: checking mongo datastore configuration - INFO: Datastore Host: ec2-54-226-116-175.compute-1.amazonaws.com - INFO: Datastore Port: 2700 - INFO: Datastore User: admin - INFO: Datastore SSL: false - INFO: Datastore Password has been set to non-default - INFO: Datastore DB Name: admin - INFO: Datastore: mongo db service is remote - INFO: checking mongo db login access - INFO: mongo db login successful: ec2-54-226-116-175.compute-1.amazonaws.com:2700/admin --username admin - INFO: checking services - INFO: checking cloud user authentication - INFO: auth plugin = OpenShift::RemoteUserAuthService - INFO: auth plugin: OpenShift::RemoteUserAuthService - INFO: checking remote-user auth configuration - INFO: Auth trusted header: REMOTE_USER - INFO: Auth passthrough is enabled for OpenShift services - INFO: Got HTTP 200 response from https://localhost/broker/rest/api - INFO: Got HTTP 200 response from https://localhost/broker/rest/cartridges - INFO: Got HTTP 401 response from https://localhost/broker/rest/user - INFO: Got HTTP 401 response from https://localhost/broker/rest/domains - INFO: checking dynamic dns plugin - INFO: dynamic dns plugin = OpenShift::BindPlugin - INFO: checking bind dns plugin configuration - INFO: DNS Server: 10.165.33.186 - INFO: DNS Port: 53 - INFO: DNS Zone: example.com - INFO: DNS Domain Suffix: example.com - INFO: DNS Update Auth: key - INFO: DNS Key Name: example.com - INFO: DNS Key Value: ***** - INFO: adding txt record named testrecord.example.com to server 10.165.33.186: key0 - INFO: txt record successfully added - INFO: deleteing txt record named testrecord.example.com to server 10.165.33.186: key0 - INFO: txt record successfully deleted - INFO: checking messaging configuration - INFO: messaging plugin = OpenShift::MCollectiveApplicationContainerProxy - PASS - -- Node: To verify if the node installation/configuration has been successfull, issue the follwoing command and check for a similar output as shown below. 
- - [root@ip-10-152-154-18 ~]# oo-accept-node -v - INFO: using default accept-node extensions - INFO: loading node configuration file /etc/openshift/node.conf - INFO: loading resource limit file /etc/openshift/resource_limits.conf - INFO: finding external network device - INFO: checking node public hostname resolution - INFO: checking selinux status - INFO: checking selinux openshift-origin policy - INFO: checking selinux booleans - INFO: checking package list - INFO: checking services - INFO: checking kernel semaphores >= 512 - INFO: checking cgroups configuration - INFO: checking cgroups processes - INFO: checking filesystem quotas - INFO: checking quota db file selinux label - INFO: checking 0 user accounts - INFO: checking application dirs - INFO: checking system httpd configs - INFO: checking cartridge repository - PASS - -- LVS (LoadBalancer): To check the LoadBalncer Login to the active loadbalancer and issue the follwing command, the output would show the two broker to which the loadbalancer is balancing the traffic. - - [root@ip-10-145-204-43 ~]# ipvsadm - IP Virtual Server version 1.2.1 (size=4096) - Prot LocalAddress:Port Scheduler Flags - -> RemoteAddress:Port Forward Weight ActiveConn InActConn - TCP ip-192-168-1-1.ec2.internal: rr - -> ec2-54-227-63-48.compute-1.a Route 1 0 0 - -> ec2-54-227-171-2.compute-1.a Route 2 0 0 - -## Creating an APP in Openshift - -To create an App in openshift access the management console via any browser, the VIP specified in group_vars/all can used or ip address of any broker node can used. - - https:/// - -The page would as a login, give it as demo/passme. Once logged in follow the screen instructions to create your first Application. -Note: Python2.6 cartridge is by default installed by plabooks, so choose python2.6 as the cartridge. - - -## HA Tests - -Few test's that can be performed to test High Availability are: - -- Shutdown any broker and try to create a new Application -- Shutdown anyone mongo/mq node and try to create a new Appliaction. -- Shutdown any loadbalaning machine, and the manamgement application should be available via the VirtualIP. - - - diff --git a/openshift/openshift_ec2/ansible.cfg b/openshift/openshift_ec2/ansible.cfg deleted file mode 100644 index b7d69cd..0000000 --- a/openshift/openshift_ec2/ansible.cfg +++ /dev/null @@ -1,115 +0,0 @@ -# config file for ansible -- http://ansibleworks.com/ -# ================================================== - -# nearly all parameters can be overridden in ansible-playbook -# or with command line flags. ansible will read ~/.ansible.cfg, -# ansible.cfg in the current working directory or -# /etc/ansible/ansible.cfg, whichever it finds first - -[defaults] - -# some basic default values... 
- -hostfile = /etc/ansible/hosts -library = /usr/share/ansible -remote_tmp = $HOME/.ansible/tmp -pattern = * -forks = 5 -poll_interval = 15 -sudo_user = root -#ask_sudo_pass = True -#ask_pass = True -transport = smart -remote_port = 22 - -# uncomment this to disable SSH key host checking -host_key_checking = False - -# change this for alternative sudo implementations -sudo_exe = sudo - -# what flags to pass to sudo -#sudo_flags = -H - -# SSH timeout -timeout = 10 - -# default user to use for playbooks if user is not specified -# (/usr/bin/ansible will use current user as default) -#remote_user = root - -# logging is off by default unless this path is defined -# if so defined, consider logrotate -#log_path = /var/log/ansible.log - -# default module name for /usr/bin/ansible -#module_name = command - -# use this shell for commands executed under sudo -# you may need to change this to bin/bash in rare instances -# if sudo is constrained -#executable = /bin/sh - -# if inventory variables overlap, does the higher precedence one win -# or are hash values merged together? The default is 'replace' but -# this can also be set to 'merge'. -#hash_behaviour = replace - -# How to handle variable replacement - as of 1.2, Jinja2 variable syntax is -# preferred, but we still support the old $variable replacement too. -# Turn off ${old_style} variables here if you like. -#legacy_playbook_variables = yes - -# list any Jinja2 extensions to enable here: -#jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n - -# if set, always use this private key file for authentication, same as -# if passing --private-key to ansible or ansible-playbook -#private_key_file = /path/to/file - -# format of string {{ ansible_managed }} available within Jinja2 -# templates indicates to users editing templates files will be replaced. -# replacing {file}, {host} and {uid} and strftime codes with proper values. -ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host} - -# by default (as of 1.3), Ansible will raise errors when attempting to dereference -# Jinja2 variables that are not set in templates or action lines. Uncomment this line -# to revert the behavior to pre-1.3. -#error_on_undefined_vars = False - -# set plugin path directories here, seperate with colons -action_plugins = /usr/share/ansible_plugins/action_plugins -callback_plugins = /usr/share/ansible_plugins/callback_plugins -connection_plugins = /usr/share/ansible_plugins/connection_plugins -lookup_plugins = /usr/share/ansible_plugins/lookup_plugins -vars_plugins = /usr/share/ansible_plugins/vars_plugins -filter_plugins = /usr/share/ansible_plugins/filter_plugins - -# don't like cows? that's unfortunate. -# set to 1 if you don't want cowsay support or export ANSIBLE_NOCOWS=1 -#nocows = 1 - -# don't like colors either? -# set to 1 if you don't want colors, or export ANSIBLE_NOCOLOR=1 -#nocolor = 1 - -[paramiko_connection] - -# uncomment this line to cause the paramiko connection plugin to not record new host -# keys encountered. Increases performance on new host additions. Setting works independently of the -# host key checking setting above. 
- -#record_host_keys=False - -[ssh_connection] - -# ssh arguments to use -# Leaving off ControlPersist will result in poor performance, so use -# paramiko on older platforms rather than removing it -#ssh_args = -o ControlMaster=auto -o ControlPersist=60s - -# if True, make ansible use scp if the connection type is ssh -# (default is sftp) -#scp_if_ssh = True - - diff --git a/openshift/openshift_ec2/ec2.yml b/openshift/openshift_ec2/ec2.yml deleted file mode 100644 index 67a8e1e..0000000 --- a/openshift/openshift_ec2/ec2.yml +++ /dev/null @@ -1,61 +0,0 @@ -- hosts: localhost - connection: local - pre_tasks: - - fail: msg=" Please make sure the variables id is specified and unique in the command line -e id=uniquedev1" - when: id is not defined - - roles: - - role: ec2 - type: dns - ncount: 1 - - - role: ec2 - type: mq - ncount: 3 - - - role: ec2 - type: broker - ncount: 2 - - - role: ec2 - type: nodes - ncount: "{{ count }}" - - post_tasks: - - name: Wait for the instance to come up - wait_for: delay=10 host={{ item.public_dns_name }} port=22 state=started timeout=360 - with_items: ec2.instances - - - debug: msg="{{ groups }}" - -- hosts: all:!localhost - user: root - roles: - - role: common - -- hosts: dns - user: root - roles: - - role: dns - -- hosts: mongo_servers - user: root - roles: - - role: mongodb - -- hosts: mq - user: root - roles: - - role: mq - -- hosts: broker - user: root - roles: - - role: broker - -- hosts: nodes - user: root - roles: - - role: nodes - - diff --git a/openshift/openshift_ec2/ec2_remove.yml b/openshift/openshift_ec2/ec2_remove.yml deleted file mode 100644 index c667dd3..0000000 --- a/openshift/openshift_ec2/ec2_remove.yml +++ /dev/null @@ -1,23 +0,0 @@ -- hosts: localhost - connection: local - pre_tasks: - - fail: msg=" Please make sure the variables id is specified and unique in the command line -e id=uniquedev1" - when: id is not defined - - roles: - - role: ec2_remove - type: dns - ncount: 1 - - - role: ec2_remove - type: mq - ncount: 3 - - - role: ec2_remove - type: broker - ncount: 2 - - - role: ec2_remove - type: nodes - ncount: "{{ count }}" - diff --git a/openshift/openshift_ec2/ec2hosts b/openshift/openshift_ec2/ec2hosts deleted file mode 100644 index 2fbb50c..0000000 --- a/openshift/openshift_ec2/ec2hosts +++ /dev/null @@ -1 +0,0 @@ -localhost diff --git a/openshift/openshift_ec2/group_vars/all b/openshift/openshift_ec2/group_vars/all deleted file mode 100644 index 0c5b791..0000000 --- a/openshift/openshift_ec2/group_vars/all +++ /dev/null @@ -1,32 +0,0 @@ ---- -# Global Vars for OpenShift - -#EC2 specific varibles -ec2_access_key: "xxx" -ec2_secret_key: "xxx" -keypair: "axialkey" -instance_type: "m1.small" -image: "ami-bf5021d6" -group: "default" -count: 2 -ec2_elbs: oselb -region: "us-east-1" -zone: "us-east-1a" - -iface: '{{ ansible_default_ipv4.interface }}' - -domain_name: example.com -dns_port: 53 -rndc_port: 953 -dns_key: "YG70pT2h9xmn9DviT+E6H8MNlJ9wc7Xa9qpCOtuonj3oLJGBBA8udXUsJnoGdMSIIw2pk9lw9QL4rv8XQNBRLQ==" - -mongodb_datadir_prefix: /data/ -mongod_port: 2700 -mongo_admin_pass: passme - -mcollective_pass: passme -admin_pass: passme -amquser_pass: passme - -vip: 192.168.2.15 -vip_netmask: 255.255.255.0 diff --git a/openshift/openshift_ec2/hosts b/openshift/openshift_ec2/hosts deleted file mode 100644 index 2c77041..0000000 --- a/openshift/openshift_ec2/hosts +++ /dev/null @@ -1,23 +0,0 @@ -[dns] -vm1 -[mongo_servers] -vm1 -vm2 -vm3 - -[mq] -vm1 -vm2 -vm3 - -[broker] -vm1 -vm2 - -[nodes] -vm4 - -[lvs] -vm5 -vm3 - diff --git 
a/openshift/openshift_ec2/images/app_deploy.png b/openshift/openshift_ec2/images/app_deploy.png deleted file mode 100644 index 6f9fe7d..0000000 Binary files a/openshift/openshift_ec2/images/app_deploy.png and /dev/null differ diff --git a/openshift/openshift_ec2/images/arch.png b/openshift/openshift_ec2/images/arch.png deleted file mode 100644 index c8bd18d..0000000 Binary files a/openshift/openshift_ec2/images/arch.png and /dev/null differ diff --git a/openshift/openshift_ec2/images/mq.png b/openshift/openshift_ec2/images/mq.png deleted file mode 100644 index e1372bf..0000000 Binary files a/openshift/openshift_ec2/images/mq.png and /dev/null differ diff --git a/openshift/openshift_ec2/roles/broker/files/gem.sh b/openshift/openshift_ec2/roles/broker/files/gem.sh deleted file mode 100644 index 2883aef..0000000 --- a/openshift/openshift_ec2/roles/broker/files/gem.sh +++ /dev/null @@ -1,2 +0,0 @@ -#!/bin/bash -/usr/bin/scl enable ruby193 "gem install rspec --version 1.3.0 --no-rdoc --no-ri" ; /usr/bin/scl enable ruby193 "gem install fakefs --no-rdoc --no-ri" ; /usr/bin/scl enable ruby193 "gem install httpclient --version 2.3.2 --no-rdoc --no-ri" ; touch /opt/gem.init diff --git a/openshift/openshift_ec2/roles/broker/files/htpasswd b/openshift/openshift_ec2/roles/broker/files/htpasswd deleted file mode 100644 index abd468f..0000000 --- a/openshift/openshift_ec2/roles/broker/files/htpasswd +++ /dev/null @@ -1 +0,0 @@ -demo:k2WsPcYIRAaXs diff --git a/openshift/openshift_ec2/roles/broker/files/openshift-origin-auth-remote-basic-user.conf b/openshift/openshift_ec2/roles/broker/files/openshift-origin-auth-remote-basic-user.conf deleted file mode 100644 index 30e179d..0000000 --- a/openshift/openshift_ec2/roles/broker/files/openshift-origin-auth-remote-basic-user.conf +++ /dev/null @@ -1,25 +0,0 @@ -LoadModule auth_basic_module modules/mod_auth_basic.so -LoadModule authn_file_module modules/mod_authn_file.so -LoadModule authz_user_module modules/mod_authz_user.so - -# Turn the authenticated remote-user into an Apache environment variable for the console security controller -RewriteEngine On -RewriteCond %{LA-U:REMOTE_USER} (.+) -RewriteRule . - [E=RU:%1] -RequestHeader set X-Remote-User "%{RU}e" env=RU - - - AuthName "OpenShift Developer Console" - AuthType Basic - AuthUserFile /etc/openshift/htpasswd - require valid-user - - # The node->broker auth is handled in the Ruby code - BrowserMatch Openshift passthrough - Allow from env=passthrough - - Order Deny,Allow - Deny from all - Satisfy any - - diff --git a/openshift/openshift_ec2/roles/broker/files/openshift-origin-auth-remote-user.conf b/openshift/openshift_ec2/roles/broker/files/openshift-origin-auth-remote-user.conf deleted file mode 100644 index be9e84f..0000000 --- a/openshift/openshift_ec2/roles/broker/files/openshift-origin-auth-remote-user.conf +++ /dev/null @@ -1,39 +0,0 @@ -LoadModule auth_basic_module modules/mod_auth_basic.so -LoadModule authn_file_module modules/mod_authn_file.so -LoadModule authz_user_module modules/mod_authz_user.so - - - AuthName "OpenShift broker API" - AuthType Basic - AuthUserFile /etc/openshift/htpasswd - require valid-user - - SetEnvIfNoCase Authorization Bearer passthrough - - # The node->broker auth is handled in the Ruby code - BrowserMatchNoCase ^OpenShift passthrough - Allow from env=passthrough - - # Console traffic will hit the local port. mod_proxy will set this header automatically. 
- SetEnvIf X-Forwarded-For "^$" local_traffic=1 - # Turn the Console output header into the Apache environment variable for the broker remote-user plugin - SetEnvIf X-Remote-User "(..*)" REMOTE_USER=$1 - Allow from env=local_traffic - - Order Deny,Allow - Deny from all - Satisfy any - - -# The following APIs do not require auth: - - Allow from all - - - - Allow from all - - - - Allow from all - diff --git a/openshift/openshift_ec2/roles/broker/files/server_priv.pem b/openshift/openshift_ec2/roles/broker/files/server_priv.pem deleted file mode 100644 index ae1c791..0000000 --- a/openshift/openshift_ec2/roles/broker/files/server_priv.pem +++ /dev/null @@ -1,27 +0,0 @@ ------BEGIN RSA PRIVATE KEY----- -MIIEpAIBAAKCAQEAyWM85VFDBOdWz16oC7j8Q7uHHbs3UVzRhHhHkSg8avK6ETMH -piXtevCU7KbiX7B2b0dedwYpvHQaKPCtfNm4blZHcDO5T1I//MyjwVNfqAQV4xin -qRj1oRyvvcTmn5H5yd9FgILqhRGjNEnBYadpL0vZrzXAJREEhh/G7021q010CF+E -KTTlSrbctGsoiUQKH1KfogsWsj8ygL1xVDgbCdvx+DnTw9E/YY+07/lDPOiXQFZm -7hXA8Q51ecjtFy0VmWDwjq3t7pP33tyjQkMc1BMXzHUiDVehNZ+I8ffzFltNNUL0 -Jw3AGwyCmE3Q9ml1tHIxpuvZExMCTALN6va0bwIDAQABAoIBAQDJPXpvqLlw3/92 -bx87v5mN0YneYuOPUVIorszNN8jQEkduwnCFTec2b8xRgx45AqwG3Ol/xM/V+qrd -eEvUs/fBgkQW0gj+Q7GfW5rTqA2xZou8iDmaF0/0tCbFWkoe8I8MdCkOl0Pkv1A4 -Au/UNqc8VO5tUCf2oj/EC2MOZLgCOTaerePnc+SFIf4TkerixPA9I4KYWwJQ2eXG -esSfR2f2EsUGfwOqKLEQU1JTMFkttbSAp42p+xpRaUh1FuyLHDlf3EeFmq5BPaFL -UnpzPDJTZtXjnyBrM9fb1ewiFW8x+EBmsdGooY7ptrWWhGzvxAsK9C0L2di3FBAy -gscM/rPBAoGBAPpt0xXtVWJu2ezoBfjNuqwMqGKFsOF3hi5ncOHW9nd6iZABD5Xt -KamrszxItkqiJpEacBCabgfo0FSLEOo+KqfTBK/r4dIoMwgcfhJOz+HvEC6+557n -GEFaL+evdLrxNrU41wvvfCzPK7pWaQGR1nrGohTyX5ZH4uA0Kmreof+PAoGBAM3e -IFPNrXuzhgShqFibWqJ8JdsSfMroV62aCqdJlB92lxx8JJ2lEiAMPfHmAtF1g01r -oBUcJcPfuBZ0bC1KxIvtz9d5m1f2geNGH/uwVULU3skhPBwqAs2s607/Z1S+/WRr -Af1rAs2KTJ7BDCQo8g2TPUO+sDrUzR6joxOy/Y0hAoGAbWaI7m1N/cBbZ4k9AqIt -SHgHH3M0AGtMrPz3bVGRPkTDz6sG+gIvTzX5CP7i09veaUlZZ4dvRflI+YX/D7W0 -wLgItimf70UsdgCseqb/Xb4oHaO8X8io6fPSNa6KmhhCRAzetRIb9x9SBQc2vD7P -qbcYm3n+lBI3ZKalWSaFMrUCgYEAsV0xfuISGCRIT48zafuWr6zENKUN7QcWGxQ/ -H3eN7TmP4VO3fDZukjvZ1qHzRaC32ijih61zf/ksMfRmCvOCuIfP7HXx92wC5dtR -zNdT7btWofRHRICRX8AeDzaOQP43c5+Z3Eqo5IrFjnUFz9WTDU0QmGAeluEmQ8J5 -yowIVOECgYB97fGLuEBSlKJCvmWp6cTyY+mXbiQjYYGBbYAiJWnwaK9U3bt71we/ -MQNzBHAe0mPCReVHSr68BfoWY/crV+7RKSBgrDpR0Y0DI1yn0LXXZfd3NNrTVaAb -rScbJ8Xe3qcLi3QZ3BxaWfub08Wm57wjDBBqGZyExYjjlGSpjBpVJQ== ------END RSA PRIVATE KEY----- diff --git a/openshift/openshift_ec2/roles/broker/files/server_pub.pem b/openshift/openshift_ec2/roles/broker/files/server_pub.pem deleted file mode 100644 index a0c54a7..0000000 --- a/openshift/openshift_ec2/roles/broker/files/server_pub.pem +++ /dev/null @@ -1,9 +0,0 @@ ------BEGIN PUBLIC KEY----- -MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyWM85VFDBOdWz16oC7j8 -Q7uHHbs3UVzRhHhHkSg8avK6ETMHpiXtevCU7KbiX7B2b0dedwYpvHQaKPCtfNm4 -blZHcDO5T1I//MyjwVNfqAQV4xinqRj1oRyvvcTmn5H5yd9FgILqhRGjNEnBYadp -L0vZrzXAJREEhh/G7021q010CF+EKTTlSrbctGsoiUQKH1KfogsWsj8ygL1xVDgb -Cdvx+DnTw9E/YY+07/lDPOiXQFZm7hXA8Q51ecjtFy0VmWDwjq3t7pP33tyjQkMc -1BMXzHUiDVehNZ+I8ffzFltNNUL0Jw3AGwyCmE3Q9ml1tHIxpuvZExMCTALN6va0 -bwIDAQAB ------END PUBLIC KEY----- diff --git a/openshift/openshift_ec2/roles/broker/files/ssl.conf b/openshift/openshift_ec2/roles/broker/files/ssl.conf deleted file mode 100644 index 614b1f8..0000000 --- a/openshift/openshift_ec2/roles/broker/files/ssl.conf +++ /dev/null @@ -1,74 +0,0 @@ -# -# This is the Apache server configuration file providing SSL support. 
-# It contains the configuration directives to instruct the server how to -# serve pages over an https connection. For detailing information about these -# directives see -# -# Do NOT simply read the instructions in here without understanding -# what they do. They're here only as hints or reminders. If you are unsure -# consult the online docs. You have been warned. -# - -LoadModule ssl_module modules/mod_ssl.so - -# -# When we also provide SSL we have to listen to the -# the HTTPS port in addition. -# -Listen 443 - -## -## SSL Global Context -## -## All SSL configuration in this context applies both to -## the main server and all SSL-enabled virtual hosts. -## - -# Pass Phrase Dialog: -# Configure the pass phrase gathering process. -# The filtering dialog program (`builtin' is a internal -# terminal dialog) has to provide the pass phrase on stdout. -SSLPassPhraseDialog builtin - -# Inter-Process Session Cache: -# Configure the SSL Session Cache: First the mechanism -# to use and second the expiring timeout (in seconds). -SSLSessionCache shmcb:/var/cache/mod_ssl/scache(512000) -SSLSessionCacheTimeout 300 - -# Semaphore: -# Configure the path to the mutual exclusion semaphore the -# SSL engine uses internally for inter-process synchronization. -SSLMutex default - -# Pseudo Random Number Generator (PRNG): -# Configure one or more sources to seed the PRNG of the -# SSL library. The seed data should be of good random quality. -# WARNING! On some platforms /dev/random blocks if not enough entropy -# is available. This means you then cannot use the /dev/random device -# because it would lead to very long connection times (as long as -# it requires to make more entropy available). But usually those -# platforms additionally provide a /dev/urandom device which doesn't -# block. So, if available, use this one instead. Read the mod_ssl User -# Manual for more details. -SSLRandomSeed startup file:/dev/urandom 256 -SSLRandomSeed connect builtin -#SSLRandomSeed startup file:/dev/random 512 -#SSLRandomSeed connect file:/dev/random 512 -#SSLRandomSeed connect file:/dev/urandom 512 - -# -# Use "SSLCryptoDevice" to enable any supported hardware -# accelerators. Use "openssl engine -v" to list supported -# engine names. NOTE: If you enable an accelerator and the -# server does not start, consult the error logs and ensure -# your accelerator is functioning properly. 
-#
-SSLCryptoDevice builtin
-#SSLCryptoDevice ubsec
-
-##
-## SSL Virtual Host Context
-##
-
-
diff --git a/openshift/openshift_ec2/roles/broker/handlers/main.yml b/openshift/openshift_ec2/roles/broker/handlers/main.yml
deleted file mode 100644
index c63257d..0000000
--- a/openshift/openshift_ec2/roles/broker/handlers/main.yml
+++ /dev/null
@@ -1,9 +0,0 @@
----
-# handlers for broker
-
-- name: restart broker
-  service: name=openshift-broker state=restarted
-
-- name: restart console
-  service: name=openshift-console state=restarted
-
diff --git a/openshift/openshift_ec2/roles/broker/tasks/main.yml b/openshift/openshift_ec2/roles/broker/tasks/main.yml
deleted file mode 100644
index 6e82f51..0000000
--- a/openshift/openshift_ec2/roles/broker/tasks/main.yml
+++ /dev/null
@@ -1,104 +0,0 @@
----
-# Tasks for the OpenShift broker installation
-
-- name: Install mcollective
-  yum: name=mcollective-client
-
-- name: Copy the mcollective configuration file
-  template: src=client.cfg.j2 dest=/etc/mcollective/client.cfg
-
-- name: Install the broker components
-  yum: name="{{ item }}" state=installed
-  with_items: "{{ broker_packages }}"
-
-- name: Copy the rhc client configuration file
-  template: src=express.conf.j2 dest=/etc/openshift/express.conf
-  register: last_run
-
-- name: Install the gems for rhc
-  script: gem.sh
-  when: last_run.changed
-
-- name: create the file for mcollective logging
-  copy: content="" dest=/var/log/mcollective-client.log owner=apache group=root
-
-- name: SELinux - configure sebooleans
-  seboolean: name="{{ item }}" state=true persistent=yes
-  with_items:
-    - httpd_unified
-    - httpd_execmem
-    - httpd_can_network_connect
-    - httpd_can_network_relay
-    - httpd_run_stickshift
-    - named_write_master_zones
-    - httpd_verify_dns
-    - allow_ypbind
-
-- name: copy the auth keyfiles
-  copy: src="{{ item }}" dest="/etc/openshift/{{ item }}"
-  with_items:
-    - server_priv.pem
-    - server_pub.pem
-    - htpasswd
-
-- name: copy the local ssh keys
-  copy: src="~/.ssh/{{ item }}" dest="~/.ssh/{{ item }}"
-  with_items:
-    - id_rsa.pub
-    - id_rsa
-
-- name: copy the local ssh keys to openshift dir
-  copy: src="~/.ssh/{{ item }}" dest="/etc/openshift/rsync_{{ item }}"
-  with_items:
-    - id_rsa.pub
-    - id_rsa
-
-- name: Copy the broker configuration file
-  template: src=broker.conf.j2 dest=/etc/openshift/broker.conf
-  notify: restart broker
-
-- name: Copy the console configuration file
-  template: src=console.conf.j2 dest=/etc/openshift/console.conf
-  notify: restart console
-
-- name: create the file for ssl.conf
-  copy: src=ssl.conf dest=/etc/httpd/conf.d/ssl.conf owner=apache group=root
-
-- name: copy the configuration files for the OpenShift plugins
-  template: src="{{ item }}" dest="/etc/openshift/plugins.d/{{ item }}"
-  with_items:
-    - openshift-origin-auth-remote-user.conf
-    - openshift-origin-dns-bind.conf
-    - openshift-origin-msg-broker-mcollective.conf
-
-- name: Bundle the ruby gems
-  shell: chdir=/var/www/openshift/broker/ /usr/bin/scl enable ruby193 "bundle show"; touch bundle.init
-         creates=/var/www/openshift/broker/bundle.init
-
-- name: Copy the httpd configuration file
-  copy: src=openshift-origin-auth-remote-user.conf dest=/var/www/openshift/broker/httpd/conf.d/openshift-origin-auth-remote-user.conf
-  notify: restart broker
-
-- name: Copy the httpd configuration file for console
-  copy: src=openshift-origin-auth-remote-basic-user.conf dest=/var/www/openshift/console/httpd/conf.d/openshift-origin-auth-remote-basic-user.conf
-  notify: restart console
-
-- name: Fix the SELinux contexts on several files
-  shell: fixfiles -R rubygem-passenger restore; fixfiles -R mod_passenger restore; restorecon -rv /var/run; restorecon -rv /usr/share/rubygems/gems/passenger-*; touch /opt/context.fixed
-         creates=/opt/context.fixed
-
-- name: start the http and broker service
-  service: name="{{ item }}" state=started enabled=yes
-  with_items:
-    - httpd
-    - openshift-broker
-
-- name: Install the rhc client
-  gem: name={{ item }} state=latest
-  with_items:
-    - rdoc
-    - rhc
-  ignore_errors: yes
-
-- name: copy the resolv.conf
-  template: src=resolv.conf.j2 dest=/etc/resolv.conf
-
diff --git a/openshift/openshift_ec2/roles/broker/templates/broker.conf.j2 b/openshift/openshift_ec2/roles/broker/templates/broker.conf.j2
deleted file mode 100644
index 7d83387..0000000
--- a/openshift/openshift_ec2/roles/broker/templates/broker.conf.j2
+++ /dev/null
@@ -1,47 +0,0 @@
-# Domain suffix to use for applications (Must match node config)
-CLOUD_DOMAIN="{{ domain_name }}"
-# Comma-separated list of valid gear sizes
-VALID_GEAR_SIZES="small,medium"
-
-# Default number of gears to assign to a new user
-DEFAULT_MAX_GEARS="100"
-# Default gear size for a new gear
-DEFAULT_GEAR_SIZE="small"
-
-#Broker datastore configuration
-MONGO_REPLICA_SETS=true
-# Replica set example: "<host1>:<port1> <host2>:<port2> ..."
-MONGO_HOST_PORT="{% for host in groups['mongo_servers'] %}{{ host }}:{{ mongod_port }}{% if not loop.last %}, {% endif %}{% endfor %}"
-
-MONGO_USER="admin"
-MONGO_PASSWORD="{{ mongo_admin_pass }}"
-MONGO_DB="admin"
-
-#Enables gear/filesystem resource usage tracking
-ENABLE_USAGE_TRACKING_DATASTORE="false"
-#Log resource usage information to syslog
-ENABLE_USAGE_TRACKING_SYSLOG="false"
-
-#Enable all broker analytics
-ENABLE_ANALYTICS="false"
-
-#Enables logging of REST API operations and success/failure
-ENABLE_USER_ACTION_LOG="true"
-USER_ACTION_LOG_FILE="/var/log/openshift/broker/user_action.log"
-
-AUTH_SALT="{{ auth_salt }}"
-AUTH_PRIVKEYFILE="/etc/openshift/server_priv.pem"
-AUTH_PRIVKEYPASS=""
-AUTH_PUBKEYFILE="/etc/openshift/server_pub.pem"
-AUTH_RSYNC_KEY_FILE="/etc/openshift/rsync_id_rsa"
-SESSION_SECRET="{{ session_secret }}"
diff --git a/openshift/openshift_ec2/roles/broker/templates/client.cfg.j2 b/openshift/openshift_ec2/roles/broker/templates/client.cfg.j2
deleted file mode 100644
index 39a086d..0000000
--- a/openshift/openshift_ec2/roles/broker/templates/client.cfg.j2
+++ /dev/null
@@ -1,25 +0,0 @@
-topicprefix = /topic/
-main_collective = mcollective
-collectives = mcollective
-libdir = /opt/rh/ruby193/root/usr/libexec/mcollective
-logfile = /var/log/mcollective-client.log
-loglevel = debug
-direct_addressing = 1
-
-# Plugins
-securityprovider = psk
-plugin.psk = unset
-
-connector = stomp
-plugin.stomp.pool.size = {{ groups['mq']|length() }}
-{% for host in groups['mq'] %}
-
-plugin.stomp.pool.host{{ loop.index }} = {{ hostvars[host].ansible_hostname }}
-plugin.stomp.pool.port{{ loop.index }} = 61613
-plugin.stomp.pool.user{{ loop.index }} = mcollective
-plugin.stomp.pool.password{{ loop.index }} = {{ mcollective_pass }}
-
-{% endfor %}
-
-
-
diff --git a/openshift/openshift_ec2/roles/broker/templates/console.conf.j2 b/openshift/openshift_ec2/roles/broker/templates/console.conf.j2
deleted file mode 100644
index 3791938..0000000
--- a/openshift/openshift_ec2/roles/broker/templates/console.conf.j2
+++ /dev/null
@@ -1,8 +0,0 @@
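-# (A quick health probe, sketched under this example's defaults: the broker
-# REST API the console talks to can be checked from any machine with, e.g.:
-#   curl -k https://<broker-host>/broker/rest/api
-# <broker-host> is a placeholder; the LVS check script in roles/lvs polls the
-# same URL to decide whether a broker is healthy.)
-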
-BROKER_URL=http://localhost:8080/broker/rest - -CONSOLE_SECURITY=remote_user - -REMOTE_USER_HEADER=REMOTE_USER - -REMOTE_USER_COPY_HEADERS=X-Remote-User -SESSION_SECRET="{{ session_secret }}" diff --git a/openshift/openshift_ec2/roles/broker/templates/express.conf.j2 b/openshift/openshift_ec2/roles/broker/templates/express.conf.j2 deleted file mode 100644 index a4951f0..0000000 --- a/openshift/openshift_ec2/roles/broker/templates/express.conf.j2 +++ /dev/null @@ -1,8 +0,0 @@ -# Remote API server -libra_server = '{{ ansible_hostname }}' - -# Logging -debug = 'false' - -# Timeout -#timeout = '10' diff --git a/openshift/openshift_ec2/roles/broker/templates/openshift-origin-auth-remote-user.conf b/openshift/openshift_ec2/roles/broker/templates/openshift-origin-auth-remote-user.conf deleted file mode 100644 index 67f1545..0000000 --- a/openshift/openshift_ec2/roles/broker/templates/openshift-origin-auth-remote-user.conf +++ /dev/null @@ -1,4 +0,0 @@ -# Settings related to the Remote-User variant of an OpenShift auth plugin - -# The name of the header containing the trusted username -TRUSTED_HEADER="REMOTE_USER" diff --git a/openshift/openshift_ec2/roles/broker/templates/openshift-origin-dns-bind.conf b/openshift/openshift_ec2/roles/broker/templates/openshift-origin-dns-bind.conf deleted file mode 100644 index b5a2390..0000000 --- a/openshift/openshift_ec2/roles/broker/templates/openshift-origin-dns-bind.conf +++ /dev/null @@ -1,16 +0,0 @@ -# Settings related to the bind variant of an OpenShift DNS plugin - -# The DNS server -BIND_SERVER="{{ hostvars[groups['dns'][0]].ansible_default_ipv4.address }}" - -# The DNS server's port -BIND_PORT=53 - -# The key name for your zone -BIND_KEYNAME="{{ domain_name }}" - -# base64-encoded key, most likely from /var/named/example.com.key. -BIND_KEYVALUE="{{ dns_key }}" - -# The base zone for the DNS server -BIND_ZONE="{{ domain_name }}" diff --git a/openshift/openshift_ec2/roles/broker/templates/openshift-origin-msg-broker-mcollective.conf b/openshift/openshift_ec2/roles/broker/templates/openshift-origin-msg-broker-mcollective.conf deleted file mode 100644 index 76c9d75..0000000 --- a/openshift/openshift_ec2/roles/broker/templates/openshift-origin-msg-broker-mcollective.conf +++ /dev/null @@ -1,25 +0,0 @@ -# Some settings to configure how mcollective handles gear placement on nodes: - -# Use districts when placing gears and moving them between hosts. Should be -# true except for particular dev/test situations. -DISTRICTS_ENABLED=true - -# Require new gears to be placed in a district; when true, placement will fail -# if there isn't a district with capacity and the right gear profile. -DISTRICTS_REQUIRE_FOR_APP_CREATE=false - -# Used as the default max gear capacity when creating a district. -DISTRICTS_MAX_CAPACITY=6000 - -# It is unlikely these will need to be changed -DISTRICTS_FIRST_UID=1000 -MCOLLECTIVE_DISCTIMEOUT=5 -MCOLLECTIVE_TIMEOUT=180 -MCOLLECTIVE_VERBOSE=false -MCOLLECTIVE_PROGRESS_BAR=0 -MCOLLECTIVE_CONFIG="/etc/mcollective/client.cfg" - -# Place gears on nodes with the requested profile; should be true, as -# a false value means gear profiles are ignored and gears are placed arbitrarily. 
-NODE_PROFILE_ENABLED=true - diff --git a/openshift/openshift_ec2/roles/broker/templates/resolv.conf.j2 b/openshift/openshift_ec2/roles/broker/templates/resolv.conf.j2 deleted file mode 100644 index 9fdca84..0000000 --- a/openshift/openshift_ec2/roles/broker/templates/resolv.conf.j2 +++ /dev/null @@ -1,2 +0,0 @@ -search {{ domain_name }} -nameserver {{ hostvars[groups['dns'][0]].ansible_default_ipv4.address }} diff --git a/openshift/openshift_ec2/roles/broker/vars/main.yml b/openshift/openshift_ec2/roles/broker/vars/main.yml deleted file mode 100644 index 86c7628..0000000 --- a/openshift/openshift_ec2/roles/broker/vars/main.yml +++ /dev/null @@ -1,82 +0,0 @@ ---- -# variables for broker - -broker_packages: - - mongodb-devel - - openshift-origin-broker - - openshift-origin-broker-util - - rubygem-openshift-origin-dns-nsupdate - - rubygem-openshift-origin-auth-mongo - - rubygem-openshift-origin-auth-remote-user - - rubygem-openshift-origin-controller - - rubygem-openshift-origin-msg-broker-mcollective - - rubygem-openshift-origin-dns-bind - - rubygem-passenger - - ruby193-mod_passenger - - mysql-devel - - openshift-origin-console - - ruby193-rubygem-actionmailer - - ruby193-rubygem-actionpack - - ruby193-rubygem-activemodel - - ruby193-rubygem-activerecord - - ruby193-rubygem-activeresource - - ruby193-rubygem-activesupport - - ruby193-rubygem-arel - - ruby193-rubygem-bigdecimal - - ruby193-rubygem-net-ssh - - ruby193-rubygem-commander - - ruby193-rubygem-archive-tar-minitar - - ruby193-rubygem-bson - - ruby193-rubygem-bson_ext - - ruby193-rubygem-builder - - ruby193-rubygem-bundler - - ruby193-rubygem-cucumber - - ruby193-rubygem-diff-lcs - - ruby193-rubygem-dnsruby - - ruby193-rubygem-erubis - - ruby193-rubygem-gherkin - - ruby193-rubygem-hike - - ruby193-rubygem-i18n - - ruby193-rubygem-journey - - ruby193-rubygem-json - - ruby193-rubygem-mail - - ruby193-rubygem-metaclass - - ruby193-rubygem-mime-types - - ruby193-rubygem-minitest - - ruby193-rubygem-mocha - - ruby193-rubygem-mongo - - ruby193-rubygem-mongoid - - ruby193-rubygem-moped - - ruby193-rubygem-multi_json - - ruby193-rubygem-open4 - - ruby193-rubygem-origin - - ruby193-rubygem-parseconfig - - ruby193-rubygem-polyglot - - ruby193-rubygem-rack - - ruby193-rubygem-rack-cache - - ruby193-rubygem-rack-ssl - - ruby193-rubygem-rack-test - - ruby193-rubygem-rails - - ruby193-rubygem-railties - - ruby193-rubygem-rake - - ruby193-rubygem-rdoc - - ruby193-rubygem-regin - - ruby193-rubygem-rest-client - - ruby193-rubygem-simplecov - - ruby193-rubygem-simplecov-html - - ruby193-rubygem-sprockets - - ruby193-rubygem-state_machine - - ruby193-rubygem-stomp - - ruby193-rubygem-systemu - - ruby193-rubygem-term-ansicolor - - ruby193-rubygem-thor - - ruby193-rubygem-tilt - - ruby193-rubygem-treetop - - ruby193-rubygem-tzinfo - - ruby193-rubygem-xml-simple - - - -auth_salt: "ceFm8El0mTLu7VLGpBFSFfmxeID+UoNfsQrAKs8dhKSQ/uAGwjWiz3VdyuB1fW/WR+R1q7yXW+sxSm9wkmuqVA==" -session_secret: "25905ebdb06d8705025531bb5cb45335c53c4f36ee534719ffffd0fe28808395d80449c6c69bc079e2ac14c8ff66639bba1513332ef9ad5ed42cc0bb21e07134" - diff --git a/openshift/openshift_ec2/roles/common/files/RPM-GPG-KEY-EPEL-6 b/openshift/openshift_ec2/roles/common/files/RPM-GPG-KEY-EPEL-6 deleted file mode 100644 index 7a20304..0000000 --- a/openshift/openshift_ec2/roles/common/files/RPM-GPG-KEY-EPEL-6 +++ /dev/null @@ -1,29 +0,0 @@ ------BEGIN PGP PUBLIC KEY BLOCK----- -Version: GnuPG v1.4.5 (GNU/Linux) - -mQINBEvSKUIBEADLGnUj24ZVKW7liFN/JA5CgtzlNnKs7sBg7fVbNWryiE3URbn1 
-JXvrdwHtkKyY96/ifZ1Ld3lE2gOF61bGZ2CWwJNee76Sp9Z+isP8RQXbG5jwj/4B -M9HK7phktqFVJ8VbY2jfTjcfxRvGM8YBwXF8hx0CDZURAjvf1xRSQJ7iAo58qcHn -XtxOAvQmAbR9z6Q/h/D+Y/PhoIJp1OV4VNHCbCs9M7HUVBpgC53PDcTUQuwcgeY6 -pQgo9eT1eLNSZVrJ5Bctivl1UcD6P6CIGkkeT2gNhqindRPngUXGXW7Qzoefe+fV -QqJSm7Tq2q9oqVZ46J964waCRItRySpuW5dxZO34WM6wsw2BP2MlACbH4l3luqtp -Xo3Bvfnk+HAFH3HcMuwdaulxv7zYKXCfNoSfgrpEfo2Ex4Im/I3WdtwME/Gbnwdq -3VJzgAxLVFhczDHwNkjmIdPAlNJ9/ixRjip4dgZtW8VcBCrNoL+LhDrIfjvnLdRu -vBHy9P3sCF7FZycaHlMWP6RiLtHnEMGcbZ8QpQHi2dReU1wyr9QgguGU+jqSXYar -1yEcsdRGasppNIZ8+Qawbm/a4doT10TEtPArhSoHlwbvqTDYjtfV92lC/2iwgO6g -YgG9XrO4V8dV39Ffm7oLFfvTbg5mv4Q/E6AWo/gkjmtxkculbyAvjFtYAQARAQAB -tCFFUEVMICg2KSA8ZXBlbEBmZWRvcmFwcm9qZWN0Lm9yZz6JAjYEEwECACAFAkvS -KUICGw8GCwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAKCRA7Sd8qBgi4lR/GD/wLGPv9 -qO39eyb9NlrwfKdUEo1tHxKdrhNz+XYrO4yVDTBZRPSuvL2yaoeSIhQOKhNPfEgT -9mdsbsgcfmoHxmGVcn+lbheWsSvcgrXuz0gLt8TGGKGGROAoLXpuUsb1HNtKEOwP -Q4z1uQ2nOz5hLRyDOV0I2LwYV8BjGIjBKUMFEUxFTsL7XOZkrAg/WbTH2PW3hrfS -WtcRA7EYonI3B80d39ffws7SmyKbS5PmZjqOPuTvV2F0tMhKIhncBwoojWZPExft -HpKhzKVh8fdDO/3P1y1Fk3Cin8UbCO9MWMFNR27fVzCANlEPljsHA+3Ez4F7uboF -p0OOEov4Yyi4BEbgqZnthTG4ub9nyiupIZ3ckPHr3nVcDUGcL6lQD/nkmNVIeLYP -x1uHPOSlWfuojAYgzRH6LL7Idg4FHHBA0to7FW8dQXFIOyNiJFAOT2j8P5+tVdq8 -wB0PDSH8yRpn4HdJ9RYquau4OkjluxOWf0uRaS//SUcCZh+1/KBEOmcvBHYRZA5J -l/nakCgxGb2paQOzqqpOcHKvlyLuzO5uybMXaipLExTGJXBlXrbbASfXa/yGYSAG -iVrGz9CE6676dMlm8F+s3XXE13QZrXmjloc6jwOljnfAkjTGXjiB7OULESed96MR -XtfLk0W5Ab9pd7tKDR6QHI7rgHXfCopRnZ2VVQ== -=V/6I ------END PGP PUBLIC KEY BLOCK----- diff --git a/openshift/openshift_ec2/roles/common/files/epel.repo.j2 b/openshift/openshift_ec2/roles/common/files/epel.repo.j2 deleted file mode 100644 index 0160dfe..0000000 --- a/openshift/openshift_ec2/roles/common/files/epel.repo.j2 +++ /dev/null @@ -1,26 +0,0 @@ -[epel] -name=Extra Packages for Enterprise Linux 6 - $basearch -#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch -mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch -failovermethod=priority -enabled=1 -gpgcheck=1 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 - -[epel-debuginfo] -name=Extra Packages for Enterprise Linux 6 - $basearch - Debug -#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug -mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch -failovermethod=priority -enabled=0 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 -gpgcheck=1 - -[epel-source] -name=Extra Packages for Enterprise Linux 6 - $basearch - Source -#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS -mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch -failovermethod=priority -enabled=0 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 -gpgcheck=1 diff --git a/openshift/openshift_ec2/roles/common/files/openshift.repo b/openshift/openshift_ec2/roles/common/files/openshift.repo deleted file mode 100644 index c0a4332..0000000 --- a/openshift/openshift_ec2/roles/common/files/openshift.repo +++ /dev/null @@ -1,13 +0,0 @@ -[openshift_support] -name=Extra Packages for OpenShift - $basearch -baseurl=https://mirror.openshift.com/pub/openshift/release/2/rhel-6/dependencies/x86_64/ -failovermethod=priority -enabled=1 -gpgcheck=0 - -[openshift] -name=Packages for OpenShift - $basearch -baseurl=https://mirror.openshift.com/pub/openshift/release/2/rhel-6/packages/x86_64/ -failovermethod=priority -enabled=1 -gpgcheck=0 diff --git a/openshift/openshift_ec2/roles/common/files/scl193.sh 
b/openshift/openshift_ec2/roles/common/files/scl193.sh deleted file mode 100644 index 54673f8..0000000 --- a/openshift/openshift_ec2/roles/common/files/scl193.sh +++ /dev/null @@ -1,10 +0,0 @@ -# Setup PATH, LD_LIBRARY_PATH and MANPATH for ruby-1.9 -ruby19_dir=$(dirname `scl enable ruby193 "which ruby"`) -export PATH=$ruby19_dir:$PATH - -ruby19_ld_libs=$(scl enable ruby193 "printenv LD_LIBRARY_PATH") -export LD_LIBRARY_PATH=$ruby19_ld_libs:$LD_LIBRARY_PATH - -ruby19_manpath=$(scl enable ruby193 "printenv MANPATH") -export MANPATH=$ruby19_manpath:$MANPATH - diff --git a/openshift/openshift_ec2/roles/common/handlers/main.yml b/openshift/openshift_ec2/roles/common/handlers/main.yml deleted file mode 100644 index 0f563a9..0000000 --- a/openshift/openshift_ec2/roles/common/handlers/main.yml +++ /dev/null @@ -1,5 +0,0 @@ ---- -# Handler for mongod - -- name: restart iptables - service: name=iptables state=restarted diff --git a/openshift/openshift_ec2/roles/common/tasks/main.yml b/openshift/openshift_ec2/roles/common/tasks/main.yml deleted file mode 100644 index d200131..0000000 --- a/openshift/openshift_ec2/roles/common/tasks/main.yml +++ /dev/null @@ -1,42 +0,0 @@ ---- -# Common tasks across nodes - -- name: Install common packages - yum : name={{ item }} state=installed - with_items: - - libselinux-python - - policycoreutils - - policycoreutils-python - - ntp - - ruby-devel - -- name: make sure we have the right time - shell: ntpdate -u 0.centos.pool.ntp.org - -- name: start the ntp service - service: name=ntpd state=started enabled=yes - -- name: Create the hosts file for all machines - template: src=hosts.j2 dest=/etc/hosts - -- name: Create the EPEL Repository. - copy: src=epel.repo.j2 dest=/etc/yum.repos.d/epel.repo - -- name: Create the OpenShift Repository. - copy: src=openshift.repo dest=/etc/yum.repos.d/openshift.repo - -- name: Create the GPG key for EPEL - copy: src=RPM-GPG-KEY-EPEL-6 dest=/etc/pki/rpm-gpg - -- name: SELinux Enforcing (Targeted) - selinux: policy=targeted state=enforcing - -- name: copy the file for ruby193 profile - copy: src=scl193.sh dest=/etc/profile.d/scl193.sh mode=755 - -- name: copy the file for mcollective profile - copy: src=scl193.sh dest=/etc/sysconfig/mcollective mode=755 - -- name: Create the iptables file - template: src=iptables.j2 dest=/etc/sysconfig/iptables - notify: restart iptables diff --git a/openshift/openshift_ec2/roles/common/templates/hosts.j2 b/openshift/openshift_ec2/roles/common/templates/hosts.j2 deleted file mode 100644 index c531cc8..0000000 --- a/openshift/openshift_ec2/roles/common/templates/hosts.j2 +++ /dev/null @@ -1,4 +0,0 @@ -127.0.0.1 localhost -{% for host in groups['all'] %} -{{ hostvars[host]['ansible_' + iface].ipv4.address }} {{ host }} {{ hostvars[host].ansible_hostname }} -{% endfor %} diff --git a/openshift/openshift_ec2/roles/common/templates/iptables.j2 b/openshift/openshift_ec2/roles/common/templates/iptables.j2 deleted file mode 100644 index f1229f4..0000000 --- a/openshift/openshift_ec2/roles/common/templates/iptables.j2 +++ /dev/null @@ -1,52 +0,0 @@ -# Firewall configuration written by system-config-firewall -# Manual customization of this file is not recommended. 
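-# (Sketch of what the template below renders for a host in the [mq] group, in
-# addition to the common rules at the bottom:
-#   -A INPUT -p tcp --dport 61613 -j ACCEPT
-#   -A INPUT -p tcp --dport 61616 -j ACCEPT
-#   -A INPUT -p tcp --dport 8161 -j ACCEPT
-# each host gets only the sections matching its own inventory groups.)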
- -{% if 'broker' in group_names %} -*nat --A PREROUTING -d {{ vip }}/32 -p tcp -m tcp --dport 443 -j REDIRECT -COMMIT -{% endif %} - -*filter -:INPUT ACCEPT [0:0] -:FORWARD ACCEPT [0:0] -:OUTPUT ACCEPT [0:0] -{% if 'mongo_servers' in group_names %} --A INPUT -p tcp --dport {{ mongod_port }} -j ACCEPT -{% endif %} -{% if 'mq' in group_names %} --A INPUT -p tcp --dport 61613 -j ACCEPT --A INPUT -p tcp --dport 61616 -j ACCEPT --A INPUT -p tcp --dport 8161 -j ACCEPT -{% endif %} -{% if 'broker' in group_names %} --A INPUT -p tcp --dport 80 -j ACCEPT --A INPUT -p tcp --dport 443 -j ACCEPT -{% endif %} -{% if 'lvs' in group_names %} --A INPUT -p tcp --dport 80 -j ACCEPT --A INPUT -p tcp --dport 443 -j ACCEPT -{% endif %} -{% if 'nodes' in group_names %} --A INPUT -p tcp --dport 80 -j ACCEPT --A INPUT -p tcp --dport 443 -j ACCEPT --A INPUT -p tcp -m multiport --dports 35531:65535 -j ACCEPT -{% endif %} -{% if 'dns' in group_names %} --A INPUT -p tcp --dport {{ dns_port }} -j ACCEPT --A INPUT -p tcp --dport 80 -j ACCEPT --A INPUT -p tcp --dport 443 -j ACCEPT --A INPUT -p udp --dport {{ dns_port }} -j ACCEPT --A INPUT -p udp --dport {{ rndc_port }} -j ACCEPT --A INPUT -p tcp --dport {{ rndc_port }} -j ACCEPT -{% endif %} --A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT --A INPUT -p icmp -j ACCEPT --A INPUT -i lo -j ACCEPT --A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT --A INPUT -j REJECT --reject-with icmp-host-prohibited --A FORWARD -j REJECT --reject-with icmp-host-prohibited -COMMIT - - - diff --git a/openshift/openshift_ec2/roles/dns/handlers/main.yml b/openshift/openshift_ec2/roles/dns/handlers/main.yml deleted file mode 100644 index 47790c3..0000000 --- a/openshift/openshift_ec2/roles/dns/handlers/main.yml +++ /dev/null @@ -1,6 +0,0 @@ ---- -# handler for dns - -- name: restart named - service: name=named state=restarted enabled=yes - diff --git a/openshift/openshift_ec2/roles/dns/tasks/main.yml b/openshift/openshift_ec2/roles/dns/tasks/main.yml deleted file mode 100644 index 90dcb70..0000000 --- a/openshift/openshift_ec2/roles/dns/tasks/main.yml +++ /dev/null @@ -1,26 +0,0 @@ ---- -# tasks for the bind server - -- name: Install the bind packages - yum: name={{ item }} state=installed - with_items: - - bind - - bind-utils - -- name: Copy the key for dynamic dns updates to the domain - template: src=keyfile.j2 dest=/var/named/{{ domain_name }}.key owner=named group=named - -- name: Copy the forwarders file for bind - template: src=forwarders.conf.j2 dest=/var/named/forwarders.conf owner=named group=named - -- name: copy the db file for the domain - template: src=domain.db.j2 dest=/var/named/dynamic/{{ domain_name }}.db owner=named group=named - -- name: copy the named.conf file - template: src=named.conf.j2 dest=/etc/named.conf owner=root group=named mode=755 - notify: restart named - -- name: restore the sellinux contexts - shell: restorecon -v /var/named/forwarders.conf; restorecon -rv /var/named; restorecon /etc/named.conf; touch /opt/named.init - creates=/opt/named.init - diff --git a/openshift/openshift_ec2/roles/dns/templates/domain.db.j2 b/openshift/openshift_ec2/roles/dns/templates/domain.db.j2 deleted file mode 100644 index 29ea7a5..0000000 --- a/openshift/openshift_ec2/roles/dns/templates/domain.db.j2 +++ /dev/null @@ -1,16 +0,0 @@ -$ORIGIN . -$TTL 1 ; 1 second -{{ domain_name }} IN SOA ns1.{{ domain_name }}. hostmaster.{{ domain_name }}. 
( - 2002100404 ; serial - 10800 ; refresh (3 hours) - 3600 ; retry (1 hour) - 3600000 ; expire (5 weeks 6 days 16 hours) - 7200 ; minimum (2 hours) - ) - NS ns1.{{ domain_name }}. -$ORIGIN {{ domain_name }}. -{% for host in groups['nodes'] %} -{{ hostvars[host].ansible_hostname }} A {{ hostvars[host].ansible_default_ipv4.address }} -{% endfor %} -ns1 A {{ ansible_default_ipv4.address }} - diff --git a/openshift/openshift_ec2/roles/dns/templates/forwarders.conf.j2 b/openshift/openshift_ec2/roles/dns/templates/forwarders.conf.j2 deleted file mode 100644 index 9538cbe..0000000 --- a/openshift/openshift_ec2/roles/dns/templates/forwarders.conf.j2 +++ /dev/null @@ -1 +0,0 @@ -forwarders { {{ forwarders|join(';') }}; } ; diff --git a/openshift/openshift_ec2/roles/dns/templates/keyfile.j2 b/openshift/openshift_ec2/roles/dns/templates/keyfile.j2 deleted file mode 100644 index 7aeefa2..0000000 --- a/openshift/openshift_ec2/roles/dns/templates/keyfile.j2 +++ /dev/null @@ -1,6 +0,0 @@ -key {{ domain_name }} { - -algorithm HMAC-MD5; -secret {{ dns_key }}; - -}; diff --git a/openshift/openshift_ec2/roles/dns/templates/named.conf.j2 b/openshift/openshift_ec2/roles/dns/templates/named.conf.j2 deleted file mode 100644 index 0508c0b..0000000 --- a/openshift/openshift_ec2/roles/dns/templates/named.conf.j2 +++ /dev/null @@ -1,57 +0,0 @@ -// -// named.conf -// -// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS -// server as a caching only nameserver (as a localhost DNS resolver only). -// -// See /usr/share/doc/bind*/sample/ for example named configuration files. -// - -options { - listen-on port {{ dns_port }} { any; }; - directory "/var/named"; - dump-file "/var/named/data/cache_dump.db"; - statistics-file "/var/named/data/named_stats.txt"; - memstatistics-file "/var/named/data/named_mem_stats.txt"; - allow-query { any; }; - recursion yes; - - dnssec-enable yes; - dnssec-validation yes; - dnssec-lookaside auto; - - /* Path to ISC DLV key */ - bindkeys-file "/etc/named.iscdlv.key"; - include "forwarders.conf"; - - managed-keys-directory "/var/named/dynamic"; -}; - -logging { - channel default_debug { - file "data/named.run"; - severity debug; - }; -}; -include "{{ domain_name }}.key"; - -controls { - inet * port {{ rndc_port }} allow { any; } - keys { example.com; }; -}; - -zone "{{ domain_name }}" IN { - type master; - file "dynamic/{{ domain_name }}.db"; - allow-update { key {{ domain_name }}; }; -}; - -include "/etc/named.rfc1912.zones"; -include "/etc/named.root.key"; - -zone "." 
IN { - type hint; - file "named.ca"; -}; - - diff --git a/openshift/openshift_ec2/roles/dns/vars/main.yml b/openshift/openshift_ec2/roles/dns/vars/main.yml deleted file mode 100644 index cb48fca..0000000 --- a/openshift/openshift_ec2/roles/dns/vars/main.yml +++ /dev/null @@ -1,8 +0,0 @@ ---- -# Variables for the bind daemon - -forwarders: - - 8.8.8.8 - - 8.8.4.4 - - diff --git a/openshift/openshift_ec2/roles/ec2/tasks/main.yml b/openshift/openshift_ec2/roles/ec2/tasks/main.yml deleted file mode 100644 index 8136ffb..0000000 --- a/openshift/openshift_ec2/roles/ec2/tasks/main.yml +++ /dev/null @@ -1,30 +0,0 @@ ---- -- name: Create Instance - ec2: > - region="{{ region }}" - zone="{{ zone }}" - id="{{ id + '_' + type }}" - ec2_access_key="{{ ec2_access_key }}" - ec2_secret_key="{{ ec2_secret_key }}" - keypair="{{ keypair }}" - instance_type="{{ instance_type }}" - image="{{ image }}" - group="{{ group }}" - wait=true - user_data="{{ type }}" - instance_tags='{"type":"{{ type }}", "id":"{{ id }}"}' - count="{{ ncount }}" - register: ec2 - -- pause: seconds=60 - when: type == 'nodes' - -- name: Add new instance to host group - add_host: hostname={{ item.public_dns_name }} groupname={{ type }} - with_items: ec2.instances - when: type != 'mq' - -- name: Add new instance to host group - add_host: hostname={{ item.public_dns_name }} groupname="mq,mongo_servers" - with_items: ec2.instances - when: type == 'mq' diff --git a/openshift/openshift_ec2/roles/ec2_remove/tasks/main.yml b/openshift/openshift_ec2/roles/ec2_remove/tasks/main.yml deleted file mode 100644 index 1e93667..0000000 --- a/openshift/openshift_ec2/roles/ec2_remove/tasks/main.yml +++ /dev/null @@ -1,25 +0,0 @@ ---- - -- name: Create Instance - ec2: > - region="{{ region }}" - zone="{{ zone }}" - id="{{ id + '_' + type }}" - ec2_access_key="{{ ec2_access_key }}" - ec2_secret_key="{{ ec2_secret_key }}" - keypair="{{ keypair }}" - instance_type="{{ instance_type }}" - image="{{ image }}" - group="{{ group }}" - wait=true - count="{{ ncount }}" - register: ec2 - -- name: Delete Instance - ec2: - region: "{{ region }}" - ec2_access_key: "{{ ec2_access_key }}" - ec2_secret_key: "{{ ec2_secret_key }}" - state: 'absent' - instance_ids: "{{ item }}" - with_items: ec2.instance_ids diff --git a/openshift/openshift_ec2/roles/lvs/tasks/main.yml b/openshift/openshift_ec2/roles/lvs/tasks/main.yml deleted file mode 100644 index 3958df3..0000000 --- a/openshift/openshift_ec2/roles/lvs/tasks/main.yml +++ /dev/null @@ -1,26 +0,0 @@ ---- -# Tasks for deploying the loadbalancer lvs - -- name: Install the lvs packages - yum: name={{ item }} state=installed - with_items: - - piranha - - wget - -- name: disable selinux - selinux: state=disabled - -- name: copy the configuration file - template: src=lvs.cf.j2 dest=/etc/sysconfig/ha/lvs.cf - -- name: copy the file for broker monitoring - template: src=check.sh dest=/opt/check.sh mode=0755 - -- name: start the services - service: name={{ item }} state=started enabled=yes - with_items: - - ipvsadm - - pulse - ignore_errors: yes - tags: test - diff --git a/openshift/openshift_ec2/roles/lvs/templates/check.sh b/openshift/openshift_ec2/roles/lvs/templates/check.sh deleted file mode 100644 index 10f16c8..0000000 --- a/openshift/openshift_ec2/roles/lvs/templates/check.sh +++ /dev/null @@ -1,8 +0,0 @@ -#!/bin/bash -LINES=`wget -q -O - --no-check-certificate https://$1/broker/rest/api | wc -c` -if [ $LINES -gt "0" ]; then - echo "OK" -else - echo "FAILURE" -fi -exit 0 diff --git 
a/openshift/openshift_ec2/roles/lvs/templates/lvs.cf b/openshift/openshift_ec2/roles/lvs/templates/lvs.cf deleted file mode 100644 index b49f2a9..0000000 --- a/openshift/openshift_ec2/roles/lvs/templates/lvs.cf +++ /dev/null @@ -1,44 +0,0 @@ -serial_no = 14 -primary = 10.152.154.62 -service = lvs -backup_active = 1 -backup = 10.114.215.67 -heartbeat = 1 -heartbeat_port = 539 -keepalive = 6 -deadtime = 18 -network = direct -debug_level = NONE -monitor_links = 0 -syncdaemon = 0 -tcp_timeout = 30 -tcpfin_timeout = 30 -udp_timeout = 30 -virtual webserver { - active = 1 - address = 10.114.215.69 eth0:1 - vip_nmask = 255.255.255.255 - port = 80 - pmask = 255.255.255.255 - send = "GET / HTTP/1.0\r\n\r\n" - expect = "HTTP" - use_regex = 0 - load_monitor = none - scheduler = rr - protocol = tcp - timeout = 60 - reentry = 45 - quiesce_server = 0 - server web1 { - address = 10.35.91.109 - active = 1 - port = 80 - weight = 1 - } - server web2 { - address = 10.147.222.172 - active = 1 - port = 80 - weight = 2 - } -} diff --git a/openshift/openshift_ec2/roles/lvs/templates/lvs.cf.https b/openshift/openshift_ec2/roles/lvs/templates/lvs.cf.https deleted file mode 100644 index 3b7c806..0000000 --- a/openshift/openshift_ec2/roles/lvs/templates/lvs.cf.https +++ /dev/null @@ -1,44 +0,0 @@ -serial_no = 14 -primary = 10.152.154.62 -service = lvs -backup_active = 1 -backup = 10.114.215.67 -heartbeat = 1 -heartbeat_port = 539 -keepalive = 6 -deadtime = 18 -network = direct -debug_level = NONE -monitor_links = 0 -syncdaemon = 0 -tcp_timeout = 30 -tcpfin_timeout = 30 -udp_timeout = 30 -virtual webserver { - active = 1 - address = 10.114.215.69 eth0:1 - vip_nmask = 255.255.255.255 - port = 80 - pmask = 255.255.255.255 - send_program = "/etc/https.check" - expect = "OK" - use_regex = 0 - load_monitor = none - scheduler = rr - protocol = tcp - timeout = 60 - reentry = 45 - quiesce_server = 0 - server web1 { - address = 10.35.91.109 - active = 1 - port = 80 - weight = 1 - } - server web2 { - address = 10.147.222.172 - active = 1 - port = 80 - weight = 2 - } -} diff --git a/openshift/openshift_ec2/roles/lvs/templates/lvs.cf.j2 b/openshift/openshift_ec2/roles/lvs/templates/lvs.cf.j2 deleted file mode 100644 index e5f83fa..0000000 --- a/openshift/openshift_ec2/roles/lvs/templates/lvs.cf.j2 +++ /dev/null @@ -1,45 +0,0 @@ -serial_no = 1 -primary = {{ hostvars[groups['lvs'][0]].ansible_default_ipv4.address }} -service = lvs -backup_active = 1 -backup = {{ hostvars[groups['lvs'][1]].ansible_default_ipv4.address }} -heartbeat = 1 -heartbeat_port = 539 -keepalive = 6 -deadtime = 18 -network = direct -debug_level = NONE -monitor_links = 0 -syncdaemon = 0 -tcp_timeout = 30 -tcpfin_timeout = 30 -udp_timeout = 30 -virtual brokers { - active = 1 - address = {{ vip }} {{ hostvars[groups['lvs'][0]].ansible_default_ipv4.interface }}:1 - vip_nmask = {{ vip_netmask }} - port = 443 - persistent = 10 - pmask = 255.255.255.255 - send_program = "/opt/check.sh %h" - expect = "OK" - use_regex = 0 - load_monitor = none - scheduler = rr - protocol = tcp - timeout = 60 - reentry = 45 - quiesce_server = 0 - server web1 { - address = {{ hostvars[groups['broker'][0]].ansible_default_ipv4.address }} - active = 1 - port = 443 - weight = 0 - } - server web2 { - address = {{ hostvars[groups['broker'][1]].ansible_default_ipv4.address }} - active = 1 - port = 443 - weight = 0 - } -} diff --git a/openshift/openshift_ec2/roles/mongodb/files/10gen.repo.j2 b/openshift/openshift_ec2/roles/mongodb/files/10gen.repo.j2 deleted file mode 100644 index 
db76731..0000000 --- a/openshift/openshift_ec2/roles/mongodb/files/10gen.repo.j2 +++ /dev/null @@ -1,6 +0,0 @@ -[10gen] -name=10gen Repository -baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64 -gpgcheck=0 -enabled=1 - diff --git a/openshift/openshift_ec2/roles/mongodb/files/secret b/openshift/openshift_ec2/roles/mongodb/files/secret deleted file mode 100644 index 4d77cbc..0000000 --- a/openshift/openshift_ec2/roles/mongodb/files/secret +++ /dev/null @@ -1,3 +0,0 @@ -qGO6OYb64Uth9p9Tm8s9kqarydmAg1AUdgVz+ecjinaLZ1SlWxXMY1ug8AO7C/Vu -D8kA3+rE37Gv1GuZyPYi87NSfDhKXo4nJWxI00BxTBppmv2PTzbi7xLCx1+8A1uQ -4XU0HA diff --git a/openshift/openshift_ec2/roles/mongodb/hosts b/openshift/openshift_ec2/roles/mongodb/hosts deleted file mode 100644 index 7e90ce0..0000000 --- a/openshift/openshift_ec2/roles/mongodb/hosts +++ /dev/null @@ -1,29 +0,0 @@ -#The site wide list of mongodb servers - -# the mongo servers need a mongod_port variable set, and they must not conflict. -[mongo_servers] -hadoop1 mongod_port=2700 -hadoop2 mongod_port=2701 -hadoop3 mongod_port=2702 -hadoop4 mongod_port=2703 - -#The list of servers where replication should happen, by default include all servers -[replication_servers] -hadoop1 -hadoop2 -hadoop3 -hadoop4 - -#The list of mongodb configuration servers, make sure it is 1 or 3 -[mongoc_servers] -hadoop1 -hadoop2 -hadoop3 - - -#The list of servers where mongos servers would run. -[mongos_servers] -hadoop1 -hadoop2 - - diff --git a/openshift/openshift_ec2/roles/mongodb/site.yml b/openshift/openshift_ec2/roles/mongodb/site.yml deleted file mode 100644 index 1a238bc..0000000 --- a/openshift/openshift_ec2/roles/mongodb/site.yml +++ /dev/null @@ -1,22 +0,0 @@ ---- -# This Playbook would deploy the whole mongodb cluster with replication and sharding. - -- hosts: all - roles: - - role: common - -- hosts: mongo_servers - roles: - - role: mongod - -- hosts: mongoc_servers - roles: - - role: mongoc - -- hosts: mongos_servers - roles: - - role: mongos - -- hosts: mongo_servers - tasks: - - include: roles/mongod/tasks/shards.yml diff --git a/openshift/openshift_ec2/roles/mongodb/tasks/main.yml b/openshift/openshift_ec2/roles/mongodb/tasks/main.yml deleted file mode 100644 index b716479..0000000 --- a/openshift/openshift_ec2/roles/mongodb/tasks/main.yml +++ /dev/null @@ -1,43 +0,0 @@ ---- -# This role deploys the mongod processes and sets up the replication set. 
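-# (A post-run verification sketch, assuming the example defaults from
-# group_vars/all, i.e. port 2700 and admin password "passme":
-#   mongo --port 2700 -u admin -p passme admin --eval 'printjson(rs.status())'
-# every host in [mongo_servers] should appear as a member of the "openshift"
-# replica set named in mongod.conf.j2.)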
- - -- name: Install the mongodb package - yum: name={{ item }} state=installed - with_items: - - mongodb - - mongodb-server - - bc - - python-pip - - gcc - -- name: Install the latest pymongo package - pip: name=pymongo state=latest use_mirrors=no - -- name: Create the data directory for mongodb - file: path={{ mongodb_datadir_prefix }} owner=mongodb group=mongodb state=directory - -- name: Copy the keyfile for authentication - copy: src=secret dest={{ mongodb_datadir_prefix }}/secret owner=mongodb group=mongodb mode=0400 - -- name: Create the mongodb configuration file - template: src=mongod.conf.j2 dest=/etc/mongodb.conf - -- name: Start the mongodb service - service: name=mongod state=started - -- name: Create the file to initialize the mongod replica set - template: src=repset_init.j2 dest=/tmp/repset_init.js - -- name: Pause for a while - wait_for: port="{{ mongod_port }}" delay=30 - -- name: Initialize the replication set - shell: /usr/bin/mongo --port "{{ mongod_port }}" /tmp/repset_init.js;touch /opt/rep.init creates=/opt/rep.init - when: inventory_hostname == groups['mongo_servers'][0] - -- name: add the admin user - mongodb_user: database=admin name=admin password={{ mongo_admin_pass }} login_port={{ mongod_port }} state=present - when: inventory_hostname == groups['mongo_servers'][0] - ignore_errors: yes - diff --git a/openshift/openshift_ec2/roles/mongodb/templates/mongod.conf.j2 b/openshift/openshift_ec2/roles/mongodb/templates/mongod.conf.j2 deleted file mode 100644 index 201e9ea..0000000 --- a/openshift/openshift_ec2/roles/mongodb/templates/mongod.conf.j2 +++ /dev/null @@ -1,25 +0,0 @@ -# mongo.conf -smallfiles=true - -#where to log -logpath=/var/log/mongodb/mongodb.log - -logappend=true - -# fork and run in background -fork = true - -port = {{ mongod_port }} - -dbpath={{ mongodb_datadir_prefix }} -keyFile={{ mongodb_datadir_prefix }}/secret - -# location of pidfile -pidfilepath = /var/run/mongod.pid - - -# Ping interval for Mongo monitoring server. 
-#mms-interval = - -# Replication Options -replSet=openshift diff --git a/openshift/openshift_ec2/roles/mongodb/templates/repset_init.j2 b/openshift/openshift_ec2/roles/mongodb/templates/repset_init.j2 deleted file mode 100644 index 2f6f7ed..0000000 --- a/openshift/openshift_ec2/roles/mongodb/templates/repset_init.j2 +++ /dev/null @@ -1,7 +0,0 @@ -rs.initiate() -sleep(13000) -{% for host in groups['mongo_servers'] %} -rs.add("{{ host }}:{{ mongod_port }}") -sleep(8000) -{% endfor %} -printjson(rs.status()) diff --git a/openshift/openshift_ec2/roles/mq/handlers/main.yml b/openshift/openshift_ec2/roles/mq/handlers/main.yml deleted file mode 100644 index d27ede6..0000000 --- a/openshift/openshift_ec2/roles/mq/handlers/main.yml +++ /dev/null @@ -1,5 +0,0 @@ ---- -# handlers for mq - -- name: restart mq - service: name=activemq state=restarted diff --git a/openshift/openshift_ec2/roles/mq/tasks/main.yml b/openshift/openshift_ec2/roles/mq/tasks/main.yml deleted file mode 100644 index 220429c..0000000 --- a/openshift/openshift_ec2/roles/mq/tasks/main.yml +++ /dev/null @@ -1,26 +0,0 @@ ---- -# task for setting up MQ cluster - -- name: Install the packages for MQ - yum: name={{ item }} state=installed - with_items: - - java-1.6.0-openjdk - - java-1.6.0-openjdk-devel - - activemq - -- name: Copy the activemq.xml file - template: src=activemq.xml.j2 dest=/etc/activemq/activemq.xml - notify: restart mq - -- name: Copy the jetty.xml file - template: src=jetty.xml.j2 dest=/etc/activemq/jetty.xml - notify: restart mq - -- name: Copy the jetty realm properties file - template: src=jetty-realm.properties.j2 dest=/etc/activemq/jetty-realm.properties - notify: restart mq - -- name: start the active mq service - service: name=activemq state=started enabled=yes - - diff --git a/openshift/openshift_ec2/roles/mq/templates/activemq.xml.j2 b/openshift/openshift_ec2/roles/mq/templates/activemq.xml.j2 deleted file mode 100644 index 40e41ba..0000000 --- a/openshift/openshift_ec2/roles/mq/templates/activemq.xml.j2 +++ /dev/null @@ -1,178 +0,0 @@ - - - - - - - file:${activemq.conf}/credentials.properties - - - - - - - - - - - - - - - - - - - - - - - - {% for host in groups['mq'] %} - {% if hostvars[host].ansible_hostname != ansible_hostname %} - - - - - - - - {% endif %} - {% endfor %} - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/openshift/openshift_ec2/roles/mq/templates/jetty-realm.properties.j2 b/openshift/openshift_ec2/roles/mq/templates/jetty-realm.properties.j2 deleted file mode 100644 index 63892d6..0000000 --- a/openshift/openshift_ec2/roles/mq/templates/jetty-realm.properties.j2 +++ /dev/null @@ -1,20 +0,0 @@ -## --------------------------------------------------------------------------- -## Licensed to the Apache Software Foundation (ASF) under one or more -## contributor license agreements. See the NOTICE file distributed with -## this work for additional information regarding copyright ownership. -## The ASF licenses this file to You under the Apache License, Version 2.0 -## (the "License"); you may not use this file except in compliance with -## the License. You may obtain a copy of the License at -## -## http://www.apache.org/licenses/LICENSE-2.0 -## -## Unless required by applicable law or agreed to in writing, software -## distributed under the License is distributed on an "AS IS" BASIS, -## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-## See the License for the specific language governing permissions and -## limitations under the License. -## --------------------------------------------------------------------------- - -# Defines users that can access the web (console, demo, etc.) -# username: password [,rolename ...] -admin: {{ admin_pass }}, admin diff --git a/openshift/openshift_ec2/roles/mq/templates/jetty.xml.j2 b/openshift/openshift_ec2/roles/mq/templates/jetty.xml.j2 deleted file mode 100644 index d53a988..0000000 --- a/openshift/openshift_ec2/roles/mq/templates/jetty.xml.j2 +++ /dev/null @@ -1,113 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - index.html - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/openshift/openshift_ec2/roles/nodes/files/cgconfig.conf b/openshift/openshift_ec2/roles/nodes/files/cgconfig.conf deleted file mode 100644 index b529884..0000000 --- a/openshift/openshift_ec2/roles/nodes/files/cgconfig.conf +++ /dev/null @@ -1,26 +0,0 @@ -# -# Copyright IBM Corporation. 2007 -# -# Authors: Balbir Singh -# This program is free software; you can redistribute it and/or modify it -# under the terms of version 2.1 of the GNU Lesser General Public License -# as published by the Free Software Foundation. -# -# This program is distributed in the hope that it would be useful, but -# WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. -# -# See man cgconfig.conf for further details. -# -# By default, mount all controllers to /cgroup/ - -mount { - cpuset = /cgroup/cpuset; - cpu = /cgroup/cpu; - cpuacct = /cgroup/cpuacct; - memory = /cgroup/memory; - devices = /cgroup/devices; - freezer = /cgroup/freezer; - net_cls = /cgroup/net_cls; - blkio = /cgroup/blkio; -} diff --git a/openshift/openshift_ec2/roles/nodes/files/cgrulesengd.pp b/openshift/openshift_ec2/roles/nodes/files/cgrulesengd.pp deleted file mode 100644 index 77f3db9..0000000 Binary files a/openshift/openshift_ec2/roles/nodes/files/cgrulesengd.pp and /dev/null differ diff --git a/openshift/openshift_ec2/roles/nodes/files/pam.sh b/openshift/openshift_ec2/roles/nodes/files/pam.sh deleted file mode 100644 index 760f7c2..0000000 --- a/openshift/openshift_ec2/roles/nodes/files/pam.sh +++ /dev/null @@ -1,9 +0,0 @@ -#!/bin/bash - -for f in "runuser" "runuser-l" "sshd" "su" "system-auth-ac"; \ -do t="/etc/pam.d/$f"; \ -if ! 
grep -q "pam_namespace.so" "$t"; \ -then echo -e "session\t\trequired\tpam_namespace.so no_unmount_on_close" >> "$t" ; \ -fi; \ -done; - diff --git a/openshift/openshift_ec2/roles/nodes/files/sshd b/openshift/openshift_ec2/roles/nodes/files/sshd deleted file mode 100644 index 3d37588..0000000 --- a/openshift/openshift_ec2/roles/nodes/files/sshd +++ /dev/null @@ -1,13 +0,0 @@ -#%PAM-1.0 -auth required pam_sepermit.so -auth include password-auth -account required pam_nologin.so -account include password-auth -password include password-auth -# pam_openshift.so close should be the first session rule -session required pam_openshift.so close -session required pam_loginuid.so -# pam_openshift.so open should only be followed by sessions to be executed in the user context -session required pam_openshift.so open env_params -session optional pam_keyinit.so force revoke -session include password-auth diff --git a/openshift/openshift_ec2/roles/nodes/files/sshd_config b/openshift/openshift_ec2/roles/nodes/files/sshd_config deleted file mode 100644 index 1e63427..0000000 --- a/openshift/openshift_ec2/roles/nodes/files/sshd_config +++ /dev/null @@ -1,139 +0,0 @@ -# $OpenBSD: sshd_config,v 1.80 2008/07/02 02:24:18 djm Exp $ - -# This is the sshd server system-wide configuration file. See -# sshd_config(5) for more information. - -# This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin - -# The strategy used for options in the default sshd_config shipped with -# OpenSSH is to specify options with their default value where -# possible, but leave them commented. Uncommented options change a -# default value. - -AcceptEnv GIT_SSH -#Port 22 -#AddressFamily any -#ListenAddress 0.0.0.0 -#ListenAddress :: - -# Disable legacy (protocol version 1) support in the server for new -# installations. In future the default will change to require explicit -# activation of protocol 1 -Protocol 2 - -# HostKey for protocol version 1 -#HostKey /etc/ssh/ssh_host_key -# HostKeys for protocol version 2 -#HostKey /etc/ssh/ssh_host_rsa_key -#HostKey /etc/ssh/ssh_host_dsa_key - -# Lifetime and size of ephemeral version 1 server key -#KeyRegenerationInterval 1h -#ServerKeyBits 1024 - -# Logging -# obsoletes QuietMode and FascistLogging -#SyslogFacility AUTH -SyslogFacility AUTHPRIV -#LogLevel INFO - -# Authentication: - -#LoginGraceTime 2m -#PermitRootLogin yes -#StrictModes yes -#MaxAuthTries 6 -MaxSessions 40 - -#RSAAuthentication yes -#PubkeyAuthentication yes -#AuthorizedKeysFile .ssh/authorized_keys -#AuthorizedKeysCommand none -#AuthorizedKeysCommandRunAs nobody - -# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts -#RhostsRSAAuthentication no -# similar for protocol version 2 -#HostbasedAuthentication no -# Change to yes if you don't trust ~/.ssh/known_hosts for -# RhostsRSAAuthentication and HostbasedAuthentication -#IgnoreUserKnownHosts no -# Don't read the user's ~/.rhosts and ~/.shosts files -#IgnoreRhosts yes - -# To disable tunneled clear text passwords, change to no here! 
-#PasswordAuthentication yes -#PermitEmptyPasswords no -PasswordAuthentication yes - -# Change to no to disable s/key passwords -#ChallengeResponseAuthentication yes -ChallengeResponseAuthentication no - -# Kerberos options -#KerberosAuthentication no -#KerberosOrLocalPasswd yes -#KerberosTicketCleanup yes -#KerberosGetAFSToken no -#KerberosUseKuserok yes - -# GSSAPI options -#GSSAPIAuthentication no -GSSAPIAuthentication yes -#GSSAPICleanupCredentials yes -GSSAPICleanupCredentials yes -#GSSAPIStrictAcceptorCheck yes -#GSSAPIKeyExchange no - -# Set this to 'yes' to enable PAM authentication, account processing, -# and session processing. If this is enabled, PAM authentication will -# be allowed through the ChallengeResponseAuthentication and -# PasswordAuthentication. Depending on your PAM configuration, -# PAM authentication via ChallengeResponseAuthentication may bypass -# the setting of "PermitRootLogin without-password". -# If you just want the PAM account and session checks to run without -# PAM authentication, then enable this but set PasswordAuthentication -# and ChallengeResponseAuthentication to 'no'. -#UsePAM no -UsePAM yes - -# Accept locale-related environment variables -AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES -AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT -AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE -AcceptEnv XMODIFIERS - -#AllowAgentForwarding yes -#AllowTcpForwarding yes -#GatewayPorts no -#X11Forwarding no -X11Forwarding yes -#X11DisplayOffset 10 -#X11UseLocalhost yes -#PrintMotd yes -#PrintLastLog yes -#TCPKeepAlive yes -#UseLogin no -#UsePrivilegeSeparation yes -#PermitUserEnvironment no -#Compression delayed -#ClientAliveInterval 0 -#ClientAliveCountMax 3 -#ShowPatchLevel no -#UseDNS yes -#PidFile /var/run/sshd.pid -MaxStartups 40 -#PermitTunnel no -#ChrootDirectory none - -# no default banner path -#Banner none - -# override default of no subsystems -Subsystem sftp /usr/libexec/openssh/sftp-server - -# Example of overriding settings on a per-user basis -#Match User anoncvs -# X11Forwarding no -# AllowTcpForwarding no -# ForceCommand cvs server diff --git a/openshift/openshift_ec2/roles/nodes/handlers/main.yml b/openshift/openshift_ec2/roles/nodes/handlers/main.yml deleted file mode 100644 index 1eaaf0b..0000000 --- a/openshift/openshift_ec2/roles/nodes/handlers/main.yml +++ /dev/null @@ -1,10 +0,0 @@ ---- -#Handlers for nodes - -- name: restart mcollective - service: name=mcollective state=restarted - -- name: restart ssh - service: name=sshd state=restarted - async: 10 - poll: 0 diff --git a/openshift/openshift_ec2/roles/nodes/tasks/main.yml b/openshift/openshift_ec2/roles/nodes/tasks/main.yml deleted file mode 100644 index 84615c0..0000000 --- a/openshift/openshift_ec2/roles/nodes/tasks/main.yml +++ /dev/null @@ -1,107 +0,0 @@ ---- -# Tasks for the openshift nodes - -- name: Install the mcollective packages - yum: name=openshift-origin-msg-node-mcollective state=installed - -- name: Copy the mcollective configuration file - template: src=server.cfg.j2 dest=/etc/mcollective/server.cfg - notify: restart mcollective - -- name: start the mcollective service - service: name=mcollective state=started enabled=yes - -- name: Install OpenShift node packages - yum: name="{{ item }}" state=installed - with_items: - - rubygem-openshift-origin-node - - rubygem-openshift-origin-container-selinux.noarch - - rubygem-passenger-native - - rubygem-openshift-origin-msg-broker-mcollective - - openshift-origin-port-proxy - - 
openshift-origin-node-util - - openshift-origin-cartridge-cron - - openshift-origin-cartridge-python - - ruby193-rubygem-rest-client - - httpd - - lsof - - dbus - -- name: Copy the ssh authorized key for root - authorized_key: user=root key="{{ lookup('file', '~/.ssh/id_rsa.pub') }}" - -- name: Copy the pam.d ssh file - copy: src=sshd dest=/etc/pam.d/sshd - register: last_run - -- name: Copy the cgconfig file - copy: src=cgconfig.conf dest=/etc/cgconfig.conf - -- name: Execute script for pam update - script: pam.sh - when: last_run.changed - -- name: Create directory for cgroups - file: path=/cgroup state=directory - -- name: restart cgroups - service: name="{{ item }}" state=restarted enabled=yes - with_items: - - cgconfig - - cgred - - httpd - - messagebus - - oddjobd - when: last_run.changed - -- name: Find root mount point of gear dir - shell: df -P /var/lib/openshift | tail -1 | awk '{ print $6 }' - register: gear_root_mount - -- name: Initialize quota db - shell: oo-init-quota creates="{{ gear_root_mount.stdout }}/aquota.user" - -- name: SELinux - configure sebooleans - seboolean: name="{{ item }}" state=true persistent=yes - with_items: - - httpd_unified - - httpd_can_network_connect - - httpd_can_network_relay - - httpd_run_stickshift - - httpd_read_user_content - - httpd_enable_homedirs - - allow_polyinstantiation - -- name: set the sysctl settings - sysctl: name=kernel.sem value="250 32000 32 4096" state=present reload=yes - -- name: set the sysctl settings - sysctl: name=net.ipv4.ip_local_port_range value="15000 35530" state=present reload=yes - -- name: set the sysctl settings - sysctl: name=net.netfilter.nf_conntrack_max value="1048576" state=present reload=yes - -- name: Copy sshd config - copy: src=sshd_config dest=/etc/ssh/sshd_config - notify: restart ssh - -- name: start the port proxy service - service: name=openshift-port-proxy state=started enabled=yes - -- name: Copy the node.conf file - template: src=node.conf.j2 dest=/etc/openshift/node.conf - -- name: Copy the se linux fix file - copy: src=cgrulesengd.pp dest=/opt/cgrulesengd.pp - register: se_run - -- name: allow se linux policy - shell: chdir=/opt semodule -i cgrulesengd.pp - when: se_run.changed - -- name: Start the openshift gears - service: name=openshift-gears state=started enabled=yes - -- name: copy the resolv.conf file - template: src=resolv.conf.j2 dest=/etc/resolv.conf - diff --git a/openshift/openshift_ec2/roles/nodes/templates/node.conf.j2 b/openshift/openshift_ec2/roles/nodes/templates/node.conf.j2 deleted file mode 100644 index 5fcb476..0000000 --- a/openshift/openshift_ec2/roles/nodes/templates/node.conf.j2 +++ /dev/null @@ -1,40 +0,0 @@ -# These should not be left at default values, even for a demo. -# "PUBLIC" networking values are ones that end-users should be able to reach. 
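
[Editor's note] The three sysctl tasks above interlock with this template: the ephemeral port range is capped at 35530 precisely because PROXY_MIN_PORT_NUM below reserves 35531 and upward for the per-gear port proxy, and the nodes' iptables template in the common role opens that same 35531:65535 span.
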
-PUBLIC_HOSTNAME="{{ ansible_hostname }}" # The node host's public hostname
-PUBLIC_IP="{{ ansible_default_ipv4.address }}" # The node host's public IP address
-BROKER_HOST="{{ groups['broker'][0] }}" # IP or DNS name of broker server for REST API
-
-# Usually (unless in a demo) this should be changed to the domain for your installation:
-CLOUD_DOMAIN="example.com" # Domain suffix to use for applications (Must match broker config)
-
-# You may want these, depending on the complexity of your networking:
-# EXTERNAL_ETH_DEV='eth0' # Specify the internet facing public ethernet device
-# INTERNAL_ETH_DEV='eth1' # Specify the internal cluster facing ethernet device
-INSTANCE_ID="localhost" # Set by RH EC2 automation
-
-# Uncomment and use the following line if you want gear users to be members of
-# additional groups besides the one with the same id as the uid. The other group
-# should be an existing group.
-#GEAR_SUPL_GRPS="another_group" # Supplementary groups for gear UIDs (comma separated list)
-
-# Generally the following should not be changed:
-ENABLE_CGROUPS=1 # constrain gears in cgroups (1=yes, 0=no)
-GEAR_BASE_DIR="/var/lib/openshift" # gear root directory
-GEAR_SKEL_DIR="/etc/openshift/skel" # skel files to use when building a gear
-GEAR_SHELL="/usr/bin/oo-trap-user" # shell to use for the gear
-GEAR_GECOS="OpenShift guest" # Gecos information to populate for the gear user
-GEAR_MIN_UID=500 # Lower bound of UID used to create gears
-GEAR_MAX_UID=6500 # Upper bound of UID used to create gears
-CARTRIDGE_BASE_PATH="/usr/libexec/openshift/cartridges" # Locations where cartridges are installed
-LAST_ACCESS_DIR="/var/lib/openshift/.last_access" # Location to maintain last accessed time for gears
-APACHE_ACCESS_LOG="/var/log/httpd/openshift_log" # Location of the httpd access log for the node
-PROXY_MIN_PORT_NUM=35531 # Lower bound of port numbers used to proxy ports externally
-PROXY_PORTS_PER_GEAR=5 # Number of proxy ports available per gear
-CREATE_APP_SYMLINKS=0 # If set to 1, creates gear-name symlinks to the UUID directories (debugging only)
-OPENSHIFT_HTTP_CONF_DIR="/etc/httpd/conf.d/openshift"
-
-PLATFORM_LOG_FILE=/var/log/openshift/node/platform.log
-PLATFORM_LOG_LEVEL=DEBUG
-PLATFORM_TRACE_LOG_FILE=/var/log/openshift/node/platform-trace.log
-PLATFORM_TRACE_LOG_LEVEL=DEBUG
-CONTAINERIZATION_PLUGIN=openshift-origin-container-selinux
diff --git a/openshift/openshift_ec2/roles/nodes/templates/resolv.conf.j2 b/openshift/openshift_ec2/roles/nodes/templates/resolv.conf.j2
deleted file mode 100644
index 9fdca84..0000000
--- a/openshift/openshift_ec2/roles/nodes/templates/resolv.conf.j2
+++ /dev/null
@@ -1,2 +0,0 @@
-search {{ domain_name }}
-nameserver {{ hostvars[groups['dns'][0]].ansible_default_ipv4.address }}
diff --git a/openshift/openshift_ec2/roles/nodes/templates/server.cfg.j2 b/openshift/openshift_ec2/roles/nodes/templates/server.cfg.j2
deleted file mode 100644
index bfd85d6..0000000
--- a/openshift/openshift_ec2/roles/nodes/templates/server.cfg.j2
+++ /dev/null
@@ -1,28 +0,0 @@
-topicprefix = /topic/
-main_collective = mcollective
-collectives = mcollective
-libdir = /opt/rh/ruby193/root/usr/libexec/mcollective
-logfile = /var/log/mcollective.log
-loglevel = debug
-daemonize = 1
-direct_addressing = 1
-registerinterval = 30
-
-# Plugins
-securityprovider = psk
-plugin.psk = unset
-
-connector = stomp
-plugin.stomp.pool.size = {{ groups['mq']|length() }}
-{% for host in groups['mq'] %}
-
-plugin.stomp.pool.host{{ loop.index }} = {{ hostvars[host].ansible_hostname }}
-plugin.stomp.pool.port{{ 
loop.index }} = 61613 -plugin.stomp.pool.user{{ loop.index }} = mcollective -plugin.stomp.pool.password{{ loop.index }} = {{ mcollective_pass }} - -{% endfor %} - -# Facts -factsource = yaml -plugin.yaml = /etc/mcollective/facts.yaml diff --git a/openshift/openshift_ec2/site.yml b/openshift/openshift_ec2/site.yml deleted file mode 100644 index cf20125..0000000 --- a/openshift/openshift_ec2/site.yml +++ /dev/null @@ -1,34 +0,0 @@ - -- hosts: all - user: root - roles: - - role: common - -- hosts: dns - user: root - roles: - - role: dns -- hosts: mongo_servers - user: root - roles: - - role: mongodb - -- hosts: mq - user: root - roles: - - role: mq - -- hosts: broker - user: root - roles: - - role: broker - -- hosts: nodes - user: root - roles: - - role: nodes - -- hosts: lvs - user: root - roles: - - role: lvs diff --git a/openshift/roles/broker/files/gem.sh b/openshift/roles/broker/files/gem.sh deleted file mode 100644 index 2883aef..0000000 --- a/openshift/roles/broker/files/gem.sh +++ /dev/null @@ -1,2 +0,0 @@ -#!/bin/bash -/usr/bin/scl enable ruby193 "gem install rspec --version 1.3.0 --no-rdoc --no-ri" ; /usr/bin/scl enable ruby193 "gem install fakefs --no-rdoc --no-ri" ; /usr/bin/scl enable ruby193 "gem install httpclient --version 2.3.2 --no-rdoc --no-ri" ; touch /opt/gem.init diff --git a/openshift/roles/broker/files/htpasswd b/openshift/roles/broker/files/htpasswd deleted file mode 100644 index abd468f..0000000 --- a/openshift/roles/broker/files/htpasswd +++ /dev/null @@ -1 +0,0 @@ -demo:k2WsPcYIRAaXs diff --git a/openshift/roles/broker/files/openshift-origin-auth-remote-basic-user.conf b/openshift/roles/broker/files/openshift-origin-auth-remote-basic-user.conf deleted file mode 100644 index 30e179d..0000000 --- a/openshift/roles/broker/files/openshift-origin-auth-remote-basic-user.conf +++ /dev/null @@ -1,25 +0,0 @@ -LoadModule auth_basic_module modules/mod_auth_basic.so -LoadModule authn_file_module modules/mod_authn_file.so -LoadModule authz_user_module modules/mod_authz_user.so - -# Turn the authenticated remote-user into an Apache environment variable for the console security controller -RewriteEngine On -RewriteCond %{LA-U:REMOTE_USER} (.+) -RewriteRule . - [E=RU:%1] -RequestHeader set X-Remote-User "%{RU}e" env=RU - - - AuthName "OpenShift Developer Console" - AuthType Basic - AuthUserFile /etc/openshift/htpasswd - require valid-user - - # The node->broker auth is handled in the Ruby code - BrowserMatch Openshift passthrough - Allow from env=passthrough - - Order Deny,Allow - Deny from all - Satisfy any - - diff --git a/openshift/roles/broker/files/openshift-origin-auth-remote-user.conf b/openshift/roles/broker/files/openshift-origin-auth-remote-user.conf deleted file mode 100644 index be9e84f..0000000 --- a/openshift/roles/broker/files/openshift-origin-auth-remote-user.conf +++ /dev/null @@ -1,39 +0,0 @@ -LoadModule auth_basic_module modules/mod_auth_basic.so -LoadModule authn_file_module modules/mod_authn_file.so -LoadModule authz_user_module modules/mod_authz_user.so - - - AuthName "OpenShift broker API" - AuthType Basic - AuthUserFile /etc/openshift/htpasswd - require valid-user - - SetEnvIfNoCase Authorization Bearer passthrough - - # The node->broker auth is handled in the Ruby code - BrowserMatchNoCase ^OpenShift passthrough - Allow from env=passthrough - - # Console traffic will hit the local port. mod_proxy will set this header automatically. 
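
[Editor's note] Two pieces cooperate across these two Apache configs: the console-side file above resolves the authenticated user early via the %{LA-U:REMOTE_USER} look-ahead and re-emits it as an X-Remote-User request header, while the broker-side rules below copy X-Remote-User back into REMOTE_USER and only honor it when X-Forwarded-For is empty (the local_traffic case), i.e. when the request did not arrive through an external proxy.
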
- SetEnvIf X-Forwarded-For "^$" local_traffic=1 - # Turn the Console output header into the Apache environment variable for the broker remote-user plugin - SetEnvIf X-Remote-User "(..*)" REMOTE_USER=$1 - Allow from env=local_traffic - - Order Deny,Allow - Deny from all - Satisfy any - - -# The following APIs do not require auth: - - Allow from all - - - - Allow from all - - - - Allow from all - diff --git a/openshift/roles/broker/files/server_priv.pem b/openshift/roles/broker/files/server_priv.pem deleted file mode 100644 index ae1c791..0000000 --- a/openshift/roles/broker/files/server_priv.pem +++ /dev/null @@ -1,27 +0,0 @@ ------BEGIN RSA PRIVATE KEY----- -MIIEpAIBAAKCAQEAyWM85VFDBOdWz16oC7j8Q7uHHbs3UVzRhHhHkSg8avK6ETMH -piXtevCU7KbiX7B2b0dedwYpvHQaKPCtfNm4blZHcDO5T1I//MyjwVNfqAQV4xin -qRj1oRyvvcTmn5H5yd9FgILqhRGjNEnBYadpL0vZrzXAJREEhh/G7021q010CF+E -KTTlSrbctGsoiUQKH1KfogsWsj8ygL1xVDgbCdvx+DnTw9E/YY+07/lDPOiXQFZm -7hXA8Q51ecjtFy0VmWDwjq3t7pP33tyjQkMc1BMXzHUiDVehNZ+I8ffzFltNNUL0 -Jw3AGwyCmE3Q9ml1tHIxpuvZExMCTALN6va0bwIDAQABAoIBAQDJPXpvqLlw3/92 -bx87v5mN0YneYuOPUVIorszNN8jQEkduwnCFTec2b8xRgx45AqwG3Ol/xM/V+qrd -eEvUs/fBgkQW0gj+Q7GfW5rTqA2xZou8iDmaF0/0tCbFWkoe8I8MdCkOl0Pkv1A4 -Au/UNqc8VO5tUCf2oj/EC2MOZLgCOTaerePnc+SFIf4TkerixPA9I4KYWwJQ2eXG -esSfR2f2EsUGfwOqKLEQU1JTMFkttbSAp42p+xpRaUh1FuyLHDlf3EeFmq5BPaFL -UnpzPDJTZtXjnyBrM9fb1ewiFW8x+EBmsdGooY7ptrWWhGzvxAsK9C0L2di3FBAy -gscM/rPBAoGBAPpt0xXtVWJu2ezoBfjNuqwMqGKFsOF3hi5ncOHW9nd6iZABD5Xt -KamrszxItkqiJpEacBCabgfo0FSLEOo+KqfTBK/r4dIoMwgcfhJOz+HvEC6+557n -GEFaL+evdLrxNrU41wvvfCzPK7pWaQGR1nrGohTyX5ZH4uA0Kmreof+PAoGBAM3e -IFPNrXuzhgShqFibWqJ8JdsSfMroV62aCqdJlB92lxx8JJ2lEiAMPfHmAtF1g01r -oBUcJcPfuBZ0bC1KxIvtz9d5m1f2geNGH/uwVULU3skhPBwqAs2s607/Z1S+/WRr -Af1rAs2KTJ7BDCQo8g2TPUO+sDrUzR6joxOy/Y0hAoGAbWaI7m1N/cBbZ4k9AqIt -SHgHH3M0AGtMrPz3bVGRPkTDz6sG+gIvTzX5CP7i09veaUlZZ4dvRflI+YX/D7W0 -wLgItimf70UsdgCseqb/Xb4oHaO8X8io6fPSNa6KmhhCRAzetRIb9x9SBQc2vD7P -qbcYm3n+lBI3ZKalWSaFMrUCgYEAsV0xfuISGCRIT48zafuWr6zENKUN7QcWGxQ/ -H3eN7TmP4VO3fDZukjvZ1qHzRaC32ijih61zf/ksMfRmCvOCuIfP7HXx92wC5dtR -zNdT7btWofRHRICRX8AeDzaOQP43c5+Z3Eqo5IrFjnUFz9WTDU0QmGAeluEmQ8J5 -yowIVOECgYB97fGLuEBSlKJCvmWp6cTyY+mXbiQjYYGBbYAiJWnwaK9U3bt71we/ -MQNzBHAe0mPCReVHSr68BfoWY/crV+7RKSBgrDpR0Y0DI1yn0LXXZfd3NNrTVaAb -rScbJ8Xe3qcLi3QZ3BxaWfub08Wm57wjDBBqGZyExYjjlGSpjBpVJQ== ------END RSA PRIVATE KEY----- diff --git a/openshift/roles/broker/files/server_pub.pem b/openshift/roles/broker/files/server_pub.pem deleted file mode 100644 index a0c54a7..0000000 --- a/openshift/roles/broker/files/server_pub.pem +++ /dev/null @@ -1,9 +0,0 @@ ------BEGIN PUBLIC KEY----- -MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyWM85VFDBOdWz16oC7j8 -Q7uHHbs3UVzRhHhHkSg8avK6ETMHpiXtevCU7KbiX7B2b0dedwYpvHQaKPCtfNm4 -blZHcDO5T1I//MyjwVNfqAQV4xinqRj1oRyvvcTmn5H5yd9FgILqhRGjNEnBYadp -L0vZrzXAJREEhh/G7021q010CF+EKTTlSrbctGsoiUQKH1KfogsWsj8ygL1xVDgb -Cdvx+DnTw9E/YY+07/lDPOiXQFZm7hXA8Q51ecjtFy0VmWDwjq3t7pP33tyjQkMc -1BMXzHUiDVehNZ+I8ffzFltNNUL0Jw3AGwyCmE3Q9ml1tHIxpuvZExMCTALN6va0 -bwIDAQAB ------END PUBLIC KEY----- diff --git a/openshift/roles/broker/files/ssl.conf b/openshift/roles/broker/files/ssl.conf deleted file mode 100644 index 614b1f8..0000000 --- a/openshift/roles/broker/files/ssl.conf +++ /dev/null @@ -1,74 +0,0 @@ -# -# This is the Apache server configuration file providing SSL support. -# It contains the configuration directives to instruct the server how to -# serve pages over an https connection. 
For detailing information about these -# directives see -# -# Do NOT simply read the instructions in here without understanding -# what they do. They're here only as hints or reminders. If you are unsure -# consult the online docs. You have been warned. -# - -LoadModule ssl_module modules/mod_ssl.so - -# -# When we also provide SSL we have to listen to the -# the HTTPS port in addition. -# -Listen 443 - -## -## SSL Global Context -## -## All SSL configuration in this context applies both to -## the main server and all SSL-enabled virtual hosts. -## - -# Pass Phrase Dialog: -# Configure the pass phrase gathering process. -# The filtering dialog program (`builtin' is a internal -# terminal dialog) has to provide the pass phrase on stdout. -SSLPassPhraseDialog builtin - -# Inter-Process Session Cache: -# Configure the SSL Session Cache: First the mechanism -# to use and second the expiring timeout (in seconds). -SSLSessionCache shmcb:/var/cache/mod_ssl/scache(512000) -SSLSessionCacheTimeout 300 - -# Semaphore: -# Configure the path to the mutual exclusion semaphore the -# SSL engine uses internally for inter-process synchronization. -SSLMutex default - -# Pseudo Random Number Generator (PRNG): -# Configure one or more sources to seed the PRNG of the -# SSL library. The seed data should be of good random quality. -# WARNING! On some platforms /dev/random blocks if not enough entropy -# is available. This means you then cannot use the /dev/random device -# because it would lead to very long connection times (as long as -# it requires to make more entropy available). But usually those -# platforms additionally provide a /dev/urandom device which doesn't -# block. So, if available, use this one instead. Read the mod_ssl User -# Manual for more details. -SSLRandomSeed startup file:/dev/urandom 256 -SSLRandomSeed connect builtin -#SSLRandomSeed startup file:/dev/random 512 -#SSLRandomSeed connect file:/dev/random 512 -#SSLRandomSeed connect file:/dev/urandom 512 - -# -# Use "SSLCryptoDevice" to enable any supported hardware -# accelerators. Use "openssl engine -v" to list supported -# engine names. NOTE: If you enable an accelerator and the -# server does not start, consult the error logs and ensure -# your accelerator is functioning properly. 
-#
-SSLCryptoDevice builtin
-#SSLCryptoDevice ubsec
-
-##
-## SSL Virtual Host Context
-##
-
-
diff --git a/openshift/roles/broker/handlers/main.yml b/openshift/roles/broker/handlers/main.yml
deleted file mode 100644
index c63257d..0000000
--- a/openshift/roles/broker/handlers/main.yml
+++ /dev/null
@@ -1,9 +0,0 @@
----
-# handlers for broker
-
-- name: restart broker
-  service: name=openshift-broker state=restarted
-
-- name: restart console
-  service: name=openshift-console state=restarted
-
diff --git a/openshift/roles/broker/tasks/main.yml b/openshift/roles/broker/tasks/main.yml
deleted file mode 100644
index d49e37d..0000000
--- a/openshift/roles/broker/tasks/main.yml
+++ /dev/null
@@ -1,107 +0,0 @@
----
-# Tasks for the OpenShift broker installation
-
-- name: install mcollective common
-  yum: name=mcollective-common-2.2.1 state=installed
-
-- name: Install the broker components
-  yum: name="{{ item }}" state=installed disablerepo=epel
-  with_items: "{{ broker_packages }}"
-
-- name: Install mcollective
-  yum: name=mcollective-client
-
-- name: Copy the mcollective configuration file
-  template: src=client.cfg.j2 dest=/etc/mcollective/client.cfg
-
-- name: Copy the rhc client configuration file
-  template: src=express.conf.j2 dest=/etc/openshift/express.conf
-  register: last_run
-
-- name: Install the gems for rhc
-  script: gem.sh
-  when: last_run.changed
-
-- name: create the file for mcollective logging
-  copy: content="" dest=/var/log/mcollective-client.log owner=apache group=root
-
-- name: SELinux - configure sebooleans
-  seboolean: name="{{ item }}" state=true persistent=yes
-  with_items:
-    - httpd_unified
-    - httpd_execmem
-    - httpd_can_network_connect
-    - httpd_can_network_relay
-    - httpd_run_stickshift
-    - named_write_master_zones
-    - httpd_verify_dns
-    - allow_ypbind
-
-- name: copy the auth keyfiles
-  copy: src="{{ item }}" dest="/etc/openshift/{{ item }}"
-  with_items:
-    - server_priv.pem
-    - server_pub.pem
-    - htpasswd
-
-- name: copy the local ssh keys
-  copy: src="~/.ssh/{{ item }}" dest="~/.ssh/{{ item }}"
-  with_items:
-    - id_rsa.pub
-    - id_rsa
-
-- name: copy the local ssh keys to openshift dir
-  copy: src="~/.ssh/{{ item }}" dest="/etc/openshift/rsync_{{ item }}"
-  with_items:
-    - id_rsa.pub
-    - id_rsa
-
-- name: Copy the broker configuration file
-  template: src=broker.conf.j2 dest=/etc/openshift/broker.conf
-  notify: restart broker
-
-- name: Copy the console configuration file
-  template: src=console.conf.j2 dest=/etc/openshift/console.conf
-  notify: restart console
-
-- name: create the file for ssl.conf
-  copy: src=ssl.conf dest=/etc/httpd/conf.d/ssl.conf owner=apache group=root
-
-- name: copy the configuration files for the OpenShift plugins
-  template: src="{{ item }}" dest="/etc/openshift/plugins.d/{{ item }}"
-  with_items:
-    - openshift-origin-auth-remote-user.conf
-    - openshift-origin-dns-bind.conf
-    - openshift-origin-msg-broker-mcollective.conf
-
-- name: Bundle the ruby gems
-  shell: chdir=/var/www/openshift/broker/ /usr/bin/scl enable ruby193 "bundle show"; touch bundle.init
-         creates=/var/www/openshift/broker/bundle.init
-
-- name: Copy the httpd configuration file
-  copy: src=openshift-origin-auth-remote-user.conf dest=/var/www/openshift/broker/httpd/conf.d/openshift-origin-auth-remote-user.conf
-  notify: restart broker
-
-- name: Copy the httpd configuration file for console
-  copy: src=openshift-origin-auth-remote-basic-user.conf dest=/var/www/openshift/console/httpd/conf.d/openshift-origin-auth-remote-basic-user.conf
-  notify: restart console
-
-- name: Fix the SELinux contexts on several files
-  shell: fixfiles -R rubygem-passenger restore; fixfiles -R mod_passenger restore; restorecon -rv /var/run; restorecon -rv /usr/share/rubygems/gems/passenger-*; touch /opt/context.fixed creates=/opt/context.fixed
-
-- name: start the http and broker service
-  service: name="{{ item }}" state=started enabled=yes
-  with_items:
-    - httpd
-    - openshift-broker
-
-- name: Install the rhc client
-  gem: name={{ item }} state=latest
-  with_items:
-    - rdoc
-    - rhc
-  ignore_errors: yes
-
-- name: copy the resolv.conf
-  template: src=resolv.conf.j2 dest=/etc/resolv.conf
-
diff --git a/openshift/roles/broker/templates/broker.conf.j2 b/openshift/roles/broker/templates/broker.conf.j2
deleted file mode 100644
index 7d83387..0000000
--- a/openshift/roles/broker/templates/broker.conf.j2
+++ /dev/null
@@ -1,47 +0,0 @@
-# Domain suffix to use for applications (Must match node config)
-CLOUD_DOMAIN="{{ domain_name }}"
-# Comma separated list of valid gear sizes
-VALID_GEAR_SIZES="small,medium"
-
-# Default number of gears to assign to a new user
-DEFAULT_MAX_GEARS="100"
-# Default gear size for a new gear
-DEFAULT_GEAR_SIZE="small"
-
-# Broker datastore configuration
-MONGO_REPLICA_SETS=true
-# Replica set example: "<host-1>:<port-1>, <host-2>:<port-2>, ..."
-MONGO_HOST_PORT="{% for host in groups['mongo_servers'] %}{{ host }}:{{ mongod_port }}{% if not loop.last %}, {% endif %}{% endfor %}"
-
-MONGO_USER="admin"
-MONGO_PASSWORD="{{ mongo_admin_pass }}"
-MONGO_DB="admin"
-
-# Enables gear/filesystem resource usage tracking
-ENABLE_USAGE_TRACKING_DATASTORE="false"
-# Log resource usage information to syslog
-ENABLE_USAGE_TRACKING_SYSLOG="false"
-
-# Enable all broker analytics
-ENABLE_ANALYTICS="false"
-
-# Enables logging of REST API operations and success/failure
-ENABLE_USER_ACTION_LOG="true"
-USER_ACTION_LOG_FILE="/var/log/openshift/broker/user_action.log"
-
-AUTH_SALT="{{ auth_salt }}"
-AUTH_PRIVKEYFILE="/etc/openshift/server_priv.pem"
-AUTH_PRIVKEYPASS=""
-AUTH_PUBKEYFILE="/etc/openshift/server_pub.pem"
-AUTH_RSYNC_KEY_FILE="/etc/openshift/rsync_id_rsa"
-SESSION_SECRET="{{ session_secret }}"
diff --git a/openshift/roles/broker/templates/client.cfg.j2 b/openshift/roles/broker/templates/client.cfg.j2
deleted file mode 100644
index 39a086d..0000000
--- a/openshift/roles/broker/templates/client.cfg.j2
+++ /dev/null
@@ -1,25 +0,0 @@
-topicprefix = /topic/
-main_collective = mcollective
-collectives = mcollective
-libdir = /opt/rh/ruby193/root/usr/libexec/mcollective
-logfile = /var/log/mcollective-client.log
-loglevel = debug
-direct_addressing = 1
-
-# Plugins
-securityprovider = psk
-plugin.psk = unset
-
-connector = stomp
-plugin.stomp.pool.size = {{ groups['mq']|length() }}
-{% for host in groups['mq'] %}
-
-plugin.stomp.pool.host{{ loop.index }} = {{ hostvars[host].ansible_hostname }}
-plugin.stomp.pool.port{{ loop.index }} = 61613
-plugin.stomp.pool.user{{ loop.index }} = mcollective
-plugin.stomp.pool.password{{ loop.index }} = {{ mcollective_pass }}
-
-{% endfor %}
-
-
-
diff --git a/openshift/roles/broker/templates/console.conf.j2 b/openshift/roles/broker/templates/console.conf.j2
deleted file mode 100644
index 3791938..0000000
--- a/openshift/roles/broker/templates/console.conf.j2
+++ /dev/null
@@ -1,8 +0,0 @@
-BROKER_URL=http://localhost:8080/broker/rest
-
-CONSOLE_SECURITY=remote_user
-
-REMOTE_USER_HEADER=REMOTE_USER
-
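
[Editor's note] AUTH_SALT and SESSION_SECRET are filled from vars/main.yml below, where they are checked in as literal values, and the server_priv.pem/server_pub.pem pair shipped with the role is likewise published in this repository; a real deployment should regenerate all of these rather than reuse the sample values.
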
-REMOTE_USER_COPY_HEADERS=X-Remote-User -SESSION_SECRET="{{ session_secret }}" diff --git a/openshift/roles/broker/templates/express.conf.j2 b/openshift/roles/broker/templates/express.conf.j2 deleted file mode 100644 index a4951f0..0000000 --- a/openshift/roles/broker/templates/express.conf.j2 +++ /dev/null @@ -1,8 +0,0 @@ -# Remote API server -libra_server = '{{ ansible_hostname }}' - -# Logging -debug = 'false' - -# Timeout -#timeout = '10' diff --git a/openshift/roles/broker/templates/openshift-origin-auth-remote-user.conf b/openshift/roles/broker/templates/openshift-origin-auth-remote-user.conf deleted file mode 100644 index 67f1545..0000000 --- a/openshift/roles/broker/templates/openshift-origin-auth-remote-user.conf +++ /dev/null @@ -1,4 +0,0 @@ -# Settings related to the Remote-User variant of an OpenShift auth plugin - -# The name of the header containing the trusted username -TRUSTED_HEADER="REMOTE_USER" diff --git a/openshift/roles/broker/templates/openshift-origin-dns-bind.conf b/openshift/roles/broker/templates/openshift-origin-dns-bind.conf deleted file mode 100644 index b5a2390..0000000 --- a/openshift/roles/broker/templates/openshift-origin-dns-bind.conf +++ /dev/null @@ -1,16 +0,0 @@ -# Settings related to the bind variant of an OpenShift DNS plugin - -# The DNS server -BIND_SERVER="{{ hostvars[groups['dns'][0]].ansible_default_ipv4.address }}" - -# The DNS server's port -BIND_PORT=53 - -# The key name for your zone -BIND_KEYNAME="{{ domain_name }}" - -# base64-encoded key, most likely from /var/named/example.com.key. -BIND_KEYVALUE="{{ dns_key }}" - -# The base zone for the DNS server -BIND_ZONE="{{ domain_name }}" diff --git a/openshift/roles/broker/templates/openshift-origin-msg-broker-mcollective.conf b/openshift/roles/broker/templates/openshift-origin-msg-broker-mcollective.conf deleted file mode 100644 index 76c9d75..0000000 --- a/openshift/roles/broker/templates/openshift-origin-msg-broker-mcollective.conf +++ /dev/null @@ -1,25 +0,0 @@ -# Some settings to configure how mcollective handles gear placement on nodes: - -# Use districts when placing gears and moving them between hosts. Should be -# true except for particular dev/test situations. -DISTRICTS_ENABLED=true - -# Require new gears to be placed in a district; when true, placement will fail -# if there isn't a district with capacity and the right gear profile. -DISTRICTS_REQUIRE_FOR_APP_CREATE=false - -# Used as the default max gear capacity when creating a district. -DISTRICTS_MAX_CAPACITY=6000 - -# It is unlikely these will need to be changed -DISTRICTS_FIRST_UID=1000 -MCOLLECTIVE_DISCTIMEOUT=5 -MCOLLECTIVE_TIMEOUT=180 -MCOLLECTIVE_VERBOSE=false -MCOLLECTIVE_PROGRESS_BAR=0 -MCOLLECTIVE_CONFIG="/etc/mcollective/client.cfg" - -# Place gears on nodes with the requested profile; should be true, as -# a false value means gear profiles are ignored and gears are placed arbitrarily. 
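
[Editor's note] These district defaults appear sized to the node settings seen earlier: node.conf.j2 allots UIDs 500 through 6500 to gears (6,000 in all), matching DISTRICTS_MAX_CAPACITY=6000, with DISTRICTS_FIRST_UID=1000 falling inside that span. Treat the correlation as a presumption, not a documented fact.
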
-NODE_PROFILE_ENABLED=true - diff --git a/openshift/roles/broker/templates/resolv.conf.j2 b/openshift/roles/broker/templates/resolv.conf.j2 deleted file mode 100644 index 9fdca84..0000000 --- a/openshift/roles/broker/templates/resolv.conf.j2 +++ /dev/null @@ -1,2 +0,0 @@ -search {{ domain_name }} -nameserver {{ hostvars[groups['dns'][0]].ansible_default_ipv4.address }} diff --git a/openshift/roles/broker/vars/main.yml b/openshift/roles/broker/vars/main.yml deleted file mode 100644 index 86c7628..0000000 --- a/openshift/roles/broker/vars/main.yml +++ /dev/null @@ -1,82 +0,0 @@ ---- -# variables for broker - -broker_packages: - - mongodb-devel - - openshift-origin-broker - - openshift-origin-broker-util - - rubygem-openshift-origin-dns-nsupdate - - rubygem-openshift-origin-auth-mongo - - rubygem-openshift-origin-auth-remote-user - - rubygem-openshift-origin-controller - - rubygem-openshift-origin-msg-broker-mcollective - - rubygem-openshift-origin-dns-bind - - rubygem-passenger - - ruby193-mod_passenger - - mysql-devel - - openshift-origin-console - - ruby193-rubygem-actionmailer - - ruby193-rubygem-actionpack - - ruby193-rubygem-activemodel - - ruby193-rubygem-activerecord - - ruby193-rubygem-activeresource - - ruby193-rubygem-activesupport - - ruby193-rubygem-arel - - ruby193-rubygem-bigdecimal - - ruby193-rubygem-net-ssh - - ruby193-rubygem-commander - - ruby193-rubygem-archive-tar-minitar - - ruby193-rubygem-bson - - ruby193-rubygem-bson_ext - - ruby193-rubygem-builder - - ruby193-rubygem-bundler - - ruby193-rubygem-cucumber - - ruby193-rubygem-diff-lcs - - ruby193-rubygem-dnsruby - - ruby193-rubygem-erubis - - ruby193-rubygem-gherkin - - ruby193-rubygem-hike - - ruby193-rubygem-i18n - - ruby193-rubygem-journey - - ruby193-rubygem-json - - ruby193-rubygem-mail - - ruby193-rubygem-metaclass - - ruby193-rubygem-mime-types - - ruby193-rubygem-minitest - - ruby193-rubygem-mocha - - ruby193-rubygem-mongo - - ruby193-rubygem-mongoid - - ruby193-rubygem-moped - - ruby193-rubygem-multi_json - - ruby193-rubygem-open4 - - ruby193-rubygem-origin - - ruby193-rubygem-parseconfig - - ruby193-rubygem-polyglot - - ruby193-rubygem-rack - - ruby193-rubygem-rack-cache - - ruby193-rubygem-rack-ssl - - ruby193-rubygem-rack-test - - ruby193-rubygem-rails - - ruby193-rubygem-railties - - ruby193-rubygem-rake - - ruby193-rubygem-rdoc - - ruby193-rubygem-regin - - ruby193-rubygem-rest-client - - ruby193-rubygem-simplecov - - ruby193-rubygem-simplecov-html - - ruby193-rubygem-sprockets - - ruby193-rubygem-state_machine - - ruby193-rubygem-stomp - - ruby193-rubygem-systemu - - ruby193-rubygem-term-ansicolor - - ruby193-rubygem-thor - - ruby193-rubygem-tilt - - ruby193-rubygem-treetop - - ruby193-rubygem-tzinfo - - ruby193-rubygem-xml-simple - - - -auth_salt: "ceFm8El0mTLu7VLGpBFSFfmxeID+UoNfsQrAKs8dhKSQ/uAGwjWiz3VdyuB1fW/WR+R1q7yXW+sxSm9wkmuqVA==" -session_secret: "25905ebdb06d8705025531bb5cb45335c53c4f36ee534719ffffd0fe28808395d80449c6c69bc079e2ac14c8ff66639bba1513332ef9ad5ed42cc0bb21e07134" - diff --git a/openshift/roles/common/files/RPM-GPG-KEY-EPEL-6 b/openshift/roles/common/files/RPM-GPG-KEY-EPEL-6 deleted file mode 100644 index 7a20304..0000000 --- a/openshift/roles/common/files/RPM-GPG-KEY-EPEL-6 +++ /dev/null @@ -1,29 +0,0 @@ ------BEGIN PGP PUBLIC KEY BLOCK----- -Version: GnuPG v1.4.5 (GNU/Linux) - -mQINBEvSKUIBEADLGnUj24ZVKW7liFN/JA5CgtzlNnKs7sBg7fVbNWryiE3URbn1 -JXvrdwHtkKyY96/ifZ1Ld3lE2gOF61bGZ2CWwJNee76Sp9Z+isP8RQXbG5jwj/4B -M9HK7phktqFVJ8VbY2jfTjcfxRvGM8YBwXF8hx0CDZURAjvf1xRSQJ7iAo58qcHn 
-XtxOAvQmAbR9z6Q/h/D+Y/PhoIJp1OV4VNHCbCs9M7HUVBpgC53PDcTUQuwcgeY6 -pQgo9eT1eLNSZVrJ5Bctivl1UcD6P6CIGkkeT2gNhqindRPngUXGXW7Qzoefe+fV -QqJSm7Tq2q9oqVZ46J964waCRItRySpuW5dxZO34WM6wsw2BP2MlACbH4l3luqtp -Xo3Bvfnk+HAFH3HcMuwdaulxv7zYKXCfNoSfgrpEfo2Ex4Im/I3WdtwME/Gbnwdq -3VJzgAxLVFhczDHwNkjmIdPAlNJ9/ixRjip4dgZtW8VcBCrNoL+LhDrIfjvnLdRu -vBHy9P3sCF7FZycaHlMWP6RiLtHnEMGcbZ8QpQHi2dReU1wyr9QgguGU+jqSXYar -1yEcsdRGasppNIZ8+Qawbm/a4doT10TEtPArhSoHlwbvqTDYjtfV92lC/2iwgO6g -YgG9XrO4V8dV39Ffm7oLFfvTbg5mv4Q/E6AWo/gkjmtxkculbyAvjFtYAQARAQAB -tCFFUEVMICg2KSA8ZXBlbEBmZWRvcmFwcm9qZWN0Lm9yZz6JAjYEEwECACAFAkvS -KUICGw8GCwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAKCRA7Sd8qBgi4lR/GD/wLGPv9 -qO39eyb9NlrwfKdUEo1tHxKdrhNz+XYrO4yVDTBZRPSuvL2yaoeSIhQOKhNPfEgT -9mdsbsgcfmoHxmGVcn+lbheWsSvcgrXuz0gLt8TGGKGGROAoLXpuUsb1HNtKEOwP -Q4z1uQ2nOz5hLRyDOV0I2LwYV8BjGIjBKUMFEUxFTsL7XOZkrAg/WbTH2PW3hrfS -WtcRA7EYonI3B80d39ffws7SmyKbS5PmZjqOPuTvV2F0tMhKIhncBwoojWZPExft -HpKhzKVh8fdDO/3P1y1Fk3Cin8UbCO9MWMFNR27fVzCANlEPljsHA+3Ez4F7uboF -p0OOEov4Yyi4BEbgqZnthTG4ub9nyiupIZ3ckPHr3nVcDUGcL6lQD/nkmNVIeLYP -x1uHPOSlWfuojAYgzRH6LL7Idg4FHHBA0to7FW8dQXFIOyNiJFAOT2j8P5+tVdq8 -wB0PDSH8yRpn4HdJ9RYquau4OkjluxOWf0uRaS//SUcCZh+1/KBEOmcvBHYRZA5J -l/nakCgxGb2paQOzqqpOcHKvlyLuzO5uybMXaipLExTGJXBlXrbbASfXa/yGYSAG -iVrGz9CE6676dMlm8F+s3XXE13QZrXmjloc6jwOljnfAkjTGXjiB7OULESed96MR -XtfLk0W5Ab9pd7tKDR6QHI7rgHXfCopRnZ2VVQ== -=V/6I ------END PGP PUBLIC KEY BLOCK----- diff --git a/openshift/roles/common/files/epel.repo.j2 b/openshift/roles/common/files/epel.repo.j2 deleted file mode 100644 index 0160dfe..0000000 --- a/openshift/roles/common/files/epel.repo.j2 +++ /dev/null @@ -1,26 +0,0 @@ -[epel] -name=Extra Packages for Enterprise Linux 6 - $basearch -#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch -mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch -failovermethod=priority -enabled=1 -gpgcheck=1 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 - -[epel-debuginfo] -name=Extra Packages for Enterprise Linux 6 - $basearch - Debug -#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug -mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch -failovermethod=priority -enabled=0 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 -gpgcheck=1 - -[epel-source] -name=Extra Packages for Enterprise Linux 6 - $basearch - Source -#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS -mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch -failovermethod=priority -enabled=0 -gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 -gpgcheck=1 diff --git a/openshift/roles/common/files/openshift.repo b/openshift/roles/common/files/openshift.repo deleted file mode 100644 index c0a4332..0000000 --- a/openshift/roles/common/files/openshift.repo +++ /dev/null @@ -1,13 +0,0 @@ -[openshift_support] -name=Extra Packages for OpenShift - $basearch -baseurl=https://mirror.openshift.com/pub/openshift/release/2/rhel-6/dependencies/x86_64/ -failovermethod=priority -enabled=1 -gpgcheck=0 - -[openshift] -name=Packages for OpenShift - $basearch -baseurl=https://mirror.openshift.com/pub/openshift/release/2/rhel-6/packages/x86_64/ -failovermethod=priority -enabled=1 -gpgcheck=0 diff --git a/openshift/roles/common/files/scl193.sh b/openshift/roles/common/files/scl193.sh deleted file mode 100644 index 54673f8..0000000 --- a/openshift/roles/common/files/scl193.sh +++ /dev/null @@ -1,10 +0,0 @@ -# Setup PATH, LD_LIBRARY_PATH and MANPATH for ruby-1.9 -ruby19_dir=$(dirname `scl enable 
ruby193 "which ruby"`) -export PATH=$ruby19_dir:$PATH - -ruby19_ld_libs=$(scl enable ruby193 "printenv LD_LIBRARY_PATH") -export LD_LIBRARY_PATH=$ruby19_ld_libs:$LD_LIBRARY_PATH - -ruby19_manpath=$(scl enable ruby193 "printenv MANPATH") -export MANPATH=$ruby19_manpath:$MANPATH - diff --git a/openshift/roles/common/handlers/main.yml b/openshift/roles/common/handlers/main.yml deleted file mode 100644 index 0f563a9..0000000 --- a/openshift/roles/common/handlers/main.yml +++ /dev/null @@ -1,5 +0,0 @@ ---- -# Handler for mongod - -- name: restart iptables - service: name=iptables state=restarted diff --git a/openshift/roles/common/tasks/main.yml b/openshift/roles/common/tasks/main.yml deleted file mode 100644 index d200131..0000000 --- a/openshift/roles/common/tasks/main.yml +++ /dev/null @@ -1,42 +0,0 @@ ---- -# Common tasks across nodes - -- name: Install common packages - yum : name={{ item }} state=installed - with_items: - - libselinux-python - - policycoreutils - - policycoreutils-python - - ntp - - ruby-devel - -- name: make sure we have the right time - shell: ntpdate -u 0.centos.pool.ntp.org - -- name: start the ntp service - service: name=ntpd state=started enabled=yes - -- name: Create the hosts file for all machines - template: src=hosts.j2 dest=/etc/hosts - -- name: Create the EPEL Repository. - copy: src=epel.repo.j2 dest=/etc/yum.repos.d/epel.repo - -- name: Create the OpenShift Repository. - copy: src=openshift.repo dest=/etc/yum.repos.d/openshift.repo - -- name: Create the GPG key for EPEL - copy: src=RPM-GPG-KEY-EPEL-6 dest=/etc/pki/rpm-gpg - -- name: SELinux Enforcing (Targeted) - selinux: policy=targeted state=enforcing - -- name: copy the file for ruby193 profile - copy: src=scl193.sh dest=/etc/profile.d/scl193.sh mode=755 - -- name: copy the file for mcollective profile - copy: src=scl193.sh dest=/etc/sysconfig/mcollective mode=755 - -- name: Create the iptables file - template: src=iptables.j2 dest=/etc/sysconfig/iptables - notify: restart iptables diff --git a/openshift/roles/common/templates/hosts.j2 b/openshift/roles/common/templates/hosts.j2 deleted file mode 100644 index c531cc8..0000000 --- a/openshift/roles/common/templates/hosts.j2 +++ /dev/null @@ -1,4 +0,0 @@ -127.0.0.1 localhost -{% for host in groups['all'] %} -{{ hostvars[host]['ansible_' + iface].ipv4.address }} {{ host }} {{ hostvars[host].ansible_hostname }} -{% endfor %} diff --git a/openshift/roles/common/templates/iptables.j2 b/openshift/roles/common/templates/iptables.j2 deleted file mode 100644 index f1229f4..0000000 --- a/openshift/roles/common/templates/iptables.j2 +++ /dev/null @@ -1,52 +0,0 @@ -# Firewall configuration written by system-config-firewall -# Manual customization of this file is not recommended. 
-
-{% if 'broker' in group_names %}
-*nat
--A PREROUTING -d {{ vip }}/32 -p tcp -m tcp --dport 443 -j REDIRECT
-COMMIT
-{% endif %}
-
-*filter
-:INPUT ACCEPT [0:0]
-:FORWARD ACCEPT [0:0]
-:OUTPUT ACCEPT [0:0]
-{% if 'mongo_servers' in group_names %}
--A INPUT -p tcp --dport {{ mongod_port }} -j ACCEPT
-{% endif %}
-{% if 'mq' in group_names %}
--A INPUT -p tcp --dport 61613 -j ACCEPT
--A INPUT -p tcp --dport 61616 -j ACCEPT
--A INPUT -p tcp --dport 8161 -j ACCEPT
-{% endif %}
-{% if 'broker' in group_names %}
--A INPUT -p tcp --dport 80 -j ACCEPT
--A INPUT -p tcp --dport 443 -j ACCEPT
-{% endif %}
-{% if 'lvs' in group_names %}
--A INPUT -p tcp --dport 80 -j ACCEPT
--A INPUT -p tcp --dport 443 -j ACCEPT
-{% endif %}
-{% if 'nodes' in group_names %}
--A INPUT -p tcp --dport 80 -j ACCEPT
--A INPUT -p tcp --dport 443 -j ACCEPT
--A INPUT -p tcp -m multiport --dports 35531:65535 -j ACCEPT
-{% endif %}
-{% if 'dns' in group_names %}
--A INPUT -p tcp --dport {{ dns_port }} -j ACCEPT
--A INPUT -p tcp --dport 80 -j ACCEPT
--A INPUT -p tcp --dport 443 -j ACCEPT
--A INPUT -p udp --dport {{ dns_port }} -j ACCEPT
--A INPUT -p udp --dport {{ rndc_port }} -j ACCEPT
--A INPUT -p tcp --dport {{ rndc_port }} -j ACCEPT
-{% endif %}
--A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
--A INPUT -p icmp -j ACCEPT
--A INPUT -i lo -j ACCEPT
--A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
--A INPUT -j REJECT --reject-with icmp-host-prohibited
--A FORWARD -j REJECT --reject-with icmp-host-prohibited
-COMMIT
-
-
-
diff --git a/openshift/roles/dns/handlers/main.yml b/openshift/roles/dns/handlers/main.yml
deleted file mode 100644
index 47790c3..0000000
--- a/openshift/roles/dns/handlers/main.yml
+++ /dev/null
@@ -1,6 +0,0 @@
----
-# handlers for dns
-
-- name: restart named
-  service: name=named state=restarted enabled=yes
-
diff --git a/openshift/roles/dns/tasks/main.yml b/openshift/roles/dns/tasks/main.yml
deleted file mode 100644
index 90dcb70..0000000
--- a/openshift/roles/dns/tasks/main.yml
+++ /dev/null
@@ -1,26 +0,0 @@
----
-# tasks for the bind server
-
-- name: Install the bind packages
-  yum: name={{ item }} state=installed
-  with_items:
-    - bind
-    - bind-utils
-
-- name: Copy the key for dynamic dns updates to the domain
-  template: src=keyfile.j2 dest=/var/named/{{ domain_name }}.key owner=named group=named
-
-- name: Copy the forwarders file for bind
-  template: src=forwarders.conf.j2 dest=/var/named/forwarders.conf owner=named group=named
-
-- name: copy the db file for the domain
-  template: src=domain.db.j2 dest=/var/named/dynamic/{{ domain_name }}.db owner=named group=named
-
-- name: copy the named.conf file
-  template: src=named.conf.j2 dest=/etc/named.conf owner=root group=named mode=755
-  notify: restart named
-
-- name: restore the SELinux contexts
-  shell: restorecon -v /var/named/forwarders.conf; restorecon -rv /var/named; restorecon /etc/named.conf; touch /opt/named.init
-         creates=/opt/named.init
-
diff --git a/openshift/roles/dns/templates/domain.db.j2 b/openshift/roles/dns/templates/domain.db.j2
deleted file mode 100644
index 29ea7a5..0000000
--- a/openshift/roles/dns/templates/domain.db.j2
+++ /dev/null
@@ -1,16 +0,0 @@
-$ORIGIN .
-$TTL 1 ; 1 second
-{{ domain_name }} IN SOA ns1.{{ domain_name }}. hostmaster.{{ domain_name }}. (
-    2002100404 ; serial
-    10800      ; refresh (3 hours)
-    3600       ; retry (1 hour)
-    3600000    ; expire (5 weeks 6 days 16 hours)
-    7200       ; minimum (2 hours)
-    )
-  NS ns1.{{ domain_name }}.
-$ORIGIN {{ domain_name }}. 
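
[Editor's note] The one-second $TTL above is a demo-friendly choice: records the broker adds through dynamic updates (see the allow-update key in named.conf.j2 below) become visible to resolvers almost immediately, at the price of near-constant re-querying. A production zone would raise it.
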
-{% for host in groups['nodes'] %} -{{ hostvars[host].ansible_hostname }} A {{ hostvars[host].ansible_default_ipv4.address }} -{% endfor %} -ns1 A {{ ansible_default_ipv4.address }} - diff --git a/openshift/roles/dns/templates/forwarders.conf.j2 b/openshift/roles/dns/templates/forwarders.conf.j2 deleted file mode 100644 index 9538cbe..0000000 --- a/openshift/roles/dns/templates/forwarders.conf.j2 +++ /dev/null @@ -1 +0,0 @@ -forwarders { {{ forwarders|join(';') }}; } ; diff --git a/openshift/roles/dns/templates/keyfile.j2 b/openshift/roles/dns/templates/keyfile.j2 deleted file mode 100644 index 7aeefa2..0000000 --- a/openshift/roles/dns/templates/keyfile.j2 +++ /dev/null @@ -1,6 +0,0 @@ -key {{ domain_name }} { - -algorithm HMAC-MD5; -secret {{ dns_key }}; - -}; diff --git a/openshift/roles/dns/templates/named.conf.j2 b/openshift/roles/dns/templates/named.conf.j2 deleted file mode 100644 index 3f1b143..0000000 --- a/openshift/roles/dns/templates/named.conf.j2 +++ /dev/null @@ -1,57 +0,0 @@ -// -// named.conf -// -// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS -// server as a caching only nameserver (as a localhost DNS resolver only). -// -// See /usr/share/doc/bind*/sample/ for example named configuration files. -// - -options { - listen-on port {{ dns_port }} { any; }; - directory "/var/named"; - dump-file "/var/named/data/cache_dump.db"; - statistics-file "/var/named/data/named_stats.txt"; - memstatistics-file "/var/named/data/named_mem_stats.txt"; - allow-query { any; }; - recursion yes; - - dnssec-enable yes; - dnssec-validation yes; - dnssec-lookaside auto; - - /* Path to ISC DLV key */ - bindkeys-file "/etc/named.iscdlv.key"; - include "forwarders.conf"; - - managed-keys-directory "/var/named/dynamic"; -}; - -logging { - channel default_debug { - file "data/named.run"; - severity debug; - }; -}; -include "{{ domain_name }}.key"; - -controls { - inet * port {{ rndc_port }} allow { any; } - keys { {{ domain_name }}; }; -}; - -zone "{{ domain_name }}" IN { - type master; - file "dynamic/{{ domain_name }}.db"; - allow-update { key {{ domain_name }}; }; -}; - -include "/etc/named.rfc1912.zones"; -include "/etc/named.root.key"; - -zone "." 
IN { - type hint; - file "named.ca"; -}; - - diff --git a/openshift/roles/dns/vars/main.yml b/openshift/roles/dns/vars/main.yml deleted file mode 100644 index cb48fca..0000000 --- a/openshift/roles/dns/vars/main.yml +++ /dev/null @@ -1,8 +0,0 @@ ---- -# Variables for the bind daemon - -forwarders: - - 8.8.8.8 - - 8.8.4.4 - - diff --git a/openshift/roles/ec2/tasks/main.yml b/openshift/roles/ec2/tasks/main.yml deleted file mode 100644 index 8136ffb..0000000 --- a/openshift/roles/ec2/tasks/main.yml +++ /dev/null @@ -1,30 +0,0 @@ ---- -- name: Create Instance - ec2: > - region="{{ region }}" - zone="{{ zone }}" - id="{{ id + '_' + type }}" - ec2_access_key="{{ ec2_access_key }}" - ec2_secret_key="{{ ec2_secret_key }}" - keypair="{{ keypair }}" - instance_type="{{ instance_type }}" - image="{{ image }}" - group="{{ group }}" - wait=true - user_data="{{ type }}" - instance_tags='{"type":"{{ type }}", "id":"{{ id }}"}' - count="{{ ncount }}" - register: ec2 - -- pause: seconds=60 - when: type == 'nodes' - -- name: Add new instance to host group - add_host: hostname={{ item.public_dns_name }} groupname={{ type }} - with_items: ec2.instances - when: type != 'mq' - -- name: Add new instance to host group - add_host: hostname={{ item.public_dns_name }} groupname="mq,mongo_servers" - with_items: ec2.instances - when: type == 'mq' diff --git a/openshift/roles/ec2_remove/tasks/main.yml b/openshift/roles/ec2_remove/tasks/main.yml deleted file mode 100644 index 1e93667..0000000 --- a/openshift/roles/ec2_remove/tasks/main.yml +++ /dev/null @@ -1,25 +0,0 @@ ---- - -- name: Create Instance - ec2: > - region="{{ region }}" - zone="{{ zone }}" - id="{{ id + '_' + type }}" - ec2_access_key="{{ ec2_access_key }}" - ec2_secret_key="{{ ec2_secret_key }}" - keypair="{{ keypair }}" - instance_type="{{ instance_type }}" - image="{{ image }}" - group="{{ group }}" - wait=true - count="{{ ncount }}" - register: ec2 - -- name: Delete Instance - ec2: - region: "{{ region }}" - ec2_access_key: "{{ ec2_access_key }}" - ec2_secret_key: "{{ ec2_secret_key }}" - state: 'absent' - instance_ids: "{{ item }}" - with_items: ec2.instance_ids diff --git a/openshift/roles/lvs/tasks/main.yml b/openshift/roles/lvs/tasks/main.yml deleted file mode 100644 index 3958df3..0000000 --- a/openshift/roles/lvs/tasks/main.yml +++ /dev/null @@ -1,26 +0,0 @@ ---- -# Tasks for deploying the loadbalancer lvs - -- name: Install the lvs packages - yum: name={{ item }} state=installed - with_items: - - piranha - - wget - -- name: disable selinux - selinux: state=disabled - -- name: copy the configuration file - template: src=lvs.cf.j2 dest=/etc/sysconfig/ha/lvs.cf - -- name: copy the file for broker monitoring - template: src=check.sh dest=/opt/check.sh mode=0755 - -- name: start the services - service: name={{ item }} state=started enabled=yes - with_items: - - ipvsadm - - pulse - ignore_errors: yes - tags: test - diff --git a/openshift/roles/lvs/templates/check.sh b/openshift/roles/lvs/templates/check.sh deleted file mode 100644 index 10f16c8..0000000 --- a/openshift/roles/lvs/templates/check.sh +++ /dev/null @@ -1,8 +0,0 @@ -#!/bin/bash -LINES=`wget -q -O - --no-check-certificate https://$1/broker/rest/api | wc -c` -if [ $LINES -gt "0" ]; then - echo "OK" -else - echo "FAILURE" -fi -exit 0 diff --git a/openshift/roles/lvs/templates/lvs.cf b/openshift/roles/lvs/templates/lvs.cf deleted file mode 100644 index b49f2a9..0000000 --- a/openshift/roles/lvs/templates/lvs.cf +++ /dev/null @@ -1,44 +0,0 @@ -serial_no = 14 -primary = 10.152.154.62 -service = 
lvs -backup_active = 1 -backup = 10.114.215.67 -heartbeat = 1 -heartbeat_port = 539 -keepalive = 6 -deadtime = 18 -network = direct -debug_level = NONE -monitor_links = 0 -syncdaemon = 0 -tcp_timeout = 30 -tcpfin_timeout = 30 -udp_timeout = 30 -virtual webserver { - active = 1 - address = 10.114.215.69 eth0:1 - vip_nmask = 255.255.255.255 - port = 80 - pmask = 255.255.255.255 - send = "GET / HTTP/1.0\r\n\r\n" - expect = "HTTP" - use_regex = 0 - load_monitor = none - scheduler = rr - protocol = tcp - timeout = 60 - reentry = 45 - quiesce_server = 0 - server web1 { - address = 10.35.91.109 - active = 1 - port = 80 - weight = 1 - } - server web2 { - address = 10.147.222.172 - active = 1 - port = 80 - weight = 2 - } -} diff --git a/openshift/roles/lvs/templates/lvs.cf.https b/openshift/roles/lvs/templates/lvs.cf.https deleted file mode 100644 index 3b7c806..0000000 --- a/openshift/roles/lvs/templates/lvs.cf.https +++ /dev/null @@ -1,44 +0,0 @@ -serial_no = 14 -primary = 10.152.154.62 -service = lvs -backup_active = 1 -backup = 10.114.215.67 -heartbeat = 1 -heartbeat_port = 539 -keepalive = 6 -deadtime = 18 -network = direct -debug_level = NONE -monitor_links = 0 -syncdaemon = 0 -tcp_timeout = 30 -tcpfin_timeout = 30 -udp_timeout = 30 -virtual webserver { - active = 1 - address = 10.114.215.69 eth0:1 - vip_nmask = 255.255.255.255 - port = 80 - pmask = 255.255.255.255 - send_program = "/etc/https.check" - expect = "OK" - use_regex = 0 - load_monitor = none - scheduler = rr - protocol = tcp - timeout = 60 - reentry = 45 - quiesce_server = 0 - server web1 { - address = 10.35.91.109 - active = 1 - port = 80 - weight = 1 - } - server web2 { - address = 10.147.222.172 - active = 1 - port = 80 - weight = 2 - } -} diff --git a/openshift/roles/lvs/templates/lvs.cf.j2 b/openshift/roles/lvs/templates/lvs.cf.j2 deleted file mode 100644 index e5f83fa..0000000 --- a/openshift/roles/lvs/templates/lvs.cf.j2 +++ /dev/null @@ -1,45 +0,0 @@ -serial_no = 1 -primary = {{ hostvars[groups['lvs'][0]].ansible_default_ipv4.address }} -service = lvs -backup_active = 1 -backup = {{ hostvars[groups['lvs'][1]].ansible_default_ipv4.address }} -heartbeat = 1 -heartbeat_port = 539 -keepalive = 6 -deadtime = 18 -network = direct -debug_level = NONE -monitor_links = 0 -syncdaemon = 0 -tcp_timeout = 30 -tcpfin_timeout = 30 -udp_timeout = 30 -virtual brokers { - active = 1 - address = {{ vip }} {{ hostvars[groups['lvs'][0]].ansible_default_ipv4.interface }}:1 - vip_nmask = {{ vip_netmask }} - port = 443 - persistent = 10 - pmask = 255.255.255.255 - send_program = "/opt/check.sh %h" - expect = "OK" - use_regex = 0 - load_monitor = none - scheduler = rr - protocol = tcp - timeout = 60 - reentry = 45 - quiesce_server = 0 - server web1 { - address = {{ hostvars[groups['broker'][0]].ansible_default_ipv4.address }} - active = 1 - port = 443 - weight = 0 - } - server web2 { - address = {{ hostvars[groups['broker'][1]].ansible_default_ipv4.address }} - active = 1 - port = 443 - weight = 0 - } -} diff --git a/openshift/roles/mongodb/files/10gen.repo.j2 b/openshift/roles/mongodb/files/10gen.repo.j2 deleted file mode 100644 index db76731..0000000 --- a/openshift/roles/mongodb/files/10gen.repo.j2 +++ /dev/null @@ -1,6 +0,0 @@ -[10gen] -name=10gen Repository -baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64 -gpgcheck=0 -enabled=1 - diff --git a/openshift/roles/mongodb/files/secret b/openshift/roles/mongodb/files/secret deleted file mode 100644 index 4d77cbc..0000000 --- a/openshift/roles/mongodb/files/secret +++ /dev/null @@ 
-1,3 +0,0 @@
-qGO6OYb64Uth9p9Tm8s9kqarydmAg1AUdgVz+ecjinaLZ1SlWxXMY1ug8AO7C/Vu
-D8kA3+rE37Gv1GuZyPYi87NSfDhKXo4nJWxI00BxTBppmv2PTzbi7xLCx1+8A1uQ
-4XU0HA
diff --git a/openshift/roles/mongodb/hosts b/openshift/roles/mongodb/hosts
deleted file mode 100644
index 7e90ce0..0000000
--- a/openshift/roles/mongodb/hosts
+++ /dev/null
@@ -1,29 +0,0 @@
-# The site-wide list of MongoDB servers
-
-# The mongo servers need a mongod_port variable set, and the ports must not conflict.
-[mongo_servers]
-hadoop1 mongod_port=2700
-hadoop2 mongod_port=2701
-hadoop3 mongod_port=2702
-hadoop4 mongod_port=2703
-
-# The list of servers where replication should happen; by default, include all servers.
-[replication_servers]
-hadoop1
-hadoop2
-hadoop3
-hadoop4
-
-# The list of MongoDB configuration servers; make sure there are either 1 or 3.
-[mongoc_servers]
-hadoop1
-hadoop2
-hadoop3
-
-
-# The list of servers where the mongos processes will run.
-[mongos_servers]
-hadoop1
-hadoop2
-
-
diff --git a/openshift/roles/mongodb/site.yml b/openshift/roles/mongodb/site.yml
deleted file mode 100644
index 1a238bc..0000000
--- a/openshift/roles/mongodb/site.yml
+++ /dev/null
@@ -1,22 +0,0 @@
----
-# This playbook deploys the whole MongoDB cluster with replication and sharding.
-
-- hosts: all
-  roles:
-    - role: common
-
-- hosts: mongo_servers
-  roles:
-    - role: mongod
-
-- hosts: mongoc_servers
-  roles:
-    - role: mongoc
-
-- hosts: mongos_servers
-  roles:
-    - role: mongos
-
-- hosts: mongo_servers
-  tasks:
-    - include: roles/mongod/tasks/shards.yml
diff --git a/openshift/roles/mongodb/tasks/main.yml b/openshift/roles/mongodb/tasks/main.yml
deleted file mode 100644
index b716479..0000000
--- a/openshift/roles/mongodb/tasks/main.yml
+++ /dev/null
@@ -1,43 +0,0 @@
----
-# This role deploys the mongod processes and sets up the replication set. 
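
[Editor's note] The hosts file above belongs to the standalone MongoDB example (note the reused hadoop1-4 inventory names); the OpenShift play only needs the replica set, which the tasks below initiate once, from the first host in groups['mongo_servers']. A quick convergence check could look like this — a hypothetical extra task, not part of the role:

    - name: Print the replica set status (editor's sketch)
      shell: /usr/bin/mongo --port {{ mongod_port }} --eval "printjson(rs.status())"
      when: inventory_hostname == groups['mongo_servers'][0]
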
- - -- name: Install the mongodb package - yum: name={{ item }} state=installed - with_items: - - mongodb - - mongodb-server - - bc - - python-pip - - gcc - -- name: Install the latest pymongo package - pip: name=pymongo state=latest use_mirrors=no - -- name: Create the data directory for mongodb - file: path={{ mongodb_datadir_prefix }} owner=mongodb group=mongodb state=directory - -- name: Copy the keyfile for authentication - copy: src=secret dest={{ mongodb_datadir_prefix }}/secret owner=mongodb group=mongodb mode=0400 - -- name: Create the mongodb configuration file - template: src=mongod.conf.j2 dest=/etc/mongodb.conf - -- name: Start the mongodb service - service: name=mongod state=started - -- name: Create the file to initialize the mongod replica set - template: src=repset_init.j2 dest=/tmp/repset_init.js - -- name: Pause for a while - wait_for: port="{{ mongod_port }}" delay=30 - -- name: Initialize the replication set - shell: /usr/bin/mongo --port "{{ mongod_port }}" /tmp/repset_init.js;touch /opt/rep.init creates=/opt/rep.init - when: inventory_hostname == groups['mongo_servers'][0] - -- name: add the admin user - mongodb_user: database=admin name=admin password={{ mongo_admin_pass }} login_port={{ mongod_port }} state=present - when: inventory_hostname == groups['mongo_servers'][0] - ignore_errors: yes - diff --git a/openshift/roles/mongodb/templates/mongod.conf.j2 b/openshift/roles/mongodb/templates/mongod.conf.j2 deleted file mode 100644 index 201e9ea..0000000 --- a/openshift/roles/mongodb/templates/mongod.conf.j2 +++ /dev/null @@ -1,25 +0,0 @@ -# mongo.conf -smallfiles=true - -#where to log -logpath=/var/log/mongodb/mongodb.log - -logappend=true - -# fork and run in background -fork = true - -port = {{ mongod_port }} - -dbpath={{ mongodb_datadir_prefix }} -keyFile={{ mongodb_datadir_prefix }}/secret - -# location of pidfile -pidfilepath = /var/run/mongod.pid - - -# Ping interval for Mongo monitoring server. 
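
[Editor's note] Setting keyFile above does double duty: besides authenticating replica-set members to one another, it implicitly turns on user authentication for the whole deployment — which is why the tasks create the admin account immediately after the replica set is initiated.
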
-#mms-interval = - -# Replication Options -replSet=openshift diff --git a/openshift/roles/mongodb/templates/repset_init.j2 b/openshift/roles/mongodb/templates/repset_init.j2 deleted file mode 100644 index 2f6f7ed..0000000 --- a/openshift/roles/mongodb/templates/repset_init.j2 +++ /dev/null @@ -1,7 +0,0 @@ -rs.initiate() -sleep(13000) -{% for host in groups['mongo_servers'] %} -rs.add("{{ host }}:{{ mongod_port }}") -sleep(8000) -{% endfor %} -printjson(rs.status()) diff --git a/openshift/roles/mq/handlers/main.yml b/openshift/roles/mq/handlers/main.yml deleted file mode 100644 index d27ede6..0000000 --- a/openshift/roles/mq/handlers/main.yml +++ /dev/null @@ -1,5 +0,0 @@ ---- -# handlers for mq - -- name: restart mq - service: name=activemq state=restarted diff --git a/openshift/roles/mq/tasks/main.yml b/openshift/roles/mq/tasks/main.yml deleted file mode 100644 index 220429c..0000000 --- a/openshift/roles/mq/tasks/main.yml +++ /dev/null @@ -1,26 +0,0 @@ ---- -# task for setting up MQ cluster - -- name: Install the packages for MQ - yum: name={{ item }} state=installed - with_items: - - java-1.6.0-openjdk - - java-1.6.0-openjdk-devel - - activemq - -- name: Copy the activemq.xml file - template: src=activemq.xml.j2 dest=/etc/activemq/activemq.xml - notify: restart mq - -- name: Copy the jetty.xml file - template: src=jetty.xml.j2 dest=/etc/activemq/jetty.xml - notify: restart mq - -- name: Copy the jetty realm properties file - template: src=jetty-realm.properties.j2 dest=/etc/activemq/jetty-realm.properties - notify: restart mq - -- name: start the active mq service - service: name=activemq state=started enabled=yes - - diff --git a/openshift/roles/mq/templates/activemq.xml.j2 b/openshift/roles/mq/templates/activemq.xml.j2 deleted file mode 100644 index 40e41ba..0000000 --- a/openshift/roles/mq/templates/activemq.xml.j2 +++ /dev/null @@ -1,178 +0,0 @@ - - - - - - - file:${activemq.conf}/credentials.properties - - - - - - - - - - - - - - - - - - - - - - - - {% for host in groups['mq'] %} - {% if hostvars[host].ansible_hostname != ansible_hostname %} - - - - - - - - {% endif %} - {% endfor %} - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/openshift/roles/mq/templates/jetty-realm.properties.j2 b/openshift/roles/mq/templates/jetty-realm.properties.j2 deleted file mode 100644 index 63892d6..0000000 --- a/openshift/roles/mq/templates/jetty-realm.properties.j2 +++ /dev/null @@ -1,20 +0,0 @@ -## --------------------------------------------------------------------------- -## Licensed to the Apache Software Foundation (ASF) under one or more -## contributor license agreements. See the NOTICE file distributed with -## this work for additional information regarding copyright ownership. -## The ASF licenses this file to You under the Apache License, Version 2.0 -## (the "License"); you may not use this file except in compliance with -## the License. You may obtain a copy of the License at -## -## http://www.apache.org/licenses/LICENSE-2.0 -## -## Unless required by applicable law or agreed to in writing, software -## distributed under the License is distributed on an "AS IS" BASIS, -## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -## See the License for the specific language governing permissions and -## limitations under the License. -## --------------------------------------------------------------------------- - -# Defines users that can access the web (console, demo, etc.) 
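
[Editor's note] admin_pass is expected to come from the play's variables; the realm defined here guards the ActiveMQ web console, which the mq hosts' firewall template opens on port 8161.
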
diff --git a/openshift/roles/mq/templates/jetty-realm.properties.j2 b/openshift/roles/mq/templates/jetty-realm.properties.j2
deleted file mode 100644
index 63892d6..0000000
--- a/openshift/roles/mq/templates/jetty-realm.properties.j2
+++ /dev/null
@@ -1,20 +0,0 @@
-## ---------------------------------------------------------------------------
-## Licensed to the Apache Software Foundation (ASF) under one or more
-## contributor license agreements.  See the NOTICE file distributed with
-## this work for additional information regarding copyright ownership.
-## The ASF licenses this file to You under the Apache License, Version 2.0
-## (the "License"); you may not use this file except in compliance with
-## the License.  You may obtain a copy of the License at
-##
-## http://www.apache.org/licenses/LICENSE-2.0
-##
-## Unless required by applicable law or agreed to in writing, software
-## distributed under the License is distributed on an "AS IS" BASIS,
-## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-## See the License for the specific language governing permissions and
-## limitations under the License.
-## ---------------------------------------------------------------------------
-
-# Defines users that can access the web (console, demo, etc.)
-# username: password [,rolename ...]
-admin: {{ admin_pass }}, admin
diff --git a/openshift/roles/mq/templates/jetty.xml.j2 b/openshift/roles/mq/templates/jetty.xml.j2
deleted file mode 100644
index d53a988..0000000
--- a/openshift/roles/mq/templates/jetty.xml.j2
+++ /dev/null
@@ -1,113 +0,0 @@
-[The XML markup of this 113-line template was lost during extraction; the only
-surviving non-markup line is the welcome-file entry "index.html".]
diff --git a/openshift/roles/nodes/files/cgconfig.conf b/openshift/roles/nodes/files/cgconfig.conf
deleted file mode 100644
index b529884..0000000
--- a/openshift/roles/nodes/files/cgconfig.conf
+++ /dev/null
@@ -1,26 +0,0 @@
-#
-# Copyright IBM Corporation. 2007
-#
-# Authors: Balbir Singh
-# This program is free software; you can redistribute it and/or modify it
-# under the terms of version 2.1 of the GNU Lesser General Public License
-# as published by the Free Software Foundation.
-#
-# This program is distributed in the hope that it would be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
-#
-# See man cgconfig.conf for further details.
-#
-# By default, mount all controllers to /cgroup/
-
-mount {
-	cpuset = /cgroup/cpuset;
-	cpu = /cgroup/cpu;
-	cpuacct = /cgroup/cpuacct;
-	memory = /cgroup/memory;
-	devices = /cgroup/devices;
-	freezer = /cgroup/freezer;
-	net_cls = /cgroup/net_cls;
-	blkio = /cgroup/blkio;
-}
diff --git a/openshift/roles/nodes/files/cgrulesengd.pp b/openshift/roles/nodes/files/cgrulesengd.pp
deleted file mode 100644
index 77f3db9..0000000
Binary files a/openshift/roles/nodes/files/cgrulesengd.pp and /dev/null differ
diff --git a/openshift/roles/nodes/files/pam.sh b/openshift/roles/nodes/files/pam.sh
deleted file mode 100644
index 760f7c2..0000000
--- a/openshift/roles/nodes/files/pam.sh
+++ /dev/null
@@ -1,9 +0,0 @@
-#!/bin/bash
-
-for f in "runuser" "runuser-l" "sshd" "su" "system-auth-ac"; \
-do t="/etc/pam.d/$f"; \
-if ! grep -q "pam_namespace.so" "$t"; \
-then echo -e "session\t\trequired\tpam_namespace.so no_unmount_on_close" >> "$t" ; \
-fi; \
-done;
-
diff --git a/openshift/roles/nodes/files/sshd b/openshift/roles/nodes/files/sshd
deleted file mode 100644
index 3d37588..0000000
--- a/openshift/roles/nodes/files/sshd
+++ /dev/null
@@ -1,13 +0,0 @@
-#%PAM-1.0
-auth       required     pam_sepermit.so
-auth       include      password-auth
-account    required     pam_nologin.so
-account    include      password-auth
-password   include      password-auth
-# pam_openshift.so close should be the first session rule
-session    required     pam_openshift.so close
-session    required     pam_loginuid.so
-# pam_openshift.so open should only be followed by sessions to be executed in the user context
-session    required     pam_openshift.so open env_params
-session    optional     pam_keyinit.so force revoke
-session    include      password-auth
diff --git a/openshift/roles/nodes/files/sshd_config b/openshift/roles/nodes/files/sshd_config
deleted file mode 100644
index 1e63427..0000000
--- a/openshift/roles/nodes/files/sshd_config
+++ /dev/null
@@ -1,139 +0,0 @@
-# $OpenBSD: sshd_config,v 1.80 2008/07/02 02:24:18 djm Exp $
-
-# This is the sshd server system-wide configuration file.  See
-# sshd_config(5) for more information.
-
-# This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin
-
-# The strategy used for options in the default sshd_config shipped with
-# OpenSSH is to specify options with their default value where
-# possible, but leave them commented.  Uncommented options change a
-# default value.
-
-AcceptEnv GIT_SSH
-#Port 22
-#AddressFamily any
-#ListenAddress 0.0.0.0
-#ListenAddress ::
-
-# Disable legacy (protocol version 1) support in the server for new
-# installations. In future the default will change to require explicit
-# activation of protocol 1
-Protocol 2
-
-# HostKey for protocol version 1
-#HostKey /etc/ssh/ssh_host_key
-# HostKeys for protocol version 2
-#HostKey /etc/ssh/ssh_host_rsa_key
-#HostKey /etc/ssh/ssh_host_dsa_key
-
-# Lifetime and size of ephemeral version 1 server key
-#KeyRegenerationInterval 1h
-#ServerKeyBits 1024
-
-# Logging
-# obsoletes QuietMode and FascistLogging
-#SyslogFacility AUTH
-SyslogFacility AUTHPRIV
-#LogLevel INFO
-
-# Authentication:
-
-#LoginGraceTime 2m
-#PermitRootLogin yes
-#StrictModes yes
-#MaxAuthTries 6
-MaxSessions 40
-
-#RSAAuthentication yes
-#PubkeyAuthentication yes
-#AuthorizedKeysFile .ssh/authorized_keys
-#AuthorizedKeysCommand none
-#AuthorizedKeysCommandRunAs nobody
-
-# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
-#RhostsRSAAuthentication no
-# similar for protocol version 2
-#HostbasedAuthentication no
-# Change to yes if you don't trust ~/.ssh/known_hosts for
-# RhostsRSAAuthentication and HostbasedAuthentication
-#IgnoreUserKnownHosts no
-# Don't read the user's ~/.rhosts and ~/.shosts files
-#IgnoreRhosts yes
-
-# To disable tunneled clear text passwords, change to no here!
-#PasswordAuthentication yes
-#PermitEmptyPasswords no
-PasswordAuthentication yes
-
-# Change to no to disable s/key passwords
-#ChallengeResponseAuthentication yes
-ChallengeResponseAuthentication no
-
-# Kerberos options
-#KerberosAuthentication no
-#KerberosOrLocalPasswd yes
-#KerberosTicketCleanup yes
-#KerberosGetAFSToken no
-#KerberosUseKuserok yes
-
-# GSSAPI options
-#GSSAPIAuthentication no
-GSSAPIAuthentication yes
-#GSSAPICleanupCredentials yes
-GSSAPICleanupCredentials yes
-#GSSAPIStrictAcceptorCheck yes
-#GSSAPIKeyExchange no
-
-# Set this to 'yes' to enable PAM authentication, account processing,
-# and session processing. If this is enabled, PAM authentication will
-# be allowed through the ChallengeResponseAuthentication and
-# PasswordAuthentication.  Depending on your PAM configuration,
-# PAM authentication via ChallengeResponseAuthentication may bypass
-# the setting of "PermitRootLogin without-password".
-# If you just want the PAM account and session checks to run without
-# PAM authentication, then enable this but set PasswordAuthentication
-# and ChallengeResponseAuthentication to 'no'.
-#UsePAM no
-UsePAM yes
-
-# Accept locale-related environment variables
-AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
-AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
-AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
-AcceptEnv XMODIFIERS
-
-#AllowAgentForwarding yes
-#AllowTcpForwarding yes
-#GatewayPorts no
-#X11Forwarding no
-X11Forwarding yes
-#X11DisplayOffset 10
-#X11UseLocalhost yes
-#PrintMotd yes
-#PrintLastLog yes
-#TCPKeepAlive yes
-#UseLogin no
-#UsePrivilegeSeparation yes
-#PermitUserEnvironment no
-#Compression delayed
-#ClientAliveInterval 0
-#ClientAliveCountMax 3
-#ShowPatchLevel no
-#UseDNS yes
-#PidFile /var/run/sshd.pid
-MaxStartups 40
-#PermitTunnel no
-#ChrootDirectory none
-
-# no default banner path
-#Banner none
-
-# override default of no subsystems
-Subsystem sftp /usr/libexec/openssh/sftp-server
-
-# Example of overriding settings on a per-user basis
-#Match User anoncvs
-#   X11Forwarding no
-#   AllowTcpForwarding no
-#   ForceCommand cvs server
diff --git a/openshift/roles/nodes/handlers/main.yml b/openshift/roles/nodes/handlers/main.yml
deleted file mode 100644
index 1eaaf0b..0000000
--- a/openshift/roles/nodes/handlers/main.yml
+++ /dev/null
@@ -1,10 +0,0 @@
----
-# Handlers for nodes
-
-- name: restart mcollective
-  service: name=mcollective state=restarted
-
-- name: restart ssh
-  service: name=sshd state=restarted
-  async: 10
-  poll: 0
diff --git a/openshift/roles/nodes/tasks/main.yml b/openshift/roles/nodes/tasks/main.yml
deleted file mode 100644
index 6f37b1c..0000000
--- a/openshift/roles/nodes/tasks/main.yml
+++ /dev/null
@@ -1,113 +0,0 @@
----
-# Tasks for the openshift nodes
-
-- name: Install the mcollective packages
-  yum: name={{ item }} state=installed
-  with_items:
-    - mcollective-common-2.2.1
-
-- name: Install the OpenShift mcollective node package
-  yum: name=openshift-origin-msg-node-mcollective state=installed disablerepo=epel
-
-- name: Copy the mcollective configuration file
-  template: src=server.cfg.j2 dest=/etc/mcollective/server.cfg
-  notify: restart mcollective
-
-- name: Start the mcollective service
-  service: name=mcollective state=started enabled=yes
-
-- name: Install OpenShift node packages
-  yum: name="{{ item }}" state=installed
-  with_items:
-    - rubygem-openshift-origin-node
-    - rubygem-openshift-origin-container-selinux.noarch
-    - rubygem-passenger-native
-    - rubygem-openshift-origin-msg-broker-mcollective
-    - openshift-origin-port-proxy
-    - openshift-origin-node-util
-    - openshift-origin-cartridge-cron
-    - openshift-origin-cartridge-python
-    - ruby193-rubygem-rest-client
-    - httpd
-    - lsof
-    - dbus
-
-- name: Copy the ssh authorized key for root
-  authorized_key: user=root key="{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
-
-- name: Copy the pam.d ssh file
-  copy: src=sshd dest=/etc/pam.d/sshd
-  register: last_run
-
-- name: Copy the cgconfig file
-  copy: src=cgconfig.conf dest=/etc/cgconfig.conf
-
-- name: Execute script for pam update
-  script: pam.sh
-  when: last_run.changed
-
-- name: Create directory for cgroups
-  file: path=/cgroup state=directory
-
-- name: Restart cgroups and related services
-  service: name="{{ item }}" state=restarted enabled=yes
-  with_items:
-    - cgconfig
-    - cgred
-    - httpd
-    - messagebus
-    - oddjobd
-  when: last_run.changed
-
-- name: Find root mount point of gear dir
-  shell: df -P /var/lib/openshift | tail -1 | awk '{ print $6 }'
-  register: gear_root_mount
-
-- name: Initialize quota db
-  shell: oo-init-quota creates="{{ gear_root_mount.stdout }}/aquota.user"
-
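-# (The two tasks above find the filesystem that backs /var/lib/openshift and
-# seed its quota database exactly once: oo-init-quota writes aquota.user at the
-# mount root, so the creates= guard keeps reruns of the play idempotent.)
-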
-- name: Configure the OpenShift SELinux booleans
-  seboolean: name="{{ item }}" state=true persistent=yes
-  with_items:
-    - httpd_unified
-    - httpd_can_network_connect
-    - httpd_can_network_relay
-    - httpd_run_stickshift
-    - httpd_read_user_content
-    - httpd_enable_homedirs
-    - allow_polyinstantiation
-
-- name: Set the kernel semaphore limits
-  sysctl: name=kernel.sem value="250 32000 32 4096" state=present reload=yes
-
-- name: Set the local port range reserved for gears
-  sysctl: name=net.ipv4.ip_local_port_range value="15000 35530" state=present reload=yes
-
-- name: Raise the conntrack table size
-  sysctl: name=net.netfilter.nf_conntrack_max value="1048576" state=present reload=yes
-
-- name: Copy the sshd config
-  copy: src=sshd_config dest=/etc/ssh/sshd_config
-  notify: restart ssh
-
-- name: Start the port proxy service
-  service: name=openshift-port-proxy state=started enabled=yes
-
-- name: Copy the node.conf file
-  template: src=node.conf.j2 dest=/etc/openshift/node.conf
-
-- name: Copy the SELinux fix module
-  copy: src=cgrulesengd.pp dest=/opt/cgrulesengd.pp
-  register: se_run
-
-- name: Install the SELinux policy module
-  shell: chdir=/opt semodule -i cgrulesengd.pp
-  when: se_run.changed
-
-- name: Start the openshift gears
-  service: name=openshift-gears state=started enabled=yes
-
-- name: Copy the resolv.conf file
-  template: src=resolv.conf.j2 dest=/etc/resolv.conf
-
diff --git a/openshift/roles/nodes/templates/node.conf.j2 b/openshift/roles/nodes/templates/node.conf.j2
deleted file mode 100644
index 5fcb476..0000000
--- a/openshift/roles/nodes/templates/node.conf.j2
+++ /dev/null
@@ -1,40 +0,0 @@
-# These should not be left at default values, even for a demo.
-# "PUBLIC" networking values are ones that end-users should be able to reach.
-PUBLIC_HOSTNAME="{{ ansible_hostname }}" # The node host's public hostname
-PUBLIC_IP="{{ ansible_default_ipv4.address }}" # The node host's public IP address
-BROKER_HOST="{{ groups['broker'][0] }}" # IP or DNS name of broker server for REST API
-
-# Usually (unless in a demo) this should be changed to the domain for your installation:
-CLOUD_DOMAIN="example.com" # Domain suffix to use for applications (Must match broker config)
-
-# You may want these, depending on the complexity of your networking:
-# EXTERNAL_ETH_DEV='eth0' # Specify the internet facing public ethernet device
-# INTERNAL_ETH_DEV='eth1' # Specify the internal cluster facing ethernet device
-INSTANCE_ID="localhost" # Set by RH EC2 automation
-
-# Uncomment and use the following line if you want gear users to be members of
-# additional groups besides the one with the same id as the uid. The other group
-# should be an existing group.
-#GEAR_SUPL_GRPS="another_group" # Supplementary groups for gear UIDs (comma separated list)
-
-# Generally the following should not be changed:
-ENABLE_CGROUPS=1 # constrain gears in cgroups (1=yes, 0=no)
-GEAR_BASE_DIR="/var/lib/openshift" # gear root directory
-GEAR_SKEL_DIR="/etc/openshift/skel" # skel files to use when building a gear
-GEAR_SHELL="/usr/bin/oo-trap-user" # shell to use for the gear
-GEAR_GECOS="OpenShift guest" # Gecos information to populate for the gear user
-GEAR_MIN_UID=500 # Lower bound of UID used to create gears
-GEAR_MAX_UID=6500 # Upper bound of UID used to create gears
-CARTRIDGE_BASE_PATH="/usr/libexec/openshift/cartridges" # Locations where cartridges are installed
-LAST_ACCESS_DIR="/var/lib/openshift/.last_access" # Location to maintain last accessed time for gears
-APACHE_ACCESS_LOG="/var/log/httpd/openshift_log" # Location of the httpd log for the node
-PROXY_MIN_PORT_NUM=35531 # Lower bound of port numbers used to proxy ports externally
-PROXY_PORTS_PER_GEAR=5 # Number of proxy ports available per gear
-CREATE_APP_SYMLINKS=0 # If set to 1, creates gear-name symlinks to the UUID directories (debugging only)
-OPENSHIFT_HTTP_CONF_DIR="/etc/httpd/conf.d/openshift"
-
-PLATFORM_LOG_FILE=/var/log/openshift/node/platform.log
-PLATFORM_LOG_LEVEL=DEBUG
-PLATFORM_TRACE_LOG_FILE=/var/log/openshift/node/platform-trace.log
-PLATFORM_TRACE_LOG_LEVEL=DEBUG
-CONTAINERIZATION_PLUGIN=openshift-origin-container-selinux
diff --git a/openshift/roles/nodes/templates/resolv.conf.j2 b/openshift/roles/nodes/templates/resolv.conf.j2
deleted file mode 100644
index 9fdca84..0000000
--- a/openshift/roles/nodes/templates/resolv.conf.j2
+++ /dev/null
@@ -1,2 +0,0 @@
-search {{ domain_name }}
-nameserver {{ hostvars[groups['dns'][0]].ansible_default_ipv4.address }}
diff --git a/openshift/roles/nodes/templates/server.cfg.j2 b/openshift/roles/nodes/templates/server.cfg.j2
deleted file mode 100644
index bfd85d6..0000000
--- a/openshift/roles/nodes/templates/server.cfg.j2
+++ /dev/null
@@ -1,28 +0,0 @@
-topicprefix = /topic/
-main_collective = mcollective
-collectives = mcollective
-libdir = /opt/rh/ruby193/root/usr/libexec/mcollective
-logfile = /var/log/mcollective.log
-loglevel = debug
-daemonize = 1
-direct_addressing = 1
-registerinterval = 30
-
-# Plugins
-securityprovider = psk
-plugin.psk = unset
-
-connector = stomp
-plugin.stomp.pool.size = {{ groups['mq']|length() }}
-{% for host in groups['mq'] %}
-
-plugin.stomp.pool.host{{ loop.index }} = {{ hostvars[host].ansible_hostname }}
-plugin.stomp.pool.port{{ loop.index }} = 61613
-plugin.stomp.pool.user{{ loop.index }} = mcollective
-plugin.stomp.pool.password{{ loop.index }} = {{ mcollective_pass }}
-
-{% endfor %}
-
-# Facts
-factsource = yaml
-plugin.yaml = /etc/mcollective/facts.yaml
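-# (Illustration, not part of the template: with two brokers in the 'mq' group
-# whose hostnames are mq1 and mq2 -- hypothetical names -- the loop above
-# renders pool.size = 2 plus, for each index:
-#   plugin.stomp.pool.host1 = mq1
-#   plugin.stomp.pool.port1 = 61613
-#   plugin.stomp.pool.user1 = mcollective
-#   plugin.stomp.pool.password1 = <mcollective_pass>
-# and the matching host2/port2/user2/password2 lines.)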
diff --git a/openshift/site.yml b/openshift/site.yml
deleted file mode 100644
index cf20125..0000000
--- a/openshift/site.yml
+++ /dev/null
@@ -1,34 +0,0 @@
-
-- hosts: all
-  user: root
-  roles:
-    - role: common
-
-- hosts: dns
-  user: root
-  roles:
-    - role: dns
-
-- hosts: mongo_servers
-  user: root
-  roles:
-    - role: mongodb
-
-- hosts: mq
-  user: root
-  roles:
-    - role: mq
-
-- hosts: broker
-  user: root
-  roles:
-    - role: broker
-
-- hosts: nodes
-  user: root
-  roles:
-    - role: nodes
-
-- hosts: lvs
-  user: root
-  roles:
-    - role: lvs
diff --git a/play-webapp/LICENSE.md b/play-webapp/LICENSE.md
deleted file mode 100644
index 2b437ec..0000000
--- a/play-webapp/LICENSE.md
+++ /dev/null
@@ -1,4 +0,0 @@
-Copyright (C) 2013 AnsibleWorks, Inc.
-
-This work is licensed under the Creative Commons Attribution 3.0 Unported License.
-To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/deed.en_US.
diff --git a/play-webapp/README.md b/play-webapp/README.md
deleted file mode 100644
index aea6330..0000000
--- a/play-webapp/README.md
+++ /dev/null
@@ -1,108 +0,0 @@
-# Deploying a Play/Scala-based web application with Ansible
-
-- Requires Ansible 1.2
-- Expects CentOS/RHEL 6 hosts (64-bit)
-
-### A Primer on the Play Framework
-----------------------------------
-
-- Play Framework: Play is a pure Java and Scala framework used to develop web
-applications. It focuses on developer productivity, modern web and mobile
-applications, and predictable, minimal resource consumption (CPU, memory,
-threads), resulting in highly performant, highly scalable applications. Play
-compiles Java and Scala sources directly and hot-reloads them into the JVM
-without the need to restart the server.
-
-- Akka: Akka is a toolkit and runtime for building highly concurrent,
-distributed, and fault tolerant event-driven applications on the JVM.
-
-- Scala: Scala is a general purpose programming language designed to express
-common programming patterns in a concise, elegant, and type-safe way. Scala
-smoothly integrates features of object-oriented and functional languages,
-enabling developers to be more productive while retaining full interoperability
-with Java and taking advantage of modern multicore hardware. Scala makes it
-easy to avoid shared state, so that computation can be readily distributed
-across cores on a multicore server, and across servers in a datacenter. This
-makes Scala an especially good match for modern multicore CPUs and distributed
-cloud-computing workloads that require concurrency and parallelism.
-
-
-## Example Deployment using Ansible
-
-This example deploys a very simple application which takes a hostname as a parameter
-from the user and uses Ansible itself to gather and display facts from that machine.
-It shows how to deploy a simple Play-based app, as well as how to call out to Ansible
-from inside Scala.
-
-Before running the playbook, modify the inventory file 'hosts' to match your
-environment. Here's an example inventory:
-
-    [webapp_server]
-    play_server
-
-Run the playbook to deploy the app:
-
-    ansible-playbook -i hosts site.yml
-
-Once the playbooks complete, you can check the deployment by logging into the
-server console at `http://<server>:9000/`. You should get a page similar to the
-image below.
-
-![Alt text](images/play_webapp.png "webapp")
-
-## Fetching Facts from Hosts
-
-To use the example webapp and fetch facts from a host, enter the hostname of the
-host as shown in the figure above and press submit. Please note that the
-application uses Ansible to gather facts, so the hosts should have SSH keys
-set up and the host entry should be available in the Ansible inventory file in
-/etc/ansible/hosts.
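-
-The returned data is what Ansible's setup module produces, so (assuming the
-same /etc/ansible/hosts inventory) the page for a host is equivalent to the
-ad-hoc call below, whose output has the shape shown next:
-
-    ansible <hostname> -m setup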
-
-Upon submission, the application should return a valid JSON document containing
-the host facts:
-
-    localhost | success >> {
-        "ansible_facts": {
-            "ansible_all_ipv4_addresses": [
-                "192.168.2.51"
-            ],
-            "ansible_all_ipv6_addresses": [
-                "fe80::5054:ff:fe58:776d"
-            ],
-            "ansible_architecture": "x86_64",
-            "ansible_bios_date": "01/01/2007",
-            "ansible_bios_version": "0.5.1",
-            "ansible_cmdline": {
-                "KEYBOARDTYPE": "pc",
-                "KEYTABLE": "us",
-                "LANG": "en_US.UTF-8",
-                "SYSFONT": "latarcyrheb-sun16",
-                "quiet": true,
-                "rd_NO_DM": true,
-                "rd_NO_LUKS": true,
-                "rd_NO_LVM": true,
-                "rd_NO_MD": true,
-                "rhgb": true,
-                "ro": true,
-                "root": "UUID=5202a2bc-1a30-424f-855b-5d51a3cba8df"
-            },
-            "ansible_date_time": {
-                "date": "2013-05-25",
-                "day": "25",
-                "epoch": "1369483888",
-                "hour": "17",
-                "iso8601": "2013-05-25T12:11:28Z",
-                "iso8601_micro": "2013-05-25T12:11:28.551538Z",
-                "minute": "41",
-                "month": "05",
-                "second": "28",
-                "time": "17:41:28",
-                "tz": "IST",
-                "year": "2013"
-            },
-
-
-The facts can also be fetched by making a GET request to the following URL:
-
-    http://<server>:9000/inventoryID?hostname=<hostname>
-
diff --git a/play-webapp/hosts b/play-webapp/hosts
deleted file mode 100644
index f383b9c..0000000
--- a/play-webapp/hosts
+++ /dev/null
@@ -1,2 +0,0 @@
-[webapp_server]
-webserver1
diff --git a/play-webapp/images/play_webapp.png b/play-webapp/images/play_webapp.png
deleted file mode 100644
index f104aeb..0000000
Binary files a/play-webapp/images/play_webapp.png and /dev/null differ
diff --git a/play-webapp/roles/common/files/RPM-GPG-KEY-EPEL-6 b/play-webapp/roles/common/files/RPM-GPG-KEY-EPEL-6
deleted file mode 100644
index 7a20304..0000000
--- a/play-webapp/roles/common/files/RPM-GPG-KEY-EPEL-6
+++ /dev/null
@@ -1,29 +0,0 @@
------BEGIN PGP PUBLIC KEY BLOCK-----
-Version: GnuPG v1.4.5 (GNU/Linux)
-
-mQINBEvSKUIBEADLGnUj24ZVKW7liFN/JA5CgtzlNnKs7sBg7fVbNWryiE3URbn1
-JXvrdwHtkKyY96/ifZ1Ld3lE2gOF61bGZ2CWwJNee76Sp9Z+isP8RQXbG5jwj/4B
-M9HK7phktqFVJ8VbY2jfTjcfxRvGM8YBwXF8hx0CDZURAjvf1xRSQJ7iAo58qcHn
-XtxOAvQmAbR9z6Q/h/D+Y/PhoIJp1OV4VNHCbCs9M7HUVBpgC53PDcTUQuwcgeY6
-pQgo9eT1eLNSZVrJ5Bctivl1UcD6P6CIGkkeT2gNhqindRPngUXGXW7Qzoefe+fV
-QqJSm7Tq2q9oqVZ46J964waCRItRySpuW5dxZO34WM6wsw2BP2MlACbH4l3luqtp
-Xo3Bvfnk+HAFH3HcMuwdaulxv7zYKXCfNoSfgrpEfo2Ex4Im/I3WdtwME/Gbnwdq
-3VJzgAxLVFhczDHwNkjmIdPAlNJ9/ixRjip4dgZtW8VcBCrNoL+LhDrIfjvnLdRu
-vBHy9P3sCF7FZycaHlMWP6RiLtHnEMGcbZ8QpQHi2dReU1wyr9QgguGU+jqSXYar
-1yEcsdRGasppNIZ8+Qawbm/a4doT10TEtPArhSoHlwbvqTDYjtfV92lC/2iwgO6g
-YgG9XrO4V8dV39Ffm7oLFfvTbg5mv4Q/E6AWo/gkjmtxkculbyAvjFtYAQARAQAB
-tCFFUEVMICg2KSA8ZXBlbEBmZWRvcmFwcm9qZWN0Lm9yZz6JAjYEEwECACAFAkvS
-KUICGw8GCwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAKCRA7Sd8qBgi4lR/GD/wLGPv9
-qO39eyb9NlrwfKdUEo1tHxKdrhNz+XYrO4yVDTBZRPSuvL2yaoeSIhQOKhNPfEgT
-9mdsbsgcfmoHxmGVcn+lbheWsSvcgrXuz0gLt8TGGKGGROAoLXpuUsb1HNtKEOwP
-Q4z1uQ2nOz5hLRyDOV0I2LwYV8BjGIjBKUMFEUxFTsL7XOZkrAg/WbTH2PW3hrfS
-WtcRA7EYonI3B80d39ffws7SmyKbS5PmZjqOPuTvV2F0tMhKIhncBwoojWZPExft
-HpKhzKVh8fdDO/3P1y1Fk3Cin8UbCO9MWMFNR27fVzCANlEPljsHA+3Ez4F7uboF
-p0OOEov4Yyi4BEbgqZnthTG4ub9nyiupIZ3ckPHr3nVcDUGcL6lQD/nkmNVIeLYP
-x1uHPOSlWfuojAYgzRH6LL7Idg4FHHBA0to7FW8dQXFIOyNiJFAOT2j8P5+tVdq8
-wB0PDSH8yRpn4HdJ9RYquau4OkjluxOWf0uRaS//SUcCZh+1/KBEOmcvBHYRZA5J
-l/nakCgxGb2paQOzqqpOcHKvlyLuzO5uybMXaipLExTGJXBlXrbbASfXa/yGYSAG
-iVrGz9CE6676dMlm8F+s3XXE13QZrXmjloc6jwOljnfAkjTGXjiB7OULESed96MR
-XtfLk0W5Ab9pd7tKDR6QHI7rgHXfCopRnZ2VVQ==
-=V/6I
------END PGP PUBLIC KEY BLOCK-----
diff --git a/play-webapp/roles/common/files/epel.repo b/play-webapp/roles/common/files/epel.repo
deleted file mode 100644
index 0160dfe..0000000
--- a/play-webapp/roles/common/files/epel.repo
+++ /dev/null
@@ -1,26 +0,0 @@
-[epel]
-name=Extra Packages for Enterprise Linux 6 - $basearch
-#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
-mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
-failovermethod=priority
-enabled=1
-gpgcheck=1
-gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
-
-[epel-debuginfo]
-name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
-#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug
-mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
-failovermethod=priority
-enabled=0
-gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
-gpgcheck=1
-
-[epel-source]
-name=Extra Packages for Enterprise Linux 6 - $basearch - Source
-#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS
-mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch
-failovermethod=priority
-enabled=0
-gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
-gpgcheck=1
diff --git a/play-webapp/roles/common/handlers/main.yml b/play-webapp/roles/common/handlers/main.yml
deleted file mode 100644
index 29856cc..0000000
--- a/play-webapp/roles/common/handlers/main.yml
+++ /dev/null
@@ -1,3 +0,0 @@
----
-- name: restart iptables
-  service: name=iptables state=restarted
diff --git a/play-webapp/roles/common/tasks/main.yml b/play-webapp/roles/common/tasks/main.yml
deleted file mode 100644
index 033323d..0000000
--- a/play-webapp/roles/common/tasks/main.yml
+++ /dev/null
@@ -1,30 +0,0 @@
----
-# plays common across all servers
-
-- name: Copy the EPEL repository definition
-  copy: src=epel.repo dest=/etc/yum.repos.d/epel.repo
-
-- name: Copy the GPG key for EPEL
-  copy: src=RPM-GPG-KEY-EPEL-6 dest=/etc/pki/rpm-gpg
-
-- name: Install Java and other dependencies
-  yum: name={{ item }} state=installed
-  with_items:
-    - java-1.6.0-openjdk
-    - java-1.6.0-openjdk-devel
-    - libselinux-python
-    - ansible
-    - zip
-    - unzip
-    - wget
-
-- name: Ensure the user has an SSH key
-  user: name={{ ansible_ssh_user }} generate_ssh_key=yes
-
-- name: Make sure we have the authorized_key for the local user
-  shell: cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys ; touch ~/.ssh/key_authorized creates=~/.ssh/key_authorized
-
-- name: Set up the iptables rules
-  template: src=iptables.j2 dest=/etc/sysconfig/iptables
-  notify: restart iptables
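-# (Sketch of an alternative to the key-append shell task above, using the same
-# authorized_key module this repo's nodes role relies on; a hypothetical
-# drop-in, kept as a comment so the role's behavior is unchanged:
-#
-# - name: Make sure we have the authorized_key for the local user
-#   authorized_key: user={{ ansible_ssh_user }} key="{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
-# )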
diff --git a/play-webapp/roles/common/templates/iptables.j2 b/play-webapp/roles/common/templates/iptables.j2
deleted file mode 100644
index 941c94f..0000000
--- a/play-webapp/roles/common/templates/iptables.j2
+++ /dev/null
@@ -1,15 +0,0 @@
-# Firewall configuration written by system-config-firewall
-# Manual customization of this file is not recommended.
-*filter
-:INPUT ACCEPT [0:0]
-:FORWARD ACCEPT [0:0]
-:OUTPUT ACCEPT [0:0]
-
--A INPUT -p tcp --dport 9000 -j ACCEPT
--A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
--A INPUT -p icmp -j ACCEPT
--A INPUT -i lo -j ACCEPT
--A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
--A INPUT -j REJECT --reject-with icmp-host-prohibited
--A FORWARD -j REJECT --reject-with icmp-host-prohibited
-COMMIT
diff --git a/play-webapp/roles/webapp/files/ansible-facts-webapp.tgz b/play-webapp/roles/webapp/files/ansible-facts-webapp.tgz
deleted file mode 100644
index 267ea50..0000000
Binary files a/play-webapp/roles/webapp/files/ansible-facts-webapp.tgz and /dev/null differ
diff --git a/play-webapp/roles/webapp/files/play.initscript b/play-webapp/roles/webapp/files/play.initscript
deleted file mode 100755
index ad441d8..0000000
--- a/play-webapp/roles/webapp/files/play.initscript
+++ /dev/null
@@ -1,108 +0,0 @@
-#!/bin/bash
-# chkconfig: 345 20 80
-# description: Play start/shutdown script
-# processname: play
-#
-# Installation:
-#   copy this file to /etc/init.d
-#   chmod +x /etc/init.d/play
-#   chkconfig --add /etc/init.d/play
-#   chkconfig play on
-#
-# Usage: (as root)
-#   service play start
-#   service play stop
-#   service play status
-#
-# Remember, you need Python 2.6 to run the play command; it doesn't come standard with RedHat/CentOS 5.5.
-# Also, you may want to temporarily remove the >/dev/null for debugging purposes.
-
-# Path to play install folder
-PLAY_HOME=/opt/play
-PLAY=$PLAY_HOME/play
-
-# Source function library.
-. /etc/rc.d/init.d/functions
-
-# Path to the JVM
-JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/
-export JAVA_HOME
-
-# User running the Play process
-USER=root
-
-# Path to the application
-APPLICATION_PATH=/opt/webapp/
-APPLICATION_MODE=prod
-
-#APPLICATION2_PATH=/path/to/application2
-#APPLICATION_MODE=prod
-
-# source function library
-. /etc/init.d/functions
-RETVAL=0
-
-start() {
-    echo -n "Starting Play service: "
-    daemon "/opt/webapp/target/start >/dev/null &"
-    RETVAL=$?
-
-    # You may want to start more applications as follows
-    # [ $RETVAL -eq 0 ] && su -s /bin/sh $USER -c "${PLAY} start ${APPLICATION2_PATH} --%${APPLICATION_MODE} > /dev/null"
-    # RETVAL=$?
-
-    if [ $RETVAL -eq 0 ]; then
-        echo_success
-    else
-        echo_failure
-    fi
-    echo
-}
-stop() {
-    echo -n "Shutting down Play service: "
-    (cd ${APPLICATION_PATH} ; ${PLAY} stop .)
-
-    RETVAL=$?
-
-    if [ $RETVAL -eq 0 ]; then
-        echo_success
-    else
-        echo_failure
-    fi
-    echo
-}
-status() {
-    test -f ${APPLICATION_PATH}/RUNNING_PID
-    RETVAL=$?
-    if [ $RETVAL -eq 0 ]; then
-        echo play is running as pid `cat ${APPLICATION_PATH}/RUNNING_PID`
-    else
-        echo play is not running
-    fi
-}
-clean() {
-    rm -f ${APPLICATION_PATH}/RUNNING_PID
-}
-case "$1" in
-    start)
-        clean
-        start
-        ;;
-    stop)
-        stop
-        ;;
-    restart|reload)
-        stop
-        sleep 10
-        start
-        ;;
-    status)
-        status
-        ;;
-    clean)
-        clean
-        ;;
-    *)
-        echo "Usage: $0 {start|stop|restart|status}"
-esac
-exit 0
diff --git a/play-webapp/roles/webapp/handlers/main.yml b/play-webapp/roles/webapp/handlers/main.yml
deleted file mode 100644
index b48267a..0000000
--- a/play-webapp/roles/webapp/handlers/main.yml
+++ /dev/null
@@ -1,3 +0,0 @@
----
-- name: restart play
-  service: name=play state=restarted
diff --git a/play-webapp/roles/webapp/tasks/main.yml b/play-webapp/roles/webapp/tasks/main.yml
deleted file mode 100644
index f9b63c4..0000000
--- a/play-webapp/roles/webapp/tasks/main.yml
+++ /dev/null
@@ -1,32 +0,0 @@
----
-- name: Copy the webapp
-  copy: src=ansible-facts-webapp.tgz dest=/opt/
-
-- name: Download the Play framework
-  get_url: url=http://downloads.typesafe.com/play/2.1.1/play-2.1.1.zip dest=/opt/play-2.1.1.zip
-
-- name: Extract the Play framework
-  command: chdir=/opt/ unzip play-2.1.1.zip creates=/opt/play-2.1.1
-
-- name: Create a symlink for convenience
-  file: src=/opt/play-2.1.1 path=/opt/play state=link
-
-- name: Extract the webapp folder
-  command: chdir=/opt/ tar -xvzf ansible-facts-webapp.tgz creates=/opt/webapp
-
-- name: Build and compile the webapp
-  shell: chdir=/opt/webapp export PATH=$PATH:/opt/play-2.1.1/; play clean compile stage creates=/opt/webapp/target
-  notify: restart play
-
-- name: Copy the Ansible hosts file
-  copy: src=hosts dest=/etc/ansible/hosts
-
-- name: Copy the webapp startup file
-  copy: src=play.initscript dest=/etc/init.d/play mode=0755
-  notify: restart play
-
-- name: Start Play
-  service: name=play state=started enabled=yes
diff --git a/play-webapp/site.yml b/play-webapp/site.yml
deleted file mode 100644
index a93f0ac..0000000
--- a/play-webapp/site.yml
+++ /dev/null
@@ -1,8 +0,0 @@
----
-# Main Play webapp deployment playbook
-
-- hosts: webapp_server
-  user: root
-  roles:
-    - role: common
-    - role: webapp
diff --git a/riak/README.md b/riak/README.md
deleted file mode 100644
index 9b3989a..0000000
--- a/riak/README.md
+++ /dev/null
@@ -1,57 +0,0 @@
-#### Introduction
-
-These example playbooks should help you get an idea of how to use the riak
-Ansible module. They were tested on Ubuntu Precise (12.04) and CentOS 6.4,
-both on x86_64.
-
-These playbooks do not currently support SELinux.
-
-#### About Riak
-
-Riak is a distributed key-value store that is architected for:
-
-* **Availability**: Riak replicates and retrieves data intelligently so it is available for read and write operations, even in failure conditions
-* **Fault-Tolerance**: You can lose access to many nodes due to network partition or hardware failure without losing data
-* **Operational Simplicity**: Add new machines to your Riak cluster easily without incurring a larger operational burden – the same ops tasks apply to small clusters as large clusters
-* **Scalability**: Riak automatically distributes data around the cluster and yields a near-linear performance increase as you add capacity.
-
-For more information, please visit [http://docs.basho.com/riak/latest/](http://docs.basho.com/riak/latest/)
-
-#### Requirements
-
-After checking out the ansible-examples project:
-
-    cd ansible-examples/riak
-    ansible-galaxy install -r roles.txt -p roles
-
-This will pull down the roles that are required to run this playbook from
-Ansible Galaxy and place them in ansible-examples/riak/roles. Should you have
-another directory configured for roles, specify it with the `-p` option.
-
-### Riak Role Documentation
-
-Documentation for the Riak role [can be found here](https://github.com/basho/ansible-riak/blob/master/README.md).
-This covers all of the variables one can use with the Riak role.
-
-
-#### Playbooks
-
-Here are the playbooks that you can use with the ansible-playbook command:
-
-* **setup_riak.yml** - installs riak onto nodes
-* **form_cluster.yml** - forms a riak cluster
-* **rolling_restart.yml** - demonstrates the ability to perform a rolling
-configuration change. Similar principles could apply to performing
-rolling upgrades of Riak itself.
-
-
-#### Using Vagrant
-
-Install Vagrant, then run:
-
-    ssh-add ~/.vagrant.d/insecure_private_key
-    vagrant up
-    ansible-playbook -v -u vagrant setup_riak.yml -i hosts
-
-SSH to one of your nodes:
-
-    vagrant ssh riak-1.local
diff --git a/riak/Vagrantfile b/riak/Vagrantfile
deleted file mode 100644
index e27563b..0000000
--- a/riak/Vagrantfile
+++ /dev/null
@@ -1,56 +0,0 @@
-# -*- mode: ruby -*-
-# vi: set ft=ruby :
-
-CENTOS = {
-  box: "opscode-centos-6.5",
-  virtualbox_url: "http://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_centos-6.5_chef-provisionerless.box",
-  vmware_fusion_url: "http://opscode-vm-bento.s3.amazonaws.com/vagrant/vmware/opscode_centos-6.5_chef-provisionerless.box"
-}
-UBUNTU = {
-  box: "opscode-ubuntu-12.04",
-  virtualbox_url: "http://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_ubuntu-12.04_chef-provisionerless.box",
-  vmware_fusion_url: "http://opscode-vm-bento.s3.amazonaws.com/vagrant/vmware/opscode_ubuntu-12.04_chef-provisionerless.box"
-}
-
-VAGRANTFILE_API_VERSION = "2"
-NODES = ENV["ARBY_NODES"].nil? ? 3 : ENV["ARBY_NODES"].to_i
-OS = ENV["ARBY_OS"].nil? ? CENTOS : Kernel.const_get(ENV["ARBY_OS"])
-
-Vagrant.configure(VAGRANTFILE_API_VERSION) do |cluster|
-  # Utilize the Cachier plugin to cache downloaded packages.
-  unless ENV["ARBY_CACHE"].nil?
-    cluster.cache.auto_detect = true
-  end
-
-  cluster.vm.box = OS[:box]
-
-  cluster.vm.provider :virtualbox do |vb, override|
-    override.vm.box_url = OS[:virtualbox_url]
-  end
-
-  cluster.vm.provider :vmware_fusion do |vm, override|
-    override.vm.box_url = OS[:vmware_fusion_url]
-  end
-
-  # Nodes for Riak, Riak CS, and Stanchion.
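-  # (Each pass below defines a VM named riak-N.local with 1 GB of RAM, one
-  # vCPU, and private address 10.42.0.(N+5); with the default three nodes that
-  # is 10.42.0.6 through 10.42.0.8, matching riak/inventory/hosts.)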
-  (1..NODES).each do |index|
-    last_octet = index + 5
-    vm_name = "riak-#{index}.local"
-
-    cluster.vm.define vm_name do |config|
-      config.vm.provider :virtualbox do |vb, override|
-        vb.customize ["modifyvm", :id, "--memory", "1024"]
-        vb.customize ["modifyvm", :id, "--cpus", "1"]
-      end
-
-      config.vm.provider :vmware_fusion do |vm, override|
-        vm.vmx["memsize"] = "1024"
-        vm.vmx["numvcpus"] = "1"
-      end
-
-      config.vm.hostname = vm_name
-      config.vm.network :private_network, ip: "10.42.0.#{last_octet}"
-    end
-  end
-
-end
diff --git a/riak/form_cluster.yml b/riak/form_cluster.yml
deleted file mode 100644
index dd39a45..0000000
--- a/riak/form_cluster.yml
+++ /dev/null
@@ -1,36 +0,0 @@
-- hosts: riak_cluster[0]
-  sudo: True
-  tasks:
-    - name: collect riak facts
-      riak: command=ping
-      register: riak_outputs
-
-- hosts: riak_cluster:!riak_cluster[0]
-  vars:
-    primary_node: "{{ hostvars[groups['riak_cluster'][0]]['riak_outputs']['node_name'] }}"
-  sudo: True
-  tasks:
-    - name: join riak cluster
-      riak: command=join target_node={{ primary_node }}
-
-- hosts: riak_cluster[-1]
-  sudo: True
-  tasks:
-    - name: wait for nodes to settle
-      pause: seconds=30
-    - name: plan cluster changes
-      riak: command=plan
-      notify:
-        - wait for ring
-        - commit cluster changes
-        - wait for handoffs
-
-  handlers:
-    - name: commit cluster changes
-      riak: command=commit
-
-    - name: wait for handoffs
-      riak: wait_for_handoffs=1200
-
-    - name: wait for ring
-      riak: wait_for_ring=600
diff --git a/riak/inventory/group_vars/all b/riak/inventory/group_vars/all
deleted file mode 100644
index 41650a1..0000000
--- a/riak/inventory/group_vars/all
+++ /dev/null
@@ -1 +0,0 @@
-riak_iface: eth1
diff --git a/riak/inventory/hosts b/riak/inventory/hosts
deleted file mode 100644
index 6cf6eac..0000000
--- a/riak/inventory/hosts
+++ /dev/null
@@ -1,7 +0,0 @@
-[riak_cluster]
-10.42.0.6
-10.42.0.7
-10.42.0.8
-
-[riak_servers:children]
-riak_cluster
diff --git a/riak/roles.txt b/riak/roles.txt
deleted file mode 100644
index 2861de7..0000000
--- a/riak/roles.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-basho.riak-common,v1.0.2
-basho.riak,v1.0.7
diff --git a/riak/rolling_restart.yml b/riak/rolling_restart.yml
deleted file mode 100644
index 84fc841..0000000
--- a/riak/rolling_restart.yml
+++ /dev/null
@@ -1,9 +0,0 @@
----
-- hosts: riak_cluster
-  serial: 1
-  sudo: True
-  pre_tasks:
-    - name: make sure there are no transfers happening
-      riak: wait_for_handoffs=600
-  roles:
-    - basho.riak
diff --git a/riak/setup_riak.yml b/riak/setup_riak.yml
deleted file mode 100644
index 9108ec4..0000000
--- a/riak/setup_riak.yml
+++ /dev/null
@@ -1,6 +0,0 @@
-- hosts: riak_cluster
-  sudo: True
-  roles:
-    - { role: basho.riak, tags: ["riak"] }
-
-- include: form_cluster.yml
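-
-# (Hypothetical spot-check, not part of these plays: once form_cluster.yml has
-# run, every node should answer a ping through the same riak module, e.g.
-#   ansible riak_cluster -i inventory/hosts -m riak -a "command=ping" --sudo )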