# Deploying a Highly Available production ready OpenShift Deployment

- Requires Ansible 1.3
- Expects CentOS/RHEL 6 hosts (64 bit)


## A Primer into OpenShift Architecture

### OpenShift Overview

OpenShift Origin is the next generation application hosting platform which enables users to create, deploy and manage applications within their cloud. In other words, it provides a PaaS service (Platform as a Service). This relieves developers of time-consuming processes like machine provisioning and the associated application deployments. OpenShift provides disk space, CPU resources, memory, network connectivity, and various application deployment platforms like JBoss, Python, MySQL, etc., so developers can spend their time coding and testing new applications rather than figuring out how to acquire and configure these resources.

### OpenShift Components

Here's a list and a brief overview of the different components used by OpenShift.

- Broker: the single point of contact for all application management activities. It is responsible for managing user logins, DNS, application state, and general orchestration of the application. Customers don't contact the Broker directly; instead they use the web console, CLI tools, or JBoss tools to interact with the Broker over a REST-based API.

- Cartridges: provide the actual functionality necessary to run the user application. OpenShift currently supports many language Cartridges like JBoss, PHP, Ruby, etc., as well as many database Cartridges such as PostgreSQL, MySQL, MongoDB, etc. If a user needs to deploy a PHP application with MySQL as a backend, they can just ask the Broker to deploy a PHP and a MySQL Cartridge on separate Gears.

- Gears: provide a resource-constrained container to run one or more Cartridges. They limit the amount of RAM and disk space available to a Cartridge. For simplicity you can think of a Gear as a separate VM or Linux container running an application for a specific tenant, but in reality they are containers created with SELinux contexts and PAM namespacing.

- Nodes: the physical machines where Gears are allocated. Gears are generally over-allocated on Nodes since not all applications are active at the same time.

- BSN (Broker Support Nodes): the nodes which run applications for OpenShift management. For example, OpenShift uses MongoDB to store various user/app details, and it also uses ActiveMQ to communicate with the application nodes via MCollective. The nodes which host these supporting applications are called Broker Support Nodes.

- Districts: resource pools which can be used to separate the application nodes based on performance or environment. For example, in a production deployment we can have two Districts of Nodes, one with lower memory/CPU/disk resources, and another for high-performance applications.

### An Overview of the application creation process in OpenShift

![Alt text](/images/app_deploy.png "App")

The above figure depicts an overview of the different steps involved in creating an application in OpenShift. If a developer wants to create or deploy a JBoss & MySQL application, they can request it from any of the available client tools: the Eclipse IDE, the command line tool (rhc), or even a web browser (the management console).

Once the user has instructed the client tool to deploy a JBoss & MySQL application, the client tool makes a web service request to the Broker to provision the resources. The Broker in turn queries the Nodes for Gear and Cartridge availability and, if the resources are available, two Gears are created and the JBoss and MySQL Cartridges are deployed on them. The user is then notified, and they can access the Gears via SSH and start deploying their code.


### Deployment Diagram of OpenShift via Ansible

![Alt text](/images/arch.png "App")

The above diagram shows the Ansible playbooks deploying a highly available OpenShift PaaS environment. The deployment has two servers running LVS (Piranha) for load balancing, providing HA for the Brokers. Two instances of the Broker also run for fault tolerance. Ansible also configures a DNS server which provides name resolution for all the new apps created in the OpenShift environment.

Three BSN (Broker Support Node) hosts provide a replicated MongoDB deployment, and the same hosts run three instances of a highly available ActiveMQ cluster. There is no limitation on the number of application nodes you can deploy: just add the hostnames of the OpenShift nodes to the Ansible inventory and Ansible will configure all of them.

Note: As a best practice, if the deployment is in an actual production environment it is recommended to integrate with the infrastructure's internal DNS server for name resolution, and to use LDAP or an existing Active Directory for user authentication.

## Deployment Steps for OpenShift via Ansible

As a first step you probably want to set up Ansible. Assuming the Ansible host is a RHEL variant, install the EPEL package:

    yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Once the EPEL repo is installed, Ansible can be installed via the following command:

    yum install ansible

It is recommended to use separate machines for the different components of OpenShift, but if you are just testing it out you can combine the services. At least four nodes are mandatory, though, as the MongoDB and ActiveMQ clusters need at least three members to work properly.

As a first step, clone this repository onto your Ansible management host and set up the inventory (hosts) as follows:

    git clone https://github.com/ansible/ansible-examples.git

    [dns]
    ec2-54-226-116-175.compute-1.amazonaws.com

    [mongo_servers]
    ec2-54-226-116-175.compute-1.amazonaws.com
    ec2-54-227-131-56.compute-1.amazonaws.com
    ec2-54-227-169-137.compute-1.amazonaws.com

    [mq]
    ec2-54-226-116-175.compute-1.amazonaws.com
    ec2-54-227-131-56.compute-1.amazonaws.com
    ec2-54-227-169-137.compute-1.amazonaws.com

    [broker]
    ec2-54-227-63-48.compute-1.amazonaws.com
    ec2-54-227-171-2.compute-1.amazonaws.com

    [nodes]
    ec2-54-227-146-187.compute-1.amazonaws.com

    [lvs]
    ec2-54-227-176-123.compute-1.amazonaws.com
    ec2-54-227-177-87.compute-1.amazonaws.com
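Before running anything against real hosts, the shape of the inventory above can be sanity-checked locally. The following is an illustrative sketch (not part of the playbooks) using Python's `configparser` on an INI-style inventory like the one shown; the minimum counts reflect the cluster requirements mentioned in this README:

```python
from configparser import ConfigParser

# Ansible INI inventories list bare hostnames, so allow keys without values
inv = ConfigParser(allow_no_value=True)
inv.read_string("""
[dns]
ec2-54-226-116-175.compute-1.amazonaws.com

[mongo_servers]
ec2-54-226-116-175.compute-1.amazonaws.com
ec2-54-227-131-56.compute-1.amazonaws.com
ec2-54-227-169-137.compute-1.amazonaws.com

[broker]
ec2-54-227-63-48.compute-1.amazonaws.com
ec2-54-227-171-2.compute-1.amazonaws.com
""")

# The MongoDB replica set and ActiveMQ cluster need three members,
# and broker HA needs at least two brokers
assert len(inv.options('mongo_servers')) >= 3
assert len(inv.options('broker')) >= 2
```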

Once the inventory is set up with the hosts in your environment, the OpenShift stack can be deployed easily by issuing the following command:

    ansible-playbook -i hosts site.yml

### Verifying the Installation

Once the stack has been successfully deployed, we can check whether the different components have been deployed correctly.

- MongoDB: Log in to any BSN node running MongoDB and issue the following command; output similar to the following should be displayed, showing that the Mongo cluster is up with one primary and two secondary nodes.

    [root@ip-10-165-33-186 ~]# mongo 127.0.0.1:2700/admin -u admin -p passme
    MongoDB shell version: 2.2.3
    connecting to: 127.0.0.1:2700/admin
    openshift:PRIMARY> rs.status()
    {
        "set" : "openshift",
        "date" : ISODate("2013-07-21T18:56:27Z"),
        "myState" : 1,
        "members" : [
            {
                "_id" : 0,
                "name" : "ip-10-165-33-186:2700",
                "health" : 1,
                "state" : 1,
                "stateStr" : "PRIMARY",
                "uptime" : 804,
                "optime" : {
                    "t" : 1374432940000,
                    "i" : 1
                },
                "optimeDate" : ISODate("2013-07-21T18:55:40Z"),
                "self" : true
            },
            {
                "_id" : 1,
                "name" : "ec2-54-227-131-56.compute-1.amazonaws.com:2700",
                "health" : 1,
                "state" : 2,
                "stateStr" : "SECONDARY",
                "uptime" : 431,
                "optime" : {
                    "t" : 1374432940000,
                    "i" : 1
                },
                "optimeDate" : ISODate("2013-07-21T18:55:40Z"),
                "lastHeartbeat" : ISODate("2013-07-21T18:56:26Z"),
                "pingMs" : 0
            },
            {
                "_id" : 2,
                "name" : "ec2-54-227-169-137.compute-1.amazonaws.com:2700",
                "health" : 1,
                "state" : 2,
                "stateStr" : "SECONDARY",
                "uptime" : 423,
                "optime" : {
                    "t" : 1374432940000,
                    "i" : 1
                },
                "optimeDate" : ISODate("2013-07-21T18:55:40Z"),
                "lastHeartbeat" : ISODate("2013-07-21T18:56:26Z"),
                "pingMs" : 0
            }
        ],
        "ok" : 1
    }
    openshift:PRIMARY>
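In script form, the health check above boils down to confirming one PRIMARY with all members healthy. A minimal sketch of that logic, where the dict mirrors a trimmed `rs.status()` document rather than a live connection:

```python
# Decide whether an rs.status()-style document describes a healthy
# replica set: exactly one PRIMARY, every member reporting health 1
def replica_set_ok(status):
    members = status["members"]
    primaries = [m for m in members if m["stateStr"] == "PRIMARY"]
    healthy = all(m["health"] == 1 for m in members)
    return status["ok"] == 1 and len(primaries) == 1 and healthy

# Trimmed-down version of the output shown above
status = {
    "ok": 1,
    "members": [
        {"stateStr": "PRIMARY", "health": 1},
        {"stateStr": "SECONDARY", "health": 1},
        {"stateStr": "SECONDARY", "health": 1},
    ],
}
assert replica_set_ok(status)
```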

- ActiveMQ: To verify the cluster status of ActiveMQ, browse to the following URL on any one of the mq nodes, giving the username admin and the password specified in the group_vars/all file. The browser should bring up a page similar to the one shown below, listing the other two mq nodes in the cluster that this node has joined.

    http://ec2-54-226-116-175.compute-1.amazonaws.com:8161/admin/network.jsp

![Alt text](/images/mq.png "App")

- Broker: To check whether the broker node has been installed/configured successfully, issue the following command on any broker node and output similar to the following should be displayed. Make sure there is a PASS at the end.

    [root@ip-10-118-127-30 ~]# oo-accept-broker -v
    INFO: Broker package is: openshift-origin-broker
    INFO: checking packages
    INFO: checking package ruby
    INFO: checking package rubygem-openshift-origin-common
    INFO: checking package rubygem-openshift-origin-controller
    INFO: checking package openshift-origin-broker
    INFO: checking package ruby193-rubygem-rails
    INFO: checking package ruby193-rubygem-passenger
    INFO: checking package ruby193-rubygems
    INFO: checking ruby requirements
    INFO: checking ruby requirements for openshift-origin-controller
    INFO: checking ruby requirements for config/application
    INFO: checking that selinux modules are loaded
    NOTICE: SELinux is Enforcing
    NOTICE: SELinux is Enforcing
    INFO: SELinux boolean httpd_unified is enabled
    INFO: SELinux boolean httpd_can_network_connect is enabled
    INFO: SELinux boolean httpd_can_network_relay is enabled
    INFO: SELinux boolean httpd_run_stickshift is enabled
    INFO: SELinux boolean allow_ypbind is enabled
    INFO: checking firewall settings
    INFO: checking mongo datastore configuration
    INFO: Datastore Host: ec2-54-226-116-175.compute-1.amazonaws.com
    INFO: Datastore Port: 2700
    INFO: Datastore User: admin
    INFO: Datastore SSL: false
    INFO: Datastore Password has been set to non-default
    INFO: Datastore DB Name: admin
    INFO: Datastore: mongo db service is remote
    INFO: checking mongo db login access
    INFO: mongo db login successful: ec2-54-226-116-175.compute-1.amazonaws.com:2700/admin --username admin
    INFO: checking services
    INFO: checking cloud user authentication
    INFO: auth plugin = OpenShift::RemoteUserAuthService
    INFO: auth plugin: OpenShift::RemoteUserAuthService
    INFO: checking remote-user auth configuration
    INFO: Auth trusted header: REMOTE_USER
    INFO: Auth passthrough is enabled for OpenShift services
    INFO: Got HTTP 200 response from https://localhost/broker/rest/api
    INFO: Got HTTP 200 response from https://localhost/broker/rest/cartridges
    INFO: Got HTTP 401 response from https://localhost/broker/rest/user
    INFO: Got HTTP 401 response from https://localhost/broker/rest/domains
    INFO: checking dynamic dns plugin
    INFO: dynamic dns plugin = OpenShift::BindPlugin
    INFO: checking bind dns plugin configuration
    INFO: DNS Server: 10.165.33.186
    INFO: DNS Port: 53
    INFO: DNS Zone: example.com
    INFO: DNS Domain Suffix: example.com
    INFO: DNS Update Auth: key
    INFO: DNS Key Name: example.com
    INFO: DNS Key Value: *****
    INFO: adding txt record named testrecord.example.com to server 10.165.33.186: key0
    INFO: txt record successfully added
    INFO: deleteing txt record named testrecord.example.com to server 10.165.33.186: key0
    INFO: txt record successfully deleted
    INFO: checking messaging configuration
    INFO: messaging plugin = OpenShift::MCollectiveApplicationContainerProxy
    PASS

- Node: To verify whether the node installation/configuration has been successful, issue the following command and check for output similar to that shown below.

    [root@ip-10-152-154-18 ~]# oo-accept-node -v
    INFO: using default accept-node extensions
    INFO: loading node configuration file /etc/openshift/node.conf
    INFO: loading resource limit file /etc/openshift/resource_limits.conf
    INFO: finding external network device
    INFO: checking node public hostname resolution
    INFO: checking selinux status
    INFO: checking selinux openshift-origin policy
    INFO: checking selinux booleans
    INFO: checking package list
    INFO: checking services
    INFO: checking kernel semaphores >= 512
    INFO: checking cgroups configuration
    INFO: checking cgroups processes
    INFO: checking filesystem quotas
    INFO: checking quota db file selinux label
    INFO: checking 0 user accounts
    INFO: checking application dirs
    INFO: checking system httpd configs
    INFO: checking cartridge repository
    PASS

- LVS (Load Balancer): To check the load balancer, log in to the active load balancer and issue the following command; the output shows the two brokers across which the load balancer is distributing traffic.

    [root@ip-10-145-204-43 ~]# ipvsadm
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  ip-192-168-1-1.ec2.internal: rr
      -> ec2-54-227-63-48.compute-1.a Route   1      0          0
      -> ec2-54-227-171-2.compute-1.a Route   2      0          0
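With unequal weights the `rr` scheduler behaves as weighted round-robin: the Weight column above means the second broker receives twice the share of connections of the first. A toy sketch of that long-run ratio (the broker names are placeholders, and real LVS interleaves requests differently, but the proportion is the same):

```python
from itertools import cycle

# Weighted round-robin sketch: expand each backend by its weight,
# then cycle through the expanded list forever
def wrr_schedule(backends):
    expanded = [name for name, weight in backends for _ in range(weight)]
    return cycle(expanded)

# Weights 1 and 2, matching the ipvsadm output above
sched = wrr_schedule([("broker-a", 1), ("broker-b", 2)])
first_six = [next(sched) for _ in range(6)]

# broker-b gets exactly twice as many connections as broker-a
assert first_six.count("broker-b") == 2 * first_six.count("broker-a")
```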

## Creating an App in OpenShift

To create an app in OpenShift, access the management console from any browser; either the VIP specified in group_vars/all or the IP address of any broker node can be used:

    https://<ip-of-broker-or-vip>/

The page will ask for a login; use demo/passme. Once logged in, follow the on-screen instructions to create your first application.
Note: The Python 2.6 cartridge is installed by default by the playbooks, so choose python-2.6 as the cartridge.

## Deploying OpenShift in EC2

The repo also has playbooks that deploy the highly available OpenShift setup in EC2. The playbooks should also be able to deploy the cluster in any EC2-API-compatible cloud, such as Eucalyptus.

Before deploying, please make sure:

- A security group is created which allows SSH and HTTP/HTTPS traffic.
- The access/secret keys are entered in group_vars/all.
- The number of nodes required for the cluster is specified in group_vars/all via the "count" variable.

Once that is done, the cluster can be deployed simply by issuing the command:

    ansible-playbook -i ec2hosts ec2.yml -e id=openshift

Note: 'id' is a unique identifier for the cluster; if you are deploying multiple clusters, please make sure the value given is different for each deployment. The role of each created instance can be figured out by checking the Tags tab in the EC2 console.

### Removing the cluster from EC2

To remove a deployed OpenShift cluster from EC2, just run the following command. The id parameter should be the same as the one given when creating the instances.

Note: The id can be figured out by checking the Tags tab in the EC2 console.

    ansible-playbook -i ec2hosts ec2_remove.yml -e id=openshift5

## HA Tests

A few tests that can be performed to verify high availability:

- Shut down one broker and try to create a new application.
- Shut down any one mongo/mq node and try to create a new application.
- Shut down either load-balancing machine; the management application should remain available via the virtual IP.
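The mongo/mq test works because both clusters are quorum-based: with three BSN members, one can fail and a strict majority remains, which is why this README requires at least three of them. An illustrative sketch of the arithmetic:

```python
# A replica set keeps (or can elect) a primary only while a strict
# majority of its members are up
def has_quorum(up_members, total_members=3):
    return up_members > total_members // 2

assert has_quorum(3)          # all three BSN nodes up
assert has_quorum(2)          # one node shut down: cluster still writable
assert not has_quorum(1)      # two nodes down: no majority, no primary
```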
# config file for ansible -- http://ansibleworks.com/
# ==================================================

# nearly all parameters can be overridden in ansible-playbook
# or with command line flags. ansible will read ~/.ansible.cfg,
# ansible.cfg in the current working directory or
# /etc/ansible/ansible.cfg, whichever it finds first

[defaults]

# some basic default values...

hostfile = /etc/ansible/hosts
library = /usr/share/ansible
remote_tmp = $HOME/.ansible/tmp
pattern = *
forks = 5
poll_interval = 15
sudo_user = root
#ask_sudo_pass = True
#ask_pass = True
transport = smart
remote_port = 22

# uncomment this to disable SSH key host checking
host_key_checking = False

# change this for alternative sudo implementations
sudo_exe = sudo

# what flags to pass to sudo
#sudo_flags = -H

# SSH timeout
timeout = 10

# default user to use for playbooks if user is not specified
# (/usr/bin/ansible will use current user as default)
#remote_user = root

# logging is off by default unless this path is defined
# if so defined, consider logrotate
#log_path = /var/log/ansible.log

# default module name for /usr/bin/ansible
#module_name = command

# use this shell for commands executed under sudo
# you may need to change this to bin/bash in rare instances
# if sudo is constrained
#executable = /bin/sh

# if inventory variables overlap, does the higher precedence one win
# or are hash values merged together? The default is 'replace' but
# this can also be set to 'merge'.
#hash_behaviour = replace

# How to handle variable replacement - as of 1.2, Jinja2 variable syntax is
# preferred, but we still support the old $variable replacement too.
# Turn off ${old_style} variables here if you like.
#legacy_playbook_variables = yes

# list any Jinja2 extensions to enable here:
#jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n

# if set, always use this private key file for authentication, same as
# if passing --private-key to ansible or ansible-playbook
#private_key_file = /path/to/file

# format of string {{ ansible_managed }} available within Jinja2
# templates indicates to users editing templates files will be replaced.
# replacing {file}, {host} and {uid} and strftime codes with proper values.
ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}

# by default (as of 1.3), Ansible will raise errors when attempting to dereference
# Jinja2 variables that are not set in templates or action lines. Uncomment this line
# to revert the behavior to pre-1.3.
#error_on_undefined_vars = False

# set plugin path directories here, separate with colons
action_plugins = /usr/share/ansible_plugins/action_plugins
callback_plugins = /usr/share/ansible_plugins/callback_plugins
connection_plugins = /usr/share/ansible_plugins/connection_plugins
lookup_plugins = /usr/share/ansible_plugins/lookup_plugins
vars_plugins = /usr/share/ansible_plugins/vars_plugins
filter_plugins = /usr/share/ansible_plugins/filter_plugins

# don't like cows? that's unfortunate.
# set to 1 if you don't want cowsay support or export ANSIBLE_NOCOWS=1
#nocows = 1

# don't like colors either?
# set to 1 if you don't want colors, or export ANSIBLE_NOCOLOR=1
#nocolor = 1

[paramiko_connection]

# uncomment this line to cause the paramiko connection plugin to not record new host
# keys encountered. Increases performance on new host additions. Setting works independently of the
# host key checking setting above.

#record_host_keys=False

[ssh_connection]

# ssh arguments to use
# Leaving off ControlPersist will result in poor performance, so use
# paramiko on older platforms rather than removing it
#ssh_args = -o ControlMaster=auto -o ControlPersist=60s

# if True, make ansible use scp if the connection type is ssh
# (default is sftp)
#scp_if_ssh = True
- hosts: localhost
  connection: local
  pre_tasks:
    - fail: msg="Please make sure the variable id is specified and unique in the command line -e id=uniquedev1"
      when: id is not defined

  roles:
    - role: ec2
      type: dns
      ncount: 1

    - role: ec2
      type: mq
      ncount: 3

    - role: ec2
      type: broker
      ncount: 2

    - role: ec2
      type: nodes
      ncount: "{{ count }}"

  post_tasks:
    - name: Wait for the instances to come up
      wait_for: delay=10 host={{ item.public_dns_name }} port=22 state=started timeout=360
      with_items: ec2.instances

    - debug: msg="{{ groups }}"

- hosts: all:!localhost
  user: root
  roles:
    - role: common

- hosts: dns
  user: root
  roles:
    - role: dns

- hosts: mongo_servers
  user: root
  roles:
    - role: mongodb

- hosts: mq
  user: root
  roles:
    - role: mq

- hosts: broker
  user: root
  roles:
    - role: broker

- hosts: nodes
  user: root
  roles:
    - role: nodes
- hosts: localhost
  connection: local
  pre_tasks:
    - fail: msg="Please make sure the variable id is specified and unique in the command line -e id=uniquedev1"
      when: id is not defined

  roles:
    - role: ec2_remove
      type: dns
      ncount: 1

    - role: ec2_remove
      type: mq
      ncount: 3

    - role: ec2_remove
      type: broker
      ncount: 2

    - role: ec2_remove
      type: nodes
      ncount: "{{ count }}"
localhost |
---
# Global vars for OpenShift

# EC2 specific variables
ec2_access_key: "AKIUFDNXQ"
ec2_secret_key: "RyhTz1wzZ3kmtMEu"
keypair: "axialkey"
instance_type: "m1.small"
image: "ami-bf5021d6"
group: "default"
count: 2
ec2_elbs: oselb
region: "us-east-1"
zone: "us-east-1a"

iface: '{{ ansible_default_ipv4.interface }}'

domain_name: example.com
dns_port: 53
rndc_port: 953
dns_key: "YG70pT2h9xmn9DviT+E6H8MNlJ9wc7Xa9qpCOtuonj3oLJGBBA8udXUsJnoGdMSIIw2pk9lw9QL4rv8XQNBRLQ=="

mongodb_datadir_prefix: /data/
mongod_port: 2700
mongo_admin_pass: passme

mcollective_pass: passme
admin_pass: passme
amquser_pass: passme

vip: 192.168.2.15
vip_netmask: 255.255.255.0
[dns]
vm1

[mongo_servers]
vm1
vm2
vm3

[mq]
vm1
vm2
vm3

[broker]
vm6
vm7

[nodes]
vm4

[lvs]
vm5
vm3
After Width: | Height: | Size: 194 KiB |
After Width: | Height: | Size: 163 KiB |
After Width: | Height: | Size: 195 KiB |
Binary file not shown.
@ -0,0 +1,284 @@ |
||||
# Deploying a Highly Available production ready OpenShift Deployment |
||||
|
||||
- Requires Ansible 1.2 |
||||
- Expects CentOS/RHEL 6 hosts (64 bit) |
||||
|
||||
|
||||
## A Primer into OpenShift Architecture |
||||
|
||||
###OpenShift Overview |
||||
|
||||
OpenShift Origin enables the users to create, deploy and manage applications within the cloud, or in other words it provies a PaaS service (Platform as a service). This aleviates the developers from time consuming processes like machine provisioning and neccesary appliaction deployments. OpenShift provides disk space, CPU resources, memory, network connectivity, and various application like JBoss, python, MySQL etc... So that the developer can spent time on coding, testing his/her new application rather than spending time on figuring out how to get/configure those resources. |
||||
|
||||
###OpenShift Components |
||||
|
||||
Here's a list and a brief overview of the diffrent components used by OpenShift. |
||||
|
||||
- Broker: is the single point of contact for all application management activities. It is responsible for managing user logins, DNS, application state, and general orchestration of the application. Customers don't contact the broker directly; instead they use the Web console, CLI tools, or JBoss tools to interact with Broker over a REST based API. |
||||
|
||||
- Cartridges: provide the actual functionality necessary to run the user application. Openshift currently supports many language cartridges like JBoss, PHP, Ruby, etc., as well as many DB cartridges such as Postgres, Mysql, Mongo, etc. So incase a user need to deploy or create an php application with mysql as backend, he/she can just ask the broker to deploy a php and an mysql cartridgeon seperate gears. |
||||
|
||||
- Gear: Gears provide a resource-constrained container to run one or more cartridges. They limit the amount of RAM and disk space available to a cartridge. For simplicity we can consider this as a seperate vm or linux container for running application for a specific tenant, but in reality they are containers created by selinux contexts and pam namespacing. |
||||
|
||||
- Node: are the physical machines where gears are allocated. Gears are generally over-allocated on nodes since not all applications are active at the same time. |
||||
|
||||
- BSN (Broker support Nodes): are the nodes which run applications for OpenShift management. for example OpenShift uses mongodb to |
||||
store various user/app details, it also uses ActiveMQ for communincating with different application nodes via Mcollective. These nodes which hosts this supporing applications are called as broker support nodes. |
||||
|
||||
- Districts: are resource pools which can be used to seperate the application nodes based on performance or environments. so for example in a production deployment we can have two districts of nodes one which has resources with lower memory/cpu/disk requirements and another for high performance applications. |
||||
|
||||
### An Overview of application creation process in OpenShift. |
||||
|
||||
![Alt text](/images/app_deploy.png "App") |
||||
|
||||
|
||||
The above figure depicts an overview of diffrent steps invovled in creating an application in OpenShift. So if a developer wants to create or deploy a JBoss & Myql application the user can request the same from diffrent client tools that are available, the choice can be an Eclipse IDE or command line tool (rhc) or even a web browser. |
||||
|
||||
Once the user has instructed the client it makes a web service request to the Broker, the broker inturn check for available resources in the nodes and checks for gear and cartridge availability and if the resources are available two gears are created and JBoss and Mysql cartridges are deployed on them. The user is then notified and the user can then access the gears via ssh and start deploying the code. |
||||
|
||||
|
||||
### Deployment Diagram of OpenShift via Ansible. |
||||
|
||||
![Alt text](/images/arch.png "App") |
||||
|
||||
As the above diagram shows the Ansible playbooks deploys a highly available Openshift Paas environment. The deployment has two servers running lvs (piranha) for loadbalancing and ha for the brokers. Two instances of brokers also run for fault tolerence. Ansible also configures a dns server which provides name resolution for all the new apps created in the Openshift environment. |
||||
|
||||
Three bsn(broker support nodes) nodes provide a replicated mongodb deployment and the same nodes three instances of higly available activeMQ cluster. There is no limitation on the number of application nodes you can add, just add the hostnames of the application nodes in the ansible inventory and ansible will configure all of them for you. |
||||
|
||||
Note: As a best practise if you are deploying an actual production environemnt it is recommended to integrate with your internal DNS server for name resolution and use LDAP or integrate with an existing Active Directory for user authentication. |
||||
|
||||
## Deployment Steps for OpenShift via Ansible |
||||
|
||||
As a first step probably you may want to setup ansible, Assuming the Ansible host is Rhel variant install the EPEL package. |
||||
|
||||
yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm |
||||
|
||||
Once the epel repo is installed ansible can be installed via the following command. |
||||
|
||||
http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm |
||||
|
||||
It is recommended to use seperate machines for the different components of Openshift, but if you are testing it out you could combine the services but atleast four nodes are mandatory as the mongodb and activemq cluster needs atleast three for the cluster to work properly. |
||||
|
||||
As a first step checkout this repository onto you ansible management host and setup the inventory(hosts) as follows. |
||||
|
||||
git checkout https://github.com/ansible/ansible-examples.git |
||||
|
||||
[dns] |
||||
ec2-54-226-116-175.compute-1.amazonaws.com |
||||
|
||||
[mongo_servers] |
||||
ec2-54-226-116-175.compute-1.amazonaws.com |
||||
ec2-54-227-131-56.compute-1.amazonaws.com |
||||
ec2-54-227-169-137.compute-1.amazonaws.com |
||||
|
||||
[mq] |
||||
ec2-54-226-116-175.compute-1.amazonaws.com |
||||
ec2-54-227-131-56.compute-1.amazonaws.com |
||||
ec2-54-227-169-137.compute-1.amazonaws.com |
||||
|
||||
[broker] |
||||
ec2-54-227-63-48.compute-1.amazonaws.com |
||||
ec2-54-227-171-2.compute-1.amazonaws.com |
||||
|
||||
[nodes] |
||||
ec2-54-227-146-187.compute-1.amazonaws.com |
||||
|
||||
[lvs] |
||||
ec2-54-227-176-123.compute-1.amazonaws.com |
||||
ec2-54-227-177-87.compute-1.amazonaws.com |
||||
|
||||
Once the inventroy is setup with hosts in your environment the Openshift stack can be deployed easily by issuing the following command. |
||||
|
||||
ansible-playbook -i hosts site.yml |
||||
|
||||
|
||||
|
||||
### Verifying the Installation |
||||
|
||||
Once the stack has been succesfully deployed, we can check if the diffrent components has been deployed correctly. |
||||
|
||||
- MongoDB: Log in to any node running MongoDB and issue the following command; output similar to the below should be displayed, showing that the Mongo cluster is up with a primary node and two secondary nodes. |
||||
|
||||
|
||||
[root@ip-10-165-33-186 ~]# mongo 127.0.0.1:2700/admin -u admin -p passme |
||||
MongoDB shell version: 2.2.3 |
||||
connecting to: 127.0.0.1:2700/admin |
||||
openshift:PRIMARY> rs.status() |
||||
{ |
||||
"set" : "openshift", |
||||
"date" : ISODate("2013-07-21T18:56:27Z"), |
||||
"myState" : 1, |
||||
"members" : [ |
||||
{ |
||||
"_id" : 0, |
||||
"name" : "ip-10-165-33-186:2700", |
||||
"health" : 1, |
||||
"state" : 1, |
||||
"stateStr" : "PRIMARY", |
||||
"uptime" : 804, |
||||
"optime" : { |
||||
"t" : 1374432940000, |
||||
"i" : 1 |
||||
}, |
||||
"optimeDate" : ISODate("2013-07-21T18:55:40Z"), |
||||
"self" : true |
||||
}, |
||||
{ |
||||
"_id" : 1, |
||||
"name" : "ec2-54-227-131-56.compute-1.amazonaws.com:2700", |
||||
"health" : 1, |
||||
"state" : 2, |
||||
"stateStr" : "SECONDARY", |
||||
"uptime" : 431, |
||||
"optime" : { |
||||
"t" : 1374432940000, |
||||
"i" : 1 |
||||
}, |
||||
"optimeDate" : ISODate("2013-07-21T18:55:40Z"), |
||||
"lastHeartbeat" : ISODate("2013-07-21T18:56:26Z"), |
||||
"pingMs" : 0 |
||||
}, |
||||
{ |
||||
"_id" : 2, |
||||
"name" : "ec2-54-227-169-137.compute-1.amazonaws.com:2700", |
||||
"health" : 1, |
||||
"state" : 2, |
||||
"stateStr" : "SECONDARY", |
||||
"uptime" : 423, |
||||
"optime" : { |
||||
"t" : 1374432940000, |
||||
"i" : 1 |
||||
}, |
||||
"optimeDate" : ISODate("2013-07-21T18:55:40Z"), |
||||
"lastHeartbeat" : ISODate("2013-07-21T18:56:26Z"), |
||||
"pingMs" : 0 |
||||
} |
||||
], |
||||
"ok" : 1 |
||||
} |
||||
openshift:PRIMARY> |
||||
|
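The three-member layout shown above is what gives the replica set its fault tolerance: a primary can only be elected while a strict majority of members is reachable. A quick sketch of the arithmetic:

```python
# Majority needed for a MongoDB replica set of n members, and how many
# member failures the set can tolerate while still electing a primary.
def majority(n: int) -> int:
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    return n - majority(n)

assert majority(3) == 2            # the 3-node set above needs 2 reachable members
assert tolerated_failures(3) == 1  # so it survives the loss of any one node
assert tolerated_failures(2) == 0  # which is why 2 members buy no extra safety
```

This is why the playbooks insist on three mongo (and mq) hosts: two members would cost an extra machine without surviving any failure.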
||||
- ActiveMQ: To verify the cluster status of ActiveMQ, browse to the following URL on any one of the mq nodes and provide the username admin with the password specified in the group_vars/all file. The browser should bring up a page similar to the one shown below, listing the other two mq nodes in the cluster that this node has joined. |
||||
|
||||
http://ec2-54-226-116-175.compute-1.amazonaws.com:8161/admin/network.jsp |
||||
|
||||
|
||||
![Alt text](/images/mq.png "App") |
||||
|
||||
- Broker: To check whether the broker node has been installed/configured successfully, issue the following command on any broker node; output similar to the below should be displayed. Make sure there is a PASS at the end. |
||||
|
||||
[root@ip-10-118-127-30 ~]# oo-accept-broker -v |
||||
INFO: Broker package is: openshift-origin-broker |
||||
INFO: checking packages |
||||
INFO: checking package ruby |
||||
INFO: checking package rubygem-openshift-origin-common |
||||
INFO: checking package rubygem-openshift-origin-controller |
||||
INFO: checking package openshift-origin-broker |
||||
INFO: checking package ruby193-rubygem-rails |
||||
INFO: checking package ruby193-rubygem-passenger |
||||
INFO: checking package ruby193-rubygems |
||||
INFO: checking ruby requirements |
||||
INFO: checking ruby requirements for openshift-origin-controller |
||||
INFO: checking ruby requirements for config/application |
||||
INFO: checking that selinux modules are loaded |
||||
NOTICE: SELinux is Enforcing |
||||
NOTICE: SELinux is Enforcing |
||||
INFO: SELinux boolean httpd_unified is enabled |
||||
INFO: SELinux boolean httpd_can_network_connect is enabled |
||||
INFO: SELinux boolean httpd_can_network_relay is enabled |
||||
INFO: SELinux boolean httpd_run_stickshift is enabled |
||||
INFO: SELinux boolean allow_ypbind is enabled |
||||
INFO: checking firewall settings |
||||
INFO: checking mongo datastore configuration |
||||
INFO: Datastore Host: ec2-54-226-116-175.compute-1.amazonaws.com |
||||
INFO: Datastore Port: 2700 |
||||
INFO: Datastore User: admin |
||||
INFO: Datastore SSL: false |
||||
INFO: Datastore Password has been set to non-default |
||||
INFO: Datastore DB Name: admin |
||||
INFO: Datastore: mongo db service is remote |
||||
INFO: checking mongo db login access |
||||
INFO: mongo db login successful: ec2-54-226-116-175.compute-1.amazonaws.com:2700/admin --username admin |
||||
INFO: checking services |
||||
INFO: checking cloud user authentication |
||||
INFO: auth plugin = OpenShift::RemoteUserAuthService |
||||
INFO: auth plugin: OpenShift::RemoteUserAuthService |
||||
INFO: checking remote-user auth configuration |
||||
INFO: Auth trusted header: REMOTE_USER |
||||
INFO: Auth passthrough is enabled for OpenShift services |
||||
INFO: Got HTTP 200 response from https://localhost/broker/rest/api |
||||
INFO: Got HTTP 200 response from https://localhost/broker/rest/cartridges |
||||
INFO: Got HTTP 401 response from https://localhost/broker/rest/user |
||||
INFO: Got HTTP 401 response from https://localhost/broker/rest/domains |
||||
INFO: checking dynamic dns plugin |
||||
INFO: dynamic dns plugin = OpenShift::BindPlugin |
||||
INFO: checking bind dns plugin configuration |
||||
INFO: DNS Server: 10.165.33.186 |
||||
INFO: DNS Port: 53 |
||||
INFO: DNS Zone: example.com |
||||
INFO: DNS Domain Suffix: example.com |
||||
INFO: DNS Update Auth: key |
||||
INFO: DNS Key Name: example.com |
||||
INFO: DNS Key Value: ***** |
||||
INFO: adding txt record named testrecord.example.com to server 10.165.33.186: key0 |
||||
INFO: txt record successfully added |
||||
INFO: deleteing txt record named testrecord.example.com to server 10.165.33.186: key0 |
||||
INFO: txt record successfully deleted |
||||
INFO: checking messaging configuration |
||||
INFO: messaging plugin = OpenShift::MCollectiveApplicationContainerProxy |
||||
PASS |
||||
|
||||
- Node: To verify that the node installation/configuration has been successful, issue the following command and check for output similar to that shown below. |
||||
|
||||
[root@ip-10-152-154-18 ~]# oo-accept-node -v |
||||
INFO: using default accept-node extensions |
||||
INFO: loading node configuration file /etc/openshift/node.conf |
||||
INFO: loading resource limit file /etc/openshift/resource_limits.conf |
||||
INFO: finding external network device |
||||
INFO: checking node public hostname resolution |
||||
INFO: checking selinux status |
||||
INFO: checking selinux openshift-origin policy |
||||
INFO: checking selinux booleans |
||||
INFO: checking package list |
||||
INFO: checking services |
||||
INFO: checking kernel semaphores >= 512 |
||||
INFO: checking cgroups configuration |
||||
INFO: checking cgroups processes |
||||
INFO: checking filesystem quotas |
||||
INFO: checking quota db file selinux label |
||||
INFO: checking 0 user accounts |
||||
INFO: checking application dirs |
||||
INFO: checking system httpd configs |
||||
INFO: checking cartridge repository |
||||
PASS |
||||
|
||||
- LVS (load balancer): To check the load balancer, log in to the active load balancer and issue the following command; the output shows the two brokers across which the load balancer is balancing traffic. |
||||
|
||||
[root@ip-10-145-204-43 ~]# ipvsadm |
||||
IP Virtual Server version 1.2.1 (size=4096) |
||||
Prot LocalAddress:Port Scheduler Flags |
||||
-> RemoteAddress:Port Forward Weight ActiveConn InActConn |
||||
TCP ip-192-168-1-1.ec2.internal: rr |
||||
-> ec2-54-227-63-48.compute-1.a Route 1 0 0 |
||||
-> ec2-54-227-171-2.compute-1.a Route 2 0 0 |
||||
|
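In the output above the virtual service uses the rr (round-robin) scheduler, so new connections simply alternate between the two brokers (the Weight column would only come into play under a weighted scheduler such as wrr). A minimal sketch of that behaviour, illustrative only since real IPVS scheduling happens in the kernel:

```python
from itertools import cycle

# The two real servers behind the virtual IP, as in the ipvsadm output above.
brokers = ["ec2-54-227-63-48.compute-1.amazonaws.com",
           "ec2-54-227-171-2.compute-1.amazonaws.com"]

# Round-robin: each new connection goes to the next server in turn.
scheduler = cycle(brokers)

first_six = [next(scheduler) for _ in range(6)]
assert first_six == brokers * 3  # connections alternate evenly between the brokers
```

Because either broker can serve any request (application state lives in MongoDB), this even spread is safe, and LVS simply stops directing traffic to a broker that goes down.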
||||
## Creating an APP in Openshift |
||||
|
||||
To create an app in OpenShift, access the management console via any browser; either the VIP specified in group_vars/all or the IP address of any broker node can be used. |
||||
|
||||
https://<ip-of-broker-or-vip>/ |
||||
|
||||
The page will ask for a login; use demo/passme. Once logged in, follow the on-screen instructions to create your first application. |
||||
Note: the Python 2.6 cartridge is installed by default by the playbooks, so choose python2.6 as the cartridge. |
||||
|
||||
|
||||
## HA Tests |
||||
|
||||
A few tests that can be performed to verify high availability are: |
||||
|
||||
- Shut down any broker and try to create a new application. |
||||
- Shut down any one mongo/mq node and try to create a new application. |
||||
- Shut down either load-balancing machine; the management application should remain available via the virtual IP. |
||||
|
||||
|
||||
|
@ -0,0 +1,115 @@ |
||||
# config file for ansible -- http://ansibleworks.com/ |
||||
# ================================================== |
||||
|
||||
# nearly all parameters can be overridden in ansible-playbook |
||||
# or with command line flags. ansible will read ~/.ansible.cfg, |
||||
# ansible.cfg in the current working directory or |
||||
# /etc/ansible/ansible.cfg, whichever it finds first |
||||
|
||||
[defaults] |
||||
|
||||
# some basic default values... |
||||
|
||||
hostfile = /etc/ansible/hosts |
||||
library = /usr/share/ansible |
||||
remote_tmp = $HOME/.ansible/tmp |
||||
pattern = * |
||||
forks = 5 |
||||
poll_interval = 15 |
||||
sudo_user = root |
||||
#ask_sudo_pass = True |
||||
#ask_pass = True |
||||
transport = smart |
||||
remote_port = 22 |
||||
|
||||
# uncomment this to disable SSH key host checking |
||||
host_key_checking = False |
||||
|
||||
# change this for alternative sudo implementations |
||||
sudo_exe = sudo |
||||
|
||||
# what flags to pass to sudo |
||||
#sudo_flags = -H |
||||
|
||||
# SSH timeout |
||||
timeout = 10 |
||||
|
||||
# default user to use for playbooks if user is not specified |
||||
# (/usr/bin/ansible will use current user as default) |
||||
#remote_user = root |
||||
|
||||
# logging is off by default unless this path is defined |
||||
# if so defined, consider logrotate |
||||
#log_path = /var/log/ansible.log |
||||
|
||||
# default module name for /usr/bin/ansible |
||||
#module_name = command |
||||
|
||||
# use this shell for commands executed under sudo |
||||
# you may need to change this to bin/bash in rare instances |
||||
# if sudo is constrained |
||||
#executable = /bin/sh |
||||
|
||||
# if inventory variables overlap, does the higher precedence one win |
||||
# or are hash values merged together? The default is 'replace' but |
||||
# this can also be set to 'merge'. |
||||
#hash_behaviour = replace |
||||
|
||||
# How to handle variable replacement - as of 1.2, Jinja2 variable syntax is |
||||
# preferred, but we still support the old $variable replacement too. |
||||
# Turn off ${old_style} variables here if you like. |
||||
#legacy_playbook_variables = yes |
||||
|
||||
# list any Jinja2 extensions to enable here: |
||||
#jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n |
||||
|
||||
# if set, always use this private key file for authentication, same as |
||||
# if passing --private-key to ansible or ansible-playbook |
||||
#private_key_file = /path/to/file |
||||
|
||||
# format of string {{ ansible_managed }} available within Jinja2 |
||||
# templates indicates to users editing templates files will be replaced. |
||||
# replacing {file}, {host} and {uid} and strftime codes with proper values. |
||||
ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host} |
||||
|
||||
# by default (as of 1.3), Ansible will raise errors when attempting to dereference |
||||
# Jinja2 variables that are not set in templates or action lines. Uncomment this line |
||||
# to revert the behavior to pre-1.3. |
||||
#error_on_undefined_vars = False |
||||
|
||||
# set plugin path directories here, separate with colons |
||||
action_plugins = /usr/share/ansible_plugins/action_plugins |
||||
callback_plugins = /usr/share/ansible_plugins/callback_plugins |
||||
connection_plugins = /usr/share/ansible_plugins/connection_plugins |
||||
lookup_plugins = /usr/share/ansible_plugins/lookup_plugins |
||||
vars_plugins = /usr/share/ansible_plugins/vars_plugins |
||||
filter_plugins = /usr/share/ansible_plugins/filter_plugins |
||||
|
||||
# don't like cows? that's unfortunate. |
||||
# set to 1 if you don't want cowsay support or export ANSIBLE_NOCOWS=1 |
||||
#nocows = 1 |
||||
|
||||
# don't like colors either? |
||||
# set to 1 if you don't want colors, or export ANSIBLE_NOCOLOR=1 |
||||
#nocolor = 1 |
||||
|
||||
[paramiko_connection] |
||||
|
||||
# uncomment this line to cause the paramiko connection plugin to not record new host |
||||
# keys encountered. Increases performance on new host additions. Setting works independently of the |
||||
# host key checking setting above. |
||||
|
||||
#record_host_keys=False |
||||
|
||||
[ssh_connection] |
||||
|
||||
# ssh arguments to use |
||||
# Leaving off ControlPersist will result in poor performance, so use |
||||
# paramiko on older platforms rather than removing it |
||||
#ssh_args = -o ControlMaster=auto -o ControlPersist=60s |
||||
|
||||
# if True, make ansible use scp if the connection type is ssh |
||||
# (default is sftp) |
||||
#scp_if_ssh = True |
||||
|
||||
|
@ -0,0 +1,61 @@ |
||||
- hosts: localhost |
||||
connection: local |
||||
pre_tasks: |
||||
- fail: msg="Please make sure the variable id is specified and unique on the command line: -e id=uniquedev1" |
||||
when: id is not defined |
||||
|
||||
roles: |
||||
- role: ec2 |
||||
type: dns |
||||
ncount: 1 |
||||
|
||||
- role: ec2 |
||||
type: mq |
||||
ncount: 3 |
||||
|
||||
- role: ec2 |
||||
type: broker |
||||
ncount: 2 |
||||
|
||||
- role: ec2 |
||||
type: nodes |
||||
ncount: "{{ count }}" |
||||
|
||||
post_tasks: |
||||
- name: Wait for the instance to come up |
||||
wait_for: delay=10 host={{ item.public_dns_name }} port=22 state=started timeout=360 |
||||
with_items: ec2.instances |
||||
|
||||
- debug: msg="{{ groups }}" |
||||
|
||||
- hosts: all:!localhost |
||||
user: root |
||||
roles: |
||||
- role: common |
||||
|
||||
- hosts: dns |
||||
user: root |
||||
roles: |
||||
- role: dns |
||||
|
||||
- hosts: mongo_servers |
||||
user: root |
||||
roles: |
||||
- role: mongodb |
||||
|
||||
- hosts: mq |
||||
user: root |
||||
roles: |
||||
- role: mq |
||||
|
||||
- hosts: broker |
||||
user: root |
||||
roles: |
||||
- role: broker |
||||
|
||||
- hosts: nodes |
||||
user: root |
||||
roles: |
||||
- role: nodes |
||||
|
||||
|
@ -0,0 +1,23 @@ |
||||
- hosts: localhost |
||||
connection: local |
||||
pre_tasks: |
||||
- fail: msg="Please make sure the variable id is specified and unique on the command line: -e id=uniquedev1" |
||||
when: id is not defined |
||||
|
||||
roles: |
||||
- role: ec2_remove |
||||
type: dns |
||||
ncount: 1 |
||||
|
||||
- role: ec2_remove |
||||
type: mq |
||||
ncount: 3 |
||||
|
||||
- role: ec2_remove |
||||
type: broker |
||||
ncount: 2 |
||||
|
||||
- role: ec2_remove |
||||
type: nodes |
||||
ncount: "{{ count }}" |
||||
|
@ -0,0 +1 @@ |
||||
localhost |
@ -0,0 +1,32 @@ |
||||
--- |
||||
# Global Vars for OpenShift |
||||
|
||||
#EC2 specific varibles |
||||
ec2_access_key: "AKIAI7ALWJR5VQUFDNXQ" |
||||
ec2_secret_key: "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" |
||||
keypair: "axialkey" |
||||
instance_type: "m1.small" |
||||
image: "ami-bf5021d6" |
||||
group: "default" |
||||
count: 2 |
||||
ec2_elbs: oselb |
||||
region: "us-east-1" |
||||
zone: "us-east-1a" |
||||
|
||||
iface: '{{ ansible_default_ipv4.interface }}' |
||||
|
||||
domain_name: example.com |
||||
dns_port: 53 |
||||
rndc_port: 953 |
||||
dns_key: "YG70pT2h9xmn9DviT+E6H8MNlJ9wc7Xa9qpCOtuonj3oLJGBBA8udXUsJnoGdMSIIw2pk9lw9QL4rv8XQNBRLQ==" |
||||
|
||||
mongodb_datadir_prefix: /data/ |
||||
mongod_port: 2700 |
||||
mongo_admin_pass: passme |
||||
|
||||
mcollective_pass: passme |
||||
admin_pass: passme |
||||
amquser_pass: passme |
||||
|
||||
vip: 192.168.2.15 |
||||
vip_netmask: 255.255.255.0 |
@ -0,0 +1,23 @@ |
||||
[dns] |
||||
vm1 |
||||
[mongo_servers] |
||||
vm1 |
||||
vm2 |
||||
vm3 |
||||
|
||||
[mq] |
||||
vm1 |
||||
vm2 |
||||
vm3 |
||||
|
||||
[broker] |
||||
vm1 |
||||
vm2 |
||||
|
||||
[nodes] |
||||
vm4 |
||||
|
||||
[lvs] |
||||
vm5 |
||||
vm3 |
||||
|
After Width: | Height: | Size: 194 KiB |
After Width: | Height: | Size: 163 KiB |
After Width: | Height: | Size: 195 KiB |
@ -0,0 +1,2 @@ |
||||
#!/bin/bash |
||||
/usr/bin/scl enable ruby193 "gem install rspec --version 1.3.0 --no-rdoc --no-ri" ; /usr/bin/scl enable ruby193 "gem install fakefs --no-rdoc --no-ri" ; /usr/bin/scl enable ruby193 "gem install httpclient --version 2.3.2 --no-rdoc --no-ri" ; touch /opt/gem.init |
@ -0,0 +1 @@ |
||||
demo:k2WsPcYIRAaXs |
@ -0,0 +1,25 @@ |
||||
LoadModule auth_basic_module modules/mod_auth_basic.so |
||||
LoadModule authn_file_module modules/mod_authn_file.so |
||||
LoadModule authz_user_module modules/mod_authz_user.so |
||||
|
||||
# Turn the authenticated remote-user into an Apache environment variable for the console security controller |
||||
RewriteEngine On |
||||
RewriteCond %{LA-U:REMOTE_USER} (.+) |
||||
RewriteRule . - [E=RU:%1] |
||||
RequestHeader set X-Remote-User "%{RU}e" env=RU |
||||
|
||||
<Location /console> |
||||
AuthName "OpenShift Developer Console" |
||||
AuthType Basic |
||||
AuthUserFile /etc/openshift/htpasswd |
||||
require valid-user |
||||
|
||||
# The node->broker auth is handled in the Ruby code |
||||
BrowserMatch Openshift passthrough |
||||
Allow from env=passthrough |
||||
|
||||
Order Deny,Allow |
||||
Deny from all |
||||
Satisfy any |
||||
</Location> |
||||
|
@ -0,0 +1,39 @@ |
||||
LoadModule auth_basic_module modules/mod_auth_basic.so |
||||
LoadModule authn_file_module modules/mod_authn_file.so |
||||
LoadModule authz_user_module modules/mod_authz_user.so |
||||
|
||||
<Location /broker> |
||||
AuthName "OpenShift broker API" |
||||
AuthType Basic |
||||
AuthUserFile /etc/openshift/htpasswd |
||||
require valid-user |
||||
|
||||
SetEnvIfNoCase Authorization Bearer passthrough |
||||
|
||||
# The node->broker auth is handled in the Ruby code |
||||
BrowserMatchNoCase ^OpenShift passthrough |
||||
Allow from env=passthrough |
||||
|
||||
# Console traffic will hit the local port. mod_proxy will set this header automatically. |
||||
SetEnvIf X-Forwarded-For "^$" local_traffic=1 |
||||
# Turn the Console output header into the Apache environment variable for the broker remote-user plugin |
||||
SetEnvIf X-Remote-User "(..*)" REMOTE_USER=$1 |
||||
Allow from env=local_traffic |
||||
|
||||
Order Deny,Allow |
||||
Deny from all |
||||
Satisfy any |
||||
</Location> |
||||
|
||||
# The following APIs do not require auth: |
||||
<Location /broker/rest/cartridges*> |
||||
Allow from all |
||||
</Location> |
||||
|
||||
<Location /broker/rest/api*> |
||||
Allow from all |
||||
</Location> |
||||
|
||||
<Location /broker/rest/environment*> |
||||
Allow from all |
||||
</Location> |
@ -0,0 +1,27 @@ |
||||
-----BEGIN RSA PRIVATE KEY----- |
||||
MIIEpAIBAAKCAQEAyWM85VFDBOdWz16oC7j8Q7uHHbs3UVzRhHhHkSg8avK6ETMH |
||||
piXtevCU7KbiX7B2b0dedwYpvHQaKPCtfNm4blZHcDO5T1I//MyjwVNfqAQV4xin |
||||
qRj1oRyvvcTmn5H5yd9FgILqhRGjNEnBYadpL0vZrzXAJREEhh/G7021q010CF+E |
||||
KTTlSrbctGsoiUQKH1KfogsWsj8ygL1xVDgbCdvx+DnTw9E/YY+07/lDPOiXQFZm |
||||
7hXA8Q51ecjtFy0VmWDwjq3t7pP33tyjQkMc1BMXzHUiDVehNZ+I8ffzFltNNUL0 |
||||
Jw3AGwyCmE3Q9ml1tHIxpuvZExMCTALN6va0bwIDAQABAoIBAQDJPXpvqLlw3/92 |
||||
bx87v5mN0YneYuOPUVIorszNN8jQEkduwnCFTec2b8xRgx45AqwG3Ol/xM/V+qrd |
||||
eEvUs/fBgkQW0gj+Q7GfW5rTqA2xZou8iDmaF0/0tCbFWkoe8I8MdCkOl0Pkv1A4 |
||||
Au/UNqc8VO5tUCf2oj/EC2MOZLgCOTaerePnc+SFIf4TkerixPA9I4KYWwJQ2eXG |
||||
esSfR2f2EsUGfwOqKLEQU1JTMFkttbSAp42p+xpRaUh1FuyLHDlf3EeFmq5BPaFL |
||||
UnpzPDJTZtXjnyBrM9fb1ewiFW8x+EBmsdGooY7ptrWWhGzvxAsK9C0L2di3FBAy |
||||
gscM/rPBAoGBAPpt0xXtVWJu2ezoBfjNuqwMqGKFsOF3hi5ncOHW9nd6iZABD5Xt |
||||
KamrszxItkqiJpEacBCabgfo0FSLEOo+KqfTBK/r4dIoMwgcfhJOz+HvEC6+557n |
||||
GEFaL+evdLrxNrU41wvvfCzPK7pWaQGR1nrGohTyX5ZH4uA0Kmreof+PAoGBAM3e |
||||
IFPNrXuzhgShqFibWqJ8JdsSfMroV62aCqdJlB92lxx8JJ2lEiAMPfHmAtF1g01r |
||||
oBUcJcPfuBZ0bC1KxIvtz9d5m1f2geNGH/uwVULU3skhPBwqAs2s607/Z1S+/WRr |
||||
Af1rAs2KTJ7BDCQo8g2TPUO+sDrUzR6joxOy/Y0hAoGAbWaI7m1N/cBbZ4k9AqIt |
||||
SHgHH3M0AGtMrPz3bVGRPkTDz6sG+gIvTzX5CP7i09veaUlZZ4dvRflI+YX/D7W0 |
||||
wLgItimf70UsdgCseqb/Xb4oHaO8X8io6fPSNa6KmhhCRAzetRIb9x9SBQc2vD7P |
||||
qbcYm3n+lBI3ZKalWSaFMrUCgYEAsV0xfuISGCRIT48zafuWr6zENKUN7QcWGxQ/ |
||||
H3eN7TmP4VO3fDZukjvZ1qHzRaC32ijih61zf/ksMfRmCvOCuIfP7HXx92wC5dtR |
||||
zNdT7btWofRHRICRX8AeDzaOQP43c5+Z3Eqo5IrFjnUFz9WTDU0QmGAeluEmQ8J5 |
||||
yowIVOECgYB97fGLuEBSlKJCvmWp6cTyY+mXbiQjYYGBbYAiJWnwaK9U3bt71we/ |
||||
MQNzBHAe0mPCReVHSr68BfoWY/crV+7RKSBgrDpR0Y0DI1yn0LXXZfd3NNrTVaAb |
||||
rScbJ8Xe3qcLi3QZ3BxaWfub08Wm57wjDBBqGZyExYjjlGSpjBpVJQ== |
||||
-----END RSA PRIVATE KEY----- |
@ -0,0 +1,9 @@ |
||||
-----BEGIN PUBLIC KEY----- |
||||
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyWM85VFDBOdWz16oC7j8 |
||||
Q7uHHbs3UVzRhHhHkSg8avK6ETMHpiXtevCU7KbiX7B2b0dedwYpvHQaKPCtfNm4 |
||||
blZHcDO5T1I//MyjwVNfqAQV4xinqRj1oRyvvcTmn5H5yd9FgILqhRGjNEnBYadp |
||||
L0vZrzXAJREEhh/G7021q010CF+EKTTlSrbctGsoiUQKH1KfogsWsj8ygL1xVDgb |
||||
Cdvx+DnTw9E/YY+07/lDPOiXQFZm7hXA8Q51ecjtFy0VmWDwjq3t7pP33tyjQkMc |
||||
1BMXzHUiDVehNZ+I8ffzFltNNUL0Jw3AGwyCmE3Q9ml1tHIxpuvZExMCTALN6va0 |
||||
bwIDAQAB |
||||
-----END PUBLIC KEY----- |
@ -0,0 +1,74 @@ |
||||
# |
||||
# This is the Apache server configuration file providing SSL support. |
||||
# It contains the configuration directives to instruct the server how to |
||||
# serve pages over an https connection. For detailing information about these |
||||
# directives see <URL:http://httpd.apache.org/docs/2.2/mod/mod_ssl.html> |
||||
# |
||||
# Do NOT simply read the instructions in here without understanding |
||||
# what they do. They're here only as hints or reminders. If you are unsure |
||||
# consult the online docs. You have been warned. |
||||
# |
||||
|
||||
LoadModule ssl_module modules/mod_ssl.so |
||||
|
||||
# |
||||
# When we also provide SSL we have to listen to the |
||||
# the HTTPS port in addition. |
||||
# |
||||
Listen 443 |
||||
|
||||
## |
||||
## SSL Global Context |
||||
## |
||||
## All SSL configuration in this context applies both to |
||||
## the main server and all SSL-enabled virtual hosts. |
||||
## |
||||
|
||||
# Pass Phrase Dialog: |
||||
# Configure the pass phrase gathering process. |
||||
# The filtering dialog program (`builtin' is a internal |
||||
# terminal dialog) has to provide the pass phrase on stdout. |
||||
SSLPassPhraseDialog builtin |
||||
|
||||
# Inter-Process Session Cache: |
||||
# Configure the SSL Session Cache: First the mechanism |
||||
# to use and second the expiring timeout (in seconds). |
||||
SSLSessionCache shmcb:/var/cache/mod_ssl/scache(512000) |
||||
SSLSessionCacheTimeout 300 |
||||
|
||||
# Semaphore: |
||||
# Configure the path to the mutual exclusion semaphore the |
||||
# SSL engine uses internally for inter-process synchronization. |
||||
SSLMutex default |
||||
|
||||
# Pseudo Random Number Generator (PRNG): |
||||
# Configure one or more sources to seed the PRNG of the |
||||
# SSL library. The seed data should be of good random quality. |
||||
# WARNING! On some platforms /dev/random blocks if not enough entropy |
||||
# is available. This means you then cannot use the /dev/random device |
||||
# because it would lead to very long connection times (as long as |
||||
# it requires to make more entropy available). But usually those |
||||
# platforms additionally provide a /dev/urandom device which doesn't |
||||
# block. So, if available, use this one instead. Read the mod_ssl User |
||||
# Manual for more details. |
||||
SSLRandomSeed startup file:/dev/urandom 256 |
||||
SSLRandomSeed connect builtin |
||||
#SSLRandomSeed startup file:/dev/random 512 |
||||
#SSLRandomSeed connect file:/dev/random 512 |
||||
#SSLRandomSeed connect file:/dev/urandom 512 |
||||
|
||||
# |
||||
# Use "SSLCryptoDevice" to enable any supported hardware |
||||
# accelerators. Use "openssl engine -v" to list supported |
||||
# engine names. NOTE: If you enable an accelerator and the |
||||
# server does not start, consult the error logs and ensure |
||||
# your accelerator is functioning properly. |
||||
# |
||||
SSLCryptoDevice builtin |
||||
#SSLCryptoDevice ubsec |
||||
|
||||
## |
||||
## SSL Virtual Host Context |
||||
## |
||||
|
||||
|
@ -0,0 +1,9 @@ |
||||
--- |
||||
# handlers for broker |
||||
|
||||
- name: restart broker |
||||
service: name=openshift-broker state=restarted |
||||
|
||||
- name: restart console |
||||
service: name=openshift-console state=restarted |
||||
|
@ -0,0 +1,104 @@ |
||||
--- |
||||
# Tasks for the Openshift broker installation |
||||
|
||||
- name: Install mcollective |
||||
yum: name=mcollective-client |
||||
|
||||
- name: Copy the mcollective configuration file |
||||
template: src=client.cfg.j2 dest=/etc/mcollective/client.cfg |
||||
|
||||
- name: Install the broker components |
||||
yum: name="{{ item }}" state=installed |
||||
with_items: "{{ broker_packages }}" |
||||
|
||||
- name: Copy the rhc client configuration file |
||||
template: src=express.conf.j2 dest=/etc/openshift/express.conf |
||||
register: last_run |
||||
|
||||
- name: Install the gems for rhc |
||||
script: gem.sh |
||||
when: last_run.changed |
||||
|
||||
- name: create the file for mcollective logging |
||||
copy: content="" dest=/var/log/mcollective-client.log owner=apache group=root |
||||
|
||||
- name: SELinux - configure sebooleans |
||||
seboolean: name="{{ item }}" state=true persistent=yes |
||||
with_items: |
||||
- httpd_unified |
||||
- httpd_execmem |
||||
- httpd_can_network_connect |
||||
- httpd_can_network_relay |
||||
- httpd_run_stickshift |
||||
- named_write_master_zones |
||||
- httpd_verify_dns |
||||
- allow_ypbind |
||||
|
||||
- name: copy the auth keyfiles |
||||
copy: src="{{ item }}" dest="/etc/openshift/{{ item }}" |
||||
with_items: |
||||
- server_priv.pem |
||||
- server_pub.pem |
||||
- htpasswd |
||||
|
||||
- name: copy the local ssh keys |
||||
copy: src="~/.ssh/{{ item }}" dest="~/.ssh/{{ item }}" |
||||
with_items: |
||||
- id_rsa.pub |
||||
- id_rsa |
||||
|
||||
- name: copy the local ssh keys to openshift dir |
||||
copy: src="~/.ssh/{{ item }}" dest="/etc/openshift/rsync_{{ item }}" |
||||
with_items: |
||||
- id_rsa.pub |
||||
- id_rsa |
||||
|
||||
- name: Copy the broker configuration file |
||||
template: src=broker.conf.j2 dest=/etc/openshift/broker.conf |
||||
notify: restart broker |
||||
|
||||
- name: Copy the console configuration file |
||||
template: src=console.conf.j2 dest=/etc/openshift/console.conf |
||||
notify: restart console |
||||
|
||||
- name: create the file for ssl.conf |
||||
copy: src=ssl.conf dest=/etc/httpd/conf.d/ssl.conf owner=apache group=root |
||||
|
||||
- name: copy the configuration file for openstack plugins |
||||
template: src="{{ item }}" dest="/etc/openshift/plugins.d/{{ item }}" |
||||
with_items: |
||||
- openshift-origin-auth-remote-user.conf |
||||
- openshift-origin-dns-bind.conf |
||||
- openshift-origin-msg-broker-mcollective.conf |
||||
|
||||
- name: Bundle the ruby gems |
||||
shell: chdir=/var/www/openshift/broker/ /usr/bin/scl enable ruby193 "bundle show"; touch bundle.init |
||||
creates=/var/www/openshift/broker/bundle.init |
||||
|
||||
- name: Copy the httpd configuration file |
||||
copy: src=openshift-origin-auth-remote-user.conf dest=/var/www/openshift/broker/httpd/conf.d/openshift-origin-auth-remote-user.conf |
||||
notify: restart broker |
||||
|
||||
- name: Copy the httpd configuration file for console |
||||
copy: src=openshift-origin-auth-remote-basic-user.conf dest=/var/www/openshift/console/httpd/conf.d/openshift-origin-auth-remote-basic-user.conf |
||||
notify: restart console |
||||
|
||||
- name: Fix the selinux contexts on several files |
||||
shell: fixfiles -R rubygem-passenger restore; fixfiles -R mod_passenger restore; restorecon -rv /var/run; restorecon -rv /usr/share/rubygems/gems/passenger-*; touch /opt/context.fixed creates=/opt/context.fixed |
||||
|
||||
- name: start the http and broker service |
||||
service: name="{{ item }}" state=started enabled=yes |
||||
with_items: |
||||
- httpd |
||||
- openshift-broker |
||||
|
||||
- name: Install the rhc client |
||||
gem: name={{ item }} state=latest |
||||
with_items: |
||||
- rdoc |
||||
- rhc |
||||
ignore_errors: yes |
||||
|
||||
- name: copy the resolv.conf |
||||
template: src=resolv.conf.j2 dest=/etc/resolv.conf |
||||
|
@ -0,0 +1,47 @@ |
||||
# Domain suffix to use for applications (Must match node config) |
||||
CLOUD_DOMAIN="{{ domain_name }}" |
||||
# Comma separated list of valid gear sizes |
||||
VALID_GEAR_SIZES="small,medium" |
||||
|
||||
# Default number of gears to assign to a new user |
||||
DEFAULT_MAX_GEARS="100" |
||||
# Default gear size for a new gear |
||||
DEFAULT_GEAR_SIZE="small" |
||||
|
||||
#Broker datastore configuration |
||||
MONGO_REPLICA_SETS=true |
||||
# Replica set example: "<host-1>:<port-1> <host-2>:<port-2> ..." |
||||
{% set hosts = '' %} |
||||
{% for host in groups['mongo_servers'] %} |
||||
{% if loop.last %} |
||||
{% set hosts = hosts + host + ':' ~ mongod_port + ' ' %} |
||||
|
||||
MONGO_HOST_PORT="{{ hosts }}" |
||||
|
||||
{% else %} |
||||
{% set hosts = hosts + host + ':' ~ mongod_port + ', ' %} |
||||
{% endif %} |
||||
{% endfor %} |
||||
|
||||
MONGO_USER="admin" |
||||
MONGO_PASSWORD="{{ mongo_admin_pass }}" |
||||
MONGO_DB="admin" |
||||
|
||||
#Enables gear/filesystem resource usage tracking |
||||
ENABLE_USAGE_TRACKING_DATASTORE="false" |
||||
#Log resource usage information to syslog |
||||
ENABLE_USAGE_TRACKING_SYSLOG="false" |
||||
|
||||
#Enable all broker analytics |
||||
ENABLE_ANALYTICS="false" |
||||
|
||||
#Enables logging of REST API operations and success/failure |
||||
ENABLE_USER_ACTION_LOG="true" |
||||
USER_ACTION_LOG_FILE="/var/log/openshift/broker/user_action.log" |
||||
|
||||
AUTH_SALT="{{ auth_salt }}" |
||||
AUTH_PRIVKEYFILE="/etc/openshift/server_priv.pem" |
||||
AUTH_PRIVKEYPASS="" |
||||
AUTH_PUBKEYFILE="/etc/openshift/server_pub.pem" |
||||
AUTH_RSYNC_KEY_FILE="/etc/openshift/rsync_id_rsa" |
||||
SESSION_SECRET="{{ session_secret }}" |
@ -0,0 +1,25 @@ |
||||
topicprefix = /topic/ |
||||
main_collective = mcollective |
||||
collectives = mcollective |
||||
libdir = /opt/rh/ruby193/root/usr/libexec/mcollective |
||||
logfile = /var/log/mcollective-client.log |
||||
loglevel = debug |
||||
direct_addressing = 1 |
||||
|
||||
# Plugins |
||||
securityprovider = psk |
||||
plugin.psk = unset |
||||
|
||||
connector = stomp |
||||
plugin.stomp.pool.size = {{ groups['mq']|length() }} |
||||
{% for host in groups['mq'] %} |
||||
|
||||
plugin.stomp.pool.host{{ loop.index }} = {{ hostvars[host].ansible_hostname }} |
||||
plugin.stomp.pool.port{{ loop.index }} = 61613 |
||||
plugin.stomp.pool.user{{ loop.index }} = mcollective |
||||
plugin.stomp.pool.password{{ loop.index }} = {{ mcollective_pass }} |
||||
|
||||
{% endfor %} |
||||
|
||||
|
||||
|
@ -0,0 +1,8 @@ |
||||
BROKER_URL=http://localhost:8080/broker/rest |
||||
|
||||
CONSOLE_SECURITY=remote_user |
||||
|
||||
REMOTE_USER_HEADER=REMOTE_USER |
||||
|
||||
REMOTE_USER_COPY_HEADERS=X-Remote-User |
||||
SESSION_SECRET="{{ session_secret }}" |
@ -0,0 +1,8 @@ |
||||
# Remote API server |
||||
libra_server = '{{ ansible_hostname }}' |
||||
|
||||
# Logging |
||||
debug = 'false' |
||||
|
||||
# Timeout |
||||
#timeout = '10' |
@ -0,0 +1,4 @@ |
||||
# Settings related to the Remote-User variant of an OpenShift auth plugin |
||||
|
||||
# The name of the header containing the trusted username |
||||
TRUSTED_HEADER="REMOTE_USER" |
@ -0,0 +1,16 @@ |
||||
# Settings related to the bind variant of an OpenShift DNS plugin |
||||
|
||||
# The DNS server |
||||
BIND_SERVER="{{ hostvars[groups['dns'][0]].ansible_default_ipv4.address }}" |
||||
|
||||
# The DNS server's port |
||||
BIND_PORT=53 |
||||
|
||||
# The key name for your zone |
||||
BIND_KEYNAME="{{ domain_name }}" |
||||
|
||||
# base64-encoded key, most likely from /var/named/example.com.key. |
||||
BIND_KEYVALUE="{{ dns_key }}" |
||||
|
||||
# The base zone for the DNS server |
||||
BIND_ZONE="{{ domain_name }}" |
@ -0,0 +1,25 @@
# Some settings to configure how mcollective handles gear placement on nodes:

# Use districts when placing gears and moving them between hosts. Should be
# true except for particular dev/test situations.
DISTRICTS_ENABLED=true

# Require new gears to be placed in a district; when true, placement will fail
# if there isn't a district with capacity and the right gear profile.
DISTRICTS_REQUIRE_FOR_APP_CREATE=false

# Used as the default max gear capacity when creating a district.
DISTRICTS_MAX_CAPACITY=6000

# It is unlikely these will need to be changed
DISTRICTS_FIRST_UID=1000
MCOLLECTIVE_DISCTIMEOUT=5
MCOLLECTIVE_TIMEOUT=180
MCOLLECTIVE_VERBOSE=false
MCOLLECTIVE_PROGRESS_BAR=0
MCOLLECTIVE_CONFIG="/etc/mcollective/client.cfg"

# Place gears on nodes with the requested profile; should be true, as
# a false value means gear profiles are ignored and gears are placed arbitrarily.
NODE_PROFILE_ENABLED=true

@ -0,0 +1,2 @@
search {{ domain_name }}
nameserver {{ hostvars[groups['dns'][0]].ansible_default_ipv4.address }}
@ -0,0 +1,82 @@
---
# variables for broker

broker_packages:
  - mongodb-devel
  - openshift-origin-broker
  - openshift-origin-broker-util
  - rubygem-openshift-origin-dns-nsupdate
  - rubygem-openshift-origin-auth-mongo
  - rubygem-openshift-origin-auth-remote-user
  - rubygem-openshift-origin-controller
  - rubygem-openshift-origin-msg-broker-mcollective
  - rubygem-openshift-origin-dns-bind
  - rubygem-passenger
  - ruby193-mod_passenger
  - mysql-devel
  - openshift-origin-console
  - ruby193-rubygem-actionmailer
  - ruby193-rubygem-actionpack
  - ruby193-rubygem-activemodel
  - ruby193-rubygem-activerecord
  - ruby193-rubygem-activeresource
  - ruby193-rubygem-activesupport
  - ruby193-rubygem-arel
  - ruby193-rubygem-bigdecimal
  - ruby193-rubygem-net-ssh
  - ruby193-rubygem-commander
  - ruby193-rubygem-archive-tar-minitar
  - ruby193-rubygem-bson
  - ruby193-rubygem-bson_ext
  - ruby193-rubygem-builder
  - ruby193-rubygem-bundler
  - ruby193-rubygem-cucumber
  - ruby193-rubygem-diff-lcs
  - ruby193-rubygem-dnsruby
  - ruby193-rubygem-erubis
  - ruby193-rubygem-gherkin
  - ruby193-rubygem-hike
  - ruby193-rubygem-i18n
  - ruby193-rubygem-journey
  - ruby193-rubygem-json
  - ruby193-rubygem-mail
  - ruby193-rubygem-metaclass
  - ruby193-rubygem-mime-types
  - ruby193-rubygem-minitest
  - ruby193-rubygem-mocha
  - ruby193-rubygem-mongo
  - ruby193-rubygem-mongoid
  - ruby193-rubygem-moped
  - ruby193-rubygem-multi_json
  - ruby193-rubygem-open4
  - ruby193-rubygem-origin
  - ruby193-rubygem-parseconfig
  - ruby193-rubygem-polyglot
  - ruby193-rubygem-rack
  - ruby193-rubygem-rack-cache
  - ruby193-rubygem-rack-ssl
  - ruby193-rubygem-rack-test
  - ruby193-rubygem-rails
  - ruby193-rubygem-railties
  - ruby193-rubygem-rake
  - ruby193-rubygem-rdoc
  - ruby193-rubygem-regin
  - ruby193-rubygem-rest-client
  - ruby193-rubygem-simplecov
  - ruby193-rubygem-simplecov-html
  - ruby193-rubygem-sprockets
  - ruby193-rubygem-state_machine
  - ruby193-rubygem-stomp
  - ruby193-rubygem-systemu
  - ruby193-rubygem-term-ansicolor
  - ruby193-rubygem-thor
  - ruby193-rubygem-tilt
  - ruby193-rubygem-treetop
  - ruby193-rubygem-tzinfo
  - ruby193-rubygem-xml-simple

auth_salt: "ceFm8El0mTLu7VLGpBFSFfmxeID+UoNfsQrAKs8dhKSQ/uAGwjWiz3VdyuB1fW/WR+R1q7yXW+sxSm9wkmuqVA=="
session_secret: "25905ebdb06d8705025531bb5cb45335c53c4f36ee534719ffffd0fe28808395d80449c6c69bc079e2ac14c8ff66639bba1513332ef9ad5ed42cc0bb21e07134"

@ -0,0 +1,29 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.5 (GNU/Linux)

mQINBEvSKUIBEADLGnUj24ZVKW7liFN/JA5CgtzlNnKs7sBg7fVbNWryiE3URbn1
JXvrdwHtkKyY96/ifZ1Ld3lE2gOF61bGZ2CWwJNee76Sp9Z+isP8RQXbG5jwj/4B
M9HK7phktqFVJ8VbY2jfTjcfxRvGM8YBwXF8hx0CDZURAjvf1xRSQJ7iAo58qcHn
XtxOAvQmAbR9z6Q/h/D+Y/PhoIJp1OV4VNHCbCs9M7HUVBpgC53PDcTUQuwcgeY6
pQgo9eT1eLNSZVrJ5Bctivl1UcD6P6CIGkkeT2gNhqindRPngUXGXW7Qzoefe+fV
QqJSm7Tq2q9oqVZ46J964waCRItRySpuW5dxZO34WM6wsw2BP2MlACbH4l3luqtp
Xo3Bvfnk+HAFH3HcMuwdaulxv7zYKXCfNoSfgrpEfo2Ex4Im/I3WdtwME/Gbnwdq
3VJzgAxLVFhczDHwNkjmIdPAlNJ9/ixRjip4dgZtW8VcBCrNoL+LhDrIfjvnLdRu
vBHy9P3sCF7FZycaHlMWP6RiLtHnEMGcbZ8QpQHi2dReU1wyr9QgguGU+jqSXYar
1yEcsdRGasppNIZ8+Qawbm/a4doT10TEtPArhSoHlwbvqTDYjtfV92lC/2iwgO6g
YgG9XrO4V8dV39Ffm7oLFfvTbg5mv4Q/E6AWo/gkjmtxkculbyAvjFtYAQARAQAB
tCFFUEVMICg2KSA8ZXBlbEBmZWRvcmFwcm9qZWN0Lm9yZz6JAjYEEwECACAFAkvS
KUICGw8GCwkIBwMCBBUCCAMEFgIDAQIeAQIXgAAKCRA7Sd8qBgi4lR/GD/wLGPv9
qO39eyb9NlrwfKdUEo1tHxKdrhNz+XYrO4yVDTBZRPSuvL2yaoeSIhQOKhNPfEgT
9mdsbsgcfmoHxmGVcn+lbheWsSvcgrXuz0gLt8TGGKGGROAoLXpuUsb1HNtKEOwP
Q4z1uQ2nOz5hLRyDOV0I2LwYV8BjGIjBKUMFEUxFTsL7XOZkrAg/WbTH2PW3hrfS
WtcRA7EYonI3B80d39ffws7SmyKbS5PmZjqOPuTvV2F0tMhKIhncBwoojWZPExft
HpKhzKVh8fdDO/3P1y1Fk3Cin8UbCO9MWMFNR27fVzCANlEPljsHA+3Ez4F7uboF
p0OOEov4Yyi4BEbgqZnthTG4ub9nyiupIZ3ckPHr3nVcDUGcL6lQD/nkmNVIeLYP
x1uHPOSlWfuojAYgzRH6LL7Idg4FHHBA0to7FW8dQXFIOyNiJFAOT2j8P5+tVdq8
wB0PDSH8yRpn4HdJ9RYquau4OkjluxOWf0uRaS//SUcCZh+1/KBEOmcvBHYRZA5J
l/nakCgxGb2paQOzqqpOcHKvlyLuzO5uybMXaipLExTGJXBlXrbbASfXa/yGYSAG
iVrGz9CE6676dMlm8F+s3XXE13QZrXmjloc6jwOljnfAkjTGXjiB7OULESed96MR
XtfLk0W5Ab9pd7tKDR6QHI7rgHXfCopRnZ2VVQ==
=V/6I
-----END PGP PUBLIC KEY BLOCK-----
@ -0,0 +1,26 @@
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 6 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1
@ -0,0 +1,13 @@
[openshift_support]
name=Extra Packages for OpenShift - $basearch
baseurl=https://mirror.openshift.com/pub/openshift/release/2/rhel-6/dependencies/x86_64/
failovermethod=priority
enabled=1
gpgcheck=0

[openshift]
name=Packages for OpenShift - $basearch
baseurl=https://mirror.openshift.com/pub/openshift/release/2/rhel-6/packages/x86_64/
failovermethod=priority
enabled=1
gpgcheck=0
@ -0,0 +1,10 @@
# Setup PATH, LD_LIBRARY_PATH and MANPATH for ruby-1.9
ruby19_dir=$(dirname `scl enable ruby193 "which ruby"`)
export PATH=$ruby19_dir:$PATH

ruby19_ld_libs=$(scl enable ruby193 "printenv LD_LIBRARY_PATH")
export LD_LIBRARY_PATH=$ruby19_ld_libs:$LD_LIBRARY_PATH

ruby19_manpath=$(scl enable ruby193 "printenv MANPATH")
export MANPATH=$ruby19_manpath:$MANPATH

@ -0,0 +1,5 @@
---
# Handler to restart iptables

- name: restart iptables
  service: name=iptables state=restarted
@ -0,0 +1,42 @@
---
# Common tasks across nodes

- name: Install common packages
  yum: name={{ item }} state=installed
  with_items:
    - libselinux-python
    - policycoreutils
    - policycoreutils-python
    - ntp
    - ruby-devel

- name: make sure we have the right time
  shell: ntpdate -u 0.centos.pool.ntp.org

- name: start the ntp service
  service: name=ntpd state=started enabled=yes

- name: Create the hosts file for all machines
  template: src=hosts.j2 dest=/etc/hosts

- name: Create the EPEL Repository.
  copy: src=epel.repo.j2 dest=/etc/yum.repos.d/epel.repo

- name: Create the OpenShift Repository.
  copy: src=openshift.repo dest=/etc/yum.repos.d/openshift.repo

- name: Create the GPG key for EPEL
  copy: src=RPM-GPG-KEY-EPEL-6 dest=/etc/pki/rpm-gpg

- name: SELinux Enforcing (Targeted)
  selinux: policy=targeted state=enforcing

- name: copy the file for the ruby193 profile
  copy: src=scl193.sh dest=/etc/profile.d/scl193.sh mode=0755

- name: copy the file for the mcollective profile
  copy: src=scl193.sh dest=/etc/sysconfig/mcollective mode=0755

- name: Create the iptables file
  template: src=iptables.j2 dest=/etc/sysconfig/iptables
  notify: restart iptables
@ -0,0 +1,4 @@
127.0.0.1 localhost
{% for host in groups['all'] %}
{{ hostvars[host]['ansible_' + iface].ipv4.address }} {{ host }} {{ hostvars[host].ansible_hostname }}
{% endfor %}
@ -0,0 +1,52 @@
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.

{% if 'broker' in group_names %}
*nat
-A PREROUTING -d {{ vip }}/32 -p tcp -m tcp --dport 443 -j REDIRECT
COMMIT
{% endif %}

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
{% if 'mongo_servers' in group_names %}
-A INPUT -p tcp --dport {{ mongod_port }} -j ACCEPT
{% endif %}
{% if 'mq' in group_names %}
-A INPUT -p tcp --dport 61613 -j ACCEPT
-A INPUT -p tcp --dport 61616 -j ACCEPT
-A INPUT -p tcp --dport 8161 -j ACCEPT
{% endif %}
{% if 'broker' in group_names %}
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
{% endif %}
{% if 'lvs' in group_names %}
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
{% endif %}
{% if 'nodes' in group_names %}
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 35531:65535 -j ACCEPT
{% endif %}
{% if 'dns' in group_names %}
-A INPUT -p tcp --dport {{ dns_port }} -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
-A INPUT -p udp --dport {{ dns_port }} -j ACCEPT
-A INPUT -p udp --dport {{ rndc_port }} -j ACCEPT
-A INPUT -p tcp --dport {{ rndc_port }} -j ACCEPT
{% endif %}
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

@ -0,0 +1,6 @@
---
# handler for dns

- name: restart named
  service: name=named state=restarted enabled=yes

@ -0,0 +1,26 @@
---
# tasks for the bind server

- name: Install the bind packages
  yum: name={{ item }} state=installed
  with_items:
    - bind
    - bind-utils

- name: Copy the key for dynamic dns updates to the domain
  template: src=keyfile.j2 dest=/var/named/{{ domain_name }}.key owner=named group=named

- name: Copy the forwarders file for bind
  template: src=forwarders.conf.j2 dest=/var/named/forwarders.conf owner=named group=named

- name: copy the db file for the domain
  template: src=domain.db.j2 dest=/var/named/dynamic/{{ domain_name }}.db owner=named group=named

- name: copy the named.conf file
  template: src=named.conf.j2 dest=/etc/named.conf owner=root group=named mode=0755
  notify: restart named

- name: restore the selinux contexts
  shell: restorecon -v /var/named/forwarders.conf; restorecon -rv /var/named; restorecon /etc/named.conf; touch /opt/named.init creates=/opt/named.init

@ -0,0 +1,16 @@
$ORIGIN .
$TTL 1  ; 1 second
{{ domain_name }} IN SOA ns1.{{ domain_name }}. hostmaster.{{ domain_name }}. (
        2002100404 ; serial
        10800      ; refresh (3 hours)
        3600       ; retry (1 hour)
        3600000    ; expire (5 weeks 6 days 16 hours)
        7200       ; minimum (2 hours)
        )
    NS ns1.{{ domain_name }}.
$ORIGIN {{ domain_name }}.
{% for host in groups['nodes'] %}
{{ hostvars[host].ansible_hostname }} A {{ hostvars[host].ansible_default_ipv4.address }}
{% endfor %}
ns1 A {{ ansible_default_ipv4.address }}

@ -0,0 +1 @@
forwarders { {{ forwarders|join(';') }}; } ;
@ -0,0 +1,6 @@
key {{ domain_name }} {

    algorithm HMAC-MD5;
    secret "{{ dns_key }}";

};
@ -0,0 +1,57 @@
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//

options {
    listen-on port {{ dns_port }} { any; };
    directory "/var/named";
    dump-file "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    allow-query { any; };
    recursion yes;

    dnssec-enable yes;
    dnssec-validation yes;
    dnssec-lookaside auto;

    /* Path to ISC DLV key */
    bindkeys-file "/etc/named.iscdlv.key";
    include "forwarders.conf";

    managed-keys-directory "/var/named/dynamic";
};

logging {
    channel default_debug {
        file "data/named.run";
        severity debug;
    };
};
include "{{ domain_name }}.key";

controls {
    inet * port {{ rndc_port }} allow { any; }
    keys { {{ domain_name }}; };
};

zone "{{ domain_name }}" IN {
    type master;
    file "dynamic/{{ domain_name }}.db";
    allow-update { key {{ domain_name }}; };
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

zone "." IN {
    type hint;
    file "named.ca";
};

@ -0,0 +1,8 @@
---
# Variables for the bind daemon

forwarders:
  - 8.8.8.8
  - 8.8.4.4

@ -0,0 +1,30 @@
---
- name: Create Instance
  ec2: >
    region="{{ region }}"
    zone="{{ zone }}"
    id="{{ id + '_' + type }}"
    ec2_access_key="{{ ec2_access_key }}"
    ec2_secret_key="{{ ec2_secret_key }}"
    keypair="{{ keypair }}"
    instance_type="{{ instance_type }}"
    image="{{ image }}"
    group="{{ group }}"
    wait=true
    user_data="{{ type }}"
    instance_tags='{"type":"{{ type }}", "id":"{{ id }}"}'
    count="{{ ncount }}"
  register: ec2

- pause: seconds=60
  when: type == 'nodes'

- name: Add new instance to host group
  add_host: hostname={{ item.public_dns_name }} groupname={{ type }}
  with_items: ec2.instances
  when: type != 'mq'

- name: Add new instance to host group
  add_host: hostname={{ item.public_dns_name }} groupname="mq,mongo_servers"
  with_items: ec2.instances
  when: type == 'mq'
@ -0,0 +1,25 @@
---
- name: Create Instance
  ec2: >
    region="{{ region }}"
    zone="{{ zone }}"
    id="{{ id + '_' + type }}"
    ec2_access_key="{{ ec2_access_key }}"
    ec2_secret_key="{{ ec2_secret_key }}"
    keypair="{{ keypair }}"
    instance_type="{{ instance_type }}"
    image="{{ image }}"
    group="{{ group }}"
    wait=true
    count="{{ ncount }}"
  register: ec2

- name: Delete Instance
  ec2:
    region: "{{ region }}"
    ec2_access_key: "{{ ec2_access_key }}"
    ec2_secret_key: "{{ ec2_secret_key }}"
    state: 'absent'
    instance_ids: "{{ item }}"
  with_items: ec2.instance_ids
@ -0,0 +1,26 @@
---
# Tasks for deploying the loadbalancer lvs

- name: Install the lvs packages
  yum: name={{ item }} state=installed
  with_items:
    - piranha
    - wget

- name: disable selinux
  selinux: state=disabled

- name: copy the configuration file
  template: src=lvs.cf.j2 dest=/etc/sysconfig/ha/lvs.cf

- name: copy the file for broker monitoring
  template: src=check.sh dest=/opt/check.sh mode=0755

- name: start the services
  service: name={{ item }} state=started enabled=yes
  with_items:
    - ipvsadm
    - pulse
  ignore_errors: yes
  tags: test

@ -0,0 +1,8 @@
#!/bin/bash
# Print OK if the broker REST API on host $1 returns any content, FAILURE otherwise.
BYTES=`wget -q -O - --no-check-certificate https://$1/broker/rest/api | wc -c`
if [ "$BYTES" -gt 0 ]; then
  echo "OK"
else
  echo "FAILURE"
fi
exit 0
@ -0,0 +1,44 @@
serial_no = 14
primary = 10.152.154.62
service = lvs
backup_active = 1
backup = 10.114.215.67
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = direct
debug_level = NONE
monitor_links = 0
syncdaemon = 0
tcp_timeout = 30
tcpfin_timeout = 30
udp_timeout = 30
virtual webserver {
    active = 1
    address = 10.114.215.69 eth0:1
    vip_nmask = 255.255.255.255
    port = 80
    pmask = 255.255.255.255
    send = "GET / HTTP/1.0\r\n\r\n"
    expect = "HTTP"
    use_regex = 0
    load_monitor = none
    scheduler = rr
    protocol = tcp
    timeout = 60
    reentry = 45
    quiesce_server = 0
    server web1 {
        address = 10.35.91.109
        active = 1
        port = 80
        weight = 1
    }
    server web2 {
        address = 10.147.222.172
        active = 1
        port = 80
        weight = 2
    }
}
@ -0,0 +1,44 @@
serial_no = 14
primary = 10.152.154.62
service = lvs
backup_active = 1
backup = 10.114.215.67
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = direct
debug_level = NONE
monitor_links = 0
syncdaemon = 0
tcp_timeout = 30
tcpfin_timeout = 30
udp_timeout = 30
virtual webserver {
    active = 1
    address = 10.114.215.69 eth0:1
    vip_nmask = 255.255.255.255
    port = 80
    pmask = 255.255.255.255
    send_program = "/etc/https.check"
    expect = "OK"
    use_regex = 0
    load_monitor = none
    scheduler = rr
    protocol = tcp
    timeout = 60
    reentry = 45
    quiesce_server = 0
    server web1 {
        address = 10.35.91.109
        active = 1
        port = 80
        weight = 1
    }
    server web2 {
        address = 10.147.222.172
        active = 1
        port = 80
        weight = 2
    }
}
@ -0,0 +1,45 @@
serial_no = 1
primary = {{ hostvars[groups['lvs'][0]].ansible_default_ipv4.address }}
service = lvs
backup_active = 1
backup = {{ hostvars[groups['lvs'][1]].ansible_default_ipv4.address }}
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = direct
debug_level = NONE
monitor_links = 0
syncdaemon = 0
tcp_timeout = 30
tcpfin_timeout = 30
udp_timeout = 30
virtual brokers {
    active = 1
    address = {{ vip }} {{ hostvars[groups['lvs'][0]].ansible_default_ipv4.interface }}:1
    vip_nmask = {{ vip_netmask }}
    port = 443
    persistent = 10
    pmask = 255.255.255.255
    send_program = "/opt/check.sh %h"
    expect = "OK"
    use_regex = 0
    load_monitor = none
    scheduler = rr
    protocol = tcp
    timeout = 60
    reentry = 45
    quiesce_server = 0
    server web1 {
        address = {{ hostvars[groups['broker'][0]].ansible_default_ipv4.address }}
        active = 1
        port = 443
        weight = 0
    }
    server web2 {
        address = {{ hostvars[groups['broker'][1]].ansible_default_ipv4.address }}
        active = 1
        port = 443
        weight = 0
    }
}
@ -0,0 +1,6 @@
[10gen]
name=10gen Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
gpgcheck=0
enabled=1

@ -0,0 +1,3 @@
qGO6OYb64Uth9p9Tm8s9kqarydmAg1AUdgVz+ecjinaLZ1SlWxXMY1ug8AO7C/Vu
D8kA3+rE37Gv1GuZyPYi87NSfDhKXo4nJWxI00BxTBppmv2PTzbi7xLCx1+8A1uQ
4XU0HA
@ -0,0 +1,29 @@
# The site-wide list of mongodb servers

# The mongo servers need a mongod_port variable set, and the ports must not conflict.
[mongo_servers]
hadoop1 mongod_port=2700
hadoop2 mongod_port=2701
hadoop3 mongod_port=2702
hadoop4 mongod_port=2703

# The list of servers where replication should happen; by default, include all servers.
[replication_servers]
hadoop1
hadoop2
hadoop3
hadoop4

# The list of mongodb configuration servers; make sure it is 1 or 3.
[mongoc_servers]
hadoop1
hadoop2
hadoop3

# The list of servers where mongos servers would run.
[mongos_servers]
hadoop1
hadoop2

@ -0,0 +1,22 @@
---
# This playbook deploys the whole mongodb cluster with replication and sharding.

- hosts: all
  roles:
    - role: common

- hosts: mongo_servers
  roles:
    - role: mongod

- hosts: mongoc_servers
  roles:
    - role: mongoc

- hosts: mongos_servers
  roles:
    - role: mongos

- hosts: mongo_servers
  tasks:
    - include: roles/mongod/tasks/shards.yml
@ -0,0 +1,43 @@
---
# This role deploys the mongod processes and sets up the replication set.

- name: Install the mongodb packages
  yum: name={{ item }} state=installed
  with_items:
    - mongodb
    - mongodb-server
    - bc
    - python-pip
    - gcc

- name: Install the latest pymongo package
  pip: name=pymongo state=latest use_mirrors=no

- name: Create the data directory for mongodb
  file: path={{ mongodb_datadir_prefix }} owner=mongodb group=mongodb state=directory

- name: Copy the keyfile for authentication
  copy: src=secret dest={{ mongodb_datadir_prefix }}/secret owner=mongodb group=mongodb mode=0400

- name: Create the mongodb configuration file
  template: src=mongod.conf.j2 dest=/etc/mongodb.conf

- name: Start the mongodb service
  service: name=mongod state=started

- name: Create the file to initialize the mongod replica set
  template: src=repset_init.j2 dest=/tmp/repset_init.js

- name: Wait for mongod to start listening
  wait_for: port="{{ mongod_port }}" delay=30

- name: Initialize the replication set
  shell: /usr/bin/mongo --port "{{ mongod_port }}" /tmp/repset_init.js; touch /opt/rep.init creates=/opt/rep.init
  when: inventory_hostname == groups['mongo_servers'][0]

- name: add the admin user
  mongodb_user: database=admin name=admin password={{ mongo_admin_pass }} login_port={{ mongod_port }} state=present
  when: inventory_hostname == groups['mongo_servers'][0]
  ignore_errors: yes

@ -0,0 +1,25 @@
# mongo.conf
smallfiles=true

# where to log
logpath=/var/log/mongodb/mongodb.log

logappend=true

# fork and run in background
fork = true

port = {{ mongod_port }}

dbpath={{ mongodb_datadir_prefix }}
keyFile={{ mongodb_datadir_prefix }}/secret

# location of pidfile
pidfilepath = /var/run/mongod.pid

# Ping interval for Mongo monitoring server.
#mms-interval = <seconds>

# Replication Options
replSet=openshift
@ -0,0 +1,7 @@
rs.initiate()
sleep(13000)
{% for host in groups['mongo_servers'] %}
rs.add("{{ host }}:{{ mongod_port }}")
sleep(8000)
{% endfor %}
printjson(rs.status())
@ -0,0 +1,5 @@
---
# handlers for mq

- name: restart mq
  service: name=activemq state=restarted
@ -0,0 +1,26 @@
---
# tasks for setting up the MQ cluster

- name: Install the packages for MQ
  yum: name={{ item }} state=installed
  with_items:
    - java-1.6.0-openjdk
    - java-1.6.0-openjdk-devel
    - activemq

- name: Copy the activemq.xml file
  template: src=activemq.xml.j2 dest=/etc/activemq/activemq.xml
  notify: restart mq

- name: Copy the jetty.xml file
  template: src=jetty.xml.j2 dest=/etc/activemq/jetty.xml
  notify: restart mq

- name: Copy the jetty realm properties file
  template: src=jetty-realm.properties.j2 dest=/etc/activemq/jetty-realm.properties
  notify: restart mq

- name: start the activemq service
  service: name=activemq state=started enabled=yes

@ -0,0 +1,178 @@
<!--
    Licensed to the Apache Software Foundation (ASF) under one or more
    contributor license agreements.  See the NOTICE file distributed with
    this work for additional information regarding copyright ownership.
    The ASF licenses this file to You under the Apache License, Version 2.0
    (the "License"); you may not use this file except in compliance with
    the License.  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
-->
<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:amq="http://activemq.apache.org/schema/core"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
  http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

    <!-- Allows us to use system properties as variables in this configuration file -->
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <value>file:${activemq.conf}/credentials.properties</value>
        </property>
    </bean>

    <!--
        The <broker> element is used to configure the ActiveMQ broker.
    -->
    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="{{ ansible_hostname }}" useJmx="true" dataDirectory="${activemq.data}">

        <!--
            For better performances use VM cursor and small memory limit.
            For more information, see:

            http://activemq.apache.org/message-cursors.html

            Also, if your producer is "hanging", it's probably due to producer flow control.
            For more information, see:
            http://activemq.apache.org/producer-flow-control.html
        -->

        <destinationPolicy>
            <policyMap>
                <policyEntries>
                    <policyEntry topic=">" producerFlowControl="false"/>
                    <policyEntry queue="*.reply.>" gcInactiveDestinations="true"
                                 inactiveTimoutBeforeGC="300000" />
                </policyEntries>
            </policyMap>
        </destinationPolicy>

        <!--
            The managementContext is used to configure how ActiveMQ is exposed in
            JMX. By default, ActiveMQ uses the MBean server that is started by
            the JVM. For more information, see:

            http://activemq.apache.org/jmx.html
        -->
        <managementContext>
            <managementContext createConnector="false"/>
        </managementContext>

        <!--
            Configure message persistence for the broker. The default persistence
            mechanism is the KahaDB store (identified by the kahaDB tag).
            For more information, see:

            http://activemq.apache.org/persistence.html
        -->
        <networkConnectors>
        {% for host in groups['mq'] %}
        {% if hostvars[host].ansible_hostname != ansible_hostname %}
            <networkConnector name="{{ ansible_hostname }}_{{ hostvars[host].ansible_hostname }}" duplex="true"
                              uri="static:(tcp://{{ hostvars[host].ansible_hostname }}:61616)"
                              userName="amquser" password="{{ amquser_pass }}">
                <staticallyIncludedDestinations>
                    <topic physicalName="mcollective.openshift.reply"/>
                    <topic physicalName="mcollective.discovery.reply"/>
                </staticallyIncludedDestinations>
            </networkConnector>
        {% endif %}
        {% endfor %}
        </networkConnectors>

        <!-- add users for mcollective -->
        <plugins>
            <statisticsBrokerPlugin/>
            <simpleAuthenticationPlugin>
                <users>
                    <authenticationUser username="mcollective"
                                        password="{{ mcollective_pass }}" groups="mcollective,everyone"/>
                    <authenticationUser username="amquser"
                                        password="{{ amquser_pass }}" groups="admins,everyone"/>
                    <authenticationUser username="admin"
                                        password="{{ admin_pass }}" groups="mcollective,admin,everyone"/>
                </users>
            </simpleAuthenticationPlugin>
            <authorizationPlugin>
                <map>
                    <authorizationMap>
                        <authorizationEntries>
                            <authorizationEntry queue=">"
                                                write="admins" read="admins" admin="admins" />
                            <authorizationEntry topic=">"
                                                write="admins" read="admins" admin="admins" />
                            <authorizationEntry topic="mcollective.>"
                                                write="mcollective" read="mcollective" admin="mcollective" />
                            <authorizationEntry queue="mcollective.>"
                                                write="mcollective" read="mcollective" admin="mcollective" />
                            <authorizationEntry topic="ActiveMQ.Advisory.>"
                                                read="everyone" write="everyone" admin="everyone"/>
                        </authorizationEntries>
                    </authorizationMap>
                </map>
            </authorizationPlugin>
        </plugins>

        <!--
            The systemUsage controls the maximum amount of space the broker will
            use before slowing down producers. For more information, see:
            http://activemq.apache.org/producer-flow-control.html
            If using ActiveMQ embedded - the following limits could safely be used:

            <systemUsage>
                <systemUsage>
                    <memoryUsage>
                        <memoryUsage limit="20 mb"/>
                    </memoryUsage>
                    <storeUsage>
                        <storeUsage limit="1 gb"/>
                    </storeUsage>
                    <tempUsage>
                        <tempUsage limit="100 mb"/>
                    </tempUsage>
                </systemUsage>
            </systemUsage>
        -->
        <systemUsage>
            <systemUsage>
                <memoryUsage>
                    <memoryUsage limit="64 mb"/>
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="100 gb"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="50 gb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>

        <!--
            The transport connectors expose ActiveMQ over a given protocol to
            clients and other brokers. For more information, see:

            http://activemq.apache.org/configuring-transports.html
        -->
        <transportConnectors>
            <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
            <transportConnector name="stomp" uri="stomp://0.0.0.0:61613"/>
        </transportConnectors>

    </broker>

    <!--
        Enable web consoles, REST and Ajax APIs and demos

        Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
    -->
    <import resource="jetty.xml"/>

</beans>
<!-- END SNIPPET: example -->
@ -0,0 +1,20 @@ |
||||
## --------------------------------------------------------------------------- |
||||
## Licensed to the Apache Software Foundation (ASF) under one or more |
||||
## contributor license agreements. See the NOTICE file distributed with |
||||
## this work for additional information regarding copyright ownership. |
||||
## The ASF licenses this file to You under the Apache License, Version 2.0 |
||||
## (the "License"); you may not use this file except in compliance with |
||||
## the License. You may obtain a copy of the License at |
||||
## |
||||
## http://www.apache.org/licenses/LICENSE-2.0 |
||||
## |
||||
## Unless required by applicable law or agreed to in writing, software |
||||
## distributed under the License is distributed on an "AS IS" BASIS, |
||||
## WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
||||
## See the License for the specific language governing permissions and |
||||
## limitations under the License. |
||||
## --------------------------------------------------------------------------- |
||||
|
||||
# Defines users that can access the web (console, demo, etc.) |
||||
# username: password [,rolename ...] |
||||
admin: {{ admin_pass }}, admin |
@ -0,0 +1,113 @@ |
||||
|
||||
<!-- |
||||
Licensed to the Apache Software Foundation (ASF) under one or more contributor |
||||
license agreements. See the NOTICE file distributed with this work for additional |
||||
information regarding copyright ownership. The ASF licenses this file to You under |
||||
the Apache License, Version 2.0 (the "License"); you may not use this file except in |
||||
compliance with the License. You may obtain a copy of the License at |
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or |
||||
agreed to in writing, software distributed under the License is distributed on an |
||||
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or |
||||
implied. See the License for the specific language governing permissions and |
||||
limitations under the License. |
||||
--> |
||||
<!-- |
||||
An embedded servlet engine for serving up the Admin consoles, REST and Ajax APIs and |
||||
some demos. Include this file in your configuration to enable ActiveMQ web components, |
||||
e.g. <import resource="jetty.xml"/> |
||||
--> |
||||
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" |
||||
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd"> |
||||
|
||||
<bean id="securityLoginService" class="org.eclipse.jetty.security.HashLoginService"> |
||||
<property name="name" value="ActiveMQRealm" /> |
||||
<property name="config" value="${activemq.conf}/jetty-realm.properties" /> |
||||
</bean> |
||||
|
||||
<bean id="securityConstraint" class="org.eclipse.jetty.util.security.Constraint"> |
||||
<property name="name" value="BASIC" /> |
||||
<property name="roles" value="admin" /> |
||||
<property name="authenticate" value="true" /> |
||||
</bean> |
||||
<bean id="securityConstraintMapping" class="org.eclipse.jetty.security.ConstraintMapping"> |
||||
<property name="constraint" ref="securityConstraint" /> |
||||
<property name="pathSpec" value="/*" /> |
||||
</bean> |
||||
<bean id="securityHandler" class="org.eclipse.jetty.security.ConstraintSecurityHandler"> |
||||
<property name="loginService" ref="securityLoginService" /> |
||||
<property name="authenticator"> |
||||
<bean class="org.eclipse.jetty.security.authentication.BasicAuthenticator" /> |
||||
</property> |
||||
<property name="constraintMappings"> |
||||
<list> |
||||
<ref bean="securityConstraintMapping" /> |
||||
</list> |
||||
</property> |
||||
<property name="handler"> |
||||
<bean id="sec" class="org.eclipse.jetty.server.handler.HandlerCollection"> |
||||
<property name="handlers"> |
||||
<list> |
||||
<bean class="org.eclipse.jetty.webapp.WebAppContext"> |
||||
<property name="contextPath" value="/admin" /> |
||||
<property name="resourceBase" value="${activemq.home}/webapps/admin" /> |
||||
<property name="logUrlOnStart" value="true" /> |
||||
</bean> |
||||
<bean class="org.eclipse.jetty.webapp.WebAppContext"> |
||||
<property name="contextPath" value="/demo" /> |
||||
<property name="resourceBase" value="${activemq.home}/webapps/demo" /> |
||||
<property name="logUrlOnStart" value="true" /> |
||||
</bean> |
||||
<bean class="org.eclipse.jetty.webapp.WebAppContext"> |
||||
<property name="contextPath" value="/fileserver" /> |
||||
<property name="resourceBase" value="${activemq.home}/webapps/fileserver" /> |
||||
<property name="logUrlOnStart" value="true" /> |
||||
<property name="parentLoaderPriority" value="true" /> |
||||
</bean> |
||||
<bean class="org.eclipse.jetty.server.handler.ResourceHandler"> |
||||
<property name="directoriesListed" value="false" /> |
||||
<property name="welcomeFiles"> |
||||
<list> |
||||
<value>index.html</value> |
||||
</list> |
||||
</property> |
||||
<property name="resourceBase" value="${activemq.home}/webapps/" /> |
||||
</bean> |
||||
<bean id="defaultHandler" class="org.eclipse.jetty.server.handler.DefaultHandler"> |
||||
<property name="serveIcon" value="false" /> |
||||
</bean> |
||||
</list> |
||||
</property> |
||||
</bean> |
||||
</property> |
||||
</bean> |
||||
|
||||
<bean id="contexts" class="org.eclipse.jetty.server.handler.ContextHandlerCollection"> |
||||
</bean> |
||||
|
||||
<bean id="Server" class="org.eclipse.jetty.server.Server" init-method="start" |
||||
destroy-method="stop"> |
||||
|
||||
<property name="connectors"> |
||||
<list> |
||||
<bean id="Connector" class="org.eclipse.jetty.server.nio.SelectChannelConnector"> |
||||
<property name="port" value="8161" /> |
||||
<property name="host" value="0.0.0.0" /> |
||||
</bean> |
||||
</list> |
||||
</property> |
||||
|
||||
<property name="handler"> |
||||
<bean id="handlers" class="org.eclipse.jetty.server.handler.HandlerCollection"> |
||||
<property name="handlers"> |
||||
<list> |
||||
<ref bean="contexts" /> |
||||
<ref bean="securityHandler" /> |
||||
</list> |
||||
</property> |
||||
</bean> |
||||
</property> |
||||
|
||||
</bean> |
||||
|
||||
</beans> |
@ -0,0 +1,26 @@ |
||||
# |
||||
# Copyright IBM Corporation. 2007 |
||||
# |
||||
# Authors: Balbir Singh <balbir@linux.vnet.ibm.com> |
||||
# This program is free software; you can redistribute it and/or modify it |
||||
# under the terms of version 2.1 of the GNU Lesser General Public License |
||||
# as published by the Free Software Foundation. |
||||
# |
||||
# This program is distributed in the hope that it would be useful, but |
||||
# WITHOUT ANY WARRANTY; without even the implied warranty of |
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. |
||||
# |
||||
# See man cgconfig.conf for further details. |
||||
# |
||||
# By default, mount all controllers to /cgroup/<controller> |
||||
|
||||
mount { |
||||
cpuset = /cgroup/cpuset; |
||||
cpu = /cgroup/cpu; |
||||
cpuacct = /cgroup/cpuacct; |
||||
memory = /cgroup/memory; |
||||
devices = /cgroup/devices; |
||||
freezer = /cgroup/freezer; |
||||
net_cls = /cgroup/net_cls; |
||||
blkio = /cgroup/blkio; |
||||
} |
Binary file not shown.
@ -0,0 +1,9 @@ |
||||
#!/bin/bash |
||||
|
||||
for f in "runuser" "runuser-l" "sshd" "su" "system-auth-ac"; \ |
||||
do t="/etc/pam.d/$f"; \ |
||||
if ! grep -q "pam_namespace.so" "$t"; \ |
||||
then echo -e "session\t\trequired\tpam_namespace.so no_unmount_on_close" >> "$t" ; \ |
||||
fi; \ |
||||
done; |
||||
|
@ -0,0 +1,13 @@ |
||||
#%PAM-1.0 |
||||
auth required pam_sepermit.so |
||||
auth include password-auth |
||||
account required pam_nologin.so |
||||
account include password-auth |
||||
password include password-auth |
||||
# pam_openshift.so close should be the first session rule |
||||
session required pam_openshift.so close |
||||
session required pam_loginuid.so |
||||
# pam_openshift.so open should only be followed by sessions to be executed in the user context |
||||
session required pam_openshift.so open env_params |
||||
session optional pam_keyinit.so force revoke |
||||
session include password-auth |
@ -0,0 +1,139 @@ |
||||
# $OpenBSD: sshd_config,v 1.80 2008/07/02 02:24:18 djm Exp $ |
||||
|
||||
# This is the sshd server system-wide configuration file. See |
||||
# sshd_config(5) for more information. |
||||
|
||||
# This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin |
||||
|
||||
# The strategy used for options in the default sshd_config shipped with |
||||
# OpenSSH is to specify options with their default value where |
||||
# possible, but leave them commented. Uncommented options change a |
||||
# default value. |
||||
|
||||
AcceptEnv GIT_SSH |
||||
#Port 22 |
||||
#AddressFamily any |
||||
#ListenAddress 0.0.0.0 |
||||
#ListenAddress :: |
||||
|
||||
# Disable legacy (protocol version 1) support in the server for new |
||||
# installations. In future the default will change to require explicit |
||||
# activation of protocol 1 |
||||
Protocol 2 |
||||
|
||||
# HostKey for protocol version 1 |
||||
#HostKey /etc/ssh/ssh_host_key |
||||
# HostKeys for protocol version 2 |
||||
#HostKey /etc/ssh/ssh_host_rsa_key |
||||
#HostKey /etc/ssh/ssh_host_dsa_key |
||||
|
||||
# Lifetime and size of ephemeral version 1 server key |
||||
#KeyRegenerationInterval 1h |
||||
#ServerKeyBits 1024 |
||||
|
||||
# Logging |
||||
# obsoletes QuietMode and FascistLogging |
||||
#SyslogFacility AUTH |
||||
SyslogFacility AUTHPRIV |
||||
#LogLevel INFO |
||||
|
||||
# Authentication: |
||||
|
||||
#LoginGraceTime 2m |
||||
#PermitRootLogin yes |
||||
#StrictModes yes |
||||
#MaxAuthTries 6 |
||||
MaxSessions 40 |
||||
|
||||
#RSAAuthentication yes |
||||
#PubkeyAuthentication yes |
||||
#AuthorizedKeysFile .ssh/authorized_keys |
||||
#AuthorizedKeysCommand none |
||||
#AuthorizedKeysCommandRunAs nobody |
||||
|
||||
# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts |
||||
#RhostsRSAAuthentication no |
||||
# similar for protocol version 2 |
||||
#HostbasedAuthentication no |
||||
# Change to yes if you don't trust ~/.ssh/known_hosts for |
||||
# RhostsRSAAuthentication and HostbasedAuthentication |
||||
#IgnoreUserKnownHosts no |
||||
# Don't read the user's ~/.rhosts and ~/.shosts files |
||||
#IgnoreRhosts yes |
||||
|
||||
# To disable tunneled clear text passwords, change to no here! |
||||
#PasswordAuthentication yes |
||||
#PermitEmptyPasswords no |
||||
PasswordAuthentication yes |
||||
|
||||
# Change to no to disable s/key passwords |
||||
#ChallengeResponseAuthentication yes |
||||
ChallengeResponseAuthentication no |
||||
|
||||
# Kerberos options |
||||
#KerberosAuthentication no |
||||
#KerberosOrLocalPasswd yes |
||||
#KerberosTicketCleanup yes |
||||
#KerberosGetAFSToken no |
||||
#KerberosUseKuserok yes |
||||
|
||||
# GSSAPI options |
||||
#GSSAPIAuthentication no |
||||
GSSAPIAuthentication yes |
||||
#GSSAPICleanupCredentials yes |
||||
GSSAPICleanupCredentials yes |
||||
#GSSAPIStrictAcceptorCheck yes |
||||
#GSSAPIKeyExchange no |
||||
|
||||
# Set this to 'yes' to enable PAM authentication, account processing, |
||||
# and session processing. If this is enabled, PAM authentication will |
||||
# be allowed through the ChallengeResponseAuthentication and |
||||
# PasswordAuthentication. Depending on your PAM configuration, |
||||
# PAM authentication via ChallengeResponseAuthentication may bypass |
||||
# the setting of "PermitRootLogin without-password". |
||||
# If you just want the PAM account and session checks to run without |
||||
# PAM authentication, then enable this but set PasswordAuthentication |
||||
# and ChallengeResponseAuthentication to 'no'. |
||||
#UsePAM no |
||||
UsePAM yes |
||||
|
||||
# Accept locale-related environment variables |
||||
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES |
||||
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT |
||||
AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE |
||||
AcceptEnv XMODIFIERS |
||||
|
||||
#AllowAgentForwarding yes |
||||
#AllowTcpForwarding yes |
||||
#GatewayPorts no |
||||
#X11Forwarding no |
||||
X11Forwarding yes |
||||
#X11DisplayOffset 10 |
||||
#X11UseLocalhost yes |
||||
#PrintMotd yes |
||||
#PrintLastLog yes |
||||
#TCPKeepAlive yes |
||||
#UseLogin no |
||||
#UsePrivilegeSeparation yes |
||||
#PermitUserEnvironment no |
||||
#Compression delayed |
||||
#ClientAliveInterval 0 |
||||
#ClientAliveCountMax 3 |
||||
#ShowPatchLevel no |
||||
#UseDNS yes |
||||
#PidFile /var/run/sshd.pid |
||||
MaxStartups 40 |
||||
#PermitTunnel no |
||||
#ChrootDirectory none |
||||
|
||||
# no default banner path |
||||
#Banner none |
||||
|
||||
# override default of no subsystems |
||||
Subsystem sftp /usr/libexec/openssh/sftp-server |
||||
|
||||
# Example of overriding settings on a per-user basis |
||||
#Match User anoncvs |
||||
# X11Forwarding no |
||||
# AllowTcpForwarding no |
||||
# ForceCommand cvs server |
@ -0,0 +1,10 @@ |
||||
--- |
||||
# Handlers for nodes |
||||
|
||||
- name: restart mcollective |
||||
service: name=mcollective state=restarted |
||||
|
||||
- name: restart ssh |
||||
service: name=sshd state=restarted |
||||
async: 10 |
||||
poll: 0 |
@ -0,0 +1,107 @@ |
||||
--- |
||||
# Tasks for the OpenShift nodes |
||||
|
||||
- name: Install the mcollective packages |
||||
yum: name=openshift-origin-msg-node-mcollective state=installed |
||||
|
||||
- name: Copy the mcollective configuration file |
||||
template: src=server.cfg.j2 dest=/etc/mcollective/server.cfg |
||||
notify: restart mcollective |
||||
|
||||
- name: start the mcollective service |
||||
service: name=mcollective state=started enabled=yes |
||||
|
||||
- name: Install OpenShift node packages |
||||
yum: name="{{ item }}" state=installed |
||||
with_items: |
||||
- rubygem-openshift-origin-node |
||||
- rubygem-openshift-origin-container-selinux.noarch |
||||
- rubygem-passenger-native |
||||
- rubygem-openshift-origin-msg-broker-mcollective |
||||
- openshift-origin-port-proxy |
||||
- openshift-origin-node-util |
||||
- openshift-origin-cartridge-cron |
||||
- openshift-origin-cartridge-python |
||||
- ruby193-rubygem-rest-client |
||||
- httpd |
||||
- lsof |
||||
- dbus |
||||
|
||||
- name: Copy the ssh authorized key for root |
||||
authorized_key: user=root key="{{ lookup('file', '~/.ssh/id_rsa.pub') }}" |
||||
|
||||
- name: Copy the pam.d ssh file |
||||
copy: src=sshd dest=/etc/pam.d/sshd |
||||
register: last_run |
||||
|
||||
- name: Copy the cgconfig file |
||||
copy: src=cgconfig.conf dest=/etc/cgconfig.conf |
||||
|
||||
- name: Execute script for pam update |
||||
script: pam.sh |
||||
when: last_run.changed |
||||
|
||||
- name: Create directory for cgroups |
||||
file: path=/cgroup state=directory |
||||
|
||||
- name: restart cgroups |
||||
service: name="{{ item }}" state=restarted enabled=yes |
||||
with_items: |
||||
- cgconfig |
||||
- cgred |
||||
- httpd |
||||
- messagebus |
||||
- oddjobd |
||||
when: last_run.changed |
||||
|
||||
- name: Find root mount point of gear dir |
||||
shell: df -P /var/lib/openshift | tail -1 | awk '{ print $6 }' |
||||
register: gear_root_mount |
||||
|
||||
- name: Initialize quota db |
||||
shell: oo-init-quota creates="{{ gear_root_mount.stdout }}/aquota.user" |
||||
|
||||
- name: SELinux - configure sebooleans |
||||
seboolean: name="{{ item }}" state=true persistent=yes |
||||
with_items: |
||||
- httpd_unified |
||||
- httpd_can_network_connect |
||||
- httpd_can_network_relay |
||||
- httpd_run_stickshift |
||||
- httpd_read_user_content |
||||
- httpd_enable_homedirs |
||||
- allow_polyinstantiation |
||||
|
||||
- name: set the kernel.sem sysctl setting |
||||
sysctl: name=kernel.sem value="250 32000 32 4096" state=present reload=yes |
||||
|
||||
- name: set the net.ipv4.ip_local_port_range sysctl setting |
||||
sysctl: name=net.ipv4.ip_local_port_range value="15000 35530" state=present reload=yes |
||||
|
||||
- name: set the net.netfilter.nf_conntrack_max sysctl setting |
||||
sysctl: name=net.netfilter.nf_conntrack_max value="1048576" state=present reload=yes |
||||
|
||||
- name: Copy sshd config |
||||
copy: src=sshd_config dest=/etc/ssh/sshd_config |
||||
notify: restart ssh |
||||
|
||||
- name: start the port proxy service |
||||
service: name=openshift-port-proxy state=started enabled=yes |
||||
|
||||
- name: Copy the node.conf file |
||||
template: src=node.conf.j2 dest=/etc/openshift/node.conf |
||||
|
||||
- name: Copy the SELinux policy fix file |
||||
copy: src=cgrulesengd.pp dest=/opt/cgrulesengd.pp |
||||
register: se_run |
||||
|
||||
- name: install the SELinux policy module |
||||
shell: chdir=/opt semodule -i cgrulesengd.pp |
||||
when: se_run.changed |
||||
|
||||
- name: Start the openshift gears |
||||
service: name=openshift-gears state=started enabled=yes |
||||
|
||||
- name: copy the resolv.conf file |
||||
template: src=resolv.conf.j2 dest=/etc/resolv.conf |
||||
|
@ -0,0 +1,40 @@ |
||||
# These should not be left at default values, even for a demo. |
||||
# "PUBLIC" networking values are ones that end-users should be able to reach. |
||||
PUBLIC_HOSTNAME="{{ ansible_hostname }}" # The node host's public hostname |
||||
PUBLIC_IP="{{ ansible_default_ipv4.address }}" # The node host's public IP address |
||||
BROKER_HOST="{{ groups['broker'][0] }}" # IP or DNS name of broker server for REST API |
||||
|
||||
# Usually (unless in a demo) this should be changed to the domain for your installation: |
||||
CLOUD_DOMAIN="example.com" # Domain suffix to use for applications (Must match broker config) |
||||
|
||||
# You may want these, depending on the complexity of your networking: |
||||
# EXTERNAL_ETH_DEV='eth0' # Specify the internet facing public ethernet device |
||||
# INTERNAL_ETH_DEV='eth1' # Specify the internal cluster facing ethernet device |
||||
INSTANCE_ID="localhost" # Set by RH EC2 automation |
||||
|
||||
# Uncomment and use the following line if you want gear users to be members of |
||||
# additional groups besides the one with the same id as the uid. The other group |
||||
# should be an existing group. |
||||
#GEAR_SUPL_GRPS="another_group" # Supplementary groups for gear UIDs (comma separated list) |
||||
|
||||
# Generally the following should not be changed: |
||||
ENABLE_CGROUPS=1 # constrain gears in cgroups (1=yes, 0=no) |
||||
GEAR_BASE_DIR="/var/lib/openshift" # gear root directory |
||||
GEAR_SKEL_DIR="/etc/openshift/skel" # skel files to use when building a gear |
||||
GEAR_SHELL="/usr/bin/oo-trap-user" # shell to use for the gear |
||||
GEAR_GECOS="OpenShift guest" # Gecos information to populate for the gear user |
||||
GEAR_MIN_UID=500 # Lower bound of UID used to create gears |
||||
GEAR_MAX_UID=6500 # Upper bound of UID used to create gears |
||||
CARTRIDGE_BASE_PATH="/usr/libexec/openshift/cartridges" # Location where cartridges are installed |
||||
LAST_ACCESS_DIR="/var/lib/openshift/.last_access" # Location to maintain last accessed time for gears |
||||
APACHE_ACCESS_LOG="/var/log/httpd/openshift_log" # Location of the node's httpd access log |
||||
PROXY_MIN_PORT_NUM=35531 # Lower bound of port numbers used to proxy ports externally |
||||
PROXY_PORTS_PER_GEAR=5 # Number of proxy ports available per gear |
||||
CREATE_APP_SYMLINKS=0 # If set to 1, creates gear-name symlinks to the UUID directories (debugging only) |
||||
OPENSHIFT_HTTP_CONF_DIR="/etc/httpd/conf.d/openshift" |
||||
|
||||
PLATFORM_LOG_FILE=/var/log/openshift/node/platform.log |
||||
PLATFORM_LOG_LEVEL=DEBUG |
||||
PLATFORM_TRACE_LOG_FILE=/var/log/openshift/node/platform-trace.log |
||||
PLATFORM_TRACE_LOG_LEVEL=DEBUG |
||||
CONTAINERIZATION_PLUGIN=openshift-origin-container-selinux |
@ -0,0 +1,2 @@ |
||||
search {{ domain_name }} |
||||
nameserver {{ hostvars[groups['dns'][0]].ansible_default_ipv4.address }} |
@ -0,0 +1,28 @@ |
||||
topicprefix = /topic/ |
||||
main_collective = mcollective |
||||
collectives = mcollective |
||||
libdir = /opt/rh/ruby193/root/usr/libexec/mcollective |
||||
logfile = /var/log/mcollective.log |
||||
loglevel = debug |
||||
daemonize = 1 |
||||
direct_addressing = 1 |
||||
registerinterval = 30 |
||||
|
||||
# Plugins |
||||
securityprovider = psk |
||||
plugin.psk = unset |
||||
|
||||
connector = stomp |
||||
plugin.stomp.pool.size = {{ groups['mq']|length() }} |
||||
{% for host in groups['mq'] %} |
||||
|
||||
plugin.stomp.pool.host{{ loop.index }} = {{ hostvars[host].ansible_hostname }} |
||||
plugin.stomp.pool.port{{ loop.index }} = 61613 |
||||
plugin.stomp.pool.user{{ loop.index }} = mcollective |
||||
plugin.stomp.pool.password{{ loop.index }} = {{ mcollective_pass }} |
||||
|
||||
{% endfor %} |
||||
|
||||
# Facts |
||||
factsource = yaml |
||||
plugin.yaml = /etc/mcollective/facts.yaml |
@ -0,0 +1,34 @@ |
||||
|
||||
- hosts: all |
||||
user: root |
||||
roles: |
||||
- role: common |
||||
|
||||
- hosts: dns |
||||
user: root |
||||
roles: |
||||
- role: dns |
||||
- hosts: mongo_servers |
||||
user: root |
||||
roles: |
||||
- role: mongodb |
||||
|
||||
- hosts: mq |
||||
user: root |
||||
roles: |
||||
- role: mq |
||||
|
||||
- hosts: broker |
||||
user: root |
||||
roles: |
||||
- role: broker |
||||
|
||||
- hosts: nodes |
||||
user: root |
||||
roles: |
||||
- role: nodes |
||||
|
||||
- hosts: lvs |
||||
user: root |
||||
roles: |
||||
- role: lvs |
@ -0,0 +1,2 @@ |
||||
#!/bin/bash |
||||
/usr/bin/scl enable ruby193 "gem install rspec --version 1.3.0 --no-rdoc --no-ri" ; /usr/bin/scl enable ruby193 "gem install fakefs --no-rdoc --no-ri" ; /usr/bin/scl enable ruby193 "gem install httpclient --version 2.3.2 --no-rdoc --no-ri" ; touch /opt/gem.init |
@ -0,0 +1 @@ |
||||
demo:k2WsPcYIRAaXs |
@ -0,0 +1,25 @@ |
||||
LoadModule auth_basic_module modules/mod_auth_basic.so |
||||
LoadModule authn_file_module modules/mod_authn_file.so |
||||
LoadModule authz_user_module modules/mod_authz_user.so |
||||
|
||||
# Turn the authenticated remote-user into an Apache environment variable for the console security controller |
||||
RewriteEngine On |
||||
RewriteCond %{LA-U:REMOTE_USER} (.+) |
||||
RewriteRule . - [E=RU:%1] |
||||
RequestHeader set X-Remote-User "%{RU}e" env=RU |
||||
|
||||
<Location /console> |
||||
AuthName "OpenShift Developer Console" |
||||
AuthType Basic |
||||
AuthUserFile /etc/openshift/htpasswd |
||||
require valid-user |
||||
|
||||
# The node->broker auth is handled in the Ruby code |
||||
BrowserMatch Openshift passthrough |
||||
Allow from env=passthrough |
||||
|
||||
Order Deny,Allow |
||||
Deny from all |
||||
Satisfy any |
||||
</Location> |
||||
|
@ -0,0 +1,39 @@ |
||||
LoadModule auth_basic_module modules/mod_auth_basic.so |
||||
LoadModule authn_file_module modules/mod_authn_file.so |
||||
LoadModule authz_user_module modules/mod_authz_user.so |
||||
|
||||
<Location /broker> |
||||
AuthName "OpenShift broker API" |
||||
AuthType Basic |
||||
AuthUserFile /etc/openshift/htpasswd |
||||
require valid-user |
||||
|
||||
SetEnvIfNoCase Authorization Bearer passthrough |
||||
|
||||
# The node->broker auth is handled in the Ruby code |
||||
BrowserMatchNoCase ^OpenShift passthrough |
||||
Allow from env=passthrough |
||||
|
||||
# Console traffic will hit the local port. mod_proxy will set this header automatically. |
||||
SetEnvIf X-Forwarded-For "^$" local_traffic=1 |
||||
# Turn the Console output header into the Apache environment variable for the broker remote-user plugin |
||||
SetEnvIf X-Remote-User "(..*)" REMOTE_USER=$1 |
||||
Allow from env=local_traffic |
||||
|
||||
Order Deny,Allow |
||||
Deny from all |
||||
Satisfy any |
||||
</Location> |
||||
|
||||
# The following APIs do not require auth: |
||||
<Location /broker/rest/cartridges*> |
||||
Allow from all |
||||
</Location> |
||||
|
||||
<Location /broker/rest/api*> |
||||
Allow from all |
||||
</Location> |
||||
|
||||
<Location /broker/rest/environment*> |
||||
Allow from all |
||||
</Location> |
@ -0,0 +1,27 @@ |
||||
-----BEGIN RSA PRIVATE KEY----- |
||||
MIIEpAIBAAKCAQEAyWM85VFDBOdWz16oC7j8Q7uHHbs3UVzRhHhHkSg8avK6ETMH |
||||
piXtevCU7KbiX7B2b0dedwYpvHQaKPCtfNm4blZHcDO5T1I//MyjwVNfqAQV4xin |
||||
qRj1oRyvvcTmn5H5yd9FgILqhRGjNEnBYadpL0vZrzXAJREEhh/G7021q010CF+E |
||||
KTTlSrbctGsoiUQKH1KfogsWsj8ygL1xVDgbCdvx+DnTw9E/YY+07/lDPOiXQFZm |
||||
7hXA8Q51ecjtFy0VmWDwjq3t7pP33tyjQkMc1BMXzHUiDVehNZ+I8ffzFltNNUL0 |
||||
Jw3AGwyCmE3Q9ml1tHIxpuvZExMCTALN6va0bwIDAQABAoIBAQDJPXpvqLlw3/92 |
||||
bx87v5mN0YneYuOPUVIorszNN8jQEkduwnCFTec2b8xRgx45AqwG3Ol/xM/V+qrd |
||||
eEvUs/fBgkQW0gj+Q7GfW5rTqA2xZou8iDmaF0/0tCbFWkoe8I8MdCkOl0Pkv1A4 |
||||
Au/UNqc8VO5tUCf2oj/EC2MOZLgCOTaerePnc+SFIf4TkerixPA9I4KYWwJQ2eXG |
||||
esSfR2f2EsUGfwOqKLEQU1JTMFkttbSAp42p+xpRaUh1FuyLHDlf3EeFmq5BPaFL |
||||
UnpzPDJTZtXjnyBrM9fb1ewiFW8x+EBmsdGooY7ptrWWhGzvxAsK9C0L2di3FBAy |
||||
gscM/rPBAoGBAPpt0xXtVWJu2ezoBfjNuqwMqGKFsOF3hi5ncOHW9nd6iZABD5Xt |
||||
KamrszxItkqiJpEacBCabgfo0FSLEOo+KqfTBK/r4dIoMwgcfhJOz+HvEC6+557n |
||||
GEFaL+evdLrxNrU41wvvfCzPK7pWaQGR1nrGohTyX5ZH4uA0Kmreof+PAoGBAM3e |
||||
IFPNrXuzhgShqFibWqJ8JdsSfMroV62aCqdJlB92lxx8JJ2lEiAMPfHmAtF1g01r |
||||
oBUcJcPfuBZ0bC1KxIvtz9d5m1f2geNGH/uwVULU3skhPBwqAs2s607/Z1S+/WRr |
||||
Af1rAs2KTJ7BDCQo8g2TPUO+sDrUzR6joxOy/Y0hAoGAbWaI7m1N/cBbZ4k9AqIt |
||||
SHgHH3M0AGtMrPz3bVGRPkTDz6sG+gIvTzX5CP7i09veaUlZZ4dvRflI+YX/D7W0 |
||||
wLgItimf70UsdgCseqb/Xb4oHaO8X8io6fPSNa6KmhhCRAzetRIb9x9SBQc2vD7P |
||||
qbcYm3n+lBI3ZKalWSaFMrUCgYEAsV0xfuISGCRIT48zafuWr6zENKUN7QcWGxQ/ |
||||
H3eN7TmP4VO3fDZukjvZ1qHzRaC32ijih61zf/ksMfRmCvOCuIfP7HXx92wC5dtR |
||||
zNdT7btWofRHRICRX8AeDzaOQP43c5+Z3Eqo5IrFjnUFz9WTDU0QmGAeluEmQ8J5 |
||||
yowIVOECgYB97fGLuEBSlKJCvmWp6cTyY+mXbiQjYYGBbYAiJWnwaK9U3bt71we/ |
||||
MQNzBHAe0mPCReVHSr68BfoWY/crV+7RKSBgrDpR0Y0DI1yn0LXXZfd3NNrTVaAb |
||||
rScbJ8Xe3qcLi3QZ3BxaWfub08Wm57wjDBBqGZyExYjjlGSpjBpVJQ== |
||||
-----END RSA PRIVATE KEY----- |
@ -0,0 +1,9 @@ |
||||
-----BEGIN PUBLIC KEY----- |
||||
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyWM85VFDBOdWz16oC7j8 |
||||
Q7uHHbs3UVzRhHhHkSg8avK6ETMHpiXtevCU7KbiX7B2b0dedwYpvHQaKPCtfNm4 |
||||
blZHcDO5T1I//MyjwVNfqAQV4xinqRj1oRyvvcTmn5H5yd9FgILqhRGjNEnBYadp |
||||
L0vZrzXAJREEhh/G7021q010CF+EKTTlSrbctGsoiUQKH1KfogsWsj8ygL1xVDgb |
||||
Cdvx+DnTw9E/YY+07/lDPOiXQFZm7hXA8Q51ecjtFy0VmWDwjq3t7pP33tyjQkMc |
||||
1BMXzHUiDVehNZ+I8ffzFltNNUL0Jw3AGwyCmE3Q9ml1tHIxpuvZExMCTALN6va0 |
||||
bwIDAQAB |
||||
-----END PUBLIC KEY----- |
@ -0,0 +1,74 @@ |
||||
# |
||||
# This is the Apache server configuration file providing SSL support. |
||||
# It contains the configuration directives to instruct the server how to |
||||
# serve pages over an https connection. For detailing information about these |
||||
# directives see <URL:http://httpd.apache.org/docs/2.2/mod/mod_ssl.html> |
||||
# |
||||
# Do NOT simply read the instructions in here without understanding |
||||
# what they do. They're here only as hints or reminders. If you are unsure |
||||
# consult the online docs. You have been warned. |
||||
# |
||||
|
||||
LoadModule ssl_module modules/mod_ssl.so |
||||
|
||||
# |
||||
# When we also provide SSL we have to listen to the |
||||
# the HTTPS port in addition. |
||||
# |
||||
Listen 443 |
||||
|
||||
## |
||||
## SSL Global Context |
||||
## |
||||
## All SSL configuration in this context applies both to |
||||
## the main server and all SSL-enabled virtual hosts. |
||||
## |
||||
|
||||
# Pass Phrase Dialog: |
||||
# Configure the pass phrase gathering process. |
||||
# The filtering dialog program (`builtin' is a internal |
||||
# terminal dialog) has to provide the pass phrase on stdout. |
||||
SSLPassPhraseDialog builtin |
||||
|
||||
# Inter-Process Session Cache: |
||||
# Configure the SSL Session Cache: First the mechanism |
||||
# to use and second the expiring timeout (in seconds). |
||||
SSLSessionCache shmcb:/var/cache/mod_ssl/scache(512000) |
||||
SSLSessionCacheTimeout 300 |
||||
|
||||
# Semaphore: |
||||
# Configure the path to the mutual exclusion semaphore the |
||||
# SSL engine uses internally for inter-process synchronization. |
||||
SSLMutex default |
||||
|
||||
# Pseudo Random Number Generator (PRNG): |
||||
# Configure one or more sources to seed the PRNG of the |
||||
# SSL library. The seed data should be of good random quality. |
||||
# WARNING! On some platforms /dev/random blocks if not enough entropy |
||||
# is available. This means you then cannot use the /dev/random device |
||||
# because it would lead to very long connection times (as long as |
||||
# it requires to make more entropy available). But usually those |
||||
# platforms additionally provide a /dev/urandom device which doesn't |
||||
# block. So, if available, use this one instead. Read the mod_ssl User |
||||
# Manual for more details. |
||||
SSLRandomSeed startup file:/dev/urandom 256 |
||||
SSLRandomSeed connect builtin |
||||
#SSLRandomSeed startup file:/dev/random 512 |
||||
#SSLRandomSeed connect file:/dev/random 512 |
||||
#SSLRandomSeed connect file:/dev/urandom 512 |
||||
|
||||
# |
||||
# Use "SSLCryptoDevice" to enable any supported hardware |
||||
# accelerators. Use "openssl engine -v" to list supported |
||||
# engine names. NOTE: If you enable an accelerator and the |
||||
# server does not start, consult the error logs and ensure |
||||
# your accelerator is functioning properly. |
||||
# |
||||
SSLCryptoDevice builtin |
||||
#SSLCryptoDevice ubsec |
||||
|
||||
## |
||||
## SSL Virtual Host Context |
||||
## |
||||
|
||||
|
@ -0,0 +1,9 @@ |
||||
--- |
||||
# handlers for broker |
||||
|
||||
- name: restart broker |
||||
service: name=openshift-broker state=restarted |
||||
|
||||
- name: restart console |
||||
service: name=openshift-console state=restarted |
||||
|
@ -0,0 +1,107 @@ |
||||
--- |
||||
# Tasks for the OpenShift broker installation |
||||
|
||||
- name: install mcollective common |
||||
yum: name=mcollective-common-2.2.1 state=installed |
||||
|
||||
- name: Install the broker components |
||||
yum: name="{{ item }}" state=installed disablerepo=epel |
||||
with_items: "{{ broker_packages }}" |
||||
|
||||
- name: Install mcollective |
||||
yum: name=mcollective-client |
||||
|
||||
- name: Copy the mcollective configuration file |
||||
template: src=client.cfg.j2 dest=/etc/mcollective/client.cfg |
||||
|
||||
- name: Copy the rhc client configuration file |
||||
template: src=express.conf.j2 dest=/etc/openshift/express.conf |
||||
register: last_run |
||||
|
||||
- name: Install the gems for rhc |
||||
script: gem.sh |
||||
when: last_run.changed |
||||
|
||||
- name: create the file for mcollective logging |
||||
copy: content="" dest=/var/log/mcollective-client.log owner=apache group=root |
||||
|
||||
- name: SELinux - configure sebooleans |
||||
seboolean: name="{{ item }}" state=true persistent=yes |
||||
with_items: |
||||
- httpd_unified |
||||
- httpd_execmem |
||||
- httpd_can_network_connect |
||||
- httpd_can_network_relay |
||||
- httpd_run_stickshift |
||||
- named_write_master_zones |
||||
- httpd_verify_dns |
||||
- allow_ypbind |
||||
|
||||
- name: copy the auth keyfiles |
||||
copy: src="{{ item }}" dest="/etc/openshift/{{ item }}" |
||||
with_items: |
||||
- server_priv.pem |
||||
- server_pub.pem |
||||
- htpasswd |
||||
|
||||
- name: copy the local ssh keys |
||||
copy: src="~/.ssh/{{ item }}" dest="~/.ssh/{{ item }}" |
||||
with_items: |
||||
- id_rsa.pub |
||||
- id_rsa |
||||
|
||||
- name: copy the local ssh keys to openshift dir |
||||
copy: src="~/.ssh/{{ item }}" dest="/etc/openshift/rsync_{{ item }}" |
||||
with_items: |
||||
- id_rsa.pub |
||||
- id_rsa |
||||
|
||||
- name: Copy the broker configuration file |
||||
template: src=broker.conf.j2 dest=/etc/openshift/broker.conf |
||||
notify: restart broker |
||||
|
||||
- name: Copy the console configuration file |
||||
template: src=console.conf.j2 dest=/etc/openshift/console.conf |
||||
notify: restart console |
||||
|
||||
- name: create the file for ssl.conf |
||||
copy: src=ssl.conf dest=/etc/httpd/conf.d/ssl.conf owner=apache group=root |
||||
|
||||
- name: copy the configuration file for openstack plugins |
||||
template: src="{{ item }}" dest="/etc/openshift/plugins.d/{{ item }}" |
||||
with_items: |
||||
- openshift-origin-auth-remote-user.conf |
||||
- openshift-origin-dns-bind.conf |
||||
- openshift-origin-msg-broker-mcollective.conf |
||||
|
||||
- name: Bundle the ruby gems |
||||
shell: chdir=/var/www/openshift/broker/ /usr/bin/scl enable ruby193 "bundle show"; touch bundle.init |
||||
creates=//var/www/openshift/broker/bundle.init |
||||
|
||||
- name: Copy the httpd configuration file |
||||
copy: src=openshift-origin-auth-remote-user.conf dest=/var/www/openshift/broker/httpd/conf.d/openshift-origin-auth-remote-user.conf |
||||
notify: restart broker |
||||
|
||||
- name: Copy the httpd configuration file for console |
||||
copy: src=openshift-origin-auth-remote-basic-user.conf dest=/var/www/openshift/console/httpd/conf.d/openshift-origin-auth-remote-basic-user.conf |
||||
notify: restart console |
||||
|
||||
- name: Fix the selinux contexts on several files |
||||
shell: fixfiles -R rubygem-passenger restore; fixfiles -R mod_passenger restore; restorecon -rv /var/run; restorecon -rv /usr/share/rubygems/gems/passenger-*; touch /opt/context.fixed creates=/opt/context.fixed |
||||
|
||||
- name: start the http and broker service |
||||
service: name="{{ item }}" state=started enabled=yes |
||||
with_items: |
||||
- httpd |
||||
- openshift-broker |
||||
|
||||
- name: Install the rhc client |
||||
gem: name={{ item }} state=latest |
||||
with_items: |
||||
- rdoc |
||||
- rhc |
||||
ignore_errors: yes |
||||
|
||||
- name: copy the resolv.conf |
||||
template: src=resolv.conf.j2 dest=/etc/resolv.conf |
||||
|
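The broker role above would typically be applied from a top-level playbook. A minimal sketch follows; the `broker` group name and role name are assumptions for illustration, not part of this diff:

```yaml
# site.yml (sketch) - apply the broker role to the broker hosts.
# Group name "broker" and role name "broker" are assumed here.
- hosts: broker
  user: root
  roles:
    - broker
```

Running `ansible-playbook -i hosts site.yml` against such an inventory would then execute the tasks shown above and trigger the `restart broker` / `restart console` handlers when their templates change.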
@ -0,0 +1,47 @@
# Domain suffix to use for applications (must match the node config)
CLOUD_DOMAIN="{{ domain_name }}"
# Comma-separated list of valid gear sizes
VALID_GEAR_SIZES="small,medium"

# Default number of gears to assign to a new user
DEFAULT_MAX_GEARS="100"
# Default gear size for a new gear
DEFAULT_GEAR_SIZE="small"

# Broker datastore configuration
MONGO_REPLICA_SETS=true
# Replica set example: "<host-1>:<port-1> <host-2>:<port-2> ..."
MONGO_HOST_PORT="{% for host in groups['mongo_servers'] %}{{ host }}:{{ mongod_port }}{% if not loop.last %}, {% endif %}{% endfor %}"

MONGO_USER="admin"
MONGO_PASSWORD="{{ mongo_admin_pass }}"
MONGO_DB="admin"

# Enables gear/filesystem resource usage tracking
ENABLE_USAGE_TRACKING_DATASTORE="false"
# Log resource usage information to syslog
ENABLE_USAGE_TRACKING_SYSLOG="false"

# Enable all broker analytics
ENABLE_ANALYTICS="false"

# Enables logging of REST API operations and success/failure
ENABLE_USER_ACTION_LOG="true"
USER_ACTION_LOG_FILE="/var/log/openshift/broker/user_action.log"

AUTH_SALT="{{ auth_salt }}"
AUTH_PRIVKEYFILE="/etc/openshift/server_priv.pem"
AUTH_PRIVKEYPASS=""
AUTH_PUBKEYFILE="/etc/openshift/server_pub.pem"
AUTH_RSYNC_KEY_FILE="/etc/openshift/rsync_id_rsa"
SESSION_SECRET="{{ session_secret }}"
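The `MONGO_HOST_PORT` loop in this template is intended to join every host in the `mongo_servers` group with the `mongod_port` variable into a single connection string. With a hypothetical inventory of `mongo1`, `mongo2`, `mongo3` and `mongod_port: 27017` (names assumed for illustration), the rendered line would look roughly like:

```
MONGO_HOST_PORT="mongo1:27017, mongo2:27017, mongo3:27017"
```

This is how the broker learns about all members of the MongoDB replica set rather than a single datastore host.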
@ -0,0 +1,25 @@
topicprefix = /topic/
main_collective = mcollective
collectives = mcollective
libdir = /opt/rh/ruby193/root/usr/libexec/mcollective
logfile = /var/log/mcollective-client.log
loglevel = debug
direct_addressing = 1

# Plugins
securityprovider = psk
plugin.psk = unset

connector = stomp
plugin.stomp.pool.size = {{ groups['mq']|length }}
{% for host in groups['mq'] %}

plugin.stomp.pool.host{{ loop.index }} = {{ hostvars[host].ansible_hostname }}
plugin.stomp.pool.port{{ loop.index }} = 61613
plugin.stomp.pool.user{{ loop.index }} = mcollective
plugin.stomp.pool.password{{ loop.index }} = {{ mcollective_pass }}

{% endfor %}
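The loop in this template emits one numbered pool entry per host in the `mq` group, which is how the mcollective client fails over between ActiveMQ brokers. For a hypothetical two-host group `mq01`, `mq02` (hostnames assumed for illustration), the rendered section would look like:

```
plugin.stomp.pool.size = 2

plugin.stomp.pool.host1 = mq01
plugin.stomp.pool.port1 = 61613
plugin.stomp.pool.user1 = mcollective
plugin.stomp.pool.password1 = secret

plugin.stomp.pool.host2 = mq02
plugin.stomp.pool.port2 = 61613
plugin.stomp.pool.user2 = mcollective
plugin.stomp.pool.password2 = secret
```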
@ -0,0 +1,8 @@
BROKER_URL=http://localhost:8080/broker/rest

CONSOLE_SECURITY=remote_user

REMOTE_USER_HEADER=REMOTE_USER

REMOTE_USER_COPY_HEADERS=X-Remote-User
SESSION_SECRET="{{ session_secret }}"
@ -0,0 +1,8 @@
# Remote API server
libra_server = '{{ ansible_hostname }}'

# Logging
debug = 'false'

# Timeout
#timeout = '10'
@ -0,0 +1,4 @@
# Settings related to the Remote-User variant of an OpenShift auth plugin

# The name of the header containing the trusted username
TRUSTED_HEADER="REMOTE_USER"