Merge pull request #4 from tgerla/nagios

Implement Nagios for lamp_haproxy and move to 1.2 syntax (roles, etc)

commit cd1da33dba
@@ -1,50 +1,67 @@

LAMP Stack + HAProxy: Example Playbooks
-----------------------------------------------------------------------------

This example is an extension of the simple LAMP deployment. Here we'll install
and configure a web server with an HAProxy load balancer in front, and deploy
an application to the web servers. This set of playbooks also has the
capability to dynamically add and remove web server nodes from the deployment.
It also includes examples to do a rolling update of a stack without affecting
the service.

(This example requires Ansible 1.2)

You can also optionally configure a Nagios monitoring node.

### Initial Site Setup

First we configure the entire stack by listing our hosts in the 'hosts'
inventory file, grouped by their purpose:

    [webservers]
    webserver1
    webserver2

    [dbservers]
    dbserver

    [lbservers]
    lbserver

    [monitoring]
    nagios

After which we execute the following command to deploy the site:

    ansible-playbook -i hosts site.yml

The deployment can be verified by accessing the IP address of your load
balancer host in a web browser: http://<ip-of-lb>:8888. Reloading the page
should have you hit different webservers.

### Removing and Adding a Node

Removal and addition of nodes to the cluster is as simple as editing the
hosts inventory and re-running:

    ansible-playbook -i hosts site.yml

### Rolling Update

Rolling updates are the preferred way to update the web server software or
deployed application, since the load balancer can be dynamically configured
to take the hosts to be updated out of the pool. This will keep the service
running on other servers so that the users are not interrupted.
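Taking a host out of the pool works by talking to the HAProxy stats socket from each load balancer via delegation. A condensed sketch of the pattern used by the rolling-update playbook in this example (the `myapplb` backend name and `/var/lib/haproxy/stats` socket path come from these playbooks):

```yaml
# Sketch: drain one web server at a time via the HAProxy stats socket.
- hosts: webservers
  user: root
  serial: 1
  tasks:
  - name: disable the server in haproxy
    shell: echo "disable server myapplb/${ansible_hostname}" | socat stdio /var/lib/haproxy/stats
    delegate_to: $item
    with_items: ${groups.lbservers}

  # ... update the code on this host here ...

  - name: enable the server in haproxy
    shell: echo "enable server myapplb/${ansible_hostname}" | socat stdio /var/lib/haproxy/stats
    delegate_to: $item
    with_items: ${groups.lbservers}
```

This assumes the HAProxy stats socket is enabled at admin level and that socat is installed on the load balancer hosts.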

In this example the hosts are updated in serial fashion, which means that
only one server will be updated at a time. If you have a lot of web server
hosts, this behaviour can be changed by setting the 'serial' keyword in the
webservers.yml file.
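For instance, updating three servers per batch instead of one is just a change to the play header (3 here is an arbitrary batch size):

```yaml
# webservers.yml play header, sketched with a larger batch size
- hosts: webservers
  user: root
  serial: 3   # update three web servers at a time
```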

Once the code has been updated in the source repository for your application,
which can be defined in the group_vars/all file, execute the following
command:

    ansible-playbook -i hosts rolling_update.yml

You can optionally pass "-e webapp_version=xxx" to the rolling_update
playbook to specify a specific version of the example webapp to deploy.
@@ -0,0 +1,12 @@
---
# This playbook deploys MySQL and configures the database on the db node(s)

# fetch monitoring facts for iptables rules
- hosts: monitoring
  tasks:

- hosts: dbservers
  user: root
  roles:
  - common
  - db
@@ -0,0 +1,12 @@
---
# Playbook for HAProxy operations

# fetch monitoring facts for iptables rules
- hosts: monitoring
  tasks:

- hosts: lbservers
  user: root
  roles:
  - common
  - haproxy
@@ -0,0 +1,13 @@
---
# This playbook configures the monitoring node

# trigger fact-gathering for all hosts
- hosts: all
  tasks:

- hosts: monitoring
  user: root
  roles:
  - common
  - base-apache
  - nagios
@@ -1,10 +0,0 @@
---
# This playbook adds a webserver into the web cluster

- hosts: webservers
  user: root
  serial: 1
  tasks:
  - include: ../roles/web/tasks/install_httpd.yml
  - include: ../roles/web/tasks/copy_code.yml
  - include: ../roles/web/tasks/add_to_lb.yml
@@ -1,11 +0,0 @@
---
# This playbook deploys MySQL and configures the database on the db node(s)

- hosts: dbservers
  user: root
  tasks:
  - include: ../roles/common/tasks/main.yml
  - include: ../roles/db/tasks/main.yml
  handlers:
  - include: ../roles/db/handlers/main.yml
  - include: ../roles/common/handlers/main.yml
@@ -1,11 +0,0 @@
---
# Playbook for HAProxy operations

- hosts: lbservers
  user: root
  tasks:
  - include: ../roles/common/tasks/main.yml
  - include: ../roles/haproxy/tasks/main.yml
  handlers:
  - include: ../roles/haproxy/handlers/main.yml
  - include: ../roles/common/handlers/main.yml
@@ -1,10 +0,0 @@
---
# This playbook removes a webserver from the pool serially.
# Change the value of serial: to adjust the number of servers
# to be removed at a time.

- hosts: webservers
  user: root
  serial: 1
  tasks:
  - include: ../roles/web/tasks/remove_from_lb.yml
@@ -1,9 +0,0 @@
---
# This playbook does a rolling update of the code for all webservers serially (one at a time).
# Change the value of serial: to adjust the number of servers to be updated at a time.

- hosts: webservers
  user: root
  serial: 1
  tasks:
  - include: ../roles/web/tasks/rolling_update.yml
@@ -1,12 +0,0 @@
---
# This playbook deploys the webservers with httpd and the code.

- hosts: webservers
  user: root
  tasks:
  - include: ../roles/common/tasks/main.yml
  - include: ../roles/web/tasks/install_httpd.yml
  - include: ../roles/web/tasks/copy_code.yml
  handlers:
  - include: ../roles/web/handlers/main.yml
  - include: ../roles/common/handlers/main.yml
@@ -0,0 +1,12 @@
---
# This playbook installs httpd

- name: Install httpd and prerequisite packages
  yum: name=$item state=installed
  with_items:
  - httpd
  - libsemanage-python
  - libselinux-python

- name: http service state
  service: name=httpd state=started enabled=yes
@@ -0,0 +1,30 @@
# {{ ansible_managed }}
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]

{% if (inventory_hostname in groups['webservers']) or (inventory_hostname in groups['monitoring']) %}
-A INPUT -p tcp --dport 80 -j ACCEPT
{% endif %}

{% if inventory_hostname in groups['dbservers'] %}
-A INPUT -p tcp --dport 3306 -j ACCEPT
{% endif %}

{% if inventory_hostname in groups['lbservers'] %}
-A INPUT -p tcp --dport {{ listenport }} -j ACCEPT
{% endif %}

{% for host in groups['monitoring'] %}
-A INPUT -p tcp -s {{ hostvars[host].ansible_default_ipv4.address }} --dport 5666 -j ACCEPT
{% endfor %}

-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
@@ -0,0 +1,39 @@
# {{ ansible_managed }}

# service checks to be applied to all hosts

define service {
        use                     local-service
        host_name               localhost
        service_description     Root Partition
        check_command           check_local_disk!20%!10%!/
}

define service {
        use                     local-service
        host_name               *
        service_description     Current Users
        check_command           check_local_users!20!50
}

define service {
        use                     local-service
        host_name               *
        service_description     Total Processes
        check_command           check_local_procs!250!400!RSZDT
}

define service {
        use                     local-service
        host_name               *
        service_description     Current Load
        check_command           check_local_load!5.0,4.0,3.0!10.0,6.0,4.0
}

define service {
        use                     local-service
        host_name               *
        service_description     Swap Usage
        check_command           check_local_swap!20!10
}
@@ -0,0 +1,144 @@
###############################################################################
# LOCALHOST.CFG - SAMPLE OBJECT CONFIG FILE FOR MONITORING THIS MACHINE
#
# Last Modified: 05-31-2007
#
# NOTE: This config file is intended to serve as an *extremely* simple
#       example of how you can create configuration entries to monitor
#       the local (Linux) machine.
#
###############################################################################

###############################################################################
#
# HOST DEFINITION
#
###############################################################################

# Define a host for the local machine

define host{
        use                     linux-server            ; Name of host template to use
                                                        ; This host definition will inherit all variables that are defined
                                                        ; in (or inherited by) the linux-server host template definition.
        host_name               localhost
        alias                   localhost
        address                 127.0.0.1
        }

###############################################################################
#
# HOST GROUP DEFINITION
#
###############################################################################

# Define an optional hostgroup for Linux machines

define hostgroup{
        hostgroup_name  linux-servers   ; The name of the hostgroup
        alias           Linux Servers   ; Long name of the group
        members         localhost       ; Comma separated list of hosts that belong to this group
        }

###############################################################################
#
# SERVICE DEFINITIONS
#
###############################################################################

# Define a service to "ping" the local machine

define service{
        use                     local-service   ; Name of service template to use
        host_name               localhost
        service_description     PING
        check_command           check_ping!100.0,20%!500.0,60%
        }

# Define a service to check the disk space of the root partition
# on the local machine.  Warning if < 20% free, critical if
# < 10% free space on partition.

define service{
        use                     local-service   ; Name of service template to use
        host_name               localhost
        service_description     Root Partition
        check_command           check_local_disk!20%!10%!/
        }

# Define a service to check the number of currently logged in
# users on the local machine.  Warning if > 20 users, critical
# if > 50 users.

define service{
        use                     local-service   ; Name of service template to use
        host_name               localhost
        service_description     Current Users
        check_command           check_local_users!20!50
        }

# Define a service to check the number of currently running procs
# on the local machine.  Warning if > 250 processes, critical if
# > 400 processes.

define service{
        use                     local-service   ; Name of service template to use
        host_name               localhost
        service_description     Total Processes
        check_command           check_local_procs!250!400!RSZDT
        }

# Define a service to check the load on the local machine.

define service{
        use                     local-service   ; Name of service template to use
        host_name               localhost
        service_description     Current Load
        check_command           check_local_load!5.0,4.0,3.0!10.0,6.0,4.0
        }

# Define a service to check the swap usage of the local machine.
# Critical if less than 10% of swap is free, warning if less than 20% is free

define service{
        use                     local-service   ; Name of service template to use
        host_name               localhost
        service_description     Swap Usage
        check_command           check_local_swap!20!10
        }

# Define a service to check SSH on the local machine.
# Disable notifications for this service by default, as not all users may have SSH enabled.

define service{
        use                     local-service   ; Name of service template to use
        host_name               localhost
        service_description     SSH
        check_command           check_ssh
        notifications_enabled   0
        }
File diff suppressed because it is too large
@@ -0,0 +1,7 @@
---
# handlers for nagios

- name: restart httpd
  service: name=httpd state=restarted

- name: restart nagios
  service: name=nagios state=restarted
@@ -0,0 +1,41 @@
---
# This playbook will install nagios

- name: install nagios
  yum: pkg=$item state=installed
  with_items:
  - nagios
  - nagios-plugins
  - nagios-plugins-nrpe
  - nagios-plugins-ping
  - nagios-plugins-ssh
  - nagios-plugins-http
  - nagios-plugins-mysql
  - nagios-devel
  notify: restart httpd

- name: create nagios config dir
  file: path=/etc/nagios/ansible-managed state=directory

- name: configure nagios
  copy: src=nagios.cfg dest=/etc/nagios/nagios.cfg
  notify: restart nagios

- name: configure localhost monitoring
  copy: src=localhost.cfg dest=/etc/nagios/objects/localhost.cfg
  notify: restart nagios

- name: configure nagios services
  copy: src=ansible-managed-services.cfg dest=/etc/nagios/

- name: create the nagios object files
  template: src={{ item + ".j2" }}
            dest=/etc/nagios/ansible-managed/{{ item }}
  with_items:
  - webservers.cfg
  - dbservers.cfg
  - lbservers.cfg
  notify: restart nagios

- name: start nagios
  service: name=nagios state=started enabled=yes
@@ -0,0 +1,25 @@
# {{ ansible_managed }}

define hostgroup {
        hostgroup_name          dbservers
        alias                   Database Servers
}

{% for host in groups['dbservers'] %}
define host {
        use                     linux-server
        host_name               {{ host }}
        alias                   {{ host }}
        address                 {{ hostvars[host].ansible_default_ipv4.address }}
        hostgroups              dbservers
}
{% endfor %}

#define service {
#        use                     local-service
#        hostgroup_name          dbservers
#        service_description     MySQL Database Server
#        check_command           check_mysql
#        notifications_enabled   0
#}
@@ -0,0 +1,22 @@
# {{ ansible_managed }}

define hostgroup {
        hostgroup_name          loadbalancers
        alias                   Load Balancers
}

{% for host in groups['lbservers'] %}
define host {
        use                     linux-server
        host_name               {{ host }}
        alias                   {{ host }}
        address                 {{ hostvars[host].ansible_default_ipv4.address }}
        hostgroups              loadbalancers
}
define service {
        use                     local-service
        host_name               {{ host }}
        service_description     HAProxy Load Balancer
        check_command           check_http!-p{{ hostvars[host].listenport }}
}
{% endfor %}
@@ -0,0 +1,25 @@
# {{ ansible_managed }}

define hostgroup {
        hostgroup_name          webservers
        alias                   Web Servers
}

{% for host in groups['webservers'] %}
define host {
        use                     linux-server
        host_name               {{ host }}
        alias                   {{ host }}
        address                 {{ hostvars[host].ansible_default_ipv4.address }}
        hostgroups              webservers
}
{% endfor %}

# service checks to be applied to the web server
define service {
        use                     local-service
        hostgroup_name          webservers
        service_description     webserver
        check_command           check_http
        notifications_enabled   0
}
@@ -1,5 +0,0 @@
---
# Handlers for the web tier

- name: restart iptables
  service: name=iptables state=restarted
@@ -1,15 +0,0 @@
---
# This playbook handles the addition of a web server to the pool.

- name: Add server to LB
  lineinfile: dest=/etc/haproxy/haproxy.cfg state=present regexp="${ansible_hostname}" line="server ${ansible_hostname} ${hostvars.{$inventory_hostname}.ansible_$iface.ipv4.address}:${httpd_port}"
  delegate_to: $item
  with_items: ${groups.lbservers}
  register: last_run

- name: Reload the haproxy
  service: name=haproxy state=reloaded
  delegate_to: $item
  with_items: ${groups.lbservers}
  only_if: ${last_run.changed}
@@ -1,8 +0,0 @@
---
# This playbook is responsible for copying the latest dev/production code from the version control system.

- name: Copy the code from repository
  git: repo=${repository} dest=/var/www/html/

- name: Create the index.php file
  template: src=../roles/web/templates/index.php.j2 dest=/var/www/html/index.php
@@ -1,25 +0,0 @@
---
# This playbook installs http and the php modules.

- name: Install http and php etc
  action: yum name=$item state=installed
  with_items:
  - httpd
  - php
  - php-mysql
  - libsemanage-python
  - libselinux-python

- name: insert iptables rule for httpd
  lineinfile: dest=/etc/sysconfig/iptables state=present regexp="$httpd_port" insertafter="^:OUTPUT " line="-A INPUT -p tcp --dport $httpd_port -j ACCEPT"
  register: last_run

- name: Apply iptables rule
  service: name=iptables state=restarted
  only_if: ${last_run.changed}

- name: http service state
  service: name=httpd state=started enabled=yes

- name: Configure SELinux to allow httpd to connect to remote database
  seboolean: name=httpd_can_network_connect_db state=true persistent=yes
@@ -0,0 +1,15 @@
---

# httpd is handled by the base-apache role upstream
- name: Install php and git
  yum: name=$item state=installed
  with_items:
  - php
  - php-mysql
  - git

- name: Configure SELinux to allow httpd to connect to remote database
  seboolean: name=httpd_can_network_connect_db state=true persistent=yes

- name: Copy the code from repository
  git: repo=${repository} version=${webapp_version} dest=/var/www/html/
@@ -1,23 +0,0 @@
---
# This playbook handles the removal of a webserver from the pool.

- name: Remove the code from server
  command: rm -rf /var/www/html/*

- name: Remove server from LB
  lineinfile: dest=/etc/haproxy/haproxy.cfg state=absent regexp="${ansible_hostname}"
  delegate_to: $item
  with_items: ${groups.lbservers}
  register: last_run

- name: disable the server in haproxy
  shell: echo "disable server myapplb/${ansible_hostname}" | socat stdio /var/lib/haproxy/stats
  delegate_to: $item
  with_items: ${groups.lbservers}

- name: Remove the httpd package
  yum: name=httpd state=absent
@@ -1,22 +0,0 @@
---
# This playbook implements a rolling update on the infrastructure; change the value of the serial keyword to specify the number of servers to update at a time.

- name: Remove the code from server
  command: rm -rf /var/www/html/*

- name: disable the server in haproxy
  shell: echo "disable server myapplb/${ansible_hostname}" | socat stdio /var/lib/haproxy/stats
  delegate_to: $item
  with_items: ${groups.lbservers}

- name: Copy the code from repository
  git: repo=${repository} dest=/var/www/html/

- name: Create the index.php file
  template: src=../roles/web/templates/index.php.j2 dest=/var/www/html/index.php

- name: Enable the server in haproxy
  shell: echo "enable server myapplb/${ansible_hostname}" | socat stdio /var/lib/haproxy/stats
  delegate_to: $item
  with_items: ${groups.lbservers}
@@ -1,16 +0,0 @@
<html>
<head>
<title>Ansible Application</title>
</head>
<body>
<br/>
<a href="http://{{ hostvars[inventory_hostname]['ansible_' + iface].ipv4.address }}/index.html">Homepage</a>
<br/>
<?php
print "Hello, World! I am a web server deployed using Ansible and I am: ";
echo exec('hostname');
print "<br/>";
?>
</body>
</html>
@@ -0,0 +1,36 @@
---
# This playbook does a rolling update of the code for all webservers serially (one at a time).
# Change the value of serial: to adjust the number of servers to be updated at a time.
# This playbook also takes the webapp_version variable to specify which git version
# of the test webapp to deploy.

- hosts: webservers
  user: root
  serial: 1
  tasks:

  - name: disable nagios alerts for this host webserver service
    nagios: action=disable_alerts host=$ansible_hostname services=webserver
    delegate_to: $item
    with_items: ${groups.monitoring}

  - name: disable the server in haproxy
    shell: echo "disable server myapplb/${ansible_hostname}" | socat stdio /var/lib/haproxy/stats
    delegate_to: $item
    with_items: ${groups.lbservers}

  - name: Remove the code from server
    command: rm -rf /var/www/html/*

  - name: Copy the code from repository
    git: repo=${repository} version=${webapp_version} dest=/var/www/html/

  - name: Enable the server in haproxy
    shell: echo "enable server myapplb/${ansible_hostname}" | socat stdio /var/lib/haproxy/stats
    delegate_to: $item
    with_items: ${groups.lbservers}

  - name: re-enable nagios alerts
    nagios: action=enable_alerts host=$ansible_hostname services=webserver
    delegate_to: $item
    with_items: ${groups.monitoring}
@@ -1,6 +1,7 @@
 ---
 # This playbook deploys the whole application stack in this site.

-- include: playbooks/db.yml
-- include: playbooks/web.yml
-- include: playbooks/haproxy.yml
+- include: db.yml
+- include: web.yml
+- include: haproxy.yml
+- include: nagios.yml
@@ -0,0 +1,14 @@
---
# This playbook deploys the webservers with httpd and the code.

# fetch monitoring facts for iptables rules
- hosts: monitoring
  tasks:

- hosts: webservers
  user: root

  roles:
  - common
  - base-apache
  - web