Merge pull request #4 from tgerla/nagios

Implement Nagios for lamp_haproxy and move to 1.2 syntax (roles, etc)
pull/63/head
Tim Gerla 11 years ago
commit cd1da33dba
  1. lamp_haproxy/README.md (73)
  2. lamp_haproxy/db.yml (12)
  3. lamp_haproxy/group_vars/webservers (3)
  4. lamp_haproxy/haproxy.yml (12)
  5. lamp_haproxy/nagios.yml (13)
  6. lamp_haproxy/playbooks/add_webservers.yml (10)
  7. lamp_haproxy/playbooks/db.yml (11)
  8. lamp_haproxy/playbooks/haproxy.yml (11)
  9. lamp_haproxy/playbooks/remove_webservers.yml (10)
  10. lamp_haproxy/playbooks/rolling_update.yml (9)
  11. lamp_haproxy/playbooks/web.yml (12)
  12. lamp_haproxy/roles/base-apache/tasks/main.yml (12)
  13. lamp_haproxy/roles/common/tasks/main.yml (26)
  14. lamp_haproxy/roles/common/templates/iptables.j2 (30)
  15. lamp_haproxy/roles/db/tasks/main.yml (8)
  16. lamp_haproxy/roles/haproxy/tasks/main.yml (6)
  17. lamp_haproxy/roles/nagios/files/ansible-managed-services.cfg (39)
  18. lamp_haproxy/roles/nagios/files/localhost.cfg (144)
  19. lamp_haproxy/roles/nagios/files/nagios.cfg (1332)
  20. lamp_haproxy/roles/nagios/handlers/main.yml (7)
  21. lamp_haproxy/roles/nagios/tasks/main.yml (41)
  22. lamp_haproxy/roles/nagios/templates/dbservers.cfg.j2 (25)
  23. lamp_haproxy/roles/nagios/templates/lbservers.cfg.j2 (22)
  24. lamp_haproxy/roles/nagios/templates/webservers.cfg.j2 (25)
  25. lamp_haproxy/roles/web/handlers/main.yml (5)
  26. lamp_haproxy/roles/web/tasks/add_to_lb.yml (15)
  27. lamp_haproxy/roles/web/tasks/copy_code.yml (8)
  28. lamp_haproxy/roles/web/tasks/install_httpd.yml (25)
  29. lamp_haproxy/roles/web/tasks/main.yml (15)
  30. lamp_haproxy/roles/web/tasks/remove_from_lb.yml (23)
  31. lamp_haproxy/roles/web/tasks/rolling_update.yml (22)
  32. lamp_haproxy/roles/web/templates/index.php.j2 (16)
  33. lamp_haproxy/rolling_update.yml (36)
  34. lamp_haproxy/site.yml (7)
  35. lamp_haproxy/web.yml (14)

@@ -1,50 +1,67 @@
 LAMP Stack + HAProxy: Example Playbooks
 -----------------------------------------------------------------------------
-This example is an extension of the simple LAMP deployment. Here we'll deploy a web server with an HAProxy load balancer in front. This set of playbooks also have the capability to dynamically add and remove web server nodes from the deployment. It also includes examples to do a rolling update of a stack without affecting the service.
-(This example requires Ansible 1.2)
-###Setup Entire Site.
-First we configure the entire stack by listing our hosts in the 'hosts' inventory file, grouped by their purpose:
+This example is an extension of the simple LAMP deployment. Here we'll install
+and configure a web server with an HAProxy load balancer in front, and deploy
+an application to the web servers. This set of playbooks also has the
+capability to dynamically add and remove web server nodes from the deployment.
+It also includes examples to do a rolling update of a stack without affecting
+the service.
+
+You can also optionally configure a Nagios monitoring node.
+
+### Initial Site Setup
+
+First we configure the entire stack by listing our hosts in the 'hosts'
+inventory file, grouped by their purpose:

 [webservers]
-web3
-web2
+webserver1
+webserver2

 [dbservers]
-web3
+dbserver

 [lbservers]
 lbserver
+
+[monitoring]
+nagios

 After which we execute the following command to deploy the site:

 ansible-playbook -i hosts site.yml

-The deployment can be verified by accessing the IP address of your load balnacer host in a web browser: http://<ip-of-lb>:8888. Reloading the page should have you hit different webservers.
-
-###Remove a Node
-Removal of a node from the cluster is as simple as executing the following command:
-ansible-playbook -i hosts playbooks/remove_webservers.yml --limit=web2
-
-###Add a Node
-Adding a node to the cluster can be done by executing the following command:
-ansible-playbook -i hosts playbooks/add_webservers.yml --limit=web2
-
-###Rolling Update
-Rolling updates are the preferred way to update the web server software or deployed application, since the load balancer can be dynamically configured to take the hosts to be updated out of the pool. This will keep the service running on other servers so that the users are not interrupted.
-In this example the hosts are updated in serial fashion, which means that only one server will be updated at one time. If you have a lot of web server hosts, this behaviour can be changed by setting the 'serial' keyword in webservers.yml file.
-Once the code has been updated in the source repository for your application which can be defined in the group_vars/all file, execute the following command:
-ansible-playbook -i hosts playbooks/rolling_update.yml
+The deployment can be verified by accessing the IP address of your load
+balancer host in a web browser: http://<ip-of-lb>:8888. Reloading the page
+should have you hit different webservers.
+
+### Removing and Adding a Node
+
+Removal and addition of nodes to the cluster is as simple as editing the
+hosts inventory and re-running:
+
+ansible-playbook -i hosts site.yml
+
+### Rolling Update
+
+Rolling updates are the preferred way to update the web server software or
+deployed application, since the load balancer can be dynamically configured
+to take the hosts to be updated out of the pool. This will keep the service
+running on other servers so that the users are not interrupted.
+
+In this example the hosts are updated in serial fashion, which means that
+only one server will be updated at one time. If you have a lot of web server
+hosts, this behaviour can be changed by setting the 'serial' keyword in the
+webservers.yml file.
+
+Once the code has been updated in the source repository for your application,
+which can be defined in the group_vars/all file, execute the following
+command:
+
+ansible-playbook -i hosts rolling_update.yml
+
+You can optionally pass -e webapp_version=xxx to the rolling_update
+playbook to specify a specific version of the example webapp to deploy.
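The rolling-update choreography described above (silence monitoring, drain a host from the load balancer, update it, re-enable it, one host at a time) can be sketched in plain Python. The helper functions here are illustrative stand-ins for the nagios/haproxy/git tasks in the playbooks, not real Ansible calls:

```python
# Illustrative sketch of the rolling-update flow; the helpers below are
# hypothetical stand-ins that just record what would happen, in order.
actions = []

def disable_alerts(host):  actions.append(("silence", host))    # nagios: action=disable_alerts
def drain(host):           actions.append(("drain", host))      # haproxy: disable server via socat
def deploy(host):          actions.append(("deploy", host))     # git: check out webapp_version
def enable(host):          actions.append(("enable", host))     # haproxy: enable server
def enable_alerts(host):   actions.append(("unsilence", host))  # nagios: action=enable_alerts

def rolling_update(webservers, serial=1):
    """Update hosts in batches of `serial`; the rest keep serving traffic."""
    for i in range(0, len(webservers), serial):
        for host in webservers[i:i + serial]:
            disable_alerts(host)
            drain(host)
            deploy(host)
            enable(host)
            enable_alerts(host)

rolling_update(["webserver1", "webserver2"], serial=1)
print(actions[0])  # ('silence', 'webserver1')
```

With `serial: 1`, each host goes through the full drain/update/re-enable cycle before the next host is touched, so at most one backend is ever out of the pool.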

@@ -0,0 +1,12 @@
---
# This playbook deploys MySQL and configures the database on the db node(s)

# fetch monitoring facts for iptables rules
- hosts: monitoring
  tasks:

- hosts: dbservers
  user: root

  roles:
    - common
    - db

@@ -3,3 +3,6 @@
 # Ethernet interface on which the web server should listen
 iface: eth0
+
+# this is V5 of the test webapp.
+webapp_version: 351e47276cc66b018f4890a04709d4cc3d3edb0d

@@ -0,0 +1,12 @@
---
# Playbook for HAProxy operations

# fetch monitoring facts for iptables rules
- hosts: monitoring
  tasks:

- hosts: lbservers
  user: root

  roles:
    - common
    - haproxy

@@ -0,0 +1,13 @@
---
# This playbook configures the monitoring node

# trigger fact-gathering for all hosts
- hosts: all
  tasks:

- hosts: monitoring
  user: root

  roles:
    - common
    - base-apache
    - nagios

@@ -1,10 +0,0 @@
---
# This Playbook adds a webserver into the web cluster
- hosts: webservers
  user: root
  serial: 1
  tasks:
    - include: ../roles/web/tasks/install_httpd.yml
    - include: ../roles/web/tasks/copy_code.yml
    - include: ../roles/web/tasks/add_to_lb.yml

@@ -1,11 +0,0 @@
---
# This playbook deploys MySQL and configures the database on the db node(s)
- hosts: dbservers
  user: root
  tasks:
    - include: ../roles/common/tasks/main.yml
    - include: ../roles/db/tasks/main.yml
  handlers:
    - include: ../roles/db/handlers/main.yml
    - include: ../roles/common/handlers/main.yml

@@ -1,11 +0,0 @@
---
# Playbook for HAProxy operations
- hosts: lbservers
  user: root
  tasks:
    - include: ../roles/common/tasks/main.yml
    - include: ../roles/haproxy/tasks/main.yml
  handlers:
    - include: ../roles/haproxy/handlers/main.yml
    - include: ../roles/common/handlers/main.yml

@@ -1,10 +0,0 @@
---
# This playbook removes a webserver from the pool serially.
# Change the value of serial: to adjust the number of servers
# to be removed at a time.
- hosts: webservers
  user: root
  serial: 1
  tasks:
    - include: ../roles/web/tasks/remove_from_lb.yml

@@ -1,9 +0,0 @@
---
# This playbook does a rolling update of the code for all webservers serially (one at a time).
# Change the value of serial: to adjust the number of servers to be updated at a time.
- hosts: webservers
  user: root
  serial: 1
  tasks:
    - include: ../roles/web/tasks/rolling_update.yml

@@ -1,12 +0,0 @@
---
# This playbook deploys the webservers with httpd and the code.
- hosts: webservers
  user: root
  tasks:
    - include: ../roles/common/tasks/main.yml
    - include: ../roles/web/tasks/install_httpd.yml
    - include: ../roles/web/tasks/copy_code.yml
  handlers:
    - include: ../roles/web/handlers/main.yml
    - include: ../roles/common/handlers/main.yml

@@ -0,0 +1,12 @@
---
# This playbook installs httpd

- name: Install http and php etc
  yum: name=$item state=installed
  with_items:
    - httpd
    - libsemanage-python
    - libselinux-python

- name: http service state
  service: name=httpd state=started enabled=yes

@@ -1,12 +1,28 @@
 ---
 # This playbook contains common plays that will run on all nodes.

+- name: Download the EPEL repository RPM
+  get_url: url=http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm dest=/tmp/ force=yes
+
+- name: Install EPEL RPM
+  yum: name=/tmp/epel-release-6-8.noarch.rpm state=installed
+
+- name: install some useful nagios plugins
+  yum: name=$item state=present
+  with_items:
+    - nagios-nrpe
+    - nagios-plugins-swap
+    - nagios-plugins-users
+    - nagios-plugins-procs
+    - nagios-plugins-load
+    - nagios-plugins-disk
+
 - name: Install ntp
   yum: name=ntp state=present
   tags: ntp

 - name: Configure ntp file
-  template: src=../roles/common/templates/ntp.conf.j2 dest=/etc/ntp.conf
+  template: src=ntp.conf.j2 dest=/etc/ntp.conf
   tags: ntp
   notify: restart ntp
@@ -14,8 +30,6 @@
   service: name=ntpd state=started enabled=true
   tags: ntp

-- name: Download the EPEL repository RPM
-  get_url: url=http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm dest=/tmp/ force=yes
-
-- name: Install EPEL RPM
-  yum: name=/tmp/epel-release-6-8.noarch.rpm state=installed
-
 - name: insert iptables template
   template: src=iptables.j2 dest=/etc/sysconfig/iptables
   notify: restart iptables

@@ -0,0 +1,30 @@
# {{ ansible_managed }}
# Manual customization of this file is not recommended.

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]

{% if (inventory_hostname in groups['webservers']) or (inventory_hostname in groups['monitoring']) %}
-A INPUT -p tcp --dport 80 -j ACCEPT
{% endif %}

{% if inventory_hostname in groups['dbservers'] %}
-A INPUT -p tcp --dport 3306 -j ACCEPT
{% endif %}

{% if inventory_hostname in groups['lbservers'] %}
-A INPUT -p tcp --dport {{ listenport }} -j ACCEPT
{% endif %}

{% for host in groups['monitoring'] %}
-A INPUT -p tcp -s {{ hostvars[host].ansible_default_ipv4.address }} --dport 5666 -j ACCEPT
{% endfor %}

-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
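The template's logic boils down to: open port 80 for web and monitoring hosts, 3306 for database hosts, the HAProxy listen port for load balancers, and NRPE's 5666 only to the monitoring hosts' addresses. A minimal Python sketch of the per-host decision (the inventory data here is hypothetical, mirroring the example 'hosts' file):

```python
# Hypothetical inventory mirroring the groups in the example 'hosts' file.
groups = {
    "webservers": ["webserver1", "webserver2"],
    "dbservers": ["dbserver"],
    "lbservers": ["lbserver"],
    "monitoring": ["nagios"],
}

def open_ports(host, listenport=8888):
    """TCP ports the iptables template would open for `host` (NRPE rule aside)."""
    ports = []
    if host in groups["webservers"] or host in groups["monitoring"]:
        ports.append(80)        # apache on web and nagios nodes
    if host in groups["dbservers"]:
        ports.append(3306)      # mysql
    if host in groups["lbservers"]:
        ports.append(listenport)  # haproxy frontend
    return ports

print(open_ports("lbserver"))  # [8888]
```

Because the NRPE rule is restricted by source address, the monitoring hosts must have facts gathered before any other host renders this template; that is why the playbooks above start with a bare `- hosts: monitoring` play.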

@@ -2,7 +2,7 @@
 # This playbook will install MySQL and create db user and give permissions.

 - name: Install Mysql package
-  action: yum pkg=$item state=installed
+  yum: pkg=$item state=installed
   with_items:
     - mysql-server
     - MySQL-python
@@ -13,17 +13,13 @@
   seboolean: name=mysql_connect_any state=true persistent=yes

 - name: Create Mysql configuration file
-  action: template src=../roles/db/templates/my.cnf.j2 dest=/etc/my.cnf
+  template: src=my.cnf.j2 dest=/etc/my.cnf
   notify:
     - restart mysql

 - name: Start Mysql Service
   service: name=mysqld state=started enabled=true

-- name: insert iptables rule
-  lineinfile: dest=/etc/sysconfig/iptables state=present regexp="$mysql_port" insertafter="^:OUTPUT " line="-A INPUT -p tcp --dport $mysql_port -j ACCEPT"
-  notify: restart iptables
-
 - name: Create Application Database
   mysql_db: name=$dbname state=present

@@ -7,10 +7,6 @@
     - haproxy
     - socat

-- name: Open firewall port for haproxy.
-  lineinfile: dest=/etc/sysconfig/iptables state=present regexp="$listenport" insertafter="^:OUTPUT " line="-A INPUT -p tcp --dport $listenport -j ACCEPT"
-  notify: restart iptables
-
 - name: Configure the haproxy cnf file with hosts
-  template: src=../roles/haproxy/templates/haproxy.cfg.j2 dest=/etc/haproxy/haproxy.cfg
+  template: src=haproxy.cfg.j2 dest=/etc/haproxy/haproxy.cfg
   notify: restart haproxy

@@ -0,0 +1,39 @@
# {{ ansible_managed }}

# service checks to be applied to all hosts

define service {
        use                     local-service
        host_name               localhost
        service_description     Root Partition
        check_command           check_local_disk!20%!10%!/
}

define service {
        use                     local-service
        host_name               *
        service_description     Current Users
        check_command           check_local_users!20!50
}

define service {
        use                     local-service
        host_name               *
        service_description     Total Processes
        check_command           check_local_procs!250!400!RSZDT
}

define service {
        use                     local-service
        host_name               *
        service_description     Current Load
        check_command           check_local_load!5.0,4.0,3.0!10.0,6.0,4.0
}

define service {
        use                     local-service
        host_name               *
        service_description     Swap Usage
        check_command           check_local_swap!20!10
}

@@ -0,0 +1,144 @@
###############################################################################
# LOCALHOST.CFG - SAMPLE OBJECT CONFIG FILE FOR MONITORING THIS MACHINE
#
# Last Modified: 05-31-2007
#
# NOTE: This config file is intended to serve as an *extremely* simple
#       example of how you can create configuration entries to monitor
#       the local (Linux) machine.
#
###############################################################################

###############################################################################
###############################################################################
#
# HOST DEFINITION
#
###############################################################################
###############################################################################

# Define a host for the local machine

define host{
        use                     linux-server            ; Name of host template to use
                                                        ; This host definition will inherit all variables that are defined
                                                        ; in (or inherited by) the linux-server host template definition.
        host_name               localhost
        alias                   localhost
        address                 127.0.0.1
        }

###############################################################################
###############################################################################
#
# HOST GROUP DEFINITION
#
###############################################################################
###############################################################################

# Define an optional hostgroup for Linux machines

define hostgroup{
        hostgroup_name  linux-servers           ; The name of the hostgroup
        alias           Linux Servers           ; Long name of the group
        members         localhost               ; Comma separated list of hosts that belong to this group
        }

###############################################################################
###############################################################################
#
# SERVICE DEFINITIONS
#
###############################################################################
###############################################################################

# Define a service to "ping" the local machine

define service{
        use                     local-service           ; Name of service template to use
        host_name               localhost
        service_description     PING
        check_command           check_ping!100.0,20%!500.0,60%
        }

# Define a service to check the disk space of the root partition
# on the local machine. Warning if < 20% free, critical if
# < 10% free space on partition.

define service{
        use                     local-service           ; Name of service template to use
        host_name               localhost
        service_description     Root Partition
        check_command           check_local_disk!20%!10%!/
        }

# Define a service to check the number of currently logged in
# users on the local machine. Warning if > 20 users, critical
# if > 50 users.

define service{
        use                     local-service           ; Name of service template to use
        host_name               localhost
        service_description     Current Users
        check_command           check_local_users!20!50
        }

# Define a service to check the number of currently running procs
# on the local machine. Warning if > 250 processes, critical if
# > 400 processes.

define service{
        use                     local-service           ; Name of service template to use
        host_name               localhost
        service_description     Total Processes
        check_command           check_local_procs!250!400!RSZDT
        }

# Define a service to check the load on the local machine.

define service{
        use                     local-service           ; Name of service template to use
        host_name               localhost
        service_description     Current Load
        check_command           check_local_load!5.0,4.0,3.0!10.0,6.0,4.0
        }

# Define a service to check the swap usage of the local machine.
# Critical if less than 10% of swap is free, warning if less than 20% is free

define service{
        use                     local-service           ; Name of service template to use
        host_name               localhost
        service_description     Swap Usage
        check_command           check_local_swap!20!10
        }

# Define a service to check SSH on the local machine.
# Disable notifications for this service by default, as not all users may have SSH enabled.

define service{
        use                     local-service           ; Name of service template to use
        host_name               localhost
        service_description     SSH
        check_command           check_ssh
        notifications_enabled   0
        }

File diff suppressed because it is too large

@@ -0,0 +1,7 @@
---
# handlers for nagios

- name: restart httpd
  service: name=httpd state=restarted

- name: restart nagios
  service: name=nagios state=restarted

@@ -0,0 +1,41 @@
---
# This playbook will install nagios

- name: install nagios
  yum: pkg=$item state=installed
  with_items:
    - nagios
    - nagios-plugins
    - nagios-plugins-nrpe
    - nagios-plugins-ping
    - nagios-plugins-ssh
    - nagios-plugins-http
    - nagios-plugins-mysql
    - nagios-devel
  notify: restart httpd

- name: create nagios config dir
  file: path=/etc/nagios/ansible-managed state=directory

- name: configure nagios
  copy: src=nagios.cfg dest=/etc/nagios/nagios.cfg
  notify: restart nagios

- name: configure localhost monitoring
  copy: src=localhost.cfg dest=/etc/nagios/objects/localhost.cfg
  notify: restart nagios

- name: configure nagios services
  copy: src=ansible-managed-services.cfg dest=/etc/nagios/

- name: create the nagios object files
  template: src={{ item + ".j2" }}
            dest=/etc/nagios/ansible-managed/{{ item }}
  with_items:
    - webservers.cfg
    - dbservers.cfg
    - lbservers.cfg
  notify: restart nagios

- name: start nagios
  service: name=nagios state=started enabled=yes
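The `with_items` loop in the "create the nagios object files" task renders one template per object file, mapping each item's `.j2` source to a rendered file under the `ansible-managed` directory created earlier in the play. The expansion amounts to (a plain-Python sketch, not Ansible code):

```python
# The three object files the task iterates over.
items = ["webservers.cfg", "dbservers.cfg", "lbservers.cfg"]

# Each item maps a .j2 source template to its rendered destination under
# the /etc/nagios/ansible-managed directory.
pairs = [(item + ".j2", "/etc/nagios/ansible-managed/" + item) for item in items]

for src, dest in pairs:
    print(src, "->", dest)
```

For the rendered files to be picked up, Nagios's main config must include that directory; here the role ships its own `nagios.cfg` for exactly that reason.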

@@ -0,0 +1,25 @@
# {{ ansible_managed }}

define hostgroup {
        hostgroup_name  dbservers
        alias           Database Servers
}

{% for host in groups['dbservers'] %}
define host {
        use             linux-server
        host_name       {{ host }}
        alias           {{ host }}
        address         {{ hostvars[host].ansible_default_ipv4.address }}
        hostgroups      dbservers
}
{% endfor %}

#define service {
#       use                     local-service
#       hostgroup_name          dbservers
#       service_description     MySQL Database Server
#       check_command           check_mysql
#       notifications_enabled   0
#}

@@ -0,0 +1,22 @@
# {{ ansible_managed }}

define hostgroup {
        hostgroup_name  loadbalancers
        alias           Load Balancers
}

{% for host in groups['lbservers'] %}
define host {
        use             linux-server
        host_name       {{ host }}
        alias           {{ host }}
        address         {{ hostvars[host].ansible_default_ipv4.address }}
        hostgroups      loadbalancers
}

define service {
        use                     local-service
        host_name               {{ host }}
        service_description     HAProxy Load Balancer
        check_command           check_http!-p{{ hostvars[host].listenport }}
}
{% endfor %}

@@ -0,0 +1,25 @@
# {{ ansible_managed }}

define hostgroup {
        hostgroup_name  webservers
        alias           Web Servers
}

{% for host in groups['webservers'] %}
define host {
        use             linux-server
        host_name       {{ host }}
        alias           {{ host }}
        address         {{ hostvars[host].ansible_default_ipv4.address }}
        hostgroups      webservers
}
{% endfor %}

# service checks to be applied to the web server
define service {
        use                     local-service
        hostgroup_name          webservers
        service_description     webserver
        check_command           check_http
        notifications_enabled   0
}
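The host loop in these templates expands to one `define host` block per inventory member, pulling each host's address out of the gathered facts. A rough Python rendering of that expansion, with plain string formatting standing in for Jinja2 (the host names and addresses here are made up):

```python
# Hypothetical host -> address data standing in for Ansible inventory facts.
hosts = {"webserver1": "10.0.0.11", "webserver2": "10.0.0.12"}

def render_hosts(hostgroup, hosts):
    """Mimic the template's per-host expansion with plain string formatting."""
    blocks = []
    for name, addr in sorted(hosts.items()):
        blocks.append(
            "define host {\n"
            "        use             linux-server\n"
            f"        host_name       {name}\n"
            f"        alias           {name}\n"
            f"        address         {addr}\n"
            f"        hostgroups      {hostgroup}\n"
            "}"
        )
    return "\n\n".join(blocks)

print(render_hosts("webservers", hosts))
```

Since the service check is attached to the hostgroup rather than to individual hosts, adding a webserver to the inventory and re-running `site.yml` is enough for it to be monitored.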

@@ -1,5 +0,0 @@
---
# Handler for the web tier
- name: restart iptables
  service: name=iptables state=restarted

@@ -1,15 +0,0 @@
---
# This Playbook handles the addition of a web server to the pool.
- name: Add server to LB
  lineinfile: dest=/etc/haproxy/haproxy.cfg state=present regexp="${ansible_hostname}" line="server ${ansible_hostname} ${hostvars.{$inventory_hostname}.ansible_$iface.ipv4.address}:${httpd_port}"
  delegate_to: $item
  with_items: ${groups.lbservers}
  register: last_run

- name: Reload the haproxy
  service: name=haproxy state=reloaded
  delegate_to: $item
  with_items: ${groups.lbservers}
  only_if: ${last_run.changed}

@@ -1,8 +0,0 @@
---
# This Playbook is responsible for copying the latest dev/production code from the version control system.
- name: Copy the code from repository
  git: repo=${repository} dest=/var/www/html/

- name: Create the index.php file
  template: src=../roles/web/templates/index.php.j2 dest=/var/www/html/index.php

@@ -1,25 +0,0 @@
---
# This playbook installs http and the php modules.
- name: Install http and php etc
  action: yum name=$item state=installed
  with_items:
    - httpd
    - php
    - php-mysql
    - libsemanage-python
    - libselinux-python

- name: insert iptables rule for httpd
  lineinfile: dest=/etc/sysconfig/iptables state=present regexp="$httpd_port" insertafter="^:OUTPUT " line="-A INPUT -p tcp --dport $httpd_port -j ACCEPT"
  register: last_run

- name: Apply iptable rule
  service: name=iptables state=restarted
  only_if: ${last_run.changed}

- name: http service state
  service: name=httpd state=started enabled=yes

- name: Configure SELinux to allow httpd to connect to remote database
  seboolean: name=httpd_can_network_connect_db state=true persistent=yes

@@ -0,0 +1,15 @@
---
# httpd is handled by the base-apache role upstream

- name: Install php and git
  yum: name=$item state=installed
  with_items:
    - php
    - php-mysql
    - git

- name: Configure SELinux to allow httpd to connect to remote database
  seboolean: name=httpd_can_network_connect_db state=true persistent=yes

- name: Copy the code from repository
  git: repo=${repository} version=${webapp_version} dest=/var/www/html/

@@ -1,23 +0,0 @@
---
# This playbook handles the removal of a webserver from the pool.
- name: Remove the code from server
  command: rm -rf /var/www/html/*

- name: Remove server from LB
  lineinfile: dest=/etc/haproxy/haproxy.cfg state=absent regexp="${ansible_hostname}"
  delegate_to: $item
  with_items: ${groups.lbservers}
  register: last_run

- name: disable the server in haproxy
  shell: echo "disable server myapplb/${ansible_hostname}" | socat stdio /var/lib/haproxy/stats
  delegate_to: $item
  with_items: ${groups.lbservers}

- name: Remove the httpd package
  yum: name=httpd state=absent

@@ -1,22 +0,0 @@
---
# This Playbook implements a rolling update on the infrastructure; change the value of the serial keyword to specify how many servers are updated at a time.
- name: Remove the code from server
  command: rm -rf /var/www/html/*

- name: disable the server in haproxy
  shell: echo "disable server myapplb/${ansible_hostname}" | socat stdio /var/lib/haproxy/stats
  delegate_to: $item
  with_items: ${groups.lbservers}

- name: Copy the code from repository
  git: repo=${repository} dest=/var/www/html/

- name: Create the index.php file
  template: src=../roles/web/templates/index.php.j2 dest=/var/www/html/index.php

- name: Enable the server in haproxy
  shell: echo "enable server myapplb/${ansible_hostname}" | socat stdio /var/lib/haproxy/stats
  delegate_to: $item
  with_items: ${groups.lbservers}

@@ -1,16 +0,0 @@
<html>
  <head>
    <title>Ansible Application</title>
  </head>
  <body>
    </br>
    <a href=http://{{ hostvars[inventory_hostname]['ansible_' + iface].ipv4.address }}/index.html>Homepage</a>
    </br>
    <?php
      Print "Hello, World! I am a web server deployed using Ansible and I am : ";
      echo exec('hostname');
      Print "</BR>";
    ?>
  </body>
</html>

@@ -0,0 +1,36 @@
---
# This playbook does a rolling update of the code for all webservers serially (one at a time).
# Change the value of serial: to adjust the number of servers to be updated at a time.

# This playbook also takes the webapp_version variable to specify which git version
# of the test webapp to deploy.

- hosts: webservers
  user: root
  serial: 1
  tasks:
    - name: disable nagios alerts for this host webserver service
      nagios: action=disable_alerts host=$ansible_hostname services=webserver
      delegate_to: $item
      with_items: ${groups.monitoring}

    - name: disable the server in haproxy
      shell: echo "disable server myapplb/${ansible_hostname}" | socat stdio /var/lib/haproxy/stats
      delegate_to: $item
      with_items: ${groups.lbservers}

    - name: Remove the code from server
      command: rm -rf /var/www/html/*

    - name: Copy the code from repository
      git: repo=${repository} version=${webapp_version} dest=/var/www/html/

    - name: Enable the server in haproxy
      shell: echo "enable server myapplb/${ansible_hostname}" | socat stdio /var/lib/haproxy/stats
      delegate_to: $item
      with_items: ${groups.lbservers}

    - name: re-enable nagios alerts
      nagios: action=enable_alerts host=$ansible_hostname services=webserver
      delegate_to: $item
      with_items: ${groups.monitoring}

@@ -1,6 +1,7 @@
 ---
 #This Playbook deploys the whole application stack in this site.
-- include: playbooks/db.yml
-- include: playbooks/web.yml
-- include: playbooks/haproxy.yml
+- include: db.yml
+- include: web.yml
+- include: haproxy.yml
+- include: nagios.yml

@@ -0,0 +1,14 @@
---
# This playbook deploys the webservers with httpd and the code.

# fetch monitoring facts for iptables rules
- hosts: monitoring
  tasks:

- hosts: webservers
  user: root

  roles:
    - common
    - base-apache
    - web