mongod 1.2 update

pull/63/head
bennojoy 11 years ago
parent 34fb6ef903
commit 0defd33f42
  1. 97
      mongodb/README.md
  2. 8
      mongodb/group_vars/all
  3. 35
      mongodb/hosts
  4. BIN
      mongodb/images/check.png
  5. BIN
      mongodb/images/nosql_primer.png
  6. BIN
      mongodb/images/replica_set.png
  7. BIN
      mongodb/images/scale.png
  8. BIN
      mongodb/images/sharding.png
  9. BIN
      mongodb/images/site.png
  10. 13
      mongodb/playbooks/addnode.yml
  11. 9
      mongodb/playbooks/addshard.yml
  12. 10
      mongodb/playbooks/common.yml
  13. 7
      mongodb/playbooks/mongoc.yml
  14. 7
      mongodb/playbooks/mongod.yml
  15. 7
      mongodb/playbooks/mongos.yml
  16. 13
      mongodb/playbooks/nodes.yml
  17. 6
      mongodb/playbooks/test.yml
  18. 19
      mongodb/roles/common/tasks/main.yml
  19. 26
      mongodb/roles/common/templates/epel.repo.j2
  20. 2
      mongodb/roles/common/templates/hosts.j2
  21. 8
      mongodb/roles/common/templates/iptables.j2
  22. 11
      mongodb/roles/mongoc/tasks/main.yml
  23. 2
      mongodb/roles/mongoc/templates/mongoc.j2
  24. 15
      mongodb/roles/mongod/tasks/addshard.yml
  25. 28
      mongodb/roles/mongod/tasks/main.yml
  26. 15
      mongodb/roles/mongod/tasks/shards.yml
  27. 2
      mongodb/roles/mongod/templates/mongod.j2
  28. 2
      mongodb/roles/mongod/templates/repset_init.j2
  29. 10
      mongodb/roles/mongos/tasks/main.yml
  30. 2
      mongodb/roles/mongos/templates/mongos.conf.j2
  31. 2
      mongodb/roles/mongos/templates/mongos.j2
  32. 22
      mongodb/site.yml

@ -1,80 +1,106 @@
##Deploying a sharded production ready MongoDB cluster with Ansible
------------------------------------------------------------------------------
####A Primer into the MongoDB NoSQL database.
![Alt text](/images/nosql_primer.png "Primer NoSQL")
The above diagram shows how MongoDB differs from the traditional relational database model. In an RDBMS the data of a user is stored in a table, and the user records are stored in rows/columns, while in MongoDB the 'table' is replaced by a 'collection' and the individual 'records' are called 'documents'.
One thing to note is that the data is stored as key/value pairs in BSON format.
Another thing to note is that NoSQL has a looser consistency model; as an example, the second document in the users collection has an additional field of 'last name'. Due to this flexibility the NoSQL database model can give us:
Better horizontal scaling capability.
MongoDB also has inbuilt support for
Data replication & HA,
which makes it a good choice for users who have very large data to handle and a lesser requirement for ACID.
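The flexible schema described above can be sketched with plain objects. This is a toy illustration (the 'users' collection and field names are hypothetical, not taken from the playbooks):

```javascript
// A hypothetical 'users' collection, modeled as plain objects.
// Unlike rows in an RDBMS table, documents in one collection need not
// share a schema: the second document carries an extra 'last_name' field.
const users = [
  { _id: 1, name: "ben", city: "boston" },
  { _id: 2, name: "joy", last_name: "smith", city: "chicago" },
];

// Each document is just a set of key/value pairs (stored as BSON on disk).
for (const doc of users) {
  console.log(Object.keys(doc).join(", "));
}
```

No schema migration is needed to add the extra field; it simply appears on the documents that have it.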
#### MongoDB's Data Replication.
![Alt text](/images/replica_set.png "Replica Set")
Data redundancy is achieved in MongoDB via replica sets. As the figure above shows, a single replica set consists of a replication master (active) and several other replication slaves (passive). All database operations like add/delete/update happen on the replication master, and the master replicates the data to the slave nodes. mongod is the process responsible for all the database activities as well as the replication processes. The minimum recommended number of slave servers is 3.
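The master-to-slave flow above can be sketched as a toy model. This is an assumed simplification for illustration only, not MongoDB's actual oplog implementation:

```javascript
// Toy replica set: writes go to the primary, which records each operation
// in an oplog; secondaries replay the oplog in order to catch up.
const oplog = [];
const primary = { data: {} };
const secondaries = [{ data: {} }, { data: {} }, { data: {} }];

function writeToPrimary(key, value) {
  primary.data[key] = value;               // all writes hit the primary
  oplog.push({ op: "set", key, value });   // and are recorded in the oplog
}

function replicate() {
  for (const s of secondaries) {
    for (const entry of oplog) s.data[entry.key] = entry.value; // replay
  }
}

writeToPrimary("user:1", "ben");
writeToPrimary("user:2", "joy");
replicate();
console.log(secondaries.every(s => s.data["user:1"] === "ben")); // true
```

If the primary fails, any secondary that has replayed the full oplog holds a complete copy of the data and can take over.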
#### MongoDB's Sharding (Horizontal Scaling).
![Alt text](/images/sharding.png "Sharding")
Sharding allows us to achieve a very high performing database, by partitioning the data into separate chunks and allocating different ranges of chunks to different shard servers. The figure above shows a collection of 90 documents which has been sharded across the three shard servers, the first shard getting the range 1-29, etc. When a client wants to access a certain document it contacts the query router (mongos process), which in turn contacts the 'configuration node' (a lightweight mongod process), which keeps a record of which ranges of chunks are distributed across which shards.
Please do note that every shard server should be backed by a replica set, so that copies of the data are available when data is written/queried. So in a three shard deployment we would require 3 replica sets, and the primary of each would act as the shard server.
Here are the basic steps of how sharding works:
1) A new database is created, and collections are added.
2) New documents get updated as and when clients update; all the new documents go into a single shard.
3) When the size of a collection in a shard exceeds the 'chunk_size', the collection is split and balanced across the shards.
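The routing step for the 90-document example above can be sketched as follows. The `configMap` structure and `route` function are hypothetical illustrations of what the configuration nodes track, not MongoDB's internal representation:

```javascript
// Toy range-based routing table, as the config servers might record it:
// each chunk maps a half-open range of shard-key values to a shard.
const configMap = [
  { min: 1, max: 30, shard: "shard0" },   // documents 1-29
  { min: 30, max: 60, shard: "shard1" },  // documents 30-59
  { min: 60, max: 91, shard: "shard2" },  // documents 60-90
];

// The query router (mongos) consults the map to route each lookup.
function route(key) {
  const chunk = configMap.find(c => key >= c.min && key < c.max);
  return chunk ? chunk.shard : null;
}

console.log(route(15)); // shard0
console.log(route(42)); // shard1
console.log(route(90)); // shard2
```

When a chunk grows past 'chunk_size', the balancer splits its range in two and may migrate one half to another shard, updating this map.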
##Deploy MongoDB cluster via Ansible.
--------------------------------------------
### Deploy the Cluster.
![Alt text](/images/site.png "Site")
The above diagram illustrates the deployment model for a MongoDB cluster via Ansible. This deployment model focuses on deploying three shard servers, each having a replica set; the backup replica servers are the other two shard primaries. The configuration servers are co-located with the shards. The mongos servers are best deployed on separate servers. This is the minimum recommended configuration for a production grade MongoDB deployment.
Please note that the playbooks are capable of deploying an N node cluster, not necessarily three. Also, all the processes are secured using keyfiles.
####Pre-Requisites
Edit the group_vars/all file to reflect the below variables.
1) iface: 'eth1' # the interface to be used for all communication.
2) mongod_ports: # The hostname and tcp/ip port combination.
   mongo1: 2700
   mongo2: 2701
   mongo3: 2702
3) The default directory for storing data is /data; do change it if required, and make sure it has sufficient space (10G recommended).
###Once the pre-requisites are done, we can proceed with the site deployment. The following example deploys a three node MongoDB Cluster.
The inventory file looks as follows:
#The site wide list of mongodb servers
[mongo_servers]
mongo1
mongo2
mongo3
#The list of servers where replication should happen, including the master server.
[replication_servers]
mongo1
mongo2
mongo3
#The list of mongodb configuration servers, make sure it is 1 or 3
[mongoc_servers]
mongo1
mongo2
mongo3
#The list of servers where mongos servers would run.
[mongos_servers]
mongos1
mongos2
Build the site with the following command:
ansible-playbook -i hosts site.yml
###Verifying the deployed MongoDB Cluster
Once completed, we can check replica set availability by connecting to an individual primary replica set node: 'mongo --host 192.168.1.1 --port 2700'
and issuing the command to query the status of the replica set; we should get similar output:
@ -112,7 +138,7 @@ and issue the command to query the status of replication set, we should get a si
}
We can check the status of the shards as follows: connect to the mongos service with 'mongo --host 192.168.1.1 --port 8888'
and issue the following command to get the status of the Shards.
@ -127,7 +153,10 @@ and issue the following command to get the status of the Shards.
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
###We can also make sure sharding works by creating a database and a collection, populating it with documents, and checking if the chunks of the collection are balanced equally across nodes. The below diagram illustrates the verification step.
![Alt text](/images/check.png "check")
The above mentioned steps can be tested with an automated playbook.
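The balance check the verification performs can be sketched as follows. This is an assumed heuristic for illustration, not the playbook's exact logic: tally how many chunks of the collection each shard holds and verify no shard is far ahead of the others:

```javascript
// Toy balance check: given chunk counts per shard (as the shard status
// output would report them), verify chunks are spread roughly evenly.
function isBalanced(chunkCounts, maxSkew = 1) {
  const counts = Object.values(chunkCounts);
  return Math.max(...counts) - Math.min(...counts) <= maxSkew;
}

console.log(isBalanced({ shard0: 3, shard1: 3, shard2: 2 })); // true
console.log(isBalanced({ shard0: 7, shard1: 1, shard2: 0 })); // false
```

A healthy sharded collection should look like the first case once the balancer has finished migrating chunks.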
@ -158,7 +187,9 @@ Once the playbook completes, we check if the sharding has succeeded by logging on
### Scaling the Cluster
![Alt text](/images/scale.png "scale")
To add a new node to the configured MongoDB Cluster, set up the inventory file as follows:
@ -172,6 +203,7 @@ To add a new node to the configured MongoDb Cluster, setup the inventory file as
#The list of servers where replication should happen, make sure the new node is listed here.
[replication_servers]
mongo4
mongo3
mongo1
mongo2
@ -183,11 +215,12 @@ To add a new node to the configured MongoDb Cluster, setup the inventory file as
#The list of servers where mongos servers would run.
[mongos_servers]
mongos1
mongos2
Make sure you have the new node added in the replication_servers section and execute the following command:
ansible-playbook -i hosts site.yml
###Verification.

@ -19,8 +19,8 @@ iface: eth1
mongo_admin_pass: 123456
mongod_ports:
hadoop1: 2700
hadoop2: 2701
hadoop3: 2702
hadoop4: 2703

@ -1,29 +1,28 @@
#The site wide list of mongodb servers
[mongo_servers]
hadoop1
hadoop2
hadoop3
hadoop4
#The list of servers where replication should happen, by default include all servers
[replication_servers]
hadoop1
hadoop2
hadoop3
hadoop4
#The list of mongodb configuration servers, make sure it is 1 or 3
[mongoc_servers]
hadoop1
hadoop2
hadoop3
#The list of servers where mongos servers would run.
[mongos_servers]
hadoop1
hadoop2


@ -1,13 +0,0 @@
---
#This playbook is used to add a new node to the mongodb cluster
- hosts: all
  tasks:
    - include: ../roles/common/tasks/main.yml
  handlers:
    - include: ../roles/common/handlers/main.yml

- hosts: ${servername}
  tasks:
    - include: ../roles/mongod/tasks/main.yml
    - include: ../roles/mongod/tasks/addshard.yml

@ -1,9 +0,0 @@
---
# This playbook adds the shards to the mongos service
- hosts: mongosservers
- hosts: mongoservers
  user: root
  tasks:
    - include: ../roles/mongod/tasks/addshard.yml

@ -1,10 +0,0 @@
---
# Deploys all common plays for the site
- hosts: all
  user: root
  tasks:
    - include: ../roles/common/tasks/main.yml
  handlers:
    - include: ../roles/common/handlers/main.yml

@ -1,7 +0,0 @@
---
# Deploys the mongodb configuration db servers
- hosts: mongocservers
  user: root
  tasks:
    - include: ../roles/mongoc/tasks/main.yml

@ -1,7 +0,0 @@
---
# Deploys the mongodb service and sets up replication sets
- hosts: mongoservers
  user: root
  tasks:
    - include: ../roles/mongod/tasks/main.yml

@ -1,7 +0,0 @@
---
# Responsible for setting up and configuring mongos services
- hosts: mongosservers
  user: root
  tasks:
    - include: ../roles/mongos/tasks/main.yml

@ -0,0 +1,13 @@
---
#This playbook is used to add a new node to the mongodb cluster
- hosts: all
  tasks:
    - include: roles/common/tasks/main.yml
  handlers:
    - include: roles/common/handlers/main.yml

- hosts: ${servername}
  tasks:
    - include: roles/mongod/tasks/main.yml
    - include: roles/mongod/tasks/addshard.yml

@ -1,6 +0,0 @@
---
- hosts: mongoservers
  tasks:
    - name: test
      debug: msg="hello- ${mongod_ports.${item}}"
      with_items: ${groups.mongoservers}

@ -2,22 +2,19 @@
# This Playbook runs all the common plays in the deployment
- name: Create the hosts file for all machines
  template: src=hosts.j2 dest=/etc/hosts

- name: Creates the repository for 10Gen
  template: src=10gen.repo.j2 dest=/etc/yum.repos.d/10gen.repo

- name: Copy the EPEL Repository.
  template: src=epel.repo.j2 dest=/etc/yum.repos.d/epel.repo

- name: Create the data directory for mongodb
  file: path={{ mongodb_datadir_prefix }} owner=mongod group=mongod state=directory

- name: Install the mongodb package
  yum: name={{ item }} state=installed
  with_items:
    - mongo-10gen
    - mongo-10gen-server
@ -28,6 +25,6 @@
  pip: name=pymongo state=latest use_mirrors=no

- name: Create the iptables file
  template: src=iptables.j2 dest=/etc/sysconfig/iptables
  notify: restart iptables

@ -0,0 +1,26 @@
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
[epel-debuginfo]
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1
[epel-source]
name=Extra Packages for Enterprise Linux 6 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1

@ -1,4 +1,4 @@
127.0.0.1 localhost
{% for host in groups['all'] %}
{{ hostvars[host]['ansible_' + iface].ipv4.address }} {{ host }}
{% endfor %}

@ -4,14 +4,14 @@
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
{% if 'mongoc_servers' in group_names %}
-A INPUT -p tcp --dport 7777 -j ACCEPT
{% endif %}
{% if 'mongos_servers' in group_names %}
-A INPUT -p tcp --dport 8888 -j ACCEPT
{% endif %}
{% if 'mongo_servers' in group_names %}
{% for host in groups['mongo_servers'] %}
-A INPUT -p tcp --dport {{ mongod_ports[host] }} -j ACCEPT
{% endfor %}
{% endif %}

@ -2,18 +2,18 @@
# This playbook deploys the mongodb configuration db servers
- name: Create data directory for mongoc configuration server
  file: path={{ mongodb_datadir_prefix }}/configdb state=directory owner=mongod group=mongod

- name: Create the mongo configuration server startup file
  template: src=mongoc.j2 dest=/etc/init.d/mongoc mode=0655

- name: Create the mongo configuration server file
  template: src=mongoc.conf.j2 dest=/etc/mongoc.conf

- name: Copy the keyfile for authentication
  copy: src=roles/mongod/files/secret dest={{ mongodb_datadir_prefix }}/secret owner=mongod group=mongod mode=0400

- name: Start the mongo configuration server service
  command: creates=/var/lock/subsys/mongoc /etc/init.d/mongoc start
@ -21,7 +21,6 @@
- name: pause
  pause: seconds=20

- name: add the admin user
  mongodb_user: database=admin name=admin password={{ mongo_admin_pass }} login_port={{ mongoc_port }} state=present
  ignore_errors: yes

@ -11,7 +11,7 @@
. /etc/rc.d/init.d/functions
# things from mongod.conf get there by mongod reading it
export LC_ALL="C"
# NOTE: if you change any OPTIONS here, you get what you pay for:
# this script assumes all options are in the config file.

@ -1,15 +0,0 @@
---
#This playbook adds shards to the mongos servers once everything is added
- name: Create the file to initialize the mongod Shard
  template: src=../roles/mongod/templates/shard_init.j2 dest=/tmp/shard_init_${inventory_hostname}.js
  delegate_to: $item
  with_items: ${groups.mongosservers}

- name: Add the shard to the mongos
  shell: /usr/bin/mongo localhost:${mongos_port}/admin -u admin -p ${mongo_admin_pass} /tmp/shard_init_${inventory_hostname}.js
  delegate_to: $item
  with_items: ${groups.mongosservers}

@ -2,38 +2,38 @@
#This Playbook deploys the mongod processes and sets up the replication set.
- name: create data directory for mongodb
  file: path={{ mongodb_datadir_prefix }}/mongo-{{ inventory_hostname }} state=directory owner=mongod group=mongod
  delegate_to: '{{ item }}'
  with_items: groups.replication_servers

- name: Create the mongodb startup file
  template: src=mongod.j2 dest=/etc/init.d/mongod-{{ inventory_hostname }} mode=0655
  delegate_to: '{{ item }}'
  with_items: groups.replication_servers

- name: Create the mongodb configuration file
  template: src=mongod.conf.j2 dest=/etc/mongod-{{ inventory_hostname }}.conf
  delegate_to: '{{ item }}'
  with_items: groups.replication_servers

- name: Copy the keyfile for authentication
  copy: src=secret dest={{ mongodb_datadir_prefix }}/secret owner=mongod group=mongod mode=0400

- name: Start the mongodb service
  command: creates=/var/lock/subsys/mongod-{{ inventory_hostname }} /etc/init.d/mongod-{{ inventory_hostname }} start
  delegate_to: '{{ item }}'
  with_items: groups.replication_servers

- name: Create the file to initialize the mongod replica set
  template: src=repset_init.j2 dest=/tmp/repset_init.js

- name: Pause for a while
  pause: seconds=20

- name: Initialize the replication set
  shell: /usr/bin/mongo --port "{{ mongod_ports[inventory_hostname] }}" /tmp/repset_init.js

@ -0,0 +1,15 @@
---
#This playbook adds shards to the mongos servers once everything is added
- name: Create the file to initialize the mongod Shard
  template: src=shard_init.j2 dest=/tmp/shard_init_{{ inventory_hostname }}.js
  delegate_to: '{{ item }}'
  with_items: groups.mongos_servers

- name: Add the shard to the mongos
  shell: /usr/bin/mongo localhost:{{ mongos_port }}/admin -u admin -p {{ mongo_admin_pass }} /tmp/shard_init_{{ inventory_hostname }}.js
  delegate_to: '{{ item }}'
  with_items: groups.mongos_servers

@ -11,7 +11,7 @@
. /etc/rc.d/init.d/functions
# things from mongod.conf get there by mongod reading it
export LC_ALL="C"
# NOTE: if you change any OPTIONS here, you get what you pay for:
# this script assumes all options are in the config file.

@ -1,6 +1,6 @@
rs.initiate()
sleep(13000)
{% for host in groups['replication_servers'] %}
rs.add("{{ host }}:{{ mongod_ports[inventory_hostname] }}")
sleep(8000)
{% endfor %}

@ -2,14 +2,14 @@
#This Playbook configures the mongos service of mongodb
- name: Create the mongos startup file
  template: src=mongos.j2 dest=/etc/init.d/mongos mode=0655

- name: Create the mongos configuration file
  template: src=mongos.conf.j2 dest=/etc/mongos.conf

- name: Copy the keyfile for authentication
  copy: src=roles/mongod/files/secret dest={{ mongodb_datadir_prefix }}/secret owner=mongod group=mongod mode=0400

- name: Start the mongos service
  command: creates=/var/lock/subsys/mongos /etc/init.d/mongos start
@ -17,7 +17,7 @@
  pause: seconds=20

- name: copy the file for shard test
  template: src=testsharding.j2 dest=/tmp/testsharding.js

- name: copy the file enable sharding
  template: src=enablesharding.j2 dest=/tmp/enablesharding.js

@ -8,7 +8,7 @@ fork = true
port = {{ mongos_port }}
{% set hosts = '' %}
{% for host in groups['mongoc_servers'] %}
{% if loop.last %}
{% set hosts = hosts + host + ':' ~ mongoc_port %}
configdb = {{ hosts }}

@ -11,7 +11,7 @@
. /etc/rc.d/init.d/functions
# things from mongod.conf get there by mongod reading it
export LC_ALL="C"
# NOTE: if you change any OPTIONS here, you get what you pay for:
# this script assumes all options are in the config file.

@ -1,10 +1,22 @@
---
# This Playbook would deploy the whole mongodb cluster with replication and sharding.
- hosts: all
  roles:
    - role: common

- hosts: mongo_servers
  roles:
    - role: mongod

- hosts: mongoc_servers
  roles:
    - role: mongoc

- hosts: mongos_servers
  roles:
    - role: mongos

- hosts: mongo_servers
  tasks:
    - include: roles/mongod/tasks/shards.yml