Added ceph mon and osd functionality (#5)

* add TARGET

Try to define what we are going to achieve.

* ceph monitors

* added new mon and osd functionalities

* Documentation fixes

* Added testing metadata

* New ceph_osd_disk salt grain for crushmap generation

* Fixed the map.jinja and common module

* Fixed map for OSD role

* Completed the pool enforcement

* Pass context to the crushmap template from mine information

* RadosGW updates

* Fixed Rados gateway

* push origin master

* Service metadata fixes

* Fixed wrong metadata dir

* changed radosgw keyring path, changed watch for radosgw service

* set osd pool parameters

* added opts for osd mount, a few minor fixes for the osd and mon states

* added grains for crush parent (see the grains sketch below)
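
A rough sketch of the kind of static grains an OSD node could carry for the new ceph_osd_disk grain and the crush-parent grain; only `ceph_osd_disk` is named in this change, so the `crush_parent` key, the device list, and the bucket name are illustrative assumptions, not the formula's actual schema.

```yaml
# Hypothetical /etc/salt/grains entry on an OSD node (illustrative only).
# ceph_osd_disk is the grain named in this change; crush_parent is an
# assumed name for the crush-parent grain and may differ in the formula.
ceph_osd_disk:
  - /dev/sdb
  - /dev/sdc
crush_parent: rack1
```

These grains would then be published over the salt mine so the crushmap template can render bucket placement per node.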
diff --git a/TARGET.rst b/TARGET.rst
new file mode 100644
index 0000000..eb5b5ee
--- /dev/null
+++ b/TARGET.rst
@@ -0,0 +1,27 @@
+Proposal
+=========
+
+The Ceph salt formula should be able to perform these tasks:
+
+* initial deployment of a Ceph cluster
+* remove a broken OSD
+* add a new OSD
+* add a new node with OSDs
+
+
+
+Test procedure
+---------------
+
+#. Bootstrap nodes
+#. Deploy 3 MON nodes
+#. Deploy 3 OSD nodes
+#. Deploy 1 MDS
+#. Deploy client
+#. Run tests:
+
+   * Ceph is healthy
+   * There are 3 MONs and 3 OSD nodes
+
+   * Create an RBD image, map it, mount it, write a test file, read it back, unmount, unmap, remove
+   * Create CephFS, mount it, write a file, unmount it
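
A minimal sketch of how the checks from the test procedure could be wired up as a throwaway salt state using `cmd.run`; the pool/image names, mount points, monitor address, and secretfile path are assumptions for illustration, not part of the formula.

```yaml
# Throwaway smoke-test state (not part of the formula) mirroring the
# test procedure above; names and paths are illustrative placeholders.
ceph_health_check:
  cmd.run:
    - name: ceph health

rbd_roundtrip_test:
  cmd.run:
    - name: |
        rbd create rbd/smoke --size 128
        rbd map rbd/smoke
        mkfs.ext4 /dev/rbd/rbd/smoke
        mkdir -p /mnt/rbd-smoke
        mount /dev/rbd/rbd/smoke /mnt/rbd-smoke
        echo ok > /mnt/rbd-smoke/testfile
        cat /mnt/rbd-smoke/testfile
        umount /mnt/rbd-smoke
        rbd unmap /dev/rbd/rbd/smoke
        rbd rm rbd/smoke

cephfs_mount_test:
  cmd.run:
    - name: |
        mkdir -p /mnt/cephfs
        mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
        echo ok > /mnt/cephfs/testfile
        umount /mnt/cephfs
```

Applying such a state to the client node after the deploy steps would exercise the RBD round-trip and the CephFS mount in one pass.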