Merge "Remove ip address from radosgw civitweb parameter"
diff --git a/README.rst b/README.rst
index c7db5ea..6a62baf 100644
--- a/README.rst
+++ b/README.rst
@@ -12,6 +12,167 @@
 Use salt-formula-linux for initial disk partitioning.
 
 
+Daemons
+--------
+
+Ceph uses several daemons to handle data and cluster state. Each daemon type requires different computing capacity and hardware optimization.
+
+The following daemons are currently supported by the formula:
+
+* MON (`ceph.mon`)
+* OSD (`ceph.osd`)
+* RGW (`ceph.radosgw`)
+
+
+Architecture decisions
+-----------------------
+
+Please refer to the upstream architecture documents before designing your cluster. A solid understanding of Ceph principles is essential for making the architecture decisions described below.
+http://docs.ceph.com/docs/master/architecture/
+
+* Ceph version
+
+There are 3 or 4 stable releases every year and many nightly/dev releases. You should decide which version will be used, since only stable releases are recommended for production. Some releases are marked LTS (Long Term Stable) and receive bug fixes for a longer period, usually until the next LTS version is released.
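+
+To check which release a cluster is actually running, the standard commands can be used (shown here only as a quick check, not specific to this formula):
+
+.. code-block:: bash
+
+  # version of the locally installed ceph packages/CLI
+  ceph --version
+
+  # versions reported by all running daemons (recent releases)
+  ceph versions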
+
+* Number of MON daemons
+
+Use 1 MON daemon for testing, 3 MONs for smaller production clusters and 5 MONs for very large production clusters. There is no need for more than 5 MONs in a normal environment, because running more than 5 brings no significant benefit. Ceph requires the MONs to form a quorum, so more than 50% of the MONs must be up and running for the cluster to be fully operational. All I/O will stop once a majority of the MONs is no longer available, because they cannot form a quorum.
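+
+Whether the MONs currently form a quorum can be verified at any time with the standard commands (a quick sanity check, not specific to this formula):
+
+.. code-block:: bash
+
+  # summary of monitors and current quorum
+  ceph mon stat
+
+  # detailed quorum information
+  ceph quorum_status --format json-pretty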
+
+* Number of PGs
+
+Placement groups provide a mapping between the stored data and the OSDs. It is necessary to calculate the number of PGs because each OSD should hold a reasonable number of PGs. Please keep in mind that *decreasing the number of PGs* isn't possible and *increasing* it can affect cluster performance.
+
+http://docs.ceph.com/docs/master/rados/operations/placement-groups/
+http://ceph.com/pgcalc/
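+
+The commonly used rule of thumb is PGs per pool ~= number of OSDs * 100 / replica count, rounded up to the nearest power of two. A rough illustration follows; use the pgcalc tool above for real sizing:
+
+.. code-block:: bash
+
+  # example: 6 OSDs, replicated pool with size 3
+  # (6 * 100) / 3 = 200 -> round up to the next power of two = 256
+  ceph osd pool create mypool 256 256 replicated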
+
+* Daemon colocation
+
+It is recommended to dedicate nodes to MONs and RGW, since colocation can influence cluster operations. However, small clusters can run MONs on OSD nodes, but it is critical to have enough resources for the MON daemons, because they are the most important part of the cluster.
+
+Installing RGW on a node with other daemons isn't recommended, because the RGW daemon usually requires a lot of bandwidth and can harm cluster health.
+
+* Journal location
+
+There are two ways to set up the journal:
+  * **Colocated** - the journal is located (usually at the beginning) on the same disk as the data partition. This setup is easier to install and doesn't require any additional disk. However, a colocated setup is significantly slower than a dedicated one.
+  * **Dedicated** - the journal is placed on a different disk than the data. This setup can deliver much higher performance than a colocated one, but it requires more disks in the servers. Journal drives should be carefully selected, because high I/O and durability are required (see the example below).
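+
+The commands below mirror what this formula's :code:`ceph/osd.sls` state runs when preparing a filestore OSD with a dedicated journal; the device names are placeholders:
+
+.. code-block:: bash
+
+  # data on /dev/sdb, dedicated journal on /dev/sdg (pre-luminous syntax)
+  ceph-disk prepare /dev/sdb /dev/sdg
+
+  # the same on luminous, where the store type is passed explicitly
+  ceph-disk prepare --filestore /dev/sdb /dev/sdg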
+
+* Store type (Bluestore/Filestore)
+
+Recent versions of Ceph support BlueStore as a storage backend, and this backend should be used if available.
+
+http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/
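+
+For reference, this is the kind of command this formula's :code:`ceph/osd.sls` state issues to prepare a BlueStore OSD with a separate DB and WAL device (device names are placeholders):
+
+.. code-block:: bash
+
+  # data on /dev/sdb, RocksDB metadata and WAL on a faster device
+  ceph-disk prepare --bluestore /dev/sdb --block.db /dev/sdg --block.wal /dev/sdg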
+
+* Cluster and public network
+
+A Ceph cluster is accessed over the network, so you need enough capacity to handle all the clients. Two networks are required for the cluster: the **public** network and the cluster network. The public network is used for client connections; MONs and OSDs listen on this network. The second network is called the **cluster** network and is used for communication between OSDs.
+
+Both networks should have dedicated interfaces; bonding interfaces and dedicating VLANs on bonded interfaces is not allowed. A good practice is to dedicate more throughput to the cluster network, because cluster traffic is more important than client traffic.
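+
+A quick way to verify which networks a deployed node uses is to inspect the rendered configuration (a generic check, nothing formula specific):
+
+.. code-block:: bash
+
+  # show the public/cluster network settings written into ceph.conf
+  grep -i network /etc/ceph/ceph.conf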
+
+* Pool parameters (size, min_size, type)
+
+You should set up each pool according to its expected usage; at least `min_size`, `size` and the pool type should be considered.
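+
+These parameters can be reviewed and adjusted on a running cluster with the standard pool commands (a generic example; the pool name is a placeholder):
+
+.. code-block:: bash
+
+  # replica count and the minimum number of replicas required to serve I/O
+  ceph osd pool set mypool size 3
+  ceph osd pool set mypool min_size 2
+
+  # show all parameters of a pool
+  ceph osd pool get mypool all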
+
+* Cluster monitoring
+
+* Hardware
+
+Please refer to the upstream hardware recommendations guide for general information about hardware.
+
+Ceph servers have to fulfil special requirements, because the load generated by Ceph can be very different from common workloads.
+
+http://docs.ceph.com/docs/master/start/hardware-recommendations/
+
+
+Basic management commands
+------------------------------
+
+Cluster
+********
+
+- :code:`ceph health` - check if the cluster is healthy (:code:`ceph health detail` provides more information)
+
+
+.. code-block:: bash
+
+  root@c-01:~# ceph health
+  HEALTH_OK
+
+- :code:`ceph status` - shows basic information about the cluster
+
+
+.. code-block:: bash
+
+  root@c-01:~# ceph status
+      cluster e2dc51ae-c5e4-48f0-afc1-9e9e97dfd650
+       health HEALTH_OK
+       monmap e1: 3 mons at {1=192.168.31.201:6789/0,2=192.168.31.202:6789/0,3=192.168.31.203:6789/0}
+              election epoch 38, quorum 0,1,2 1,2,3
+       osdmap e226: 6 osds: 6 up, 6 in
+        pgmap v27916: 400 pgs, 2 pools, 21233 MB data, 5315 objects
+              121 GB used, 10924 GB / 11058 GB avail
+                   400 active+clean
+    client io 481 kB/s rd, 132 kB/s wr, 185 op/
+
+MON
+****
+
+http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-mon/
+
+OSD
+****
+
+http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/
+
+- :code:`ceph osd tree` - show all OSDs and their state
+
+.. code-block:: bash
+
+  root@c-01:~# ceph osd tree
+  ID WEIGHT   TYPE NAME     UP/DOWN REWEIGHT PRIMARY-AFFINITY
+  -4        0 host c-04
+  -1 10.79993 root default
+  -2  3.59998     host c-01
+   0  1.79999         osd.0      up  1.00000          1.00000
+   1  1.79999         osd.1      up  1.00000          1.00000
+  -3  3.59998     host c-02
+   2  1.79999         osd.2      up  1.00000          1.00000
+   3  1.79999         osd.3      up  1.00000          1.00000
+  -5  3.59998     host c-03
+   4  1.79999         osd.4      up  1.00000          1.00000
+   5  1.79999         osd.5      up  1.00000          1.00000
+
+- :code:`ceph osd lspools` - list pools
+
+.. code-block:: bash
+
+  root@c-01:~# ceph osd lspools
+  0 rbd,1 test
+
+PG
+***
+
+http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg
+
+- :code:`ceph pg ls` - list placement groups
+
+.. code-block:: bash
+
+  root@c-01:~# ceph pg ls | head -n 4
+  pg_stat	objects	mip	degr	misp	unf	bytes	log	disklog	state	state_stamp	v	reported	up	up_primary	acting	acting_primary	last_scrub	scrub_stamp	last_deep_scrub	deep_scrub_stamp
+  0.0	11	0	0	0	0	46137344	3044	3044	active+clean	2015-07-02 10:12:40.603692	226'10652	226:1798	[4,2,0]	4	[4,2,0]	4	0'0	2015-07-01 18:38:33.126953	0'0	2015-07-01 18:17:01.904194
+  0.1	7	0	0	0	0	25165936	3026	3026	active+clean	2015-07-02 10:12:40.585833	226'5808	226:1070	[2,4,1]	2	[2,4,1]	2	0'0	2015-07-01 18:38:32.352721	0'0	2015-07-01 18:17:01.904198
+  0.2	18	0	0	0	0	75497472	3039	3039	active+clean	2015-07-02 10:12:39.569630	226'17447	226:3213	[3,1,5]	3	[3,1,5]	3	0'0	2015-07-01 18:38:34.308228	0'0	2015-07-01 18:17:01.904199
+
+- :code:`ceph pg map 1.1` - show mapping between PG and OSD
+
+.. code-block:: bash
+
+  root@c-01:~# ceph pg map 1.1
+  osdmap e226 pg 1.1 (1.1) -> up [5,1,2] acting [5,1,2]
+
+
+
 Sample pillars
 ==============
 
@@ -21,7 +182,7 @@
 
     ceph:
       common:
-        version: kraken
+        version: luminous
         config:
           global:
             param1: value1
@@ -47,6 +208,11 @@
               mgr: "allow *"
               mon: "allow *"
               osd: "allow *"
+          bootstrap-osd:
+            key: BQBHPYhZv5mYDBAAvisaSzCTQkC5gywGUp/voA==
+            caps:
+              mon: "allow profile bootstrap-osd"
+
 
 Optional definition for cluster and public networks. Cluster network is used
 for replication. Public network for front-end communication.
@@ -55,7 +221,7 @@
 
     ceph:
       common:
-        version: kraken
+        version: luminous
         fsid: a619c5fc-c4ed-4f22-9ed2-66cf2feca23d
         ....
         public_network: 10.0.0.0/24, 10.1.0.0/24
@@ -123,31 +289,35 @@
             key: value
       osd:
         enabled: true
-        host_id: 10
-        copy_admin_key: true
-        journal_type: raw
-        dmcrypt: disable
-        osd_scenario: raw_journal_devices
-        fs_type: xfs
-        disk:
-          '00':
-            rule: hdd
-            dev: /dev/vdb2
-            journal: /dev/vdb1
-            class: besthdd
-            weight: 1.5
-          '01':
-            rule: hdd
-            dev: /dev/vdc2
-            journal: /dev/vdc1
-            class: besthdd
-            weight: 1.5
-          '02':
-            rule: hdd
-            dev: /dev/vdd2
-            journal: /dev/vdd1
-            class: besthdd
-            weight: 1.5
+        ceph_host_id: '39'
+        journal_size: 20480
+        bluestore_block_db_size: 1073741824    # 1G
+        bluestore_block_wal_size: 1073741824   # 1G
+        bluestore_block_size: 807374182400     # 800G
+        backend:
+          filestore:
+            disks:
+            - dev: /dev/sdm
+              enabled: false
+              rule: hdd
+              journal: /dev/ssd
+              fs_type: xfs
+              class: bestssd
+              weight: 1.5
+            - dev: /dev/sdl
+              rule: hdd
+              journal: /dev/ssd
+              fs_type: xfs
+              class: bestssd
+              weight: 1.5
+          bluestore:
+            disks:
+            - dev: /dev/sdb
+            - dev: /dev/sdc
+              block_db: /dev/ssd
+              block_wal: /dev/ssd
+            - dev: /dev/sdd
+              enabled: false
 
 
 Ceph client roles
@@ -271,7 +441,7 @@
             crush_ruleset_name: 0
 
 Generate CRUSH map
-+++++++++++++++++++
+--------------------
 
 It is required to define the `type` for crush buckets and these types must start with `root` (top) and end with `host`. OSD daemons will be assigned to hosts according to their hostnames. The weight of the buckets will be calculated according to the weight of their children.
 
diff --git a/ceph/files/crushmap b/ceph/files/crushmap
index 7b244d3..a1065c4 100644
--- a/ceph/files/crushmap
+++ b/ceph/files/crushmap
@@ -6,21 +6,23 @@
 {%- set osds = {} -%}
 {%- set weights = {} -%}
 
-{%- for node_name, node_grains in salt['mine.get']('*', 'grains.items').iteritems() -%}
-  {%- if node_grains.ceph_osd_host_id is defined -%}
-    {# load OSDs and compute weight#}
-    {%- set node_weight = [] -%}
-    {%- for osd_relative_id, osd in node_grains.ceph_osd_disk.iteritems() -%}
-      {%- set osd_id = node_grains.ceph_osd_host_id ~ osd_relative_id -%}
-      {%- do osd.update({'host': node_grains.nodename }) -%}
-      {%- do osds.update({osd_id: osd}) -%}
-      {%- do node_weight.append(osd.weight) -%}
-    {%- endfor -%}
+# the following for loop must be changed
 
-    {%- do hosts.update({node_grains.nodename: {'weight': node_weight|sum, 'parent': node_grains.ceph_crush_parent }}) -%}
-
-  {%- endif -%}
-{%- endfor -%}
+#{%- for node_name, node_grains in salt['mine.get']('*', 'grains.items').iteritems() -%}
+#  {%- if node_grains.ceph_osd_host_id is defined -%}
+#    {# load OSDs and compute weight#}
+#    {%- set node_weight = [] -%}
+#    {%- for osd_relative_id, osd in node_grains.ceph_osd_disk.iteritems() -%}
+#      {%- set osd_id = node_grains.ceph_osd_host_id ~ osd_relative_id -%}
+#      {%- do osd.update({'host': node_grains.nodename }) -%}
+#      {%- do osds.update({osd_id: osd}) -%}
+#      {%- do node_weight.append(osd.weight) -%}
+#    {%- endfor -%}
+#
+#    {%- do hosts.update({node_grains.nodename: {'weight': node_weight|sum, 'parent': node_grains.ceph_crush_parent }}) -%}
+#
+#  {%- endif -%}
+#{%- endfor -%}
 
 {%- set _crush = setup.crush -%}
 {%- set _buckets = [] %}
diff --git a/ceph/files/jewel/ceph.conf.Debian b/ceph/files/jewel/ceph.conf.Debian
index aa3f222..100f12c 100644
--- a/ceph/files/jewel/ceph.conf.Debian
+++ b/ceph/files/jewel/ceph.conf.Debian
@@ -36,6 +36,18 @@
 
 {%- endfor %}
 
+{%- if osd.bluestore_block_size is defined %}
+bluestore_block_size = {{ osd.bluestore_block_size }}
+{%- endif %}
+
+{%- if osd.bluestore_block_db_size is defined %}
+bluestore_block_db_size = {{ osd.bluestore_block_db_size }}
+{%- endif %}
+
+{%- if osd.bluestore_block_wal_size is defined %}
+bluestore_block_wal_size = {{ osd.bluestore_block_wal_size }}
+{%- endif %}
+
 {%- if pillar.ceph.mon is defined %}
 
 [mon]
@@ -59,18 +71,14 @@
 {%- if pillar.ceph.osd is defined %}
 
 [osd]
+{%- if pillar.ceph.osd.journal_size is defined %}
+osd journal size = {{ pillar.ceph.osd.journal_size }}
+{%- endif %}
 
 {%- for key, value in common.get('config', {}).get('osd', {}).iteritems() %}
 {{ key }} = {{ value }}
 {%- endfor %}
 
-{%- for disk_id, disk in osd.disk.iteritems() %}
-{% set id = osd.host_id~disk_id %}
-[osd.{{ id }}]
-host = {{ grains.host }}
-osd journal = {{ disk.journal }}
-{%- endfor %}
-
 {%- endif %}
 
 {%- if pillar.ceph.radosgw is defined %}
diff --git a/ceph/files/kraken/ceph.conf.Debian b/ceph/files/kraken/ceph.conf.Debian
index aa3f222..100f12c 100644
--- a/ceph/files/kraken/ceph.conf.Debian
+++ b/ceph/files/kraken/ceph.conf.Debian
@@ -36,6 +36,18 @@
 
 {%- endfor %}
 
+{%- if osd.bluestore_block_size is defined %}
+bluestore_block_size = {{ osd.bluestore_block_size }}
+{%- endif %}
+
+{%- if osd.bluestore_block_db_size is defined %}
+bluestore_block_db_size = {{ osd.bluestore_block_db_size }}
+{%- endif %}
+
+{%- if osd.bluestore_block_wal_size is defined %}
+bluestore_block_wal_size = {{ osd.bluestore_block_wal_size }}
+{%- endif %}
+
 {%- if pillar.ceph.mon is defined %}
 
 [mon]
@@ -59,18 +71,14 @@
 {%- if pillar.ceph.osd is defined %}
 
 [osd]
+{%- if pillar.ceph.osd.journal_size is defined %}
+osd journal size = {{ pillar.ceph.osd.journal_size }}
+{%- endif %}
 
 {%- for key, value in common.get('config', {}).get('osd', {}).iteritems() %}
 {{ key }} = {{ value }}
 {%- endfor %}
 
-{%- for disk_id, disk in osd.disk.iteritems() %}
-{% set id = osd.host_id~disk_id %}
-[osd.{{ id }}]
-host = {{ grains.host }}
-osd journal = {{ disk.journal }}
-{%- endfor %}
-
 {%- endif %}
 
 {%- if pillar.ceph.radosgw is defined %}
diff --git a/ceph/files/luminous/ceph.conf.Debian b/ceph/files/luminous/ceph.conf.Debian
index db98517..0888985 100644
--- a/ceph/files/luminous/ceph.conf.Debian
+++ b/ceph/files/luminous/ceph.conf.Debian
@@ -36,6 +36,18 @@
 
 {%- endfor %}
 
+{%- if osd.bluestore_block_size is defined %}
+bluestore_block_size = {{ osd.bluestore_block_size }}
+{%- endif %}
+
+{%- if osd.bluestore_block_db_size is defined %}
+bluestore_block_db_size = {{ osd.bluestore_block_db_size }}
+{%- endif %}
+
+{%- if osd.bluestore_block_wal_size is defined %}
+bluestore_block_wal_size = {{ osd.bluestore_block_wal_size }}
+{%- endif %}
+
 {%- if pillar.ceph.mon is defined %}
 
 [mon]
@@ -59,18 +71,14 @@
 {%- if pillar.ceph.osd is defined %}
 
 [osd]
+{%- if pillar.ceph.osd.journal_size is defined %}
+osd journal size = {{ pillar.ceph.osd.journal_size }}
+{%- endif %}
 
 {%- for key, value in common.get('config', {}).get('osd', {}).iteritems() %}
 {{ key }} = {{ value }}
 {%- endfor %}
 
-{%- for disk_id, disk in osd.disk.iteritems() %}
-{% set id = osd.host_id~disk_id %}
-[osd.{{ id }}]
-host = {{ grains.host }}
-osd journal = {{ disk.journal }}
-{%- endfor %}
-
 {%- endif %}
 
 {%- if pillar.ceph.radosgw is defined %}
diff --git a/ceph/meta/salt.yml b/ceph/meta/salt.yml
index 420de44..e69de29 100644
--- a/ceph/meta/salt.yml
+++ b/ceph/meta/salt.yml
@@ -1,13 +0,0 @@
-grain:
-  {%- if pillar.get('ceph', {}).get('osd', {}).get('enabled', False) %}
-  {%- from "ceph/map.jinja" import osd with context %}
-  ceph_osd_disk:
-    {%- set ceph_osd_disk = {'ceph_osd_disk': osd.disk} %}
-    {{ ceph_osd_disk|yaml(False)|indent(4) }}
-  ceph_osd_host_id:
-    {%- set ceph_osd_host_id = {'ceph_osd_host_id': osd.host_id} %}
-    {{ ceph_osd_host_id|yaml(False)|indent(4) }}
-  ceph_crush_parent:
-    {%- set ceph_crush_parent = {'ceph_crush_parent': osd.crush_parent} %}
-    {{ ceph_crush_parent|yaml(False)|indent(4) }}
-  {%- endif %}
diff --git a/ceph/mgr.yml b/ceph/mgr.sls
similarity index 87%
rename from ceph/mgr.yml
rename to ceph/mgr.sls
index 4553e40..5d5e271 100644
--- a/ceph/mgr.yml
+++ b/ceph/mgr.sls
@@ -24,18 +24,18 @@
   - require:
     - pkg: mon_packages
 
+reload_systemctl_daemon:
+  cmd.run:
+  - name: "systemctl daemon-reload"
+  - unless: "test -f /var/lib/ceph/mgr/ceph-{{ grains.host }}/keyring"
+
 ceph_create_mgr_keyring_{{ grains.host }}:
   cmd.run:
-  - name: "ceph auth get-or-create mgr.{{ grains.host }} mon 'allow profile mgr' osd 'allow *' mds 'allow *' > /etc/ceph/ceph/mgr/ceph-{{ grains.host }}/keyring"
+  - name: "ceph auth get-or-create mgr.{{ grains.host }} mon 'allow profile mgr' osd 'allow *' mds 'allow *' > /var/lib/ceph/mgr/ceph-{{ grains.host }}/keyring"
   - unless: "test -f /var/lib/ceph/mgr/ceph-{{ grains.host }}/keyring"
   - require:
     - file: /var/lib/ceph/mgr/ceph-{{ grains.host }}/
 
-/var/lib/ceph/mgr/ceph-{{ grains.host }}/keyring:
-  file.managed:
-  - user: ceph
-  - group: ceph
-
 {%- if mgr.get('dashboard', {}).get('enabled', False) %}
 
 ceph_dashboard_address:
@@ -72,7 +72,6 @@
 
 {%- endif %}
 
-
 mon_services:
   service.running:
     - enable: true
@@ -81,7 +80,7 @@
       - file: /etc/ceph/ceph.conf
     - require:
       - pkg: mon_packages
-      - file: /var/lib/ceph/mgr/ceph-{{ grains.host }}/keyring
+      - cmd: ceph_create_mgr_keyring_{{ grains.host }}
     {%- if grains.get('noservices') %}
     - onlyif: /bin/false
     {%- endif %}
diff --git a/ceph/osd.sls b/ceph/osd.sls
index 5895830..9f6c19f 100644
--- a/ceph/osd.sls
+++ b/ceph/osd.sls
@@ -14,102 +14,134 @@
   - require:
     - pkg: ceph_osd_packages
 
-{% for disk_id, disk in osd.disk.iteritems() %}
+{% set ceph_version = pillar.ceph.common.version %}
 
-#Set ceph_host_id per node and interpolate
-{% set id = osd.host_id~disk_id %} 
+{%- for backend_name, backend in osd.backend.iteritems() %}
 
-#Not needed - need to test
-#create_osd_{{ id }}:
-#  cmd.run:
-#  - name: "ceph osd create $(ls -l /dev/disk/by-uuid | grep {{ disk.dev | replace("/dev/", "") }} | awk '{ print $9}') {{ id }} "
+{%- for disk in backend.disks %}
 
-#Move this thing into linux
-makefs_{{ id }}:
-  module.run:
-  - name: xfs.mkfs 
-  - device: {{ disk.dev }}
-  - unless: "ceph-disk list | grep {{ disk.dev }} | grep {{ osd.fs_type }}"
-  {%- if grains.get('noservices') %}
-  - onlyif: /bin/false
-  {%- endif %}
+{%- if disk.get('enabled', True) %}
 
-/var/lib/ceph/osd/ceph-{{ id }}:
-  mount.mounted:
-  - device: {{ disk.dev }}
-  - fstype: {{ osd.fs_type }}
-  - opts: {{ disk.get('opts', 'rw,noatime,inode64,logbufs=8,logbsize=256k') }} 
-  - mkmnt: True
-  {%- if grains.get('noservices') %}
-  - onlyif: /bin/false
-  {%- endif %}
+{% set dev = disk.dev %}
 
-permission_/var/lib/ceph/osd/ceph-{{ id }}:
-  file.directory:
-    - name: /var/lib/ceph/osd/ceph-{{ id }}
-    - user: ceph
-    - group: ceph
-    - mode: 755
-    - makedirs: False
-    - require:
-      - mount: /var/lib/ceph/osd/ceph-{{ id }}
-    {%- if grains.get('noservices') %}
-    - onlyif: /bin/false
-    {%- endif %}
-
-  
-{{ disk.journal }}:
-  file.managed:
-  - user: ceph
-  - group: ceph
-  - replace: false
-
-create_disk_{{ id }}:
+zap_disk_{{ dev }}:
   cmd.run:
-  - name: "ceph-osd  -i {{ id }} --conf /etc/ceph/ceph.conf --mkfs --mkkey --mkjournal --setuser ceph"
-  - unless: "test -f /var/lib/ceph/osd/ceph-{{ id }}/fsid"
+  - name: "ceph-disk zap {{ dev }}"
+  - unless: "ceph-disk list | grep {{ dev }} | grep ceph"
   - require:
-    - file: /var/lib/ceph/osd/ceph-{{ id }}
-    - mount: /var/lib/ceph/osd/ceph-{{ id }}
-  {%- if grains.get('noservices') %}
-  - onlyif: /bin/false
-  {%- endif %}
-
-add_keyring_{{ id }}:
-  cmd.run:
-  - name: "ceph auth add osd.{{ id }} osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-{{ id }}/keyring"
-  - unless: "ceph auth list | grep '^osd.{{ id }}'"
-  - require:
-    - cmd: create_disk_{{ id }}
-  {%- if grains.get('noservices') %}
-  - onlyif: /bin/false
-  {%- endif %}
-
-/var/lib/ceph/osd/ceph-{{ id }}/done:
-  file.managed:
-  - content: {}
-  - require:
-    - cmd: add_keyring_{{ id }}
-  {%- if grains.get('noservices') %}
-  - onlyif: /bin/false
-  {%- endif %}
-
-
-osd_services_{{ id }}_osd:
-  service.running:
-  - enable: true
-  - names: ['ceph-osd@{{ id }}']
-  - watch:
+    - pkg: ceph_osd_packages
     - file: /etc/ceph/ceph.conf
-  - require:
-    - file: /var/lib/ceph/osd/ceph-{{ id }}/done
-    - service: osd_services_perms
   {%- if grains.get('noservices') %}
   - onlyif: /bin/false
   {%- endif %}
 
-{% endfor %}
+{%- if disk.journal is defined %}
 
+zap_disk_journal_{{ disk.journal }}_for_{{ dev }}:
+  cmd.run:
+  - name: "ceph-disk zap {{ disk.journal }}"
+  - unless: "ceph-disk list | grep {{ disk.journal }} | grep ceph"
+  - require:
+    - pkg: ceph_osd_packages
+    - file: /etc/ceph/ceph.conf
+    - cmd: zap_disk_{{ dev }}
+  {%- if grains.get('noservices') %}
+  - onlyif: /bin/false
+  {%- endif %}
+
+{%- endif %}
+
+{%- if disk.block_db is defined %}
+
+zap_disk_blockdb_{{ disk.block_db }}_for_{{ dev }}:
+  cmd.run:
+  - name: "ceph-disk zap {{ disk.block_db }}"
+  - unless: "ceph-disk list | grep {{ disk.block_db }} | grep ceph"
+  - require:
+    - pkg: ceph_osd_packages
+    - file: /etc/ceph/ceph.conf
+    - cmd: zap_disk_{{ dev }}
+  {%- if grains.get('noservices') %}
+  - onlyif: /bin/false
+  {%- endif %}
+
+{%- endif %}
+
+{%- if disk.block_wal is defined %}
+
+zap_disk_blockwal_{{ disk.block_wal }}_for_{{ dev }}:
+  cmd.run:
+  - name: "ceph-disk zap {{ disk.block_wal }}"
+  - unless: "ceph-disk list | grep {{ disk.block_wal }} | grep ceph"
+  - require:
+    - pkg: ceph_osd_packages
+    - file: /etc/ceph/ceph.conf
+    - cmd: zap_disk_{{ dev }}
+  {%- if grains.get('noservices') %}
+  - onlyif: /bin/false
+  {%- endif %}
+
+{%- endif %}
+
+prepare_disk_{{ dev }}:
+  cmd.run:
+  {%- if backend_name == 'bluestore' and disk.block_db is defined and disk.block_wal is defined %}
+  - name: "ceph-disk prepare --bluestore {{ dev }} --block.db {{ disk.block_db }} --block.wal {{ disk.block_wal }}"
+  {%- elif backend_name == 'bluestore' and disk.block_db is defined %}
+  - name: "ceph-disk prepare --bluestore {{ dev }} --block.db {{ disk.block_db }}"
+  {%- elif backend_name == 'bluestore' and disk.block_wal is defined %}
+  - name: "ceph-disk prepare --bluestore {{ dev }} --block.wal {{ disk.block_wal }}"
+  {%- elif backend_name == 'bluestore' %}
+  - name: "ceph-disk prepare --bluestore {{ dev }}"
+  {%- elif backend_name == 'filestore' and disk.journal is defined and ceph_version == 'luminous' %}
+  - name: "ceph-disk prepare --filestore {{ dev }} {{ disk.journal }}"
+  {%- elif backend_name == 'filestore' and ceph_version == 'luminous' %}
+  - name: "ceph-disk prepare --filestore {{ dev }}"
+  {%- elif backend_name == 'filestore' and disk.journal is defined and ceph_version != 'luminous' %}
+  - name: "ceph-disk prepare {{ dev }} {{ disk.journal }}"
+  {%- else %}
+  - name: "ceph-disk prepare {{ dev }}"
+  {%- endif %}
+  - unless: "ceph-disk list | grep {{ dev }} | grep ceph"
+  - require:
+    - cmd: zap_disk_{{ dev }}
+    - pkg: ceph_osd_packages
+    - file: /etc/ceph/ceph.conf
+  {%- if grains.get('noservices') %}
+  - onlyif: /bin/false
+  {%- endif %}
+
+reload_partition_table_{{ dev }}:
+  cmd.run:
+  - name: "partprobe"
+  - unless: "ceph-disk list | grep {{ dev }} | grep active"
+  - require:
+    - cmd: prepare_disk_{{ dev }}
+    - cmd: zap_disk_{{ dev }}
+    - pkg: ceph_osd_packages
+    - file: /etc/ceph/ceph.conf
+  {%- if grains.get('noservices') %}
+  - onlyif: /bin/false
+  {%- endif %}
+
+activate_disk_{{ dev }}:
+  cmd.run:
+  - name: "ceph-disk activate --activate-key /etc/ceph/ceph.client.bootstrap-osd.keyring {{ dev }}1"
+  - unless: "ceph-disk list | grep {{ dev }} | grep active"
+  - require:
+    - cmd: prepare_disk_{{ dev }}
+    - cmd: zap_disk_{{ dev }}
+    - pkg: ceph_osd_packages
+    - file: /etc/ceph/ceph.conf
+  {%- if grains.get('noservices') %}
+  - onlyif: /bin/false
+  {%- endif %}
+
+{%- endif %}
+
+{%- endfor %}
+
+{%- endfor %}
 
 osd_services_global:
   service.running:
@@ -121,7 +153,6 @@
   - onlyif: /bin/false
   {%- endif %}
 
-
 osd_services:
   service.running:
   - enable: true
@@ -132,29 +163,3 @@
   - onlyif: /bin/false
   {%- endif %}
 
-
-/etc/systemd/system/ceph-osd-perms.service:
-  file.managed:
-    - contents: |
-        [Unit]
-        Description=Set OSD journals owned by ceph user
-        After=local-fs.target
-        Before=ceph-osd.target
-
-        [Service]
-        Type=oneshot
-        RemainAfterExit=yes
-        ExecStart=/bin/bash -c "chown -v ceph $(cat /etc/ceph/ceph.conf | grep 'osd journal' | awk '{print $4}')"
-
-        [Install]
-        WantedBy=multi-user.target
-
-osd_services_perms:
-  service.running:
-  - enable: true
-  - names: ['ceph-osd-perms']
-  - require:
-    - file: /etc/systemd/system/ceph-osd-perms.service
-  {%- if grains.get('noservices') %}
-  - onlyif: /bin/false
-  {%- endif %}
diff --git a/metadata/service/mgr/cluster.yml b/metadata/service/mgr/cluster.yml
new file mode 100644
index 0000000..2c4d0ea
--- /dev/null
+++ b/metadata/service/mgr/cluster.yml
@@ -0,0 +1,13 @@
+applications:
+- ceph
+classes:
+- service.ceph.common.cluster
+- service.ceph.support
+parameters:
+  ceph:
+    mgr:
+      enabled: true
+      dashboard:
+        enabled: true
+        host: ${_param:single_address}
+        port: 7000
diff --git a/metadata/service/osd/cluster.yml b/metadata/service/osd/cluster.yml
index 7b429f2..88e79d3 100644
--- a/metadata/service/osd/cluster.yml
+++ b/metadata/service/osd/cluster.yml
@@ -8,9 +8,3 @@
     osd:
       enabled: true
       host_id: ${_param:ceph_host_id}
-      crush_parent: ${_param:ceph_crush_parent}
-      copy_admin_key: true
-      journal_type: raw
-      dmcrypt: disable
-      osd_scenario: raw_journal_devices
-      fs_type: xfs
\ No newline at end of file
diff --git a/metadata/service/osd/single.yml b/metadata/service/osd/single.yml
index 5aec498..4fce284 100644
--- a/metadata/service/osd/single.yml
+++ b/metadata/service/osd/single.yml
@@ -8,8 +8,3 @@
     osd:
       enabled: true
       host_id: ${_param:ceph_host_id}
-      copy_admin_key: true
-      journal_type: raw
-      dmcrypt: disable
-      osd_scenario: raw_journal_devices
-      fs_type: xfs
\ No newline at end of file
diff --git a/tests/pillar/ceph_osd_single.sls b/tests/pillar/ceph_osd_single.sls
index f039bbc..3138ed7 100644
--- a/tests/pillar/ceph_osd_single.sls
+++ b/tests/pillar/ceph_osd_single.sls
@@ -29,28 +29,29 @@
     enabled: true
     version: kraken
     host_id: 10
-    crush_parent: rack01
-    copy_admin_key: true
-    journal_type: raw
-    dmcrypt: disable
-    osd_scenario: raw_journal_devices
-    fs_type: xfs
-    disk:
-      '00':
-        rule: hdd
-        dev: /dev/vdb2
-        journal: /dev/vdb1
-        class: besthdd
-        weight: 1.5
-      '01':
-        rule: hdd
-        dev: /dev/vdc2
-        journal: /dev/vdc1
-        class: besthdd
-        weight: 1.5
-      '02':
-        rule: hdd
-        dev: /dev/vdd2
-        journal: /dev/vdd1
-        class: besthdd
-        weight: 1.5
+    backend:
+      filestore:
+        disks:
+        - dev: /dev/sdm
+          enabled: false
+          rule: hdd
+          journal: /dev/sdn
+          fs_type: xfs
+          class: bestssd
+          weight: 1.5
+        - dev: /dev/sdl
+          rule: hdd
+          fs_type: xfs
+          class: bestssd
+          weight: 1.5
+        - dev: /dev/sdo
+          rule: hdd
+          journal: /dev/sdo
+          fs_type: xfs
+          class: bestssd
+          weight: 1.5
+      bluestore:
+        disks:
+        - dev: /dev/sdb
+          enabled: false
+        - dev: /dev/sdc