============
Ceph formula
============

Ceph provides extraordinary data storage scalability: thousands of client
hosts or KVMs can access petabytes to exabytes of data. Each one of your
applications can use the object, block or file system interface to the same
RADOS cluster simultaneously, which means your Ceph storage system serves as a
flexible foundation for all of your data storage needs.

Use salt-formula-linux for initial disk partitioning.


Daemons
-------

Ceph uses several daemons to handle data and cluster state. Each daemon type
requires different computing capacity and hardware optimization.

These daemons are currently supported by the formula:

* MON (`ceph.mon`)
* OSD (`ceph.osd`)
* RGW (`ceph.radosgw`)


Architecture decisions
----------------------

Please refer to the upstream architecture documents before designing your cluster. A solid understanding of Ceph principles is essential for making the architecture decisions described below.
http://docs.ceph.com/docs/master/architecture/

* Ceph version

There are 3 or 4 stable releases every year and many nightly/dev releases. You should decide which version will be used, since only stable releases are recommended for production. Some releases are marked LTS (Long Term Stable) and receive bugfixes for a longer period, usually until the next LTS version is released.

* Number of MON daemons

Use 1 MON daemon for testing, 3 MONs for smaller production clusters and 5 MONs for very large production clusters. There is no need for more than 5 MONs in a normal environment, because there isn't any significant benefit in running more of them. Ceph requires MONs to form a quorum, so more than 50% of the MONs must be up and running for the cluster to be fully operational; for example, a 3 MON cluster tolerates 1 MON failure, a 5 MON cluster tolerates 2. Every I/O operation will stop once 50% or more of the MONs are unavailable, because they can't form a quorum.
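
Quorum membership can be inspected at runtime with the ceph CLI; a quick check:

.. code-block:: bash

    # Show current quorum membership and the elected leader.
    ceph quorum_status --format json-pretty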

* Number of PGs

Placement groups provide the mapping between stored data and OSDs. It is necessary to calculate the number of PGs, because a reasonable number of PGs should be stored on each OSD. Keep in mind that *decreasing* the number of PGs isn't possible, while *increasing* it can affect cluster performance; see the worked example after the links below.

http://docs.ceph.com/docs/master/rados/operations/placement-groups/
http://ceph.com/pgcalc/
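
A commonly cited rule of thumb (the basis of the pgcalc tool linked above) is `(number of OSDs * 100) / replica count`, rounded up to the nearest power of two. A minimal sketch in bash; the values (6 OSDs, replica count 3) are illustrative assumptions:

.. code-block:: bash

    # Rule of thumb: (OSDs * 100) / pool size, rounded up to a power of two.
    osds=6
    size=3
    raw=$(( osds * 100 / size ))   # 200
    pgs=1
    while [ "$pgs" -lt "$raw" ]; do pgs=$(( pgs * 2 )); done
    echo "pg_num: $pgs"            # prints pg_num: 256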

* Daemon colocation

It is recommended to dedicate nodes to MONs and RGW, since colocation can have an influence on cluster operations. However, small clusters can run MONs on OSD nodes, but it is critical to have enough resources for the MON daemons, because they are the most important part of the cluster.

Installing RGW on a node with other daemons isn't recommended, because the RGW daemon usually requires a lot of bandwidth and would harm cluster health.

* Store type (Bluestore/Filestore)

Recent versions of Ceph support Bluestore as a storage backend, and this backend should be used if available.

http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/

* Block.db location for Bluestore

There are two ways to set up block.db:
 * **Colocated** block.db partition is created on the same disk as the data partition. This setup is easier to install and doesn't require any additional disk. However, a colocated setup is significantly slower than a dedicated one.
 * **Dedicated** block.db is placed on a different disk than the data (or in a separate partition). This setup can deliver much higher performance than a colocated one, but it requires more disks in the servers. Block.db drives should be carefully selected, because high I/O and durability are required.

* Block.wal location for Bluestore

There are two ways to set up block.wal, which stores just the internal journal (write-ahead log):
 * **Colocated** block.wal uses free space of the block.db device.
 * **Dedicated** block.wal is placed on a different disk than the data (preferably in a partition, as the size can be small) and possibly the block.db device. This setup can deliver much higher performance than a colocated one, but it requires more disks in the servers. Block.wal drives should be carefully selected, because high I/O and durability are required.

* Journal location for Filestore

There are two ways to set up the journal:
 * **Colocated** journal is created on the same disk as the data partition. This setup is easier to install and doesn't require any additional disk. However, a colocated setup is significantly slower than a dedicated one.
 * **Dedicated** journal is placed on a different disk than the data (or in a separate partition). This setup can deliver much higher performance than a colocated one, but it requires more disks in the servers. Journal drives should be carefully selected, because high I/O and durability are required.

* Cluster and public network

The Ceph cluster is accessed over the network, so you need decent capacity to handle all the clients. Two networks are required for the cluster: the **public** network and the **cluster** network. The public network is used for client connections, and MONs and OSDs listen on it. The second network, called the **cluster** network, is used for communication between OSDs.

Both networks should have dedicated interfaces; bonding interfaces and dedicating VLANs on bonded interfaces isn't allowed. Good practice is to dedicate more throughput to the cluster network, because cluster traffic is more important than client traffic.

* Pool parameters (size, min_size, type)

You should set up each pool according to its expected usage; at least `min_size`, `size` and the pool type should be considered, as in the sketch below.
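
For example, these parameters can be set per pool with the ceph CLI; a minimal sketch (the pool name `testpool` and the values are illustrative assumptions):

.. code-block:: bash

    # Create a replicated pool with 256 PGs and set its sizing parameters.
    ceph osd pool create testpool 256 256 replicated
    ceph osd pool set testpool size 3        # number of replicas
    ceph osd pool set testpool min_size 2    # minimum replicas required to serve I/O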

* Cluster monitoring

* Hardware

Please refer to the upstream hardware recommendations guide for general information about hardware.

Ceph servers have to fulfil special requirements, because the load generated by Ceph can be diametrically opposed to common workloads.

http://docs.ceph.com/docs/master/start/hardware-recommendations/


Basic management commands
-------------------------

Cluster
*******

- :code:`ceph health` - check if the cluster is healthy (:code:`ceph health detail` can provide more information)


.. code-block:: bash

    root@c-01:~# ceph health
    HEALTH_OK

- :code:`ceph status` - show basic information about the cluster

.. code-block:: bash

    root@c-01:~# ceph status
        cluster e2dc51ae-c5e4-48f0-afc1-9e9e97dfd650
         health HEALTH_OK
         monmap e1: 3 mons at {1=192.168.31.201:6789/0,2=192.168.31.202:6789/0,3=192.168.31.203:6789/0}
                election epoch 38, quorum 0,1,2 1,2,3
         osdmap e226: 6 osds: 6 up, 6 in
          pgmap v27916: 400 pgs, 2 pools, 21233 MB data, 5315 objects
                121 GB used, 10924 GB / 11058 GB avail
                     400 active+clean
      client io 481 kB/s rd, 132 kB/s wr, 185 op/s

MON
***

http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-mon/

OSD
***

http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/

- :code:`ceph osd tree` - show all OSDs and their state

.. code-block:: bash

    root@c-01:~# ceph osd tree
    ID WEIGHT   TYPE NAME     UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -4        0 host c-04
    -1 10.79993 root default
    -2  3.59998     host c-01
     0  1.79999         osd.0      up  1.00000          1.00000
     1  1.79999         osd.1      up  1.00000          1.00000
    -3  3.59998     host c-02
     2  1.79999         osd.2      up  1.00000          1.00000
     3  1.79999         osd.3      up  1.00000          1.00000
    -5  3.59998     host c-03
     4  1.79999         osd.4      up  1.00000          1.00000
     5  1.79999         osd.5      up  1.00000          1.00000

- :code:`ceph osd lspools` - list pools

.. code-block:: bash

    root@c-01:~# ceph osd lspools
    0 rbd,1 test

PG
***

http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg

- :code:`ceph pg ls` - list placement groups

.. code-block:: bash

    root@c-01:~# ceph pg ls | head -n 4
    pg_stat objects mip degr misp unf bytes log disklog state state_stamp v reported up up_primary acting acting_primary last_scrub scrub_stamp last_deep_scrub deep_scrub_stamp
    0.0 11 0 0 0 0 46137344 3044 3044 active+clean 2015-07-02 10:12:40.603692 226'10652 226:1798 [4,2,0] 4 [4,2,0] 4 0'0 2015-07-01 18:38:33.126953 0'0 2015-07-01 18:17:01.904194
    0.1 7 0 0 0 0 25165936 3026 3026 active+clean 2015-07-02 10:12:40.585833 226'5808 226:1070 [2,4,1] 2 [2,4,1] 2 0'0 2015-07-01 18:38:32.352721 0'0 2015-07-01 18:17:01.904198
    0.2 18 0 0 0 0 75497472 3039 3039 active+clean 2015-07-02 10:12:39.569630 226'17447 226:3213 [3,1,5] 3 [3,1,5] 3 0'0 2015-07-01 18:38:34.308228 0'0 2015-07-01 18:17:01.904199

- :code:`ceph pg map 1.1` - show mapping between PG and OSD

.. code-block:: bash

    root@c-01:~# ceph pg map 1.1
    osdmap e226 pg 1.1 (1.1) -> up [5,1,2] acting [5,1,2]


Sample pillars
==============

Common metadata for all nodes/roles

.. code-block:: yaml

    ceph:
      common:
        version: luminous
        config:
          global:
            param1: value1
            param2: value1
            param3: value1
          pool_section:
            param1: value2
            param2: value2
            param3: value2
        fsid: a619c5fc-c4ed-4f22-9ed2-66cf2feca23d
        members:
        - name: cmn01
          host: 10.0.0.1
        - name: cmn02
          host: 10.0.0.2
        - name: cmn03
          host: 10.0.0.3
        keyring:
          admin:
            caps:
              mds: "allow *"
              mgr: "allow *"
              mon: "allow *"
              osd: "allow *"
          bootstrap-osd:
            caps:
              mon: "allow profile bootstrap-osd"

Optional definition for cluster and public networks. The cluster network is
used for replication, the public network for front-end communication.

.. code-block:: yaml

    ceph:
      common:
        version: luminous
        fsid: a619c5fc-c4ed-4f22-9ed2-66cf2feca23d
        ....
        public_network: 10.0.0.0/24, 10.1.0.0/24
        cluster_network: 10.10.0.0/24, 10.11.0.0/24


Ceph mon (control) roles
------------------------

Monitors: A Ceph Monitor maintains maps of the cluster state, including the
monitor map, the OSD map, the Placement Group (PG) map, and the CRUSH map.
Ceph maintains a history (called an "epoch") of each state change in the Ceph
Monitors, Ceph OSD Daemons, and PGs.

.. code-block:: yaml

    ceph:
      common:
        config:
          mon:
            key: value
      mon:
        enabled: true
        keyring:
          mon:
            caps:
              mon: "allow *"
          admin:
            caps:
              mds: "allow *"
              mgr: "allow *"
              mon: "allow *"
              osd: "allow *"

Ceph mgr roles
--------------

The Ceph Manager daemon (ceph-mgr) runs alongside monitor daemons, to provide additional monitoring and interfaces to external monitoring and management systems. Since the 12.x (luminous) Ceph release, the ceph-mgr daemon is required for normal operations. The ceph-mgr daemon is an optional component in the 11.x (kraken) Ceph release.

By default, the manager daemon requires no additional configuration, beyond ensuring it is running. If there is no mgr daemon running, you will see a health warning to that effect, and some of the other information in the output of ceph status will be missing or stale until a mgr is started.

.. code-block:: yaml

    ceph:
      mgr:
        enabled: true
        dashboard:
          enabled: true
          host: 10.103.255.252
          port: 7000

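The mgr module state can be checked with the ceph CLI (available since luminous); a quick sketch:

.. code-block:: bash

    ceph mgr module ls                  # list enabled and available mgr modules
    ceph mgr module enable dashboard    # enable the dashboard module manually if needed
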
Ceph OSD (storage) roles
------------------------

.. code-block:: yaml

    ceph:
      common:
        version: luminous
        fsid: a619c5fc-c4ed-4f22-9ed2-66cf2feca23d
        public_network: 10.0.0.0/24, 10.1.0.0/24
        cluster_network: 10.10.0.0/24, 10.11.0.0/24
        keyring:
          bootstrap-osd:
            caps:
              mon: "allow profile bootstrap-osd"
        ....
      osd:
        enabled: true
        crush_parent: rack01
        journal_size: 20480                     # 20G
        bluestore_block_db_size: 10073741824    # 10G
        bluestore_block_wal_size: 10073741824   # 10G
        bluestore_block_size: 807374182400      # 800G
        backend:
          filestore:
            disks:
            - dev: /dev/sdm
              enabled: false
              journal: /dev/ssd
              class: bestssd
              weight: 1.5
            - dev: /dev/sdl
              journal: /dev/ssd
              class: bestssd
              weight: 1.5
          bluestore:
            disks:
            - dev: /dev/sdb
            - dev: /dev/sdc
              block_db: /dev/ssd
              block_wal: /dev/ssd
              class: ssd
              weight: 1.666
            - dev: /dev/sdd
              enabled: false

Ceph client roles - Deprecated (use ceph:common instead)
---------------------------------------------------------

Simple ceph client service

.. code-block:: yaml

    ceph:
      client:
        config:
          global:
            mon initial members: ceph1,ceph2,ceph3
            mon host: 10.103.255.252:6789,10.103.255.253:6789,10.103.255.254:6789
        keyring:
          monitoring:
            key: 00000000000000000000000000000000000000==

In OpenStack, these control settings are usually located on the cinder-volume
or glance-registry services.

.. code-block:: yaml

    ceph:
      client:
        config:
          global:
            fsid: 00000000-0000-0000-0000-000000000000
            mon initial members: ceph1,ceph2,ceph3
            mon host: 10.103.255.252:6789,10.103.255.253:6789,10.103.255.254:6789
            osd_fs_mkfs_arguments_xfs:
            osd_fs_mount_options_xfs: rw,noatime
            network public: 10.0.0.0/24
            network cluster: 10.0.0.0/24
            osd_fs_type: xfs
          osd:
            osd journal size: 7500
            filestore xattr use omap: true
          mon:
            mon debug dump transactions: false
        keyring:
          cinder:
            key: 00000000000000000000000000000000000000==
          glance:
            key: 00000000000000000000000000000000000000==


Ceph gateway
------------

Rados gateway with keystone v2 auth backend

.. code-block:: yaml

    ceph:
      radosgw:
        enabled: true
        hostname: gw.ceph.lab
        bind:
          address: 10.10.10.1
          port: 8080
        identity:
          engine: keystone
          api_version: 2
          host: 10.10.10.100
          port: 5000
          user: admin
          password: password
          tenant: admin

Rados gateway with keystone v3 auth backend

.. code-block:: yaml

    ceph:
      radosgw:
        enabled: true
        hostname: gw.ceph.lab
        bind:
          address: 10.10.10.1
          port: 8080
        identity:
          engine: keystone
          api_version: 3
          host: 10.10.10.100
          port: 5000
          user: admin
          password: password
          project: admin
          domain: default

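A quick smoke test of a running gateway, using the bind address from the example above (an anonymous request; RGW should answer with a ListAllMyBuckets XML document):

.. code-block:: bash

    curl -i http://10.10.10.1:8080/
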
Ceph setup role
---------------

Replicated ceph storage pool

.. code-block:: yaml

    ceph:
      setup:
        pool:
          replicated_pool:
            pg_num: 256
            pgp_num: 256
            type: replicated
            crush_rule: sata
            application: rbd

.. note:: For Kraken and earlier releases please specify crush_rule as a ruleset number.
   For Kraken and earlier releases the application param is not needed.

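Once applied, the pool parameters can be verified with the ceph CLI (a quick check, using the pool name from the example above):

.. code-block:: bash

    ceph osd pool get replicated_pool all             # dumps size, min_size, pg_num, crush_rule, ...
    ceph osd pool application get replicated_pool     # luminous and later
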
Erasure ceph storage pool

.. code-block:: yaml

    ceph:
      setup:
        pool:
          erasure_pool:
            pg_num: 256
            pgp_num: 256
            type: erasure
            crush_rule: ssd
            application: rbd


Inline compression for Bluestore backend

.. code-block:: yaml

    ceph:
      setup:
        pool:
          volumes:
            pg_num: 256
            pgp_num: 256
            type: replicated
            crush_rule: hdd
            application: rbd
            compression_algorithm: snappy
            compression_mode: aggressive
            compression_required_ratio: .875
          ...

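The effective compression settings can be read back per pool (a minimal check, assuming the `volumes` pool from the example and a luminous or later cluster):

.. code-block:: bash

    ceph osd pool get volumes compression_algorithm
    ceph osd pool get volumes compression_mode
    ceph osd pool get volumes compression_required_ratio
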
Ceph manage keyring keys
------------------------

Keyrings are dynamically generated unless specified by the following pillar.

.. code-block:: yaml

    ceph:
      common:
        manage_keyring: true
        keyring:
          glance:
            name: images
            key: AACf3ulZFFPNDxAAd2DWds3aEkHh4IklZVgIaQ==
            caps:
              mon: "allow r"
              osd: "allow class-read object_prefix rbd_children, allow rwx pool=images"

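The resulting key and caps can be compared against what the cluster actually holds (assuming the `images` client name from the example):

.. code-block:: bash

    ceph auth get client.images
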
Generate CRUSH map - Recommended way
------------------------------------

It is required to define the `type` items for crush buckets, and these types must start with `root` (top) and end with `host`. OSD daemons will be assigned to hosts according to their hostnames. The weight of a bucket is calculated from the weight of its children.

If the pools in use have a size of 3, it is best to have 3 children of a specific type in the root CRUSH tree to replicate objects across (specified in the rule steps by 'type region').

.. code-block:: yaml

    ceph:
      setup:
        crush:
          enabled: True
          tunables:
            choose_total_tries: 50
            choose_local_tries: 0
            choose_local_fallback_tries: 0
            chooseleaf_descend_once: 1
            chooseleaf_vary_r: 1
            chooseleaf_stable: 1
            straw_calc_version: 1
            allowed_bucket_algs: 54
          type:
            - root
            - region
            - rack
            - host
            - osd
          root:
            - name: root-ssd
            - name: root-sata
          region:
            - name: eu-1
              parent: root-sata
            - name: eu-2
              parent: root-sata
            - name: eu-3
              parent: root-ssd
            - name: us-1
              parent: root-sata
          rack:
            - name: rack01
              parent: eu-1
            - name: rack02
              parent: eu-2
            - name: rack03
              parent: us-1
          rule:
            sata:
              ruleset: 0
              type: replicated
              min_size: 1
              max_size: 10
              steps:
                - take take root-sata
                - chooseleaf firstn 0 type region
                - emit
            ssd:
              ruleset: 1
              type: replicated
              min_size: 1
              max_size: 10
              steps:
                - take take root-ssd
                - chooseleaf firstn 0 type region
                - emit


Generate CRUSH map - Alternative way
------------------------------------

It's necessary to create a pillar per OSD.

.. code-block:: yaml

    ceph:
      osd:
        crush:
          - type: root
            name: root1
          - type: region
            name: eu-1
          - type: rack
            name: rack01
          - type: host
            name: osd001

Apply CRUSH map
---------------

Before you apply the CRUSH map, please make sure that the settings in the generated file /etc/ceph/crushmap are correct; one way to check it is sketched below.
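
A possible pre-flight check with the upstream `crushtool` utility, assuming /etc/ceph/crushmap holds the map in its text (decompiled) form:

.. code-block:: bash

    # Compile the text map; compilation fails loudly on syntax errors.
    crushtool -c /etc/ceph/crushmap -o /tmp/crushmap.compiled
    # Dry-run placement for rule 0 with 3 replicas and show mapping statistics.
    crushtool -i /tmp/crushmap.compiled --test --show-statistics --rule 0 --num-rep 3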

.. code-block:: yaml

    ceph:
      setup:
        crush:
          enforce: true
        pool:
          images:
            crush_rule: sata
            application: rbd
          volumes:
            crush_rule: sata
            application: rbd
          vms:
            crush_rule: ssd
            application: rbd

.. note:: For Kraken and earlier releases please specify crush_rule as a ruleset number.
   For Kraken and earlier releases the application param is not needed.


Persist CRUSH map
-----------------

After the CRUSH map is applied to Ceph, it's recommended to persist the same settings even after OSD reboots.

.. code-block:: yaml

    ceph:
      osd:
        crush_update: false

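This presumably maps to the upstream option that stops OSDs from re-registering their CRUSH location on daemon start. The effective value can be read from a running OSD via its admin socket (the `osd.0` id is an illustrative assumption):

.. code-block:: bash

    # Run on the host where the OSD's admin socket lives.
    ceph daemon osd.0 config get osd_crush_update_on_start
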

Ceph monitoring
---------------

Collect general cluster metrics

.. code-block:: yaml

    ceph:
      monitoring:
        cluster_stats:
          enabled: true
          ceph_user: monitoring

Collect metrics from monitor and OSD services

.. code-block:: yaml

    ceph:
      monitoring:
        node_stats:
          enabled: true

More information
================

* https://github.com/cloud-ee/ceph-salt-formula
* http://ceph.com/ceph-storage/
* http://ceph.com/docs/master/start/intro/


Documentation and bugs
======================

To learn how to install and update salt-formulas, consult the documentation
available online at:

    http://salt-formulas.readthedocs.io/

In the unfortunate event that bugs are discovered, they should be reported to
the appropriate issue tracker. Use the GitHub issue tracker for the specific salt
formula:

    https://github.com/salt-formulas/salt-formula-ceph/issues

For feature requests, bug reports or blueprints affecting the entire ecosystem,
use the Launchpad salt-formulas project:

    https://launchpad.net/salt-formulas

You can also join the salt-formulas-users team and subscribe to the mailing list:

    https://launchpad.net/~salt-formulas-users

Developers wishing to work on the salt-formulas projects should always base
their work on the master branch and submit pull requests against the specific formula.

    https://github.com/salt-formulas/salt-formula-ceph

Any questions or feedback are always welcome, so feel free to join our IRC
channel:

    #salt-formulas @ irc.freenode.net