============
Ceph formula
============

Ceph provides extraordinary data storage scalability: thousands of client
hosts or KVMs can access petabytes to exabytes of data. Each one of your
applications can use the object, block or file system interfaces to the same
RADOS cluster simultaneously, which means your Ceph storage system serves as a
flexible foundation for all of your data storage needs.

Use salt-formula-linux for initial disk partitioning.

Daemons
-------

Ceph uses several daemons to handle data and cluster state. Each daemon type requires different computing capacity and hardware optimization.

The following daemons are currently supported by the formula:

* MON (`ceph.mon`)
* OSD (`ceph.osd`)
* RGW (`ceph.radosgw`)

Architecture decisions
----------------------

Please refer to the upstream architecture documents before designing your cluster. A solid understanding of Ceph principles is essential for making the architecture decisions described below.
http://docs.ceph.com/docs/master/architecture/

* Ceph version

There are 3 or 4 stable releases every year and many nightly/dev releases. You should decide which version will be used, since only stable releases are recommended for production. Some releases are marked LTS (Long Term Stable) and receive bugfixes for a longer period, usually until the next LTS version is released.
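
If you need to verify which version is actually installed and running, the Ceph CLI can report it (a minimal sketch; ``ceph versions`` is available from Luminous onwards):

.. code-block:: bash

  # Version of the locally installed binaries.
  ceph --version
  # Versions reported by all running daemons in the cluster.
  ceph versions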

* Number of MON daemons

Use 1 MON daemon for testing, 3 MONs for smaller production clusters and 5 MONs for very large production clusters. There is no need for more than 5 MONs in a normal environment, because running more brings no significant benefit. Ceph requires the MONs to form a quorum, so more than 50% of the MONs must be up and running for the cluster to be fully operational; for example, 3 MONs tolerate the loss of 1 and 5 MONs tolerate the loss of 2. Every I/O operation will stop once less than 50% of the MONs are available, because they can't form a quorum.

* Number of PGs

Placement groups provide the mapping between stored data and OSDs. It is necessary to calculate the number of PGs, because each OSD should hold a reasonable number of PGs. Please keep in mind that *decreasing* the number of PGs isn't possible and *increasing* it can affect cluster performance; a common sizing heuristic is sketched below.

http://docs.ceph.com/docs/master/rados/operations/placement-groups/
http://ceph.com/pgcalc/
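
A minimal sketch of the pgcalc heuristic (the OSD count and pool size below are hypothetical; always verify the result with the pgcalc tool linked above):

.. code-block:: bash

  # Target roughly 100 PGs per OSD, divided by the pool's replica count,
  # then round up to the nearest power of two.
  # Hypothetical example: 6 OSDs, one replicated pool with size=3.
  echo $(( (6 * 100) / 3 ))   # 200 -> round up to the next power of two: 256
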
* Daemon colocation

It is recommended to dedicate nodes for MONs and RGW, since colocation can influence cluster operations. However, small clusters can run MONs on OSD nodes; in that case it is critical to have enough resources for the MON daemons, because they are the most important part of the cluster.

Installing RGW on a node with other daemons isn't recommended, because the RGW daemon usually requires a lot of bandwidth and would harm cluster health.

* Store type (Bluestore/Filestore)

Recent versions of Ceph support Bluestore as the storage backend, and it should be used if available.

http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/

* Block.db location for Bluestore

There are two ways to set up block.db:
 * **Colocated** - the block.db partition is created on the same disk as the data partition. This setup is easier to install and doesn't require any additional disk. However, a colocated setup is significantly slower than a dedicated one.
 * **Dedicated** - block.db is placed on a different disk than the data (or into a partition). This setup can deliver much higher performance than a colocated one, but it requires more disks in the servers. Block.db drives should be carefully selected, because high I/O and durability are required.

* Block.wal location for Bluestore

There are two ways to set up block.wal, which stores just the internal journal (write-ahead log):
 * **Colocated** - block.wal uses free space of the block.db device.
 * **Dedicated** - block.wal is placed on a different disk than the data (preferably in a partition, as the size can be small) and possibly the block.db device. This setup can deliver much higher performance than a colocated one, but it requires more disks in the servers. Block.wal drives should be carefully selected, because high I/O and durability are required.

* Journal location for Filestore

There are two ways to set up the journal:
 * **Colocated** - the journal is created on the same disk as the data partition. This setup is easier to install and doesn't require any additional disk. However, a colocated setup is significantly slower than a dedicated one.
 * **Dedicated** - the journal is placed on a different disk than the data (or into a partition). This setup can deliver much higher performance than a colocated one, but it requires more disks in the servers. Journal drives should be carefully selected, because high I/O and durability are required.

Concrete examples of both layouts can be found in the "Ceph OSD (storage) roles" pillar below.

* Cluster and public network

The Ceph cluster is accessed over the network, so you need decent capacity to handle all the clients. Two networks are required for the cluster: the **public** network and the **cluster** network. The public network is used for client connections; MONs and OSDs listen on this network. The second network is called the **cluster** network and is used for communication between OSDs.

Both networks should have dedicated interfaces; bonding interfaces and dedicating VLANs on bonded interfaces isn't allowed. A good practice is to dedicate more throughput to the cluster network, because cluster traffic is more important than client traffic.

* Pool parameters (size, min_size, type)

You should set up each pool according to its expected usage; at least `min_size`, `size` and the pool type should be considered.
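
A minimal sketch of adjusting these parameters with the Ceph CLI (the pool name ``mypool`` is hypothetical):

.. code-block:: bash

  root@c-01:~# ceph osd pool set mypool size 3
  root@c-01:~# ceph osd pool set mypool min_size 2
  root@c-01:~# ceph osd pool get mypool size
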
* Cluster monitoring

* Hardware

Please refer to the upstream hardware recommendation guide for general information about hardware.

Ceph servers are required to fulfil special requirements, because the load generated by Ceph can be diametrically opposed to common workloads.

http://docs.ceph.com/docs/master/start/hardware-recommendations/

Basic management commands
-------------------------

Cluster
*******

- :code:`ceph health` - check if the cluster is healthy (:code:`ceph health detail` can provide more information)


.. code-block:: bash

  root@c-01:~# ceph health
  HEALTH_OK

- :code:`ceph status` - show basic information about the cluster


.. code-block:: bash

  root@c-01:~# ceph status
      cluster e2dc51ae-c5e4-48f0-afc1-9e9e97dfd650
       health HEALTH_OK
       monmap e1: 3 mons at {1=192.168.31.201:6789/0,2=192.168.31.202:6789/0,3=192.168.31.203:6789/0}
              election epoch 38, quorum 0,1,2 1,2,3
       osdmap e226: 6 osds: 6 up, 6 in
        pgmap v27916: 400 pgs, 2 pools, 21233 MB data, 5315 objects
              121 GB used, 10924 GB / 11058 GB avail
                   400 active+clean
    client io 481 kB/s rd, 132 kB/s wr, 185 op/s

MON
****

http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-mon/

OSD
****

http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/

- :code:`ceph osd tree` - show all OSDs and their state

.. code-block:: bash

  root@c-01:~# ceph osd tree
  ID WEIGHT   TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
  -4        0 host c-04
  -1 10.79993 root default
  -2  3.59998     host c-01
   0  1.79999         osd.0        up  1.00000          1.00000
   1  1.79999         osd.1        up  1.00000          1.00000
  -3  3.59998     host c-02
   2  1.79999         osd.2        up  1.00000          1.00000
   3  1.79999         osd.3        up  1.00000          1.00000
  -5  3.59998     host c-03
   4  1.79999         osd.4        up  1.00000          1.00000
   5  1.79999         osd.5        up  1.00000          1.00000

- :code:`ceph osd lspools` - list pools

.. code-block:: bash

  root@c-01:~# ceph osd lspools
  0 rbd,1 test

PG
***

http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg

- :code:`ceph pg ls` - list placement groups

.. code-block:: bash

  root@c-01:~# ceph pg ls | head -n 4
  pg_stat objects mip degr misp unf bytes log disklog state state_stamp v reported up up_primary acting acting_primary last_scrub scrub_stamp last_deep_scrub deep_scrub_stamp
  0.0 11 0 0 0 0 46137344 3044 3044 active+clean 2015-07-02 10:12:40.603692 226'10652 226:1798 [4,2,0] 4 [4,2,0] 4 0'0 2015-07-01 18:38:33.126953 0'0 2015-07-01 18:17:01.904194
  0.1 7 0 0 0 0 25165936 3026 3026 active+clean 2015-07-02 10:12:40.585833 226'5808 226:1070 [2,4,1] 2 [2,4,1] 2 0'0 2015-07-01 18:38:32.352721 0'0 2015-07-01 18:17:01.904198
  0.2 18 0 0 0 0 75497472 3039 3039 active+clean 2015-07-02 10:12:39.569630 226'17447 226:3213 [3,1,5] 3 [3,1,5] 3 0'0 2015-07-01 18:38:34.308228 0'0 2015-07-01 18:17:01.904199

- :code:`ceph pg map 1.1` - show mapping between PG and OSD

.. code-block:: bash

  root@c-01:~# ceph pg map 1.1
  osdmap e226 pg 1.1 (1.1) -> up [5,1,2] acting [5,1,2]


Sample pillars
==============

Common metadata for all nodes/roles

.. code-block:: yaml

    ceph:
      common:
        version: luminous
        cluster_name: ceph
        config:
          global:
            param1: value1
            param2: value1
            param3: value1
          pool_section:
            param1: value2
            param2: value2
            param3: value2
        fsid: a619c5fc-c4ed-4f22-9ed2-66cf2feca23d
        members:
          - name: cmn01
            host: 10.0.0.1
          - name: cmn02
            host: 10.0.0.2
          - name: cmn03
            host: 10.0.0.3
        keyring:
          admin:
            caps:
              mds: "allow *"
              mgr: "allow *"
              mon: "allow *"
              osd: "allow *"
          bootstrap-osd:
            caps:
              mon: "allow profile bootstrap-osd"

Optional definition for cluster and public networks. The cluster network is used
for replication; the public network for front-end communication.

.. code-block:: yaml

    ceph:
      common:
        version: luminous
        fsid: a619c5fc-c4ed-4f22-9ed2-66cf2feca23d
        ....
        public_network: 10.0.0.0/24, 10.1.0.0/24
        cluster_network: 10.10.0.0/24, 10.11.0.0/24


Ceph mon (control) roles
------------------------

Monitors: A Ceph Monitor maintains maps of the cluster state, including the
monitor map, the OSD map, the Placement Group (PG) map, and the CRUSH map.
Ceph maintains a history (called an "epoch") of each state change in the Ceph
Monitors, Ceph OSD Daemons, and PGs.

.. code-block:: yaml

    ceph:
      common:
        config:
          mon:
            key: value
      mon:
        enabled: true
        keyring:
          mon:
            caps:
              mon: "allow *"
          admin:
            caps:
              mds: "allow *"
              mgr: "allow *"
              mon: "allow *"
              osd: "allow *"

Ceph mgr roles
--------------

The Ceph Manager daemon (ceph-mgr) runs alongside monitor daemons, to provide additional monitoring and interfaces to external monitoring and management systems. Since the 12.x (luminous) Ceph release, the ceph-mgr daemon is required for normal operations. The ceph-mgr daemon is an optional component in the 11.x (kraken) Ceph release.

By default, the manager daemon requires no additional configuration, beyond ensuring it is running. If there is no mgr daemon running, you will see a health warning to that effect, and some of the other information in the output of ceph status will be missing or stale until a mgr is started.

.. code-block:: yaml

    ceph:
      mgr:
        enabled: true
        dashboard:
          enabled: true
          host: 10.103.255.252
          port: 7000


Ceph OSD (storage) roles
------------------------

.. code-block:: yaml

    ceph:
      common:
        version: luminous
        fsid: a619c5fc-c4ed-4f22-9ed2-66cf2feca23d
        public_network: 10.0.0.0/24, 10.1.0.0/24
        cluster_network: 10.10.0.0/24, 10.11.0.0/24
        keyring:
          bootstrap-osd:
            caps:
              mon: "allow profile bootstrap-osd"
        ....
      osd:
        enabled: true
        crush_parent: rack01
        journal_size: 20480 (20G)
        bluestore_block_db_size: 10073741824 (10G)
        bluestore_block_wal_size: 10073741824 (10G)
        bluestore_block_size: 807374182400 (800G)
        backend:
          filestore:
            disks:
            - dev: /dev/sdm
              enabled: false
              journal: /dev/ssd
              journal_partition: 5
              data_partition: 6
              lockbox_partition: 7
              data_partition_size: 12000 (MB)
              class: bestssd
              weight: 1.666
              dmcrypt: true
              journal_dmcrypt: false
            - dev: /dev/sdf
              journal: /dev/ssd
              journal_dmcrypt: true
              class: bestssd
              weight: 1.666
            - dev: /dev/sdl
              journal: /dev/ssd
              class: bestssd
              weight: 1.666
          bluestore:
            disks:
            - dev: /dev/sdb
            - dev: /dev/sdf
              block_db: /dev/ssd
              block_wal: /dev/ssd
              block_db_dmcrypt: true
              block_wal_dmcrypt: true
            - dev: /dev/sdc
              block_db: /dev/ssd
              block_wal: /dev/ssd
              data_partition: 1
              block_partition: 2
              lockbox_partition: 5
              block_db_partition: 3
              block_wal_partition: 4
              class: ssd
              weight: 1.666
              dmcrypt: true
              block_db_dmcrypt: false
              block_wal_dmcrypt: false
            - dev: /dev/sdd
              enabled: false

Ceph client roles - Deprecated - use ceph:common instead
---------------------------------------------------------

Simple ceph client service

.. code-block:: yaml

    ceph:
      client:
        config:
          global:
            mon initial members: ceph1,ceph2,ceph3
            mon host: 10.103.255.252:6789,10.103.255.253:6789,10.103.255.254:6789
        keyring:
          monitoring:
            key: 00000000000000000000000000000000000000==

On OpenStack controllers, these settings are usually located in the cinder-volume or
glance-registry services.

.. code-block:: yaml

    ceph:
      client:
        config:
          global:
            fsid: 00000000-0000-0000-0000-000000000000
            mon initial members: ceph1,ceph2,ceph3
            mon host: 10.103.255.252:6789,10.103.255.253:6789,10.103.255.254:6789
            osd_fs_mkfs_arguments_xfs:
            osd_fs_mount_options_xfs: rw,noatime
            network public: 10.0.0.0/24
            network cluster: 10.0.0.0/24
            osd_fs_type: xfs
          osd:
            osd journal size: 7500
            filestore xattr use omap: true
          mon:
            mon debug dump transactions: false
        keyring:
          cinder:
            key: 00000000000000000000000000000000000000==
          glance:
            key: 00000000000000000000000000000000000000==


Ceph gateway
------------

Rados gateway with keystone v2 auth backend

.. code-block:: yaml

    ceph:
      radosgw:
        enabled: true
        hostname: gw.ceph.lab
        bind:
          address: 10.10.10.1
          port: 8080
        identity:
          engine: keystone
          api_version: 2
          host: 10.10.10.100
          port: 5000
          user: admin
          password: password
          tenant: admin

Rados gateway with keystone v3 auth backend

.. code-block:: yaml

    ceph:
      common:
        config:
          rgw:
            key: value
      radosgw:
        enabled: true
        hostname: gw.ceph.lab
        bind:
          address: 10.10.10.1
          port: 8080
        identity:
          engine: keystone
          api_version: 3
          host: 10.10.10.100
          port: 5000
          user: admin
          password: password
          project: admin
          domain: default
        swift:
          versioning:
            enabled: true
          enforce_content_length: true


Ceph setup role
---------------

Replicated ceph storage pool

.. code-block:: yaml

    ceph:
      setup:
        pool:
          replicated_pool:
            pg_num: 256
            pgp_num: 256
            type: replicated
            crush_rule: sata
            application: rbd

.. note:: For Kraken and earlier releases please specify crush_rule as a ruleset number.
   For Kraken and earlier releases the application param is not needed.

Erasure ceph storage pool

.. code-block:: yaml

    ceph:
      setup:
        pool:
          erasure_pool:
            pg_num: 256
            pgp_num: 256
            type: erasure
            crush_rule: ssd
            application: rbd


Inline compression for Bluestore backend

.. code-block:: yaml

    ceph:
      setup:
        pool:
          volumes:
            pg_num: 256
            pgp_num: 256
            type: replicated
            crush_rule: hdd
            application: rbd
            compression_algorithm: snappy
            compression_mode: aggressive
            compression_required_ratio: .875
          ...

Ceph manage keyring keys
------------------------

Keyrings are dynamically generated unless specified by the following pillar.

.. code-block:: yaml

    ceph:
      common:
        manage_keyring: true
        keyring:
          glance:
            name: images
            key: AACf3ulZFFPNDxAAd2DWds3aEkHh4IklZVgIaQ==
            caps:
              mon: "allow r"
              osd: "allow class-read object_prefix rdb_children, allow rwx pool=images"

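If you need to generate a key for such a pillar manually, ``ceph-authtool`` can do it (a minimal sketch; the key shown in the pillar above is only a placeholder):

.. code-block:: bash

  # Print a freshly generated secret key that can be pasted into the pillar.
  ceph-authtool --gen-print-key
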
Generate CRUSH map - Recommended way
------------------------------------

It is required to define the `type` for crush buckets, and these types must start with `root` (top) and end with `host`. OSD daemons will be assigned to hosts according to their hostnames. The weight of a bucket is calculated from the weight of its children.

If the pools that are in use have a size of 3, it is best to have 3 children of a specific type in the root CRUSH tree to replicate objects across (specified in rule steps by 'type region').

.. code-block:: yaml

    ceph:
      setup:
        crush:
          enabled: True
          tunables:
            choose_total_tries: 50
            choose_local_tries: 0
            choose_local_fallback_tries: 0
            chooseleaf_descend_once: 1
            chooseleaf_vary_r: 1
            chooseleaf_stable: 1
            straw_calc_version: 1
            allowed_bucket_algs: 54
          type:
            - root
            - region
            - rack
            - host
            - osd
          root:
            - name: root-ssd
            - name: root-sata
          region:
            - name: eu-1
              parent: root-sata
            - name: eu-2
              parent: root-sata
            - name: eu-3
              parent: root-ssd
            - name: us-1
              parent: root-sata
          rack:
            - name: rack01
              parent: eu-1
            - name: rack02
              parent: eu-2
            - name: rack03
              parent: us-1
          rule:
            sata:
              ruleset: 0
              type: replicated
              min_size: 1
              max_size: 10
              steps:
                - take take root-sata
                - chooseleaf firstn 0 type region
                - emit
            ssd:
              ruleset: 1
              type: replicated
              min_size: 1
              max_size: 10
              steps:
                - take take root-ssd
                - chooseleaf firstn 0 type region
                - emit


Generate CRUSH map - Alternative way
------------------------------------

It is necessary to create a per-OSD pillar.

.. code-block:: yaml

    ceph:
      osd:
        crush:
          - type: root
            name: root1
          - type: region
            name: eu-1
          - type: rack
            name: rack01
          - type: host
            name: osd001

Add OSDs with specific weight
-----------------------------

Add OSD device(s) with the initial weight set to a specific value.

.. code-block:: yaml

    ceph:
      osd:
        crush_initial_weight: 0

Apply CRUSH map
---------------

Before you apply the CRUSH map, please make sure that the settings in the generated file /etc/ceph/crushmap are correct. A sketch of a manual sanity check is shown after the pillar example below.

.. code-block:: yaml

    ceph:
      setup:
        crush:
          enforce: true
        pool:
          images:
            crush_rule: sata
            application: rbd
          volumes:
            crush_rule: sata
            application: rbd
          vms:
            crush_rule: ssd
            application: rbd

.. note:: For Kraken and earlier releases please specify crush_rule as a ruleset number.
   For Kraken and earlier releases the application param is not needed.

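A minimal sketch of such a check with ``crushtool`` (assuming the generated file is a plain-text CRUSH map; the output path and the ``--num-rep`` value are illustrative):

.. code-block:: bash

  # Compile the generated text map and look for obviously bad mappings.
  crushtool -c /etc/ceph/crushmap -o /tmp/crushmap.compiled
  crushtool -i /tmp/crushmap.compiled --test --num-rep 3 --show-bad-mappings
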
Persist CRUSH map
-----------------

After the CRUSH map is applied to Ceph, it is recommended to persist the same settings even after OSD reboots; a quick way to confirm the running value follows the pillar.

.. code-block:: yaml

    ceph:
      osd:
        crush_update: false
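
To confirm what a running OSD actually uses, the option can be queried through the admin socket (a minimal sketch, assuming ``crush_update`` maps to the Ceph option ``osd crush update on start``; run it on the OSD node and substitute the OSD id):

.. code-block:: bash

  # "false" means the OSD will not rewrite its CRUSH location/weight on start.
  ceph daemon osd.0 config get osd_crush_update_on_start
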

Ceph monitoring
---------------

By default, monitoring is set up to collect information from MON and OSD nodes. To change the default values, add the following pillar to MON nodes.

.. code-block:: yaml

    ceph:
      monitoring:
        space_used_warning_threshold: 0.75
        space_used_critical_threshold: 0.85
        apply_latency_threshold: 0.007
        commit_latency_threshold: 0.7
        pool:
          vms:
            pool_space_used_utilization_warning_threshold: 0.75
            pool_space_used_critical_threshold: 0.85
            pool_write_ops_threshold: 200
            pool_write_bytes_threshold: 70000000
            pool_read_bytes_threshold: 70000000
            pool_read_ops_threshold: 1000
          images:
            pool_space_used_utilization_warning_threshold: 0.50
            pool_space_used_critical_threshold: 0.95
            pool_write_ops_threshold: 100
            pool_write_bytes_threshold: 50000000
            pool_read_bytes_threshold: 50000000
            pool_read_ops_threshold: 500

Ceph monitor backups
--------------------

Backup client with ssh/rsync remote host

.. code-block:: yaml

    ceph:
      backup:
        client:
          enabled: true
          full_backups_to_keep: 3
          hours_before_full: 24
          target:
            host: cfg01
            backup_dir: server-backup-dir

Backup client with local backup only

.. code-block:: yaml

    ceph:
      backup:
        client:
          enabled: true
          full_backups_to_keep: 3
          hours_before_full: 24

Backup client at exact times:

.. code-block:: yaml

    ceph:
      backup:
        client:
          enabled: true
          full_backups_to_keep: 3
          incr_before_full: 3
          backup_times:
            day_of_week: 0
            hour: 4
            minute: 52
          compression: true
          compression_threads: 2
          database:
            user: user
            password: password
          target:
            host: host01

.. note:: Parameters in the ``backup_times`` section can be used to set up the exact
   time the cron job should be executed. In this example, the backup job
   would be executed every Sunday at 4:52 AM. If any of the individual
   ``backup_times`` parameters is not defined, the default ``*`` value will be
   used. For example, if the minute parameter is ``*``, it will run the backup every minute,
   which is usually not desired.
   Available parameters are ``day_of_week``, ``day_of_month``, ``month``, ``hour`` and ``minute``.
   Please see the crontab reference for further info on how to set these parameters.

.. note:: Please be aware that only the ``backup_times`` section OR
   ``hours_before_full(incr)`` can be defined. If both are defined,
   the ``backup_times`` section will be preferred.

.. note:: The new parameter ``incr_before_full`` needs to be defined. This
   number sets the number of incremental backups to be run before a full backup
   is performed.

Backup server rsync

.. code-block:: yaml

    ceph:
      backup:
        server:
          enabled: true
          hours_before_full: 24
          full_backups_to_keep: 5
          key:
            ceph_pub_key:
              enabled: true
              key: ssh_rsa

Backup server without strict client restriction

.. code-block:: yaml

    ceph:
      backup:
        restrict_clients: false

Backup server at exact times:

.. code-block:: yaml

    ceph:
      backup:
        server:
          enabled: true
          full_backups_to_keep: 3
          incr_before_full: 3
          backup_dir: /srv/backup
          backup_times:
            day_of_week: 0
            hour: 4
            minute: 52
          key:
            ceph_pub_key:
              enabled: true
              key: key

.. note:: Parameters in the ``backup_times`` section can be used to set up the exact
   time the cron job should be executed. In this example, the backup job
   would be executed every Sunday at 4:52 AM. If any of the individual
   ``backup_times`` parameters is not defined, the default ``*`` value will be
   used. For example, if the minute parameter is ``*``, it will run the backup every minute,
   which is usually not desired.
   Available parameters are ``day_of_week``, ``day_of_month``, ``month``, ``hour`` and ``minute``.
   Please see the crontab reference for further info on how to set these parameters.

.. note:: Please be aware that only the ``backup_times`` section OR
   ``hours_before_full(incr)`` can be defined. If both are defined, the
   ``backup_times`` section will be preferred.

.. note:: The new parameter ``incr_before_full`` needs to be defined. This
   number sets the number of incremental backups to be run before a full backup
   is performed.

Migration from Decapod to salt-formula-ceph
-------------------------------------------

The following configuration will run a python script which will generate the ceph config and OSD disk mappings to be put in the cluster model.

.. code-block:: yaml

    ceph:
      decapod:
        ip: 192.168.1.10
        user: user
        password: psswd
        deploy_config_name: ceph


More information
================

* https://github.com/cloud-ee/ceph-salt-formula
* http://ceph.com/ceph-storage/
* http://ceph.com/docs/master/start/intro/


Documentation and bugs
======================

To learn how to install and update salt-formulas, consult the documentation
available online at:

    http://salt-formulas.readthedocs.io/

In the unfortunate event that bugs are discovered, they should be reported to
the appropriate issue tracker. Use the Github issue tracker for the specific salt
formula:

    https://github.com/salt-formulas/salt-formula-ceph/issues

For feature requests, bug reports or blueprints affecting the entire ecosystem,
use the Launchpad salt-formulas project:

    https://launchpad.net/salt-formulas

You can also join the salt-formulas-users team and subscribe to the mailing list:

    https://launchpad.net/~salt-formulas-users

Developers wishing to work on the salt-formulas projects should always base
their work on the master branch and submit pull requests against the specific formula.

    https://github.com/salt-formulas/salt-formula-ceph

Any questions or feedback are always welcome, so feel free to join our IRC
channel:

    #salt-formulas @ irc.freenode.net