============
Ceph formula
============

Ceph provides extraordinary data storage scalability. Thousands of client
hosts or KVMs accessing petabytes to exabytes of data. Each one of your
applications can use the object, block or file system interfaces to the same
RADOS cluster simultaneously, which means your Ceph storage system serves as a
flexible foundation for all of your data storage needs.

Use salt-formula-linux for initial disk partitioning.


Daemons
--------

Ceph uses several daemons to handle data and cluster state. Each daemon type requires different computing capacity and hardware optimization.

These daemons are currently supported by the formula (an example of applying the corresponding states follows the list):

* MON (`ceph.mon`)
* OSD (`ceph.osd`)
* RGW (`ceph.radosgw`)
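
As an illustration only (node targeting depends on how your cluster model classifies nodes, the compound matchers below are hypothetical), the states could be applied roughly like this:

.. code-block:: bash

    # hypothetical targeting by pillar data - adjust the matchers to your deployment model
    salt -C 'I@ceph:mon' state.sls ceph.mon
    salt -C 'I@ceph:osd' state.sls ceph.osd
    salt -C 'I@ceph:radosgw' state.sls ceph.radosgw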


Architecture decisions
-----------------------

Please refer to the upstream architecture documents before designing your cluster. A solid understanding of Ceph principles is essential for making the architecture decisions described below.
http://docs.ceph.com/docs/master/architecture/

* Ceph version

There are 3 or 4 stable releases every year and many nightly/dev releases. You should decide which version will be used, since only stable releases are recommended for production. Some releases are marked LTS (Long Term Stable) and these releases receive bugfixes for a longer period - usually until the next LTS version is released.

* Number of MON daemons

Use 1 MON daemon for testing, 3 MONs for smaller production clusters and 5 MONs for very large production clusters. There is no need to have more than 5 MONs in a normal environment, because there isn't any significant benefit in running more than 5 MONs. Ceph requires the MONs to form a quorum, so more than 50% of the MONs must be up and running for the cluster to be fully operational. Every I/O operation will stop once fewer than 50% of the MONs are available, because they can't form a quorum. For example, a cluster with 5 MONs stays operational with 2 MONs down, while a cluster with 3 MONs tolerates only 1 failure.

* Number of PGs

Placement groups provide the mapping between stored data and OSDs. It is necessary to calculate the number of PGs up front, because each OSD should hold a reasonable number of PGs. Please keep in mind that *decreasing the number of PGs* isn't possible, and *increasing* it can affect cluster performance.

http://docs.ceph.com/docs/master/rados/operations/placement-groups/
http://ceph.com/pgcalc/
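
A rough sketch of the commonly used sizing heuristic (total PGs per pool is approximately the number of OSDs * 100 / replica count, rounded up to the nearest power of two - use the pgcalc tool above for authoritative numbers):

.. code-block:: bash

    # example: 6 OSDs and a replicated pool with size 3
    # 6 * 100 / 3 = 200 -> rounded up to the next power of two = 256 PGs
    python -c "import math; osds, size = 6, 3; print(2 ** math.ceil(math.log2(osds * 100 / size)))"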

* Daemon colocation

It is recommended to dedicate nodes for MONs and RGW, since colocation can have an influence on cluster operations. However, small clusters can run MONs on OSD nodes, but it is critical to have enough resources for the MON daemons, because they are the most important part of the cluster.

Installing RGW on a node with other daemons isn't recommended, because the RGW daemon usually requires a lot of bandwidth and it can harm cluster health.

* Store type (Bluestore/Filestore)

Recent versions of Ceph support Bluestore as a storage backend, and this backend should be used if available.

http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/

* Block.db location for Bluestore

There are two ways to setup block.db (a pillar sketch of the dedicated layouts follows the Filestore journal bullets below):
  * **Colocated** - the block.db partition is created on the same disk as the data partition. This setup is easier to install and doesn't require any additional disk. However, a colocated setup is significantly slower than a dedicated one.
  * **Dedicated** - block.db is placed on a different disk than the data (or into a separate partition). This setup can deliver much higher performance than the colocated one, but it requires more disks in the servers. Block.db drives should be carefully selected, because high I/O and durability are required.

* Block.wal location for Bluestore

There are two ways to setup block.wal, which stores just the internal journal (write-ahead log):
  * **Colocated** - block.wal uses free space of the block.db device.
  * **Dedicated** - block.wal is placed on a different disk than the data (preferably into a partition, as the size can be small) and possibly than the block.db device. This setup can deliver much higher performance than the colocated one, but it requires more disks in the servers. Block.wal drives should be carefully selected, because high I/O and durability are required.

* Journal location for Filestore

There are two ways to setup the journal:
  * **Colocated** - the journal is created on the same disk as the data partition. This setup is easier to install and doesn't require any additional disk. However, a colocated setup is significantly slower than a dedicated one.
  * **Dedicated** - the journal is placed on a different disk than the data (or into a separate partition). This setup can deliver much higher performance than the colocated one, but it requires more disks in the servers. Journal drives should be carefully selected, because high I/O and durability are required.
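
For illustration, the dedicated layouts described above map to the same pillar keys used in the full OSD role examples later in this README (the device names below are placeholders only):

.. code-block:: yaml

    ceph:
      osd:
        backend:
          bluestore:
            disks:
            - dev: /dev/sdb            # data device
              block_db: /dev/nvme0n1   # dedicated block.db device
              block_wal: /dev/nvme0n1  # dedicated block.wal device
          filestore:
            disks:
            - dev: /dev/sdc            # data device
              journal: /dev/nvme0n1    # dedicated journal device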

* Cluster and public network

The Ceph cluster is accessed over the network, so you need decent capacity to handle all the clients. There are two networks required for the cluster: the **public** network and the cluster network. The public network is used for client connections; MONs and OSDs listen on this network. The second network is called the **cluster** network and it is used for communication between OSDs.

Both networks should have dedicated interfaces; bonding interfaces and dedicating VLANs on bonded interfaces isn't allowed. Good practice is to dedicate more throughput to the cluster network, because cluster traffic is more important than client traffic.
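
These two networks are configured with the ``public_network`` and ``cluster_network`` keys of the common pillar (the same keys appear in the sample pillars below), for example:

.. code-block:: yaml

    ceph:
      common:
        public_network: 10.0.0.0/24
        cluster_network: 10.10.0.0/24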

* Pool parameters (size, min_size, type)

You should set up each pool according to its expected usage; at least `min_size`, `size` and the pool type should be considered.
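
As a sketch with the plain Ceph CLI (the pool name and values are examples only; pools can also be managed through the ``ceph.setup`` pillar described later in this README):

.. code-block:: bash

    # create a replicated pool with 256 PGs and tune its replication parameters
    ceph osd pool create mypool 256 256 replicated
    ceph osd pool set mypool size 3
    ceph osd pool set mypool min_size 2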

* Cluster monitoring

* Hardware

Please refer to the upstream hardware recommendation guide for general information about hardware.

Ceph servers have to fulfil special requirements, because the load generated by Ceph can be diametrically opposed to common workloads.

http://docs.ceph.com/docs/master/start/hardware-recommendations/


Basic management commands
------------------------------

Cluster
********

- :code:`ceph health` - check if the cluster is healthy (:code:`ceph health detail` can provide more information)


.. code-block:: bash

    root@c-01:~# ceph health
    HEALTH_OK

- :code:`ceph status` - shows basic information about the cluster


.. code-block:: bash

    root@c-01:~# ceph status
        cluster e2dc51ae-c5e4-48f0-afc1-9e9e97dfd650
         health HEALTH_OK
         monmap e1: 3 mons at {1=192.168.31.201:6789/0,2=192.168.31.202:6789/0,3=192.168.31.203:6789/0}
                election epoch 38, quorum 0,1,2 1,2,3
         osdmap e226: 6 osds: 6 up, 6 in
          pgmap v27916: 400 pgs, 2 pools, 21233 MB data, 5315 objects
                121 GB used, 10924 GB / 11058 GB avail
                     400 active+clean
      client io 481 kB/s rd, 132 kB/s wr, 185 op/s

MON
****

http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-mon/

OSD
****

http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/

- :code:`ceph osd tree` - show all OSDs and their state

.. code-block:: bash

    root@c-01:~# ceph osd tree
    ID WEIGHT   TYPE NAME     UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -4        0 host c-04
    -1 10.79993 root default
    -2  3.59998     host c-01
     0  1.79999         osd.0      up  1.00000          1.00000
     1  1.79999         osd.1      up  1.00000          1.00000
    -3  3.59998     host c-02
     2  1.79999         osd.2      up  1.00000          1.00000
     3  1.79999         osd.3      up  1.00000          1.00000
    -5  3.59998     host c-03
     4  1.79999         osd.4      up  1.00000          1.00000
     5  1.79999         osd.5      up  1.00000          1.00000

- :code:`ceph osd lspools` - list pools

.. code-block:: bash

    root@c-01:~# ceph osd lspools
    0 rbd,1 test

PG
***

http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg

- :code:`ceph pg ls` - list placement groups

.. code-block:: bash

    root@c-01:~# ceph pg ls | head -n 4
    pg_stat objects mip degr misp unf bytes log disklog state state_stamp v reported up up_primary acting acting_primary last_scrub scrub_stamp last_deep_scrub deep_scrub_stamp
    0.0 11 0 0 0 0 46137344 3044 3044 active+clean 2015-07-02 10:12:40.603692 226'10652 226:1798 [4,2,0] 4 [4,2,0] 4 0'0 2015-07-01 18:38:33.126953 0'0 2015-07-01 18:17:01.904194
    0.1 7 0 0 0 0 25165936 3026 3026 active+clean 2015-07-02 10:12:40.585833 226'5808 226:1070 [2,4,1] 2 [2,4,1] 2 0'0 2015-07-01 18:38:32.352721 0'0 2015-07-01 18:17:01.904198
    0.2 18 0 0 0 0 75497472 3039 3039 active+clean 2015-07-02 10:12:39.569630 226'17447 226:3213 [3,1,5] 3 [3,1,5] 3 0'0 2015-07-01 18:38:34.308228 0'0 2015-07-01 18:17:01.904199

- :code:`ceph pg map 1.1` - show mapping between PG and OSD

.. code-block:: bash

    root@c-01:~# ceph pg map 1.1
    osdmap e226 pg 1.1 (1.1) -> up [5,1,2] acting [5,1,2]


Sample pillars
==============

Common metadata for all nodes/roles

.. code-block:: yaml

    ceph:
      common:
        version: luminous
        cluster_name: ceph
        config:
          global:
            param1: value1
            param2: value1
            param3: value1
          pool_section:
            param1: value2
            param2: value2
            param3: value2
        fsid: a619c5fc-c4ed-4f22-9ed2-66cf2feca23d
        members:
        - name: cmn01
          host: 10.0.0.1
        - name: cmn02
          host: 10.0.0.2
        - name: cmn03
          host: 10.0.0.3
        keyring:
          admin:
            caps:
              mds: "allow *"
              mgr: "allow *"
              mon: "allow *"
              osd: "allow *"
          bootstrap-osd:
            caps:
              mon: "allow profile bootstrap-osd"


Optional definition for cluster and public networks. Cluster network is used
for replication. Public network for front-end communication.

.. code-block:: yaml

    ceph:
      common:
        version: luminous
        fsid: a619c5fc-c4ed-4f22-9ed2-66cf2feca23d
        ....
        public_network: 10.0.0.0/24, 10.1.0.0/24
        cluster_network: 10.10.0.0/24, 10.11.0.0/24


Ceph mon (control) roles
------------------------

Monitors: A Ceph Monitor maintains maps of the cluster state, including the
monitor map, the OSD map, the Placement Group (PG) map, and the CRUSH map.
Ceph maintains a history (called an "epoch") of each state change in the Ceph
Monitors, Ceph OSD Daemons, and PGs.

.. code-block:: yaml

    ceph:
      common:
        config:
          mon:
            key: value
      mon:
        enabled: true
        keyring:
          mon:
            caps:
              mon: "allow *"
          admin:
            caps:
              mds: "allow *"
              mgr: "allow *"
              mon: "allow *"
              osd: "allow *"
Ceph mgr roles
------------------------

The Ceph Manager daemon (ceph-mgr) runs alongside monitor daemons, to provide additional monitoring and interfaces to external monitoring and management systems. Since the 12.x (luminous) Ceph release, the ceph-mgr daemon is required for normal operations. The ceph-mgr daemon is an optional component in the 11.x (kraken) Ceph release.

By default, the manager daemon requires no additional configuration, beyond ensuring it is running. If there is no mgr daemon running, you will see a health warning to that effect, and some of the other information in the output of ceph status will be missing or stale until a mgr is started.


.. code-block:: yaml

    ceph:
      mgr:
        enabled: true
        dashboard:
          enabled: true
          host: 10.103.255.252
          port: 7000

Ceph OSD (storage) roles
------------------------

.. code-block:: yaml

    ceph:
      common:
        version: luminous
        fsid: a619c5fc-c4ed-4f22-9ed2-66cf2feca23d
        public_network: 10.0.0.0/24, 10.1.0.0/24
        cluster_network: 10.10.0.0/24, 10.11.0.0/24
        keyring:
          bootstrap-osd:
            caps:
              mon: "allow profile bootstrap-osd"
        ....
      osd:
        enabled: true
        crush_parent: rack01
        journal_size: 20480 (20G)
        bluestore_block_db_size: 10073741824 (10G)
        bluestore_block_wal_size: 10073741824 (10G)
        bluestore_block_size: 807374182400 (800G)
        backend:
          filestore:
            disks:
            - dev: /dev/sdm
              enabled: false
              journal: /dev/ssd
              journal_partition: 5
              data_partition: 6
              lockbox_partition: 7
              data_partition_size: 12000 (MB)
              class: bestssd
              weight: 1.666
              dmcrypt: true
              journal_dmcrypt: false
            - dev: /dev/sdf
              journal: /dev/ssd
              journal_dmcrypt: true
              class: bestssd
              weight: 1.666
            - dev: /dev/sdl
              journal: /dev/ssd
              class: bestssd
              weight: 1.666
          bluestore:
            disks:
            - dev: /dev/sdb
            - dev: /dev/sdf
              block_db: /dev/ssd
              block_wal: /dev/ssd
              block_db_dmcrypt: true
              block_wal_dmcrypt: true
            - dev: /dev/sdc
              block_db: /dev/ssd
              block_wal: /dev/ssd
              data_partition: 1
              block_partition: 2
              lockbox_partition: 5
              block_db_partition: 3
              block_wal_partition: 4
              class: ssd
              weight: 1.666
              dmcrypt: true
              block_db_dmcrypt: false
              block_wal_dmcrypt: false
            - dev: /dev/sdd
              enabled: false

Ceph client roles - Deprecated - use ceph:common instead
---------------------------------------------------------

Simple ceph client service

.. code-block:: yaml

    ceph:
      client:
        config:
          global:
            mon initial members: ceph1,ceph2,ceph3
            mon host: 10.103.255.252:6789,10.103.255.253:6789,10.103.255.254:6789
        keyring:
          monitoring:
            key: 00000000000000000000000000000000000000==

On the OpenStack control plane, these settings are usually located on the
cinder-volume or glance-registry services.

.. code-block:: yaml

    ceph:
      client:
        config:
          global:
            fsid: 00000000-0000-0000-0000-000000000000
            mon initial members: ceph1,ceph2,ceph3
            mon host: 10.103.255.252:6789,10.103.255.253:6789,10.103.255.254:6789
            osd_fs_mkfs_arguments_xfs:
            osd_fs_mount_options_xfs: rw,noatime
            network public: 10.0.0.0/24
            network cluster: 10.0.0.0/24
            osd_fs_type: xfs
          osd:
            osd journal size: 7500
            filestore xattr use omap: true
          mon:
            mon debug dump transactions: false
        keyring:
          cinder:
            key: 00000000000000000000000000000000000000==
          glance:
            key: 00000000000000000000000000000000000000==

Ceph gateway
------------

Rados gateway with keystone v2 auth backend

.. code-block:: yaml

    ceph:
      radosgw:
        enabled: true
        hostname: gw.ceph.lab
        bind:
          address: 10.10.10.1
          port: 8080
        identity:
          engine: keystone
          api_version: 2
          host: 10.10.10.100
          port: 5000
          user: admin
          password: password
          tenant: admin

Rados gateway with keystone v3 auth backend

.. code-block:: yaml

    ceph:
      common:
        config:
          rgw:
            key: value
      radosgw:
        enabled: true
        hostname: gw.ceph.lab
        bind:
          address: 10.10.10.1
          port: 8080
        identity:
          engine: keystone
          api_version: 3
          host: 10.10.10.100
          port: 5000
          user: admin
          password: password
          project: admin
          domain: default
        swift:
          versioning:
            enabled: true

Ceph setup role
---------------

Replicated ceph storage pool

.. code-block:: yaml

    ceph:
      setup:
        pool:
          replicated_pool:
            pg_num: 256
            pgp_num: 256
            type: replicated
            crush_rule: sata
            application: rbd

.. note:: For Kraken and earlier releases please specify crush_rule as a ruleset number.
   For Kraken and earlier releases the application param is not needed.

Erasure ceph storage pool

.. code-block:: yaml

    ceph:
      setup:
        pool:
          erasure_pool:
            pg_num: 256
            pgp_num: 256
            type: erasure
            crush_rule: ssd
            application: rbd


Inline compression for the Bluestore backend

.. code-block:: yaml

    ceph:
      setup:
        pool:
          volumes:
            pg_num: 256
            pgp_num: 256
            type: replicated
            crush_rule: hdd
            application: rbd
            compression_algorithm: snappy
            compression_mode: aggressive
            compression_required_ratio: .875
          ...

Ceph manage keyring keys
------------------------

Keyrings are dynamically generated unless specified by the following pillar.

.. code-block:: yaml

    ceph:
      common:
        manage_keyring: true
        keyring:
          glance:
            name: images
            key: AACf3ulZFFPNDxAAd2DWds3aEkHh4IklZVgIaQ==
            caps:
              mon: "allow r"
              osd: "allow class-read object_prefix rdb_children, allow rwx pool=images"

Generate CRUSH map - Recommended way
-------------------------------------

It is required to define the `type` of the CRUSH buckets, and these types must start with `root` (top) and end with `host`. OSD daemons will be assigned to hosts according to their hostname. The weight of the buckets will be calculated according to the weight of their children.

If the pools in use have a size of 3, it is best to have 3 children of a specific type in the root CRUSH tree to replicate objects across (specified in the rule steps by 'type region').

.. code-block:: yaml

    ceph:
      setup:
        crush:
          enabled: True
          tunables:
            choose_total_tries: 50
            choose_local_tries: 0
            choose_local_fallback_tries: 0
            chooseleaf_descend_once: 1
            chooseleaf_vary_r: 1
            chooseleaf_stable: 1
            straw_calc_version: 1
            allowed_bucket_algs: 54
          type:
            - root
            - region
            - rack
            - host
            - osd
          root:
            - name: root-ssd
            - name: root-sata
          region:
            - name: eu-1
              parent: root-sata
            - name: eu-2
              parent: root-sata
            - name: eu-3
              parent: root-ssd
            - name: us-1
              parent: root-sata
          rack:
            - name: rack01
              parent: eu-1
            - name: rack02
              parent: eu-2
            - name: rack03
              parent: us-1
          rule:
            sata:
              ruleset: 0
              type: replicated
              min_size: 1
              max_size: 10
              steps:
                - take take root-sata
                - chooseleaf firstn 0 type region
                - emit
            ssd:
              ruleset: 1
              type: replicated
              min_size: 1
              max_size: 10
              steps:
                - take take root-ssd
                - chooseleaf firstn 0 type region
                - emit

Generate CRUSH map - Alternative way
------------------------------------

It is necessary to create a per-OSD pillar.

.. code-block:: yaml

    ceph:
      osd:
        crush:
          - type: root
            name: root1
          - type: region
            name: eu-1
          - type: rack
            name: rack01
          - type: host
            name: osd001
Add OSDs with specific weight
-----------------------------

Add OSD device(s) with the initial weight set to a specific value.

.. code-block:: yaml

    ceph:
      osd:
        crush_initial_weight: 0

Apply CRUSH map
---------------

Before you apply the CRUSH map, please make sure that the settings in the generated file /etc/ceph/crushmap are correct.

.. code-block:: yaml

    ceph:
      setup:
        crush:
          enforce: true
        pool:
          images:
            crush_rule: sata
            application: rbd
          volumes:
            crush_rule: sata
            application: rbd
          vms:
            crush_rule: ssd
            application: rbd

.. note:: For Kraken and earlier releases please specify crush_rule as a ruleset number.
   For Kraken and earlier releases the application param is not needed.

Persist CRUSH map
--------------------

After the CRUSH map is applied to Ceph, it is recommended to persist the same settings even after OSD reboots.

.. code-block:: yaml

    ceph:
      osd:
        crush_update: false

Ceph monitoring
---------------

By default, monitoring is set up to collect information from the MON and OSD nodes. To change the default values, add the following pillar to MON nodes.

.. code-block:: yaml

    ceph:
      monitoring:
        space_used_warning_threshold: 0.75
        space_used_critical_threshold: 0.85
        apply_latency_threshold: 0.007
        commit_latency_threshold: 0.7
        pool:
          vms:
            pool_space_used_utilization_warning_threshold: 0.75
            pool_space_used_critical_threshold: 0.85
            pool_write_ops_threshold: 200
            pool_write_bytes_threshold: 70000000
            pool_read_bytes_threshold: 70000000
            pool_read_ops_threshold: 1000
          images:
            pool_space_used_utilization_warning_threshold: 0.50
            pool_space_used_critical_threshold: 0.95
            pool_write_ops_threshold: 100
            pool_write_bytes_threshold: 50000000
            pool_read_bytes_threshold: 50000000
            pool_read_ops_threshold: 500
Ceph monitor backups
--------------------

Backup client with ssh/rsync remote host

.. code-block:: yaml

    ceph:
      backup:
        client:
          enabled: true
          full_backups_to_keep: 3
          hours_before_full: 24
          target:
            host: cfg01
            backup_dir: server-backup-dir

Backup client with local backup only

.. code-block:: yaml

    ceph:
      backup:
        client:
          enabled: true
          full_backups_to_keep: 3
          hours_before_full: 24

Backup client at exact times:

.. code-block:: yaml

    ceph:
      backup:
        client:
          enabled: true
          full_backups_to_keep: 3
          incr_before_full: 3
          backup_times:
            day_of_week: 0
            hour: 4
            minute: 52
          compression: true
          compression_threads: 2
          database:
            user: user
            password: password
          target:
            host: host01

.. note:: Parameters in the ``backup_times`` section can be used to set up the exact
   time the cron job should be executed. In this example, the backup job
   would be executed every Sunday at 4:52 AM. If any of the individual
   ``backup_times`` parameters is not defined, the default ``*`` value will be
   used. For example, if the minute parameter is ``*``, it will run the backup every minute,
   which is usually not desired.
   Available parameters are ``day_of_week``, ``day_of_month``, ``month``, ``hour`` and ``minute``.
   Please see the crontab reference for further info on how to set these parameters.

.. note:: Please be aware that only the ``backup_times`` section OR
   ``hours_before_full(incr)`` can be defined. If both are defined,
   the ``backup_times`` section will be preferred.

.. note:: The new parameter ``incr_before_full`` needs to be defined. This
   number sets the number of incremental backups to be run before a full backup
   is performed.

Backup server rsync

.. code-block:: yaml

    ceph:
      backup:
        server:
          enabled: true
          hours_before_full: 24
          full_backups_to_keep: 5
          key:
            ceph_pub_key:
              enabled: true
              key: ssh_rsa

Backup server without strict client restriction

.. code-block:: yaml

    ceph:
      backup:
        restrict_clients: false

Backup server at exact times:

.. code-block:: yaml

    ceph:
      backup:
        server:
          enabled: true
          full_backups_to_keep: 3
          incr_before_full: 3
          backup_dir: /srv/backup
          backup_times:
            day_of_week: 0
            hour: 4
            minute: 52
          key:
            ceph_pub_key:
              enabled: true
              key: key

.. note:: Parameters in the ``backup_times`` section can be used to set up the exact
   time the cron job should be executed. In this example, the backup job
   would be executed every Sunday at 4:52 AM. If any of the individual
   ``backup_times`` parameters is not defined, the default ``*`` value will be
   used. For example, if the minute parameter is ``*``, it will run the backup every minute,
   which is usually not desired.
   Available parameters are ``day_of_week``, ``day_of_month``, ``month``, ``hour`` and ``minute``.
   Please see the crontab reference for further info on how to set these parameters.

.. note:: Please be aware that only the ``backup_times`` section OR
   ``hours_before_full(incr)`` can be defined. If both are defined,
   the ``backup_times`` section will be preferred.

.. note:: The new parameter ``incr_before_full`` needs to be defined. This
   number sets the number of incremental backups to be run before a full backup
   is performed.

Migration from Decapod to salt-formula-ceph
--------------------------------------------

The following configuration will run a Python script which generates the Ceph config and OSD disk mappings to be put into the cluster model.

.. code-block:: yaml

    ceph:
      decapod:
        ip: 192.168.1.10
        user: user
        password: psswd
        deploy_config_name: ceph

More information
================

* https://github.com/cloud-ee/ceph-salt-formula
* http://ceph.com/ceph-storage/
* http://ceph.com/docs/master/start/intro/


Documentation and bugs
======================

To learn how to install and update salt-formulas, consult the documentation
available online at:

    http://salt-formulas.readthedocs.io/

In the unfortunate event that bugs are discovered, they should be reported to
the appropriate issue tracker. Use the Github issue tracker for the specific salt
formula:

    https://github.com/salt-formulas/salt-formula-ceph/issues

For feature requests, bug reports or blueprints affecting the entire ecosystem,
use the Launchpad salt-formulas project:

    https://launchpad.net/salt-formulas

You can also join the salt-formulas-users team and subscribe to the mailing list:

    https://launchpad.net/~salt-formulas-users

Developers wishing to work on the salt-formulas projects should always base
their work on the master branch and submit pull requests against the specific formula.

    https://github.com/salt-formulas/salt-formula-ceph

Any questions or feedback are always welcome, so feel free to join our IRC
channel:

    #salt-formulas @ irc.freenode.net