============
Ceph formula
============

Ceph provides extraordinary data storage scalability: thousands of client
hosts or KVMs can access petabytes to exabytes of data. Each of your
applications can use the object, block or file system interfaces to the same
RADOS cluster simultaneously, which means your Ceph storage system serves as a
flexible foundation for all of your data storage needs.

Use salt-formula-linux for the initial disk partitioning.
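
The partition layout itself is described in the pillar consumed by
salt-formula-linux. The snippet below is only an illustrative sketch: the
`linux:storage:disk` key names and their parameters are assumptions here, so
check the salt-formula-linux README for the authoritative schema.

.. code-block:: yaml

    linux:
      storage:
        disk:
          # hypothetical layout: one journal and one data partition per OSD drive
          osd_vdb:
            name: /dev/vdb
            type: gpt
            partitions:
              - size: 10000    # journal partition (size assumed to be in MB)
              - size: 100000   # data partition for the OSD
                type: xfs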


Sample pillars
==============

Common metadata for all nodes/roles

.. code-block:: yaml

    ceph:
      common:
        version: kraken
        config:
          global:
            param1: value1
            param2: value1
            param3: value1
          pool_section:
            param1: value2
            param2: value2
            param3: value2
        fsid: a619c5fc-c4ed-4f22-9ed2-66cf2feca23d
        members:
        - name: cmn01
          host: 10.0.0.1
        - name: cmn02
          host: 10.0.0.2
        - name: cmn03
          host: 10.0.0.3
        keyring:
          admin:
            key: AQBHPYhZv5mYDBAAvisaSzCTQkC5gywGUp/voA==
            caps:
              mds: "allow *"
              mgr: "allow *"
              mon: "allow *"
              osd: "allow *"

Optional definition of the cluster and public networks. The cluster network
is used for replication, the public network for front-end communication.

.. code-block:: yaml

    ceph:
      common:
        version: kraken
        fsid: a619c5fc-c4ed-4f22-9ed2-66cf2feca23d
        ....
        public_network: 10.0.0.0/24, 10.1.0.0/24
        cluster_network: 10.10.0.0/24, 10.11.0.0/24


Ceph mon (control) roles
------------------------

Monitors: a Ceph Monitor maintains maps of the cluster state, including the
monitor map, the OSD map, the Placement Group (PG) map, and the CRUSH map.
Ceph maintains a history (called an "epoch") of each state change in the Ceph
Monitors, Ceph OSD Daemons, and PGs.

.. code-block:: yaml

    ceph:
      common:
        config:
          mon:
            key: value
      mon:
        enabled: true
        keyring:
          mon:
            key: AQAnQIhZ6in5KxAAdf467upoRMWFcVg5pbh1yg==
            caps:
              mon: "allow *"
          admin:
            key: AQBHPYhZv5mYDBAAvisaSzCTQkC5gywGUp/voA==
            caps:
              mds: "allow *"
              mgr: "allow *"
              mon: "allow *"
              osd: "allow *"

Ceph mgr roles
--------------

The Ceph Manager daemon (ceph-mgr) runs alongside monitor daemons, providing
additional monitoring and interfaces to external monitoring and management
systems. Since the 12.x (luminous) Ceph release, the ceph-mgr daemon is
required for normal operations. In the 11.x (kraken) release it is an
optional component.

By default, the manager daemon requires no additional configuration beyond
ensuring it is running. If no mgr daemon is running, you will see a health
warning to that effect, and some of the other information in the output of
`ceph status` will be missing or stale until a mgr is started.

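A minimal pillar therefore only enables the role (this is simply the example
below with the optional dashboard settings removed):

.. code-block:: yaml

    ceph:
      mgr:
        enabled: true

The built-in dashboard can additionally be exposed: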

.. code-block:: yaml

    ceph:
      mgr:
        enabled: true
        dashboard:
          enabled: true
          host: 10.103.255.252
          port: 7000


Ceph OSD (storage) roles
------------------------

.. code-block:: yaml

    ceph:
      common:
        config:
          osd:
            key: value
      osd:
        enabled: true
        host_id: 10
        copy_admin_key: true
        journal_type: raw
        dmcrypt: disable
        osd_scenario: raw_journal_devices
        fs_type: xfs
        disk:
          '00':
            rule: hdd
            dev: /dev/vdb2
            journal: /dev/vdb1
            class: besthdd
            weight: 1.5
          '01':
            rule: hdd
            dev: /dev/vdc2
            journal: /dev/vdc1
            class: besthdd
            weight: 1.5
          '02':
            rule: hdd
            dev: /dev/vdd2
            journal: /dev/vdd1
            class: besthdd
            weight: 1.5


Ceph client roles
-----------------

Simple Ceph client service

.. code-block:: yaml

    ceph:
      client:
        config:
          global:
            mon initial members: ceph1,ceph2,ceph3
            mon host: 10.103.255.252:6789,10.103.255.253:6789,10.103.255.254:6789
        keyring:
          monitoring:
            key: 00000000000000000000000000000000000000==

On OpenStack controller nodes these settings are usually defined for the
cinder-volume or glance-registry services.

.. code-block:: yaml

    ceph:
      client:
        config:
          global:
            fsid: 00000000-0000-0000-0000-000000000000
            mon initial members: ceph1,ceph2,ceph3
            mon host: 10.103.255.252:6789,10.103.255.253:6789,10.103.255.254:6789
            osd_fs_mkfs_arguments_xfs:
            osd_fs_mount_options_xfs: rw,noatime
            network public: 10.0.0.0/24
            network cluster: 10.0.0.0/24
            osd_fs_type: xfs
          osd:
            osd journal size: 7500
            filestore xattr use omap: true
          mon:
            mon debug dump transactions: false
        keyring:
          cinder:
            key: 00000000000000000000000000000000000000==
          glance:
            key: 00000000000000000000000000000000000000==


Ceph gateway
------------

RADOS gateway with the Keystone v2 auth backend

.. code-block:: yaml

    ceph:
      radosgw:
        enabled: true
        hostname: gw.ceph.lab
        bind:
          address: 10.10.10.1
          port: 8080
        identity:
          engine: keystone
          api_version: 2
          host: 10.10.10.100
          port: 5000
          user: admin
          password: password
          tenant: admin

RADOS gateway with the Keystone v3 auth backend

.. code-block:: yaml

    ceph:
      radosgw:
        enabled: true
        hostname: gw.ceph.lab
        bind:
          address: 10.10.10.1
          port: 8080
        identity:
          engine: keystone
          api_version: 3
          host: 10.10.10.100
          port: 5000
          user: admin
          password: password
          project: admin
          domain: default


Ceph setup role
---------------

Replicated Ceph storage pool

.. code-block:: yaml

    ceph:
      setup:
        pool:
          replicated_pool:
            pg_num: 256
            pgp_num: 256
            type: replicated
            crush_ruleset_name: 0

Erasure-coded Ceph storage pool

.. code-block:: yaml

    ceph:
      setup:
        pool:
          erasure_pool:
            pg_num: 256
            pgp_num: 256
            type: erasure
            crush_ruleset_name: 0

Generate CRUSH map
++++++++++++++++++

It is required to define the `type` for CRUSH buckets, and these types must
start with `root` (top) and end with `host`. OSD daemons will be assigned to
hosts according to their hostnames. The weight of each bucket is calculated
from the weights of its children.

.. code-block:: yaml

    ceph:
      setup:
        crush:
          enabled: True
          tunables:
            choose_total_tries: 50
          type:
            - root
            - region
            - rack
            - host
          root:
            - name: root1
            - name: root2
          region:
            - name: eu-1
              parent: root1
            - name: eu-2
              parent: root1
            - name: us-1
              parent: root2
          rack:
            - name: rack01
              parent: eu-1
            - name: rack02
              parent: eu-2
            - name: rack03
              parent: us-1
          rule:
            sata:
              ruleset: 0
              type: replicated
              min_size: 1
              max_size: 10
              steps:
                - take crushroot.performanceblock.satahss.1
                - chooseleaf firstn 0 type failure_domain
                - emit

Ceph monitoring
---------------

Collect general cluster metrics

.. code-block:: yaml

    ceph:
      monitoring:
        cluster_stats:
          enabled: true
          ceph_user: monitoring

Collect metrics from monitor and OSD services

.. code-block:: yaml

    ceph:
      monitoring:
        node_stats:
          enabled: true


More information
================

* https://github.com/cloud-ee/ceph-salt-formula
* http://ceph.com/ceph-storage/
* http://ceph.com/docs/master/start/intro/


Documentation and bugs
======================

To learn how to install and update salt-formulas, consult the documentation
available online at:

    http://salt-formulas.readthedocs.io/

In the unfortunate event that bugs are discovered, they should be reported to
the appropriate issue tracker. Use the GitHub issue tracker for the specific
salt formula:

    https://github.com/salt-formulas/salt-formula-ceph/issues

For feature requests, bug reports or blueprints affecting the entire
ecosystem, use the Launchpad salt-formulas project:

    https://launchpad.net/salt-formulas

You can also join the salt-formulas-users team and subscribe to the mailing
list:

    https://launchpad.net/~salt-formulas-users

Developers wishing to work on the salt-formulas projects should always base
their work on the master branch and submit pull requests against the specific
formula.

    https://github.com/salt-formulas/salt-formula-ceph

Any questions or feedback are always welcome, so feel free to join our IRC
channel:

    #salt-formulas @ irc.freenode.net