============
Ceph formula
============

Ceph provides extraordinary data storage scalability: thousands of client
hosts or KVMs can access petabytes to exabytes of data. Each of your
applications can use the object, block, or file system interface to the same
RADOS cluster simultaneously, which means your Ceph storage system serves as
a flexible foundation for all of your data storage needs.

Use salt-formula-linux for initial disk partitioning.
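
A minimal sketch of such a partitioning pillar is shown below. It prepares a
journal and a data partition on ``/dev/vdb``, matching the OSD disk layout
used later in this README. The key names (``linux:storage:disk``, ``type``,
``partitions``, ``size``) and the size values are assumptions about the
salt-formula-linux pillar schema, so verify them against that formula's own
documentation before use.

.. code-block:: yaml

    linux:
      storage:
        disk:
          # Assumed salt-formula-linux layout; check its README for the
          # authoritative schema and size units.
          vdb:
            name: /dev/vdb
            type: gpt
            partitions:
              # journal partition, consumed as /dev/vdb1 by the OSD example
              - size: 20480
              # data partition, consumed as /dev/vdb2 by the OSD example
              - size: 102400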


Sample pillars
==============

Common metadata for all nodes/roles

.. code-block:: yaml

    ceph:
      common:
        version: kraken
        config:
          global:
            param1: value1
            param2: value1
            param3: value1
          pool_section:
            param1: value2
            param2: value2
            param3: value2
        fsid: a619c5fc-c4ed-4f22-9ed2-66cf2feca23d
        members:
          - name: cmn01
            host: 10.0.0.1
          - name: cmn02
            host: 10.0.0.2
          - name: cmn03
            host: 10.0.0.3
        keyring:
          admin:
            key: AQBHPYhZv5mYDBAAvisaSzCTQkC5gywGUp/voA==
            caps:
              mds: "allow *"
              mgr: "allow *"
              mon: "allow *"
              osd: "allow *"

Optional definition for cluster and public networks. The cluster network is
used for replication, the public network for front-end communication.

.. code-block:: yaml

    ceph:
      common:
        version: kraken
        fsid: a619c5fc-c4ed-4f22-9ed2-66cf2feca23d
        ....
        public_network: 10.0.0.0/24, 10.1.0.0/24
        cluster_network: 10.10.0.0/24, 10.11.0.0/24


Ceph mon (control) roles
------------------------

Monitors: a Ceph Monitor maintains maps of the cluster state, including the
monitor map, the OSD map, the Placement Group (PG) map, and the CRUSH map.
Ceph maintains a history (called an "epoch") of each state change in the Ceph
Monitors, Ceph OSD Daemons, and PGs.

.. code-block:: yaml

    ceph:
      common:
        config:
          mon:
            key: value
      mon:
        enabled: true
        keyring:
          mon:
            key: AQAnQIhZ6in5KxAAdf467upoRMWFcVg5pbh1yg==
            caps:
              mon: "allow *"
          admin:
            key: AQBHPYhZv5mYDBAAvisaSzCTQkC5gywGUp/voA==
            caps:
              mds: "allow *"
              mgr: "allow *"
              mon: "allow *"
              osd: "allow *"


Ceph OSD (storage) roles
------------------------

.. code-block:: yaml

    ceph:
      common:
        config:
          osd:
            key: value
      osd:
        enabled: true
        host_id: 10
        copy_admin_key: true
        journal_type: raw
        dmcrypt: disable
        osd_scenario: raw_journal_devices
        fs_type: xfs
        disk:
          '00':
            rule: hdd
            dev: /dev/vdb2
            journal: /dev/vdb1
            class: besthdd
            weight: 1.5
          '01':
            rule: hdd
            dev: /dev/vdc2
            journal: /dev/vdc1
            class: besthdd
            weight: 1.5
          '02':
            rule: hdd
            dev: /dev/vdd2
            journal: /dev/vdd1
            class: besthdd
            weight: 1.5


Ceph client roles
-----------------

Simple ceph client service

.. code-block:: yaml

    ceph:
      client:
        config:
          global:
            mon initial members: ceph1,ceph2,ceph3
            mon host: 10.103.255.252:6789,10.103.255.253:6789,10.103.255.254:6789
        keyring:
          monitoring:
            key: 00000000000000000000000000000000000000==

On OpenStack control nodes, these client settings are usually located
alongside the cinder-volume or glance-registry services.

.. code-block:: yaml

    ceph:
      client:
        config:
          global:
            fsid: 00000000-0000-0000-0000-000000000000
            mon initial members: ceph1,ceph2,ceph3
            mon host: 10.103.255.252:6789,10.103.255.253:6789,10.103.255.254:6789
            osd_fs_mkfs_arguments_xfs:
            osd_fs_mount_options_xfs: rw,noatime
            network public: 10.0.0.0/24
            network cluster: 10.0.0.0/24
            osd_fs_type: xfs
          osd:
            osd journal size: 7500
            filestore xattr use omap: true
          mon:
            mon debug dump transactions: false
        keyring:
          cinder:
            key: 00000000000000000000000000000000000000==
          glance:
            key: 00000000000000000000000000000000000000==


Ceph gateway
------------

Rados gateway with keystone v2 auth backend

.. code-block:: yaml

    ceph:
      radosgw:
        enabled: true
        hostname: gw.ceph.lab
        bind:
          address: 10.10.10.1
          port: 8080
        identity:
          engine: keystone
          api_version: 2
          host: 10.10.10.100
          port: 5000
          user: admin
          password: password
          tenant: admin

Rados gateway with keystone v3 auth backend

.. code-block:: yaml

    ceph:
      radosgw:
        enabled: true
        hostname: gw.ceph.lab
        bind:
          address: 10.10.10.1
          port: 8080
        identity:
          engine: keystone
          api_version: 3
          host: 10.10.10.100
          port: 5000
          user: admin
          password: password
          project: admin
          domain: default


Ceph setup role
---------------

Replicated ceph storage pool

.. code-block:: yaml

    ceph:
      setup:
        pool:
          replicated_pool:
            pg_num: 256
            pgp_num: 256
            type: replicated
            crush_ruleset_name: 0

Erasure ceph storage pool

.. code-block:: yaml

    ceph:
      setup:
        pool:
          erasure_pool:
            pg_num: 256
            pgp_num: 256
            type: erasure
            crush_ruleset_name: 0

Generate CRUSH map
++++++++++++++++++

It is required to define the `type` for CRUSH buckets, and these types must
start with `root` (top) and end with `host`. OSD daemons are assigned to host
buckets according to their hostname, and the weight of each bucket is
calculated from the weights of its children. For example, a host carrying the
three 1.5-weight disks from the OSD example above receives a bucket weight of
4.5.

.. code-block:: yaml

    ceph:
      setup:
        crush:
          enabled: True
          tunables:
            choose_total_tries: 50
          type:
            - root
            - region
            - rack
            - host
          root:
            - name: root1
            - name: root2
          region:
            - name: eu-1
              parent: root1
            - name: eu-2
              parent: root1
            - name: us-1
              parent: root2
          rack:
            - name: rack01
              parent: eu-1
            - name: rack02
              parent: eu-2
            - name: rack03
              parent: us-1
          rule:
            sata:
              ruleset: 0
              type: replicated
              min_size: 1
              max_size: 10
              steps:
                - take crushroot.performanceblock.satahss.1
                - chooseleaf firstn 0 type failure_domain
                - emit

Ceph monitoring
---------------

Collect general cluster metrics

.. code-block:: yaml

    ceph:
      monitoring:
        cluster_stats:
          enabled: true
          ceph_user: monitoring

Collect metrics from monitor and OSD services

.. code-block:: yaml

    ceph:
      monitoring:
        node_stats:
          enabled: true


More information
================

* https://github.com/cloud-ee/ceph-salt-formula
* http://ceph.com/ceph-storage/
* http://ceph.com/docs/master/start/intro/


Documentation and bugs
======================

To learn how to install and update salt-formulas, consult the documentation
available online at:

    http://salt-formulas.readthedocs.io/

In the unfortunate event that bugs are discovered, they should be reported to
the appropriate issue tracker. Use the GitHub issue tracker for this specific
salt formula:

    https://github.com/salt-formulas/salt-formula-ceph/issues

For feature requests, bug reports or blueprints affecting the entire
ecosystem, use the Launchpad salt-formulas project:

    https://launchpad.net/salt-formulas

You can also join the salt-formulas-users team and subscribe to its mailing
list:

    https://launchpad.net/~salt-formulas-users

Developers wishing to work on the salt-formulas projects should always base
their work on the master branch and submit pull requests against the specific
formula:

    https://github.com/salt-formulas/salt-formula-ceph

Any questions or feedback are always welcome, so feel free to join our IRC
channel:

    #salt-formulas @ irc.freenode.net