============
Ceph formula
============

Ceph provides extraordinary data storage scalability: thousands of client
hosts or KVMs can access petabytes to exabytes of data. Each one of your
applications can use the object, block or file system interfaces to the same
RADOS cluster simultaneously, which means your Ceph storage system serves as a
flexible foundation for all of your data storage needs.

Use salt-formula-linux for initial disk partitioning.
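
The OSD examples below assume that journal and data partitions (for example
/dev/vdb1 and /dev/vdb2) already exist. A minimal sketch of such a layout,
shown here with plain parted commands purely for illustration; the sizes are
arbitrary and the partitioning itself should be driven through
salt-formula-linux:

.. code-block:: bash

    # Illustrative layout only - device names and sizes are arbitrary.
    parted -s /dev/vdb mklabel gpt
    parted -s /dev/vdb mkpart primary 1MiB 20GiB    # becomes /dev/vdb1, journal
    parted -s /dev/vdb mkpart primary 20GiB 100%    # becomes /dev/vdb2, data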


Sample pillars
==============

Common metadata for all nodes/roles

.. code-block:: yaml

    ceph:
      common:
        version: kraken
        config:
          global:
            param1: value1
            param2: value1
            param3: value1
          pool_section:
            param1: value2
            param2: value2
            param3: value2
        fsid: a619c5fc-c4ed-4f22-9ed2-66cf2feca23d
        members:
          - name: cmn01
            host: 10.0.0.1
          - name: cmn02
            host: 10.0.0.2
          - name: cmn03
            host: 10.0.0.3
        keyring:
          admin:
            key: AQBHPYhZv5mYDBAAvisaSzCTQkC5gywGUp/voA==
            caps:
              mds: "allow *"
              mgr: "allow *"
              mon: "allow *"
              osd: "allow *"

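The fsid is an ordinary UUID and the keyring keys are standard Ceph secrets.
If you need fresh values for a new cluster, they can be generated with the
usual tools (not part of this formula):

.. code-block:: bash

    # generate a new cluster fsid
    uuidgen

    # generate a secret suitable as a keyring "key" value
    ceph-authtool --gen-print-key
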
Optional definition for cluster and public networks. The cluster network is
used for replication traffic; the public network carries front-end (client)
traffic.

.. code-block:: yaml

    ceph:
      common:
        version: kraken
        fsid: a619c5fc-c4ed-4f22-9ed2-66cf2feca23d
        ....
        public_network: 10.0.0.0/24, 10.1.0.0/24
        cluster_network: 10.10.0.0/24, 10.11.0.0/24


Ceph mon (control) roles
------------------------

Monitors: A Ceph Monitor maintains maps of the cluster state, including the
monitor map, the OSD map, the Placement Group (PG) map, and the CRUSH map.
Ceph maintains a history (called an "epoch") of each state change in the Ceph
Monitors, Ceph OSD Daemons, and PGs.

.. code-block:: yaml

    ceph:
      common:
        config:
          mon:
            key: value
      mon:
        enabled: true
        keyring:
          mon:
            key: AQAnQIhZ6in5KxAAdf467upoRMWFcVg5pbh1yg==
            caps:
              mon: "allow *"
          admin:
            key: AQBHPYhZv5mYDBAAvisaSzCTQkC5gywGUp/voA==
            caps:
              mds: "allow *"
              mgr: "allow *"
              mon: "allow *"
              osd: "allow *"

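Once the mon role has been applied on all members, monitor quorum can be
verified from any node holding the admin keyring, using standard Ceph
commands (shown only as a usage hint):

.. code-block:: bash

    # overall cluster status, including the monitor quorum
    ceph -s

    # summary of the monitor map and current quorum
    ceph mon stat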

Ceph OSD (storage) roles
------------------------

.. code-block:: yaml

    ceph:
      common:
        config:
          osd:
            key: value
      osd:
        enabled: true
        host_id: 10
        copy_admin_key: true
        journal_type: raw
        dmcrypt: disable
        osd_scenario: raw_journal_devices
        fs_type: xfs
        disk:
          '00':
            rule: hdd
            dev: /dev/vdb2
            journal: /dev/vdb1
            class: besthdd
            weight: 1.5
          '01':
            rule: hdd
            dev: /dev/vdc2
            journal: /dev/vdc1
            class: besthdd
            weight: 1.5
          '02':
            rule: hdd
            dev: /dev/vdd2
            journal: /dev/vdd1
            class: besthdd
            weight: 1.5

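After the OSD role has been applied, the resulting CRUSH hierarchy and the
per-disk weights defined above can be inspected with standard Ceph commands
(illustration only):

.. code-block:: bash

    # show OSDs with their CRUSH placement and weights
    ceph osd tree

    # show per-OSD utilization
    ceph osd df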

Ceph client roles
-----------------

Simple Ceph client service

.. code-block:: yaml

    ceph:
      client:
        config:
          global:
            mon initial members: ceph1,ceph2,ceph3
            mon host: 10.103.255.252:6789,10.103.255.253:6789,10.103.255.254:6789
        keyring:
          monitoring:
            key: 00000000000000000000000000000000000000==

On OpenStack control nodes these client settings are usually the ones
consumed by the cinder-volume or glance-registry services.

.. code-block:: yaml

    ceph:
      client:
        config:
          global:
            fsid: 00000000-0000-0000-0000-000000000000
            mon initial members: ceph1,ceph2,ceph3
            mon host: 10.103.255.252:6789,10.103.255.253:6789,10.103.255.254:6789
            osd_fs_mkfs_arguments_xfs:
            osd_fs_mount_options_xfs: rw,noatime
            network public: 10.0.0.0/24
            network cluster: 10.0.0.0/24
            osd_fs_type: xfs
          osd:
            osd journal size: 7500
            filestore xattr use omap: true
          mon:
            mon debug dump transactions: false
        keyring:
          cinder:
            key: 00000000000000000000000000000000000000==
          glance:
            key: 00000000000000000000000000000000000000==


Ceph gateway
------------

RADOS gateway with Keystone v2 auth backend

.. code-block:: yaml

    ceph:
      radosgw:
        enabled: true
        hostname: gw.ceph.lab
        bind:
          address: 10.10.10.1
          port: 8080
        identity:
          engine: keystone
          api_version: 2
          host: 10.10.10.100
          port: 5000
          user: admin
          password: password
          tenant: admin

RADOS gateway with Keystone v3 auth backend

.. code-block:: yaml

    ceph:
      radosgw:
        enabled: true
        hostname: gw.ceph.lab
        bind:
          address: 10.10.10.1
          port: 8080
        identity:
          engine: keystone
          api_version: 3
          host: 10.10.10.100
          port: 5000
          user: admin
          password: password
          project: admin
          domain: default


Ceph setup role
---------------

Replicated Ceph storage pool

.. code-block:: yaml

    ceph:
      setup:
        pool:
          replicated_pool:
            pg_num: 256
            pgp_num: 256
            type: replicated
            crush_ruleset_name: 0

Erasure-coded Ceph storage pool

.. code-block:: yaml

    ceph:
      setup:
        pool:
          erasure_pool:
            pg_num: 256
            pgp_num: 256
            type: erasure
            crush_ruleset_name: 0
            erasure_code_profile:

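For reference, the two pool definitions above roughly correspond to the
following plain Ceph commands. The formula applies them for you; the profile
name myprofile and its k/m values are only placeholders:

.. code-block:: bash

    # replicated pool
    ceph osd pool create replicated_pool 256 256 replicated

    # erasure-coded pool backed by a pre-created erasure code profile
    ceph osd erasure-code-profile set myprofile k=2 m=1
    ceph osd pool create erasure_pool 256 256 erasure myprofile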

Ceph monitoring
---------------

Collect general cluster metrics

.. code-block:: yaml

    ceph:
      monitoring:
        cluster_stats:
          enabled: true
          ceph_user: monitoring

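The ceph_user referenced above has to exist as a Ceph client with read access
to cluster statistics; its key is what the monitoring entry in the client
keyring example carries. If it is missing, it can be created with the
standard command below; the exact capabilities are an assumption, so adjust
them to your monitoring agent:

.. code-block:: bash

    # create a read-only client for the monitoring checks
    ceph auth get-or-create client.monitoring mon 'allow r' osd 'allow r'
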
Collect metrics from monitor and OSD services

.. code-block:: yaml

    ceph:
      monitoring:
        node_stats:
          enabled: true


More information
================

* https://github.com/cloud-ee/ceph-salt-formula
* http://ceph.com/ceph-storage/
* http://ceph.com/docs/master/start/intro/


Documentation and bugs
======================

To learn how to install and update salt-formulas, consult the documentation
available online at:

    http://salt-formulas.readthedocs.io/

In the unfortunate event that bugs are discovered, they should be reported to
the appropriate issue tracker. Use the GitHub issue tracker for the specific
salt formula:

    https://github.com/salt-formulas/salt-formula-ceph/issues

For feature requests, bug reports or blueprints affecting the entire
ecosystem, use the Launchpad salt-formulas project:

    https://launchpad.net/salt-formulas

You can also join the salt-formulas-users team and subscribe to the mailing
list:

    https://launchpad.net/~salt-formulas-users

Developers wishing to work on the salt-formulas projects should always base
their work on the master branch and submit pull requests against the specific
formula:

    https://github.com/salt-formulas/salt-formula-ceph

Questions and feedback are always welcome, so feel free to join our IRC
channel:

    #salt-formulas @ irc.freenode.net