============
Ceph formula
============

Ceph provides extraordinary data storage scalability: thousands of client
hosts or KVMs can access petabytes to exabytes of data. Each of your
applications can use the object, block, or file system interface to the same
RADOS cluster simultaneously, which means your Ceph storage system serves as a
flexible foundation for all of your data storage needs.

Use salt-formula-linux for initial disk partitioning.
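
As an illustration, the disks consumed by the OSD role further below could be
pre-partitioned through a salt-formula-linux pillar. The snippet is only a
sketch: the exact keys under ``linux:storage:disk`` and the size units are
assumptions made for this example, so consult the salt-formula-linux
documentation for the authoritative schema.

.. code-block:: yaml

    linux:
      storage:
        disk:
          # Illustrative only: one GPT-labelled disk split into a small
          # journal partition and a larger data partition for a single OSD.
          vdb:
            name: /dev/vdb
            type: gpt
            partitions:
              - size: 20000     # journal partition (size assumed in MB)
              - size: 180000    # data partition
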

Sample pillars
==============

Common metadata for all nodes/roles

.. code-block:: yaml

    ceph:
      common:
        version: kraken
        config:
          global:
            param1: value1
            param2: value1
            param3: value1
          pool_section:
            param1: value2
            param2: value2
            param3: value2
        fsid: a619c5fc-c4ed-4f22-9ed2-66cf2feca23d
        members:
          - name: cmn01
            host: 10.0.0.1
          - name: cmn02
            host: 10.0.0.2
          - name: cmn03
            host: 10.0.0.3
        keyring:
          admin:
            key: AQBHPYhZv5mYDBAAvisaSzCTQkC5gywGUp/voA==
            caps:
              mds: "allow *"
              mgr: "allow *"
              mon: "allow *"
              osd: "allow *"

Optional definition of the cluster and public networks. The cluster network is
used for replication, the public network for front-end communication.

.. code-block:: yaml

    ceph:
      common:
        version: kraken
        fsid: a619c5fc-c4ed-4f22-9ed2-66cf2feca23d
        ....
        public_network: 10.0.0.0/24, 10.1.0.0/24
        cluster_network: 10.10.0.0/24, 10.11.0.0/24


Ceph mon (control) roles
------------------------

Monitors: A Ceph Monitor maintains maps of the cluster state, including the
monitor map, the OSD map, the Placement Group (PG) map, and the CRUSH map.
Ceph maintains a history (called an “epoch”) of each state change in the Ceph
Monitors, Ceph OSD Daemons, and PGs.

.. code-block:: yaml

    ceph:
      common:
        config:
          mon:
            key: value
      mon:
        enabled: true
        keyring:
          mon:
            key: AQAnQIhZ6in5KxAAdf467upoRMWFcVg5pbh1yg==
            caps:
              mon: "allow *"
          admin:
            key: AQBHPYhZv5mYDBAAvisaSzCTQkC5gywGUp/voA==
            caps:
              mds: "allow *"
              mgr: "allow *"
              mon: "allow *"
              osd: "allow *"

Ceph OSD (storage) roles
------------------------

.. code-block:: yaml

    ceph:
      common:
        config:
          osd:
            key: value
      osd:
        enabled: true
        host_id: 10
        copy_admin_key: true
        journal_type: raw
        dmcrypt: disable
        osd_scenario: raw_journal_devices
        fs_type: xfs
        disk:
          '00':
            rule: hdd
            dev: /dev/vdb2
            journal: /dev/vdb1
            class: besthdd
            weight: 1.5
          '01':
            rule: hdd
            dev: /dev/vdc2
            journal: /dev/vdc1
            class: besthdd
            weight: 1.5
          '02':
            rule: hdd
            dev: /dev/vdd2
            journal: /dev/vdd1
            class: besthdd
            weight: 1.5

Ceph client roles
-----------------

Simple Ceph client service

.. code-block:: yaml

    ceph:
      client:
        config:
          global:
            mon initial members: ceph1,ceph2,ceph3
            mon host: 10.103.255.252:6789,10.103.255.253:6789,10.103.255.254:6789
        keyring:
          monitoring:
            key: 00000000000000000000000000000000000000==

On OpenStack control nodes these settings are usually located at the
cinder-volume or glance-registry services.

.. code-block:: yaml

    ceph:
      client:
        config:
          global:
            fsid: 00000000-0000-0000-0000-000000000000
            mon initial members: ceph1,ceph2,ceph3
            mon host: 10.103.255.252:6789,10.103.255.253:6789,10.103.255.254:6789
            osd_fs_mkfs_arguments_xfs:
            osd_fs_mount_options_xfs: rw,noatime
            network public: 10.0.0.0/24
            network cluster: 10.0.0.0/24
            osd_fs_type: xfs
          osd:
            osd journal size: 7500
            filestore xattr use omap: true
          mon:
            mon debug dump transactions: false
        keyring:
          cinder:
            key: 00000000000000000000000000000000000000==
          glance:
            key: 00000000000000000000000000000000000000==


Ceph gateway
------------

RADOS Gateway with the Keystone v2 auth backend

.. code-block:: yaml

    ceph:
      radosgw:
        enabled: true
        hostname: gw.ceph.lab
        bind:
          address: 10.10.10.1
          port: 8080
        identity:
          engine: keystone
          api_version: 2
          host: 10.10.10.100
          port: 5000
          user: admin
          password: password
          tenant: admin

RADOS Gateway with the Keystone v3 auth backend

.. code-block:: yaml

    ceph:
      radosgw:
        enabled: true
        hostname: gw.ceph.lab
        bind:
          address: 10.10.10.1
          port: 8080
        identity:
          engine: keystone
          api_version: 3
          host: 10.10.10.100
          port: 5000
          user: admin
          password: password
          project: admin
          domain: default


Ceph setup role
---------------

Replicated Ceph storage pool

.. code-block:: yaml

    ceph:
      setup:
        pool:
          replicated_pool:
            pg_num: 256
            pgp_num: 256
            type: replicated
            crush_ruleset_name: 0

Erasure-coded Ceph storage pool

.. code-block:: yaml

    ceph:
      setup:
        pool:
          erasure_pool:
            pg_num: 256
            pgp_num: 256
            type: erasure
            crush_ruleset_name: 0
            erasure_code_profile:


Ceph monitoring
---------------

Collect general cluster metrics

.. code-block:: yaml

    ceph:
      monitoring:
        cluster_stats:
          enabled: true
          ceph_user: monitoring

Collect metrics from monitor and OSD services

.. code-block:: yaml

    ceph:
      monitoring:
        node_stats:
          enabled: true


More information
================

* https://github.com/cloud-ee/ceph-salt-formula
* http://ceph.com/ceph-storage/
* http://ceph.com/docs/master/start/intro/


Documentation and bugs
======================

To learn how to install and update salt-formulas, consult the documentation
available online at:

    http://salt-formulas.readthedocs.io/

In the unfortunate event that bugs are discovered, they should be reported to
the appropriate issue tracker. Use the GitHub issue tracker for a specific
salt formula:

    https://github.com/salt-formulas/salt-formula-ceph/issues

For feature requests, bug reports or blueprints affecting the entire
ecosystem, use the Launchpad salt-formulas project:

    https://launchpad.net/salt-formulas

You can also join the salt-formulas-users team and subscribe to the mailing
list:

    https://launchpad.net/~salt-formulas-users

Developers wishing to work on the salt-formulas projects should always base
their work on the master branch and submit pull requests against the specific
formula:

    https://github.com/salt-formulas/salt-formula-ceph

Any questions or feedback are always welcome, so feel free to join our IRC
channel:

    #salt-formulas @ irc.freenode.net