========
CEPH RBD
========

Ceph's RADOS provides you with extraordinary data storage scalability: thousands of client hosts or KVMs accessing petabytes to exabytes of data. Each one of your applications can use the object, block or file system interfaces to the same RADOS cluster simultaneously, which means your Ceph storage system serves as a flexible foundation for all of your data storage needs.

Install and configure the Ceph MON and OSD services.
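
A minimal sketch of how the roles could be mapped to minions in a Salt top
file. The state names ``ceph.mon``, ``ceph.osd`` and ``ceph.client`` and the
minion ID globs are assumptions for illustration, not taken from this README:

.. code-block:: yaml

    # /srv/salt/top.sls (illustrative only; check the formula for the
    # actual state names it provides)
    base:
      'ceph-mon*':        # monitor nodes (hypothetical minion IDs)
        - ceph.mon
      'ceph-osd*':        # OSD/storage nodes
        - ceph.osd
      'cinder-volume*':   # consumers of RBD volumes
        - ceph.client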


Sample pillars
==============

Ceph OSDs: A Ceph OSD Daemon (Ceph OSD) stores data, handles data replication, recovery, backfilling, and rebalancing, and provides some monitoring information to Ceph Monitors by checking other Ceph OSD Daemons for a heartbeat. A Ceph Storage Cluster requires at least two Ceph OSD Daemons to achieve an active + clean state when the cluster makes two copies of your data (Ceph makes 2 copies by default, but you can adjust it).

.. code-block:: yaml

    ceph:
      osd:
        config:
          global:
            fsid: 00000000-0000-0000-0000-000000000000
            mon initial members: ceph1,ceph2,ceph3
            mon host: 10.103.255.252:6789,10.103.255.253:6789,10.103.255.254:6789
            osd_fs_mkfs_arguments_xfs:
            osd_fs_mount_options_xfs: rw,noatime
            network public: 10.0.0.0/24
            network cluster: 10.0.0.0/24
            osd_fs_type: xfs
          osd:
            osd journal size: 7500
            filestore xattr use omap: true
          mon:
            mon debug dump transactions: false
        keyring:
          cinder:
            key: 00000000000000000000000000000000000000==
          glance:
            key: 00000000000000000000000000000000000000==
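
The sample above notes that Ceph keeps two copies of your data by default.
Assuming the ``config`` section is rendered into ``ceph.conf`` in the same way
as the other global keys (an assumption, not stated in this README), the
default replica count could be raised with the standard ``osd pool default
size`` option:

.. code-block:: yaml

    ceph:
      osd:
        config:
          global:
            # standard Ceph option: default number of object replicas per pool;
            # placement here assumes the formula passes it through to ceph.conf
            osd pool default size: 3
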
Monitors: A Ceph Monitor maintains maps of the cluster state, including the monitor map, the OSD map, the Placement Group (PG) map, and the CRUSH map. Ceph maintains a history (called an "epoch") of each state change in the Ceph Monitors, Ceph OSD Daemons, and PGs.

.. code-block:: yaml

    ceph:
      mon:
        config:
          global:
            fsid: 00000000-0000-0000-0000-000000000000
            mon initial members: ceph1,ceph2,ceph3
            mon host: 10.103.255.252:6789,10.103.255.253:6789,10.103.255.254:6789
            osd_fs_mkfs_arguments_xfs:
            osd_fs_mount_options_xfs: rw,noatime
            network public: 10.0.0.0/24
            network cluster: 10.0.0.0/24
            osd_fs_type: xfs
          osd:
            osd journal size: 7500
            filestore xattr use omap: true
          mon:
            mon debug dump transactions: false
        keyring:
          cinder:
            key: 00000000000000000000000000000000000000==
          glance:
            key: 00000000000000000000000000000000000000==

Client pillar - usually located on the cinder-volume or glance-registry nodes.

.. code-block:: yaml

    ceph:
      client:
        config:
          global:
            fsid: 00000000-0000-0000-0000-000000000000
            mon initial members: ceph1,ceph2,ceph3
            mon host: 10.103.255.252:6789,10.103.255.253:6789,10.103.255.254:6789
            osd_fs_mkfs_arguments_xfs:
            osd_fs_mount_options_xfs: rw,noatime
            network public: 10.0.0.0/24
            network cluster: 10.0.0.0/24
            osd_fs_type: xfs
          osd:
            osd journal size: 7500
            filestore xattr use omap: true
          mon:
            mon debug dump transactions: false
        keyring:
          cinder:
            key: 00000000000000000000000000000000000000==
          glance:
            key: 00000000000000000000000000000000000000==

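The client pillar above carries the full config section. On a node that runs a
single service, it can presumably be trimmed to the monitor addresses plus
that service's keyring entry; a minimal sketch for a cinder-volume node (the
shape mirrors the monitoring client pillar below and is not verified against
the formula's schema):

.. code-block:: yaml

    ceph:
      client:
        config:
          global:
            mon initial members: ceph1,ceph2,ceph3
            mon host: 10.103.255.252:6789,10.103.255.253:6789,10.103.255.254:6789
        keyring:
          # only the key this service actually needs (assumes keyring
          # entries are defined per consumer service)
          cinder:
            key: 00000000000000000000000000000000000000==
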
Monitoring a Ceph cluster - collect cluster metrics:

.. code-block:: yaml

    ceph:
      client:
        config:
          global:
            mon initial members: ceph1,ceph2,ceph3
            mon host: 10.103.255.252:6789,10.103.255.253:6789,10.103.255.254:6789
        keyring:
          monitoring:
            key: 00000000000000000000000000000000000000==
      monitoring:
        cluster_stats:
          enabled: true
          ceph_user: monitoring

Monitoring Ceph services - collect metrics from the monitor and OSD services:

.. code-block:: yaml

    ceph:
      monitoring:
        node_stats:
          enabled: true


Read more
=========

* https://github.com/cloud-ee/ceph-salt-formula
* http://ceph.com/ceph-storage/
* http://ceph.com/docs/master/start/intro/

Documentation and Bugs
======================

To learn how to install and update salt-formulas, consult the documentation
available online at:

    http://salt-formulas.readthedocs.io/

In the unfortunate event that bugs are discovered, they should be reported to
the appropriate issue tracker. Use the GitHub issue tracker for a specific salt
formula:

    https://github.com/salt-formulas/salt-formula-ceph/issues

For feature requests, bug reports, or blueprints affecting the entire
ecosystem, use the Launchpad salt-formulas project:

    https://launchpad.net/salt-formulas

You can also join the salt-formulas-users team and subscribe to the mailing
list:

    https://launchpad.net/~salt-formulas-users

Developers wishing to work on the salt-formulas projects should always base
their work on the master branch and submit pull requests against the specific
formula:

    https://github.com/salt-formulas/salt-formula-ceph

Any questions or feedback are always welcome, so feel free to join our IRC
channel:

    #salt-formulas @ irc.freenode.net