=====
Usage
=====

Salt is a new approach to infrastructure management. Easy enough to get
running in minutes, scalable enough to manage tens of thousands of servers,
and fast enough to communicate with them in seconds.

Salt delivers a dynamic communication bus for infrastructures that can be used
for orchestration, remote execution, configuration management and much more.

Sample Metadata
===============

Salt Master
-----------

Salt master with base formulas and pillar metadata back end:

.. literalinclude:: tests/pillar/master_single_pillar.sls
   :language: yaml

Salt master with reclass ENC metadata back end:

.. literalinclude:: tests/pillar/master_single_reclass.sls
   :language: yaml

Salt master with Architect ENC metadata back end:

.. code-block:: yaml

    salt:
      master:
        enabled: true
        pillar:
          engine: architect
          project: project-name
          host: architect-api
          port: 8181
          username: salt
          password: password

Salt master with multiple ``ext_pillars``:

.. code-block:: yaml

    salt:
      master:
        enabled: true
        pillar:
          engine: salt
          source:
            engine: local
          ext_pillars:
            1:
              module: cmd_json
              params: '"echo {\"arg\": \"val\"}"'
            2:
              module: cmd_yaml
              params: /usr/local/bin/get_yml.sh

Salt master with API:

.. literalinclude:: tests/pillar/master_api.sls
   :language: yaml

Salt master with defined user ACLs:

.. literalinclude:: tests/pillar/master_acl.sls
   :language: yaml

Salt master with preset minions:

.. code-block:: yaml

    salt:
      master:
        enabled: true
        minions:
          - name: 'node1.system.location.domain.com'

Salt master with pip-based installation (optional):

.. code-block:: yaml

    salt:
      master:
        enabled: true
        ...
        source:
          engine: pip
          version: 2016.3.0rc2

Install formula through system package management:

.. code-block:: yaml

    salt:
      master:
        enabled: true
        ...
        environment:
          prd:
            keystone:
              source: pkg
              name: salt-formula-keystone
            nova:
              source: pkg
              name: salt-formula-nova
              version: 0.1+0~20160818133412.24~1.gbp6e1ebb
            postgresql:
              source: pkg
              name: salt-formula-postgresql
              version: purged

The keystone formula above is installed at the latest version, and the
formulas without a version are installed in a single call to the aptpkg
module. If the version attribute is present, the sls iterates over the
formulas and takes action to install the specific version or remove the
package. The version attribute may have one of these values:
``[latest|purged|removed|<VERSION>]``.
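
For illustration, a hedged sketch (the ``heat`` and ``cinder`` formula names
are hypothetical) exercising the remaining values:

.. code-block:: yaml

    salt:
      master:
        enabled: true
        ...
        environment:
          prd:
            heat:
              source: pkg
              name: salt-formula-heat
              version: latest    # explicitly track the newest available package
            cinder:
              source: pkg
              name: salt-formula-cinder
              version: removed   # remove the package but keep its config files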

Clone the master branch of the keystone formula as a local feature branch:

.. code-block:: yaml

    salt:
      master:
        enabled: true
        ...
        environment:
          dev:
            formula:
              keystone:
                source: git
                address: git@github.com:openstack/salt-formula-keystone.git
                revision: master
                branch: feature

Salt master with specified formula refs (for example, for Gerrit review):

.. code-block:: yaml

    salt:
      master:
        enabled: true
        ...
        environment:
          dev:
            formula:
              keystone:
                source: git
                address: https://git.openstack.org/openstack/salt-formula-keystone
                revision: refs/changes/56/123456/1

Salt master logging configuration:

.. code-block:: yaml

    salt:
      master:
        enabled: true
        log:
          level: warning
          file: '/var/log/salt/master'
          level_logfile: warning

Salt minion logging configuration:

.. code-block:: yaml

    salt:
      minion:
        enabled: true
        log:
          level: info
          file: '/var/log/salt/minion'
          level_logfile: warning

Salt master with logging handlers:

.. code-block:: yaml

    salt:
      master:
        enabled: true
        handler:
          handler01:
            engine: udp
            bind:
              host: 127.0.0.1
              port: 9999
      minion:
        handler:
          handler01:
            engine: udp
            bind:
              host: 127.0.0.1
              port: 9999
          handler02:
            engine: zmq
            bind:
              host: 127.0.0.1
              port: 9999

Salt engine definition for the saltgraph metadata collector:

.. code-block:: yaml

    salt:
      master:
        engine:
          graph_metadata:
            engine: saltgraph
            host: 127.0.0.1
            port: 5432
            user: salt
            password: salt
            database: salt

Salt engine definition for the Architect service:

.. code-block:: yaml

    salt:
      master:
        engine:
          architect:
            engine: architect
            project: project-name
            host: architect-api
            port: 8181
            username: salt
            password: password

Salt engine definition for sending events from Docker events:

.. code-block:: yaml

    salt:
      master:
        engine:
          docker_events:
            docker_url: unix://var/run/docker.sock

Salt master peer setup for remote certificate signing:

.. code-block:: yaml

    salt:
      master:
        peer:
          ".*":
            - x509.sign_remote_certificate

Salt master backup configuration:

.. code-block:: yaml

    salt:
      master:
        backup: true
        initial_data:
          engine: backupninja
          home_dir: remote-backup-home-dir
          source: backup-node-host
          host: original-salt-master-id

Configure the verbosity of state output (used for the :command:`salt`
command):

.. code-block:: yaml

    salt:
      master:
        state_output: changes

Pass pillar render errors to the minion log:

.. note:: When set to ``False``, this option is great for debugging.
   However, it is not recommended for any production environment, as the
   error may contain templating data, such as passwords, that the minion
   should not expose.

.. code-block:: yaml

    salt:
      master:
        pillar_safe_render_error: False

Enable Windows repository support:

.. code-block:: yaml

    salt:
      master:
        win_repo:
          source: git
          address: https://github.com/saltstack/salt-winrepo-ng
          revision: master

Configure a gitfs_remotes resource:

.. code-block:: yaml

    salt:
      master:
        gitfs_remotes:
          salt_formula:
            url: https://gerrit.mcp.mirantis.com/salt-formulas/salt.git
            enabled: true
            params:
              base: master

Read more about gitfs resource options in the official Salt documentation.

Event/Reactor systems
~~~~~~~~~~~~~~~~~~~~~

Salt to synchronize node pillar and modules after start:

.. code-block:: yaml

    salt:
      master:
        reactor:
          salt/minion/*/start:
            - salt://salt/reactor/node_start.sls

Trigger basic node install:

.. code-block:: yaml

    salt:
      master:
        reactor:
          salt/minion/install:
            - salt://salt/reactor/node_install.sls

Sample event to trigger the node installation:

.. code-block:: bash

    salt-call event.send 'salt/minion/install'

Run any defined orchestration pipeline:

.. code-block:: yaml

    salt:
      master:
        reactor:
          salt/orchestrate/start:
            - salt://salt/reactor/orchestrate_start.sls

Event to trigger the orchestration pipeline:

.. code-block:: bash

    salt-call event.send 'salt/orchestrate/start' "{'orchestrate': 'salt/orchestrate/infra_install.sls'}"

Synchronise modules and pillars on minion start:

.. code-block:: yaml

    salt:
      master:
        reactor:
          'salt/minion/*/start':
            - salt://salt/reactor/minion_start.sls

Add and/or remove the minion key:

.. code-block:: yaml

    salt:
      master:
        reactor:
          salt/key/create:
            - salt://salt/reactor/key_create.sls
          salt/key/remove:
            - salt://salt/reactor/key_remove.sls

Event to trigger the key creation:

.. code-block:: bash

    salt-call event.send 'salt/key/create' \
    > "{'node_id': 'id-of-minion', 'node_host': '172.16.10.100', 'orch_post_create': 'kubernetes.orchestrate.compute_install', 'post_create_pillar': {'node_name': 'id-of-minion'}}"

.. note::

   You can pass additional ``orch_pre_create``, ``orch_post_create``,
   ``orch_pre_remove``, or ``orch_post_remove`` parameters to the event
   to call extra orchestrate files. This can be useful, for example, for
   registering or unregistering nodes from the monitoring alarms or
   dashboards.

   The key creation event needs to be run from a machine other than the
   one being registered.

Event to trigger the key removal:

.. code-block:: bash

    salt-call event.send 'salt/key/remove'
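
For illustration, a hedged sketch passing ``orch_post_remove`` so an extra
orchestration runs after the key is removed; the orchestrate file name and
the parameters shown are hypothetical:

.. code-block:: bash

    salt-call event.send 'salt/key/remove' \
    > "{'node_id': 'id-of-minion', 'orch_post_remove': 'monitoring.orchestrate.node_unregister'}"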

Control VM provisioning:

.. code-block:: yaml

    _param:
      vcp_links: &vcp_links
        - type: phy
          id: ens2
          name: ens2
      private-ipv4: &private-ipv4
        - id: private-ipv4
          type: ipv4
          link: ens2
          netmask: 255.255.255.0
          routes:
            - gateway: 192.168.0.1
              netmask: 0.0.0.0
              network: 0.0.0.0
    virt:
      disk:
        three_disks:
          - system:
              size: 4096
              image: ubuntu.qcow
          - repository_snapshot:
              size: 8192
              image: snapshot.qcow
          - cinder-volume:
              size: 2048
      nic:
        control:
          - name: nic01
            bridge: br-pxe
            model: virtio
          - name: nic02
            bridge: br-cp
            model: virtio
          - name: nic03
            bridge: br-store-front
            model: virtio
          - name: nic04
            bridge: br-public
            model: virtio
          - name: nic05
            bridge: br-prv
            model: virtio
            virtualport:
              type: openvswitch
    salt:
      control:
        enabled: true
        virt_enabled: true
        size:
          medium_three_disks:
            cpu: 2
            ram: 4
            disk_profile: three_disks
        cluster:
          mycluster:
            domain: neco.virt.domain.com
            engine: virt
            # Cluster global settings
            rng: false
            enable_vnc: True
            seed: cloud-init
            cloud_init:
              user_data:
                disable_ec2_metadata: true
                resize_rootfs: True
                timezone: UTC
                ssh_deletekeys: True
                ssh_genkeytypes: ['rsa', 'dsa', 'ecdsa']
                ssh_svcname: ssh
                locale: en_US.UTF-8
                disable_root: true
                apt_preserve_sources_list: false
                apt:
                  sources_list: ""
                  sources:
                    ubuntu.list:
                      source: ${linux:system:repo:ubuntu:source}
                    mcp_saltstack.list:
                      source: ${linux:system:repo:mcp_saltstack:source}
            node:
              ubuntu1:
                provider: node01.domain.com
                image: ubuntu.qcow
                size: medium
                img_dest: /var/lib/libvirt/ssdimages
                # Node settings override cluster global ones
                enable_vnc: False
                rng:
                  backend: /dev/urandom
                  model: random
                  rate:
                    period: '1800'
                    bytes: '1500'
                # Custom per-node loader definition (e.g. for AArch64 UEFI)
                loader:
                  readonly: yes
                  type: pflash
                  path: /usr/share/AAVMF/AAVMF_CODE.fd
                machine: virt-2.11  # Custom per-node virt machine type
                cpu_mode: host-passthrough
                cpuset: '1-4'
                mac:
                  nic01: AC:DE:48:AA:AA:AA
                  nic02: AC:DE:48:AA:AA:BB
                # netconfig affects: hostname during boot
                # manual interfaces configuration
                cloud_init:
                  network_data:
                    links: *vcp_links
                    networks:
                      - <<: *private-ipv4
                        ip_address: 192.168.0.161
                  user_data:
                    salt_minion:
                      conf:
                        master: 10.1.1.1
              ubuntu2:
                seed: qemu-nbd
                cloud_init:
                  enabled: false

There are two methods to seed an initial Salt minion configuration to
libvirt VMs: mount a disk and update the filesystem, or create a ConfigDrive
with a cloud-init config. This is controlled by the ``seed`` parameter on the
cluster and node levels. When set to ``True`` or ``qemu-nbd``, the old method
of mounting a disk is used. When set to ``cloud-init``, the new method is
used. When set to ``False``, no seeding happens. The default value is
``True``, meaning the ``qemu-nbd`` method is used. This is done for backward
compatibility and may change in the future.

The recommended method is to use cloud-init. It is controlled by the
``cloud_init`` dictionary on the cluster and node levels. Node-level
parameters are merged on top of cluster-level parameters. The Salt minion
config is populated automatically based on the VM name and the config
settings of the minion that is actually executing the state. To override
them, add a ``salt_minion`` section into the ``user_data`` section as shown
above. It is possible to disable cloud-init by setting ``cloud_init.enabled``
to ``False``.
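
A minimal sketch of the seeding options described above (cluster, node, and
provider names are hypothetical), combining cloud-init seeding on the cluster
level with per-node overrides:

.. code-block:: yaml

    salt:
      control:
        enabled: true
        virt_enabled: true
        cluster:
          mycluster:
            seed: cloud-init            # ConfigDrive with a cloud-init config
            cloud_init:
              user_data:
                timezone: UTC           # cluster-level defaults...
            node:
              node1:
                provider: kvm01.domain.com
                cloud_init:
                  user_data:
                    salt_minion:
                      conf:
                        master: 10.1.1.1   # ...merged with node-level overrides
              node2:
                provider: kvm02.domain.com
                seed: qemu-nbd          # legacy method: mount the disk directly
                cloud_init:
                  enabled: false
              node3:
                provider: kvm03.domain.com
                seed: false             # no seeding at all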

To enable the Redis plugin for the Salt caching subsystem, use the
below pillar structure:

.. code-block:: yaml

    salt:
      master:
        cache:
          plugin: redis
          host: localhost
          port: 6379
          db: '0'
          password: pass_word
          bank_prefix: 'MCP'
          bank_keys_prefix: 'MCPKEY'
          key_prefix: 'KEY'
          separator: '@'

Jinja options
-------------

Use the following options to update the default Jinja renderer options.
Salt recognizes Jinja options for templates and for ``sls`` files.

For the full list of options, see the Jinja documentation:
http://jinja.pocoo.org/docs/api/#high-level-api

.. code-block:: yaml

    salt:
      renderer:
        # for templates
        jinja: &jinja_env
          # Default Jinja environment options
          block_start_string: '{%'
          block_end_string: '%}'
          variable_start_string: '{{'
          variable_end_string: '}}'
          comment_start_string: '{#'
          comment_end_string: '#}'
          keep_trailing_newline: False
          newline_sequence: '\n'
          # Next two are enabled by default in Salt
          trim_blocks: True
          lstrip_blocks: True
          # Next two are not enabled by default in Salt,
          # but worth considering for salt-formulas in the future
          line_statement_prefix: '%'
          line_comment_prefix: '##'
        # for .sls state files
        jinja_sls: *jinja_env

With the ``line_statement_prefix`` and ``line_comment_prefix`` options
enabled, the following statements are valid:

.. code-block:: jinja

    %- set myvar = 'one'

    ## You can mix even with '{%'
    {%- set myvar = 'two' %} ## comment

    %- set mylist = ['one', 'two', 'three'] ## comment

    ## comment
    %- for item in mylist: ## comment
    {{- item }}
    %- endfor

MCP Cluster health checks
~~~~~~~~~~~~~~~~~~~~~~~~~

A Swiss-army-knife toolset for verifying MCP cluster health.

.. note:: Health checks are tested with salt modules >= 2017.7.

Install the health_checks module:

.. code-block:: bash

    cp health_checks.py /usr/share/salt-formulas/env/_modules/health_checks.py
    salt -C '*' saltutil.sync_all

Usually, exit codes are not caught, and salt-call for a module will always
return exit code 0 regardless of errors in the output. If you want to
control the exit code for scripting, pass **--retcode-passthrough** to each
salt call:

.. code-block:: bash

    salt-call health_checks.minions_check --retcode-passthrough
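
For example, a minimal scripting sketch reacting to a non-zero exit code:

.. code-block:: bash

    salt-call health_checks.minions_check --retcode-passthrough || echo "check failed: $?"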

Verify if minions are online.
Use it to determine which minions are offline:

.. code-block:: bash

    salt-call health_checks.minions_check

Verify time diff on your minions:

.. code-block:: bash

    salt-call health_checks.time_diff_check

In case of failure, dump the diff JSON:

.. code-block:: bash

    salt-call health_checks.time_diff_check debug=True --out=json

Get JSON stats from ntpq:

.. code-block:: bash

    salt-call health_checks.ntp_status

Verify NTP peers status on the environment:

.. code-block:: bash

    salt-call health_checks.ntp_check
    salt-call health_checks.ntp_check min_peers=2 max_stratum=2

Verify contrail nodes' contrail-status output:

.. code-block:: bash

    salt-call health_checks.contrail_check debug=True

Verify galera cluster status:

.. code-block:: bash

    salt-call health_checks.galera_check debug=True
    salt-call health_checks.galera_check cluster_size=3 debug=True

Verify MySQL status:

.. code-block:: bash

    salt-call health_checks.mysql_check debug=True

Get rabbitmq JSON objects from command execution:

.. warning:: This code is experimental. It is a hack to convert an Erlang
   object to JSON and may fail.

.. code-block:: bash

    salt-call health_checks.rabbitmq_cmd status
    salt-call health_checks.rabbitmq_cmd cluster_status
    salt-call health_checks.rabbitmq_cmd list_hashes
    salt-call health_checks.rabbitmq_cmd list_ciphers

Get a rabbitmq JSON dict of queues on the node (vhost '/' by default):

.. code-block:: bash

    salt -C 's-msg01*' health_checks.rabbitmq_list_queues
    salt -C 's-msg01*' health_checks.rabbitmq_list_queues /openstack

Verify haproxy upstream status:

.. code-block:: bash

    salt-call health_checks.haproxy_check debug=True
    salt-call health_checks.haproxy_check ignore_no_upstream=True

Get haproxy JSON stats (native python calls to socket):

.. code-block:: bash

    salt-call health_checks.haproxy_status
    salt-call health_checks.haproxy_status socket_path='/var/run/haproxy/admin.sock' stats_filter=['status']

Verify disk space usage:

.. code-block:: bash

    salt-call health_checks.df_check
    salt-call health_checks.df_check verify=space space_limit=90 ignore_partitions=['/']

Verify disk inodes usage:

.. code-block:: bash

    salt-call health_checks.df_check verify=inodes
    salt-call health_checks.df_check verify=inodes inode_limit=10

Verify load average on the environment:

.. code-block:: bash

    salt-call health_checks.load_check
    salt-call health_checks.load_check la1=4 la5=1 la15=1

Verify ifaces rx/tx drops:

.. code-block:: bash

    salt-call health_checks.netdev_check
    salt-call health_checks.netdev_check rx_drop_limit=0 tx_drop_limit=0

Verify memory usage:

.. code-block:: bash

    salt-call health_checks.mem_check
    salt-call health_checks.mem_check used_limit=50

Verify ceph health status:

.. code-block:: bash

    salt-call health_checks.ceph_health_check
    salt-call health_checks.ceph_health_check target=s-mon01* target_type=glob debug=True

Execute an arbitrary ceph command and get JSON output:

.. code-block:: bash

    salt -C 'cmn01*' health_checks.ceph_cmd 'osd tree'

Retrieve a docker registry listing recursively:

.. code-block:: bash

    salt-call health_checks.docker_registry_list s-apt01:5000
    salt-call health_checks.docker_registry_list http://127.0.0.1:5000

Retrieve the glusterfs pool list:

.. code-block:: bash

    salt -C 'I@glusterfs:server and *02*' health_checks.gluster_pool_list

Retrieve glusterfs volumes status:

.. code-block:: bash

    salt -C 'I@glusterfs:server and *02*' health_checks.gluster_volume_status

Check glusterfs pool health and size:

.. code-block:: bash

    salt-call health_checks.gluster_pool_check
    salt-call health_checks.gluster_pool_check expected_size=5

Check glusterfs volumes' bricks health and size:

.. code-block:: bash

    salt-call health_checks.gluster_volumes_check
    salt-call health_checks.gluster_volumes_check expected_size=1 ignore_volumes=['/srv/volumes/aptly']

Encrypted pillars
~~~~~~~~~~~~~~~~~

.. note:: NACL and the below configuration will be available in Salt > 2017.7.

External resources:

- Tutorial to configure the Salt and Reclass ``ext_pillar`` and NACL:
  http://apealive.net/post/2017-09-salt-nacl-ext-pillar/
- SaltStack documentation:
  https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.nacl.html

Configure the Salt NACL module:

.. code-block:: bash

    pip install --upgrade libnacl===1.5.2
    salt-call --local nacl.keygen /etc/salt/pki/master/nacl

    local:
        saved sk_file:/etc/salt/pki/master/nacl pk_file: /etc/salt/pki/master/nacl.pub

.. code-block:: yaml

    salt:
      master:
        pillar:
          reclass: *reclass
          nacl:
            index: 99
        nacl:
          box_type: sealedbox
          sk_file: /etc/salt/pki/master/nacl
          pk_file: /etc/salt/pki/master/nacl.pub
          #sk: None
          #pk: None

NACL encrypt secrets:

.. code-block:: bash

    salt-call --local nacl.enc 'my_secret_value' pk_file=/etc/salt/pki/master/nacl.pub
    hXTkJpC1hcKMS7yZVGESutWrkvzusXfETXkacSklIxYjfWDlMJmR37MlmthdIgjXpg4f2AlBKb8tc9Woma7q

    # or
    salt-run nacl.enc 'myotherpass'
    ADDFD0Rav6p6+63sojl7Htfrncp5rrDVyeE4BSPO7ipq8fZuLDIVAzQLf4PCbDqi+Fau5KD3/J/E+Pw=
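
To check that a value round-trips, a hedged sketch decrypting the first
ciphertext above with the standard ``nacl.dec`` call and the secret key file
configured earlier:

.. code-block:: bash

    salt-call --local nacl.dec 'hXTkJpC1hcKMS7yZVGESutWrkvzusXfETXkacSklIxYjfWDlMJmR37MlmthdIgjXpg4f2AlBKb8tc9Woma7q' \
    > sk_file=/etc/salt/pki/master/nacl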

NACL encrypted values in a pillar:

Use the boxed syntax ``NACL[CryptedValue=]`` to mark an encrypted value in
a pillar:

.. code-block:: yaml

    my_pillar:
      my_nacl:
        key0: unencrypted_value
        key1: NACL[hXTkJpC1hcKMS7yZVGESutWrkvzusXfETXkacSklIxYjfWDlMJmR37MlmthdIgjXpg4f2AlBKb8tc9Woma7q]

NACL large files:

.. code-block:: bash

    salt-call nacl.enc_file /tmp/cert.crt out=/srv/salt/env/dev/cert.nacl
    # or more advanced
    cert=$(cat /tmp/cert.crt)
    salt-call --out=newline_values_only nacl.enc_pub data="$cert" > /srv/salt/env/dev/cert.nacl

NACL within template/native pillars:

.. code-block:: yaml

    pillarexample:
      user: root
      password1: {{salt.nacl.dec('DRB7Q6/X5gGSRCTpZyxS6hlbWj0llUA+uaVyvou3vJ4=')|json}}
      cert_key: {{salt.nacl.dec_file('/srv/salt/env/dev/certs/example.com/cert.nacl')|json}}
      cert_key2: {{salt.nacl.dec_file('salt:///certs/example.com/cert2.nacl')|json}}

Salt Syndic
-----------

The master of masters:

.. code-block:: yaml

    salt:
      master:
        enabled: true
        order_masters: True

Lower syndicated master:

.. code-block:: yaml

    salt:
      syndic:
        enabled: true
        master:
          host: master-of-master-host
        timeout: 5

Syndicated master with multiple masters of masters:

.. code-block:: yaml

    salt:
      syndic:
        enabled: true
        masters:
          - host: master-of-master-host1
          - host: master-of-master-host2
        timeout: 5

Salt Minion
-----------

The minion ID by default triggers a dependency on the Linux formula, as the
FQDN is configured from the ``linux.system.name`` and ``linux.system.domain``
pillars. To override this, provide the exact minion ID you require. The same
can be set for the master ID rendered in ``master.conf``.

.. code-block:: yaml

    salt:
      minion:
        id: minion1.production
      master:
        id: master.production

Simplest Salt minion setup with a central configuration node:

.. literalinclude:: tests/pillar/minion_master.sls
   :language: yaml

Multi-master Salt minion setup:

.. literalinclude:: tests/pillar/minion_multi_master.sls
   :language: yaml

Salt minion with salt mine options:

.. literalinclude:: tests/pillar/minion_mine.sls
   :language: yaml

Salt minion with graphing dependencies:

.. literalinclude:: tests/pillar/minion_graph.sls
   :language: yaml

Salt minion behind an HTTP proxy:

.. code-block:: yaml

    salt:
      minion:
        proxy:
          host: 127.0.0.1
          port: 3128

Salt minion with a non-default HTTP backend. The default tornado backend
does not respect HTTP proxy settings set as environment variables, so this
is useful for cases where you need to set no_proxy lists:

.. code-block:: yaml

    salt:
      minion:
        backend: urllib2

Salt minion with PKI certificate authority (CA):

.. literalinclude:: tests/pillar/minion_pki_ca.sls
   :language: yaml

Salt minion using a PKI certificate:

.. literalinclude:: tests/pillar/minion_pki_cert.sls
   :language: yaml

Salt minion trusting CA certificates issued by the salt CA on a
specific host (for example, the salt-master node):

.. code-block:: yaml

    salt:
      minion:
        trusted_ca_minions:
          - cfg01

Salt Minion Proxy
~~~~~~~~~~~~~~~~~

Salt proxy pillar:

.. code-block:: yaml

    salt:
      minion:
        proxy_minion:
          master: localhost
          device:
            vsrx01.mydomain.local:
              enabled: true
              engine: napalm
            csr1000v.mydomain.local:
              enabled: true
              engine: napalm

.. note:: This is the pillar of the real salt-minion.

Proxy pillar for an IOS device:

.. code-block:: yaml

    proxy:
      proxytype: napalm
      driver: ios
      host: csr1000v.mydomain.local
      username: root
      passwd: r00tme

.. note:: This is the pillar of the node that is not able to run
   salt-minion itself.

Proxy pillar for a JunOS device:

.. code-block:: yaml

    proxy:
      proxytype: napalm
      driver: junos
      host: vsrx01.mydomain.local
      username: root
      passwd: r00tme
      optional_args:
        config_format: set

.. note:: This pillar applies to the node that cannot run
   salt-minion itself.

Salt SSH
~~~~~~~~

Salt SSH with sudoer using key:

.. literalinclude:: tests/pillar/master_ssh_minion_key.sls
   :language: yaml

Salt SSH with sudoer using password:

.. literalinclude:: tests/pillar/master_ssh_minion_password.sls
   :language: yaml

Salt SSH with root using password:

.. literalinclude:: tests/pillar/master_ssh_minion_root.sls
   :language: yaml

Salt control (cloud/kvm/docker)
-------------------------------

Salt cloud with local OpenStack provider:

.. literalinclude:: tests/pillar/control_cloud_openstack.sls
   :language: yaml

Salt cloud with Digital Ocean provider:

.. literalinclude:: tests/pillar/control_cloud_digitalocean.sls
   :language: yaml

Salt virt with KVM cluster:

.. literalinclude:: tests/pillar/control_virt.sls
   :language: yaml

Salt virt with custom destination for image file:

.. literalinclude:: tests/pillar/control_virt_custom.sls
   :language: yaml

Salt shared library
-------------------

This formula includes the ``sharedlib`` execution module, which is a kind of
library of functions and/or classes to be used in Jinja templates or directly
as an execution module.

``sharedlib`` implements a loader that is able to scan nested directories
and import Python classes/functions from nested modules. Salt itself doesn't
allow this, as it only imports top-level modules:
https://github.com/saltstack/salt/issues/37273

``sharedlib`` implements 4 main functions:

* ``sharedlib.list`` - search and print functions/classes found in nested directories
* ``sharedlib.info`` - print the docstring of a function (if it exists)
* ``sharedlib.get`` - get a function/class object without executing it immediately
* ``sharedlib.call`` - get a function/class and execute/initialize it with
  the arguments given

Each of the functions above also has its own docstring, so it is possible to
inspect them on a system:

.. code-block:: text

    # salt-call sys.doc sharedlib.list
    local:
        ----------
        sharedlib.list:
            List available functions.

.. code-block:: bash

    salt-call sharedlib.list

Usage examples:

.. code-block:: text

    # salt-call sharedlib.list
    local:
        ----------
        sharedlib.list:
            ----------
            classes:
                - misc.Test
                - misc2.Test
            functions:
                - misc.cast_dict_keys_to_int

.. code-block:: text

    # salt-call sharedlib.info misc.cast_dict_keys_to_int
    local:
        ----------
        sharedlib.info:
            ----------
            misc.cast_dict_keys_to_int:
                Return a dictionary with keys casted to int.
                This usually is required when you want sort the dict later.

Jinja example:

.. code-block:: jinja

    {%- set ruleset = salt['sharedlib.call']('misc.cast_dict_keys_to_int', c.get('ruleset', {})) %}

.. code-block:: jinja

    {%- set func = salt['sharedlib.get']('misc.cast_dict_keys_to_int') %}
    {%- for c_name, c in t.chains.items() %}
    {%- set ruleset = func(c.get('ruleset', {})) %}
    {%- for rule_id, r in ruleset | dictsort %}
    ...
    {%- endfor %}
    {%- endfor %}

Data model and schema validation
--------------------------------

The formula supports several mechanisms to validate the data model with the
jsonschema validator. It supports unversioned schemas, which must be located
at:

.. code-block:: text

    'salt://{}/schemas/{}.yaml'.format(service, role)

and/or versioned schemas, which must be located at:

.. code-block:: text

    'salt://{}/schemas/{}/{}.yaml'.format(service, version, role)
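
For example, with the services exercised below, these templates resolve to:

.. code-block:: text

    salt://ntp/schemas/server.yaml            # unversioned: service=ntp, role=server
    salt://keystone/schemas/pike/server.yaml  # versioned: service=keystone, version=pike, role=server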

Return a list of all defined schema files:

.. code-block:: bash

    salt-call modelschema.schema_list

Return the pillar schema for a given service and role:

.. code-block:: bash

    salt-call modelschema.schema_get ntp server
    # or
    salt-call modelschema.schema_get keystone server pike

Validate the pillar schema itself for a given service and role:

.. code-block:: bash

    salt-call modelschema.schema_validate ntp server
    # or
    salt-call modelschema.schema_validate keystone server pike

Validate the pillar metadata by schema for a given service and role:

.. code-block:: bash

    salt-call modelschema.model_validate ntp server
    # or
    salt-call modelschema.model_validate keystone server pike

Validate arbitrary data by an arbitrary schema:

.. code-block:: bash

    salt-run modelschema.data_validate {'a': 'b'} {'a': 'b'}

Generate a pillar schema skeleton for a given service. The method iterates
through the test pillars and generates a schema scaffold structure in JSON
format that can be passed to a service like http://jsonschema.net/ to get
the basic schema for the individual roles of the service:

.. code-block:: bash

    salt-call modelschema.schema_from_tests ntp
    # or
    salt-call modelschema.schema_from_tests keystone pike

You can read more about jsonschema here:

* http://json-schema.org/implementations.html
* https://github.com/Julian/jsonschema

Usage
=====

Working with salt-cloud:

.. code-block:: bash

    salt-cloud -m /path/to/map --assume-yes

Debug LIBCLOUD for the salt-cloud connection:

.. code-block:: bash

    export LIBCLOUD_DEBUG=/dev/stderr; salt-cloud --list-sizes provider_name --log-level all

Read more
=========

* http://salt.readthedocs.org/en/latest/
* https://github.com/DanielBryan/salt-state-graph
* http://karlgrz.com/testing-salt-states-rapidly-with-docker/
* https://mywushublog.com/2013/03/configuration-management-with-salt-stack/
* http://russell.ballestrini.net/replace-the-nagios-scheduler-and-nrpe-with-salt-stack/
* https://github.com/saltstack-formulas/salt-formula
* http://docs.saltstack.com/en/latest/topics/tutorials/multimaster.html

salt-cloud
----------

* http://www.blog.sandro-mathys.ch/2013/07/setting-user-password-when-launching.html
* http://cloudinit.readthedocs.org/en/latest/topics/examples.html
* http://salt-cloud.readthedocs.org/en/latest/topics/install/index.html
* http://docs.saltstack.com/topics/cloud/digitalocean.html
* http://salt-cloud.readthedocs.org/en/latest/topics/rackspace.html
* http://salt-cloud.readthedocs.org/en/latest/topics/map.html
* http://docs.saltstack.com/en/latest/topics/tutorials/multimaster.html