Merge pull request #12 from salt-formulas/pr/11
Pr/11 - updated
diff --git a/README.rst b/README.rst
index 1bf7c33..d9ccef7 100644
--- a/README.rst
+++ b/README.rst
@@ -4,20 +4,108 @@
============
This formula installs Helm client, installs Tiller on Kubernetes cluster and
-creates releases in it.
+manages releases in it.
+
+Available States
+================
+
+The default state applied by this formula (e.g. if just applying `helm`) will
+apply the `helm.releases_managed` state.
+
+.. note:: For backward compatibility until release 2.0, the state client.sls
+   is kept; it simply includes the helm.releases_managed state.
+
+`kubectl_installed`
+-------------------
+
+Optionally installs the kubectl binary per the configured pillar values,
+such as the version of `kubectl` to install and the path where the binary should
+be installed.
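+
+A minimal pillar sketch for this state (values are illustrative; the keys are
+the formula's own):
+
+.. code-block:: yaml
+
+    helm:
+      client:
+        kubectl:
+          install: true
+          version: 1.6.7
+          bin: /usr/bin/kubectl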
+
+`kubectl_configured`
+--------------------
+
+Manages a kubectl configuration file and gce_token json file per the configured
+pillar values. Note that the available configuration values allow the path of
+the kube config file to be placed at a different location than the default
+installation path; this is recommended to avoid confusion if the kubectl
+binary on the minion might be manually used with multiple contexts.
+
+**includes**:
+
+* `kubectl_installed`
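+
+For example, placing the kube config at a custom path (server and credentials
+are illustrative):
+
+.. code-block:: yaml
+
+    helm:
+      client:
+        kubectl:
+          config_file: /srv/helm/kubeconfig.yaml
+          config:
+            cluster:
+              server: https://kubernetes.example.com
+              certificate-authority-data: base64_of_ca_certificate
+            user:
+              username: admin
+              password: uberadminpass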
+
+`client_installed`
+------------------
+
+Installs the helm client binary per the configured pillar values, such as where
+helm home should be, which version of the helm binary to install, and the path
+for the helm binary.
+
+**includes**:
+
+* `kubectl_installed`
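+
+A pillar sketch for the client installation (the hash is a placeholder you must
+calculate for your chosen version):
+
+.. code-block:: yaml
+
+    helm:
+      client:
+        version: 2.6.2
+        download_hash: sha256=youneedtocalculatehashandputithere
+        helm_home: /srv/helm/home
+        bin: /usr/bin/helm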
+
+`tiller_installed`
+------------------
+
+Optionally installs a Tiller deployment to the Kubernetes cluster per the
+`helm:client:tiller:install` pillar value. If the pillar value is set to
+install tiller to the cluster, the version of the tiller installation will
+match the version of the Helm client installed per the `helm:client:version`
+configuration parameter.
+
+**includes**:
+
+* `client_installed`
+* `kubectl_configured`
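+
+A pillar sketch controlling the Tiller deployment (the values shown are the
+formula defaults):
+
+.. code-block:: yaml
+
+    helm:
+      client:
+        tiller:
+          install: true
+          namespace: kube-system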
+
+`repos_managed`
+------------------
+
+Ensures the repositories configured per the pillar (and only those repositories)
+are registered at the configured helm home, and synchronizes the local cache
+with the remote repositories on each state execution.
+
+**includes**:
+
+* `client_installed`
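+
+A pillar sketch registering two chart repositories (the same URLs appear in the
+documented default pillar):
+
+.. code-block:: yaml
+
+    helm:
+      client:
+        repos:
+          mirantisworkloads: https://mirantisworkloads.storage.googleapis.com/
+          incubator: https://kubernetes-charts-incubator.storage.googleapis.com/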
+
+`releases_managed`
+------------------
+
+Ensures the releases configured in the pillar are in the expected state in
+the Kubernetes cluster. This state includes change detection to determine
+whether the pillar configurations match the release's state in the cluster.
+
+Note that changes to an existing release's namespace will trigger a deletion and
+re-installation of the release to the cluster.
+
+**includes**:
+
+* `client_installed`
+* `tiller_installed`
+* `kubectl_configured`
+* `repos_managed`
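+
+A pillar sketch for a managed release (mirroring the example in the documented
+default pillar):
+
+.. code-block:: yaml
+
+    helm:
+      client:
+        releases:
+          zoo1:
+            name: my-zookeeper
+            chart: mirantisworkloads/zookeeper
+            version: 1.2.0
+            namespace: helm-example-namespace
+            values:
+              logLevel: INFO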
+
+Available Modules
+=================
+
+To view documentation on the available modules, run:
+
+``salt '{{ tgt }}' sys.doc helm``
Sample pillars
==============
-Enable formula, install helm client on node and tiller on Kubernetes (assuming
-already configured kubectl config or local cluster endpoint):
+See the `default reclass pillar configuration <metadata/service/client.yml>`_ for
+a documented example pillar file.
-.. code-block:: yaml
+Example Configurations
+======================
- helm:
- client:
- enabled: true
+*The following examples demonstrate configuring the formula for different
+use cases.*
+
+The default pillar configuration will install the helm client on the target
+node, and Tiller to the Kubernetes cluster (assuming a kubectl config or local
+cluster endpoint has already been configured).
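+
+A minimal pillar that applies these defaults:
+
+.. code-block:: yaml
+
+    helm:
+      client:
+        enabled: true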
Change version of helm being downloaded and installed:
@@ -25,7 +113,7 @@
helm:
client:
- version: 2.6.0 # defaults to 2.4.2 currently
+ version: 2.6.0 # defaults to 2.6.2 currently
download_hash: sha256=youneedtocalculatehashandputithere
Don't install tiller and use existing one exposed on some well-known address:
@@ -84,12 +172,16 @@
kubectl:
install: true # installs kubectl 1.6.7 by default
config:
- cluster: # directly translated to cluster definition in kubeconfig
+ # directly translated to cluster definition in kubeconfig
+ cluster:
server: https://kubernetes.example.com
certificate-authority-data: base64_of_ca_certificate
- user: # same for user
+ cluster_name: kubernetes.example
+ # directly translated to user definition in kubeconfig
+ user:
username: admin
password: uberadminpass
+ user_name: admin
Change kubectl download URL and use it with GKE-based cluster:
@@ -102,14 +194,41 @@
download_url: https://dl.k8s.io/v1.6.7/kubernetes-client-linux-amd64.tar.gz
download_hash: sha256=calculate_hash_here
config:
- cluster: # directly translated to cluster definition in kubeconfig
+ # directly translated to cluster definition in kubeconfig
+ cluster:
server: https://3.141.59.265
certificate-authority-data: base64_of_ca_certificate
+ # directly translated to user definition in kubeconfig
user:
auth-provider:
name: gcp
+ user_name: gce_user
gce_service_token: base64_of_json_token_downloaded_from_cloud_console
+Known Issues
+============
+
+1. Unable to remove all user-supplied values
+
+If a release has previously had user-supplied value overrides (via the
+release's `values` key in the pillar), subsequently removing all `values`
+overrides (so that there is no more `values` key for the release in the
+pillar) will not actually update the Helm deployment. To get around this,
+specify a fake key/value pair in the release's pillar; Tiller will override
+all previously user-supplied values with the new fake key and value. For
+example:
+
+
+.. code-block:: yaml
+
+ helm:
+ client:
+ releases:
+ zoo1:
+ enabled: true
+ ...
+ values:
+ fake_key: fake_value
+
More Information
================
diff --git a/_modules/helm.py b/_modules/helm.py
index 6493aa5..c51bfd3 100644
--- a/_modules/helm.py
+++ b/_modules/helm.py
@@ -1,106 +1,381 @@
import logging
+import re
from salt.serializers import yaml
+from salt.exceptions import CommandExecutionError
-HELM_HOME = '/srv/helm/home'
LOG = logging.getLogger(__name__)
-def ok_or_output(cmd, prefix=None):
- ret = __salt__['cmd.run_all'](**cmd)
- if ret['retcode'] == 0:
- return None
- msg = "Stdout:\n{0[stdout]}\nStderr:\n{0[stderr]}".format(ret)
- if prefix:
- msg = prefix + ':\n' + msg
- return msg
+class HelmExecutionError(CommandExecutionError):
+ def __init__(self, cmd, error):
+ self.cmd = cmd
+ self.error = error
-
-def _helm_cmd(*args, **tiller_kwargs):
- if tiller_kwargs['tiller_host']:
- tiller_args = ('--host', tiller_kwargs['tiller_host'])
+def _helm_cmd(*args, **kwargs):
+ if kwargs.get('tiller_host'):
+ addtl_args = ('--host', kwargs['tiller_host'])
+ elif kwargs.get('tiller_namespace'):
+ addtl_args = ('--tiller-namespace', kwargs['tiller_namespace'])
else:
- tiller_args = ('--tiller-namespace', tiller_kwargs['tiller_namespace'])
- env = {'HELM_HOME': HELM_HOME}
- if tiller_kwargs['kube_config']:
- env['KUBECONFIG'] = tiller_kwargs['kube_config']
- if tiller_kwargs['gce_service_token']:
+ addtl_args = ()
+
+ if kwargs.get('helm_home'):
+ addtl_args = addtl_args + ('--home', kwargs['helm_home'])
+
+ env = {}
+ if kwargs.get('kube_config'):
+ env['KUBECONFIG'] = kwargs['kube_config']
+ if kwargs.get('gce_service_token'):
env['GOOGLE_APPLICATION_CREDENTIALS'] = \
- tiller_kwargs['gce_service_token']
+ kwargs['gce_service_token']
return {
- 'cmd': ('helm',) + tiller_args + args,
+ 'cmd': ('helm',) + args + addtl_args,
'env': env,
}
+def _cmd_and_result(*args, **kwargs):
+ cmd = _helm_cmd(*args, **kwargs)
+ env_string = "".join(['%s="%s" ' % (k, v) for (k, v) in cmd.get('env', {}).items()])
+ cmd_string = env_string + " ".join(cmd['cmd'])
+ result = None
+ try:
+ result = __salt__['cmd.run_all'](**cmd)
+ if result['retcode'] != 0:
+ raise CommandExecutionError(result['stderr'])
+ return {
+ 'cmd': cmd_string,
+ 'stdout': result['stdout'],
+ 'stderr': result['stderr']
+ }
+ except CommandExecutionError as e:
+ raise HelmExecutionError(cmd_string, e)
-def release_exists(name, namespace='default',
- tiller_namespace='kube-system', tiller_host=None,
- kube_config=None, gce_service_token=None):
- cmd = _helm_cmd('list', '--short', '--all', '--namespace', namespace, name,
- tiller_namespace=tiller_namespace, tiller_host=tiller_host,
- kube_config=kube_config,
- gce_service_token=gce_service_token)
- return __salt__['cmd.run_stdout'](**cmd) == name
+def _parse_release(output):
+ result = {}
+ chart_match = re.search(r'CHART\: ([^0-9]+)-([^\s]+)', output)
+ if chart_match:
+ result['chart'] = chart_match.group(1)
+ result['version'] = chart_match.group(2)
+
+ user_values_match = re.search(r"(?<=USER-SUPPLIED VALUES\:\n)(\n*.+)+?(?=\n*COMPUTED VALUES\:)", output, re.MULTILINE)
+ if user_values_match:
+ result['values'] = yaml.deserialize(user_values_match.group(0))
+
+ computed_values_match = re.search(r"(?<=COMPUTED VALUES\:\n)(\n*.+)+?(?=\n*HOOKS\:)", output, re.MULTILINE)
+ if computed_values_match:
+ result['computed_values'] = yaml.deserialize(computed_values_match.group(0))
+
+ manifest_match = re.search(r"(?<=MANIFEST\:\n)(\n*(?!Release \".+\" has been upgraded).*)+", output, re.MULTILINE)
+ if manifest_match:
+ result['manifest'] = manifest_match.group(0)
+
+ namespace_match = re.search(r"(?<=NAMESPACE\: )(.*)", output)
+ if namespace_match:
+ result['namespace'] = namespace_match.group(0)
+
+ return result
+
+def _parse_repo(repo_string=None):
+ split_string = repo_string.split('\t')
+ return {
+ "name": split_string[0].strip(),
+ "url": split_string[1].strip()
+ }
+
+
+def _get_release_namespace(name, tiller_namespace="kube-system", **kwargs):
+ cmd = _helm_cmd("list", name, **kwargs)
+ result = __salt__['cmd.run_stdout'](**cmd)
+ if not result or len(result.split("\n")) < 2:
+ return None
+
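+ # skip the header row of `helm list` output; the namespace is the
+ # sixth tab-separated column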
+ return result.split("\n")[1].split("\t")[5]
+
+def list_repos(**kwargs):
+ '''
+ Get the result of running `helm repo list` on the target minion, formatted
+ as a dict mapping each registered repository name to its url.
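+
+ A sketch of calling this from the Salt CLI (target is illustrative)::
+
+     salt '*' helm.list_repos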
+ '''
+ cmd = _helm_cmd('repo', 'list', **kwargs)
+ result = __salt__['cmd.run_stdout'](**cmd)
+ if result is None:
+ return result
+
+ result = result.split("\n")
+ result.pop(0)
+ return {
+ repo['name']: repo['url'] for repo in [_parse_repo(line) for line in result]
+ }
+
+def add_repo(name, url, **kwargs):
+ '''
+ Register the repository located at the supplied url with the supplied name.
+ Note that re-using an existing name will overwrite the repository url for
+ that registered repository to point to the supplied url.
+
+ name
+ The name with which to register the repository with the Helm client.
+
+ url
+ The url for the chart repository.
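+
+ A sketch of calling this from the Salt CLI (name and url are illustrative)::
+
+     salt '*' helm.add_repo incubator https://kubernetes-charts-incubator.storage.googleapis.com/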
+ '''
+ return _cmd_and_result('repo', 'add', name, url, **kwargs)
+
+def remove_repo(name, **kwargs):
+ '''
+ Remove the repository from the Helm client registered with the supplied
+ name.
+
+ name
+ The name (as registered with the Helm client) for the repository to remove
+ '''
+ return _cmd_and_result('repo', 'remove', name, **kwargs)
+
+def manage_repos(present={}, absent=[], exclusive=False, **kwargs):
+ '''
+ Manage the repositories registered with the Helm client's local cache.
+
+ *ensuring repositories are present*
+ Repositories that should be present in the helm client can be supplied via
+ the `present` dict parameter; each key in the dict is a repository name, and the
+ value is the repository url that should be registered.
+
+ *ensuring repositories are absent*
+ Each repository name supplied via the `absent` parameter must be a string. If the
+ `exclusive` flag is set to True, the `absent` parameter will be ignored, even
+ if it has been supplied.
+
+ This function returns a dict with the following keys:
+
+ * already_present: a listing of supplied repository definitions to add that
+ are already registered with the Helm client
+
+ * added: a list of repositories that are newly registered with the Helm
+ client. Each item in the list is a dict with the following keys:
+ * name: the repo name
+ * url: the repo url
+ * stdout: the output from the `helm repo add` command call for the repo
+
+ * already_absent: any repository name supplied via the `absent` parameter
+ that was already not registered with the Helm client
+
+ * removed: the result of attempting to remove any repositories
+
+ * failed: a list of repositories that were unable to be added or removed. Each item in
+ the list is a dict with the following keys:
+ * type: the text "removal" or "addition", as appropriate
+ * name: the repo name
+ * url: the repo url (if appropriate)
+ * error: the output from add or remove command attempted for the
+ repository
+
+ present
+ The dict of repositories that should be registered with the Helm client.
+ Each dict key is the name with which the repository url (the corresponding
+ value) should be registered with the Helm client.
+
+ absent
+ The list of repositories to ensure are not registered with the Helm client.
+ Each entry in the list must be the (string) name of the repository.
+
+ exclusive
+ A flag indicating whether only the supplied repos should be available in
+ the target minion's Helm client. If configured to true, the `absent`
+ parameter will be ignored and only the repositories configured via the
+ `present` parameter will be registered with the Helm client. Defaults to
+ False.
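+
+ A sketch of calling this from the Salt CLI (repo name and url are illustrative)::
+
+     salt '*' helm.manage_repos present='{mirantisworkloads: "https://mirantisworkloads.storage.googleapis.com/"}' exclusive=True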
+ '''
+ existing_repos = list_repos(**kwargs)
+ result = {
+ "already_present": [],
+ "added": [],
+ "already_absent": [],
+ "removed": [],
+ "failed": []
+ }
+
+ for name, url in present.iteritems():
+ if not name or not url:
+ raise CommandExecutionError(('Supplied repo to add must have a name (%s) '
+ 'and url (%s)' % (name, url)))
+
+ if name in existing_repos and existing_repos[name] == url:
+ result['already_present'].append({ "name": name, "url": url })
+ continue
+
+ try:
+ result['added'].append({
+ 'name': name,
+ 'url': url,
+ 'stdout': add_repo(name, url, **kwargs)['stdout']
+ })
+ existing_repos = {
+ n: u for (n, u) in existing_repos.iteritems() if name != n
+ }
+ except CommandExecutionError as e:
+ result['failed'].append({
+ "type": "addition",
+ "name": name,
+ 'url': url,
+ 'error': '%s' % e
+ })
+
+ #
+ # Handle removal of repositories configured to be absent (or not configured
+ # to be present if the `exclusive` flag is set)
+ #
+ existing_names = [name for (name, url) in existing_repos.iteritems()]
+ if exclusive:
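+ # never remove the default 'stable' repo, even when pruning repos not
+ # configured as present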
+ present['stable'] = "exclude"
+ absent = [name for name in existing_names if not name in present]
+
+ for name in absent:
+ if not name or not isinstance(name, str):
+ raise CommandExecutionError(('Supplied repo name to be absent must be a '
+ 'string: %s' % name))
+
+ if name not in existing_names:
+ result['already_absent'].append(name)
+ continue
+
+ try:
+ result['removed'].append({
+ 'name': name,
+ 'stdout': remove_repo(name, **kwargs)['stdout']
+ })
+ except CommandExecutionError as e:
+ result['failed'].append({
+ "type": "removal", "name": name, "error": '%s' % e
+ })
+
+ return result
+
+def update_repos(**kwargs):
+ '''
+ Ensures the local helm repository cache for each repository is up to date.
+ Proxies the `helm repo update` command.
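+
+ A sketch of calling this from the Salt CLI (target is illustrative)::
+
+     salt '*' helm.update_repos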
+ '''
+ return _cmd_and_result('repo', 'update', **kwargs)
+
+def get_release(name, tiller_namespace="kube-system", **kwargs):
+ '''
+ Get the parsed release metadata from calling `helm get {{ release }}` for the
+ supplied release name, or None if no release is found. The following keys may
+ or may not be in the returned dict:
+
+ * chart
+ * version
+ * values
+ * computed_values
+ * manifest
+ * namespace
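+
+ A sketch of calling this from the Salt CLI (release name is illustrative)::
+
+     salt '*' helm.get_release zoo1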
+ '''
+ kwargs['tiller_namespace'] = tiller_namespace
+ cmd = _helm_cmd('get', name, **kwargs)
+ result = __salt__['cmd.run_stdout'](**cmd)
+ if not result:
+ return None
+
+ release = _parse_release(result)
+
+ #
+ # `helm get {{ release }}` doesn't currently (2.6.2) return the namespace, so
+ # separately retrieve it if it's not available
+ #
+ if 'namespace' not in release:
+ release['namespace'] = _get_release_namespace(name, **kwargs)
+ return release
+
+def release_exists(name, tiller_namespace="kube-system", **kwargs):
+ '''
+ Determine whether a release exists in the cluster with the supplied name
+ '''
+ kwargs['tiller_namespace'] = tiller_namespace
+ return get_release(name, **kwargs) is not None
def release_create(name, chart_name, namespace='default',
- version=None, values=None,
- tiller_namespace='kube-system', tiller_host=None,
- kube_config=None, gce_service_token=None):
- tiller_args = {
- 'tiller_namespace': tiller_namespace,
- 'tiller_host': tiller_host,
- 'kube_config': kube_config,
- 'gce_service_token': gce_service_token,
- }
+ version=None, values_file=None,
+ tiller_namespace='kube-system', **kwargs):
+ '''
+ Install a release. There must not be a release with the supplied name
+ already installed to the Kubernetes cluster.
+
+ Note that if a release already exists with the specified name, you'll need
+ to use the release_upgrade function instead; unless the release is in a
+ different namespace, in which case you'll need to delete and purge the
+ existing release (using release_delete) and *then* use this function to
+ install a new release to the desired namespace.
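+
+ A sketch of calling this from the Salt CLI (names are illustrative)::
+
+     salt '*' helm.release_create zoo1 mirantisworkloads/zookeeper namespace=default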
+ '''
+ kwargs['tiller_namespace'] = tiller_namespace
args = []
if version is not None:
args += ['--version', version]
- if values is not None:
- args += ['--values', '/dev/stdin']
- cmd = _helm_cmd('install', '--namespace', namespace,
- '--name', name, chart_name, *args, **tiller_args)
- if values is not None:
- cmd['stdin'] = yaml.serialize(values, default_flow_style=False)
- LOG.debug('Creating release with args: %s', cmd)
- return ok_or_output(cmd, 'Failed to create release "{}"'.format(name))
+ if values_file is not None:
+ args += ['--values', values_file]
+ return _cmd_and_result(
+ 'install', chart_name,
+ '--namespace', namespace,
+ '--name', name,
+ *args, **kwargs
+ )
-
-def release_delete(name, tiller_namespace='kube-system', tiller_host=None,
- kube_config=None, gce_service_token=None):
- cmd = _helm_cmd('delete', '--purge', name,
- tiller_namespace=tiller_namespace, tiller_host=tiller_host,
- kube_config=kube_config,
- gce_service_token=gce_service_token)
- return ok_or_output(cmd, 'Failed to delete release "{}"'.format(name))
+def release_delete(name, tiller_namespace='kube-system', **kwargs):
+ '''
+ Delete and purge any release found with the supplied name.
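+
+ A sketch of calling this from the Salt CLI (release name is illustrative)::
+
+     salt '*' helm.release_delete zoo1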
+ '''
+ kwargs['tiller_namespace'] = tiller_namespace
+ return _cmd_and_result('delete', '--purge', name, **kwargs)
def release_upgrade(name, chart_name, namespace='default',
- version=None, values=None,
- tiller_namespace='kube-system', tiller_host=None,
- kube_config=None, gce_service_token=None):
- tiller_args = {
- 'tiller_namespace': tiller_namespace,
- 'tiller_host': tiller_host,
- 'kube_config': kube_config,
- 'gce_service_token': gce_service_token,
- }
+ version=None, values_file=None,
+ tiller_namespace='kube-system', **kwargs):
+ '''
+ Upgrade an existing release. There must be a release with the supplied name
+ already installed to the Kubernetes cluster.
+
+ If attempting to change the namespace for the release, this function will
+ fail; you will need to first delete and purge the release and then use the
+ release_create function to create a new release in the desired namespace.
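+
+ A sketch of calling this from the Salt CLI (names are illustrative)::
+
+     salt '*' helm.release_upgrade zoo1 mirantisworkloads/zookeeper namespace=default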
+ '''
+ kwargs['tiller_namespace'] = tiller_namespace
args = []
if version is not None:
- args += ['--version', version]
- if values is not None:
- args += ['--values', '/dev/stdin']
- cmd = _helm_cmd('upgrade', '--namespace', namespace,
- name, chart_name, *args, **tiller_args)
- if values is not None:
- cmd['stdin'] = yaml.serialize(values, default_flow_style=False)
- LOG.debug('Upgrading release with args: %s', cmd)
- return ok_or_output(cmd, 'Failed to upgrade release "{}"'.format(name))
+ args += ['--version', version]
+ if values_file is not None:
+ args += ['--values', values_file]
+ return _cmd_and_result(
+ 'upgrade', name, chart_name,
+ '--namespace', namespace,
+ *args, **kwargs
+ )
+def install_chart_dependencies(chart_path, **kwargs):
+ '''
+ Install the chart dependencies for the chart definition located at the
+ specified chart_path.
-def get_values(name, tiller_namespace='kube-system', tiller_host=None,
- kube_config=None, gce_service_token=None):
- cmd = _helm_cmd('get', 'values', '--all', name,
- tiller_namespace=tiller_namespace, tiller_host=tiller_host,
- kube_config=kube_config,
- gce_service_token=gce_service_token)
- return yaml.deserialize(__salt__['cmd.run_stdout'](**cmd))
+ chart_path
+ The path to the chart for which to install dependencies
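+
+ A sketch of calling this from the Salt CLI (path is illustrative)::
+
+     salt '*' helm.install_chart_dependencies /srv/charts/mychart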
+ '''
+ return _cmd_and_result('dependency', 'build', chart_path, **kwargs)
+
+def package(path, destination=None, **kwargs):
+ '''
+ Package a chart definition, optionally to a specific destination. Proxies the
+ `helm package` command on the target minion
+
+ path
+ The path to the chart definition to package.
+
+ destination : None
+ An optional alternative destination folder.
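+
+ A sketch of calling this from the Salt CLI (paths are illustrative)::
+
+     salt '*' helm.package /srv/charts/mychart destination=/srv/charts/packaged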
+ '''
+ args = []
+ if destination:
+ args += ["-d", destination]
+
+ return _cmd_and_result('package', path, *args, **kwargs)
diff --git a/_states/helm_release.py b/_states/helm_release.py
index d4a1b69..dae657a 100644
--- a/_states/helm_release.py
+++ b/_states/helm_release.py
@@ -1,88 +1,186 @@
import difflib
+import os
+import logging
+from salt.exceptions import CommandExecutionError
from salt.serializers import yaml
+LOG = logging.getLogger(__name__)
-def failure(name, message):
+def _get_values_from_file(values_file=None):
+ if values_file:
+ try:
+ with open(values_file) as values_stream:
+ values = yaml.deserialize(values_stream)
+ return values
+ except Exception as e:
+ raise CommandExecutionError("encountered error reading from values "
+ "file (%s): %s" % (values_file, e))
+ return None
+
+def _get_yaml_diff(new_yaml=None, old_yaml=None):
+ if not new_yaml and not old_yaml:
+ return None
+
+ old_str = yaml.serialize(old_yaml, default_flow_style=False)
+ new_str = yaml.serialize(new_yaml, default_flow_style=False)
+ return difflib.unified_diff(old_str.split('\n'), new_str.split('\n'))
+
+def _failure(name, message, changes={}):
return {
'name': name,
- 'changes': {},
+ 'changes': changes,
'result': False,
'comment': message,
}
+def present(name, chart_name, namespace, version=None, values_file=None,
+ tiller_namespace='kube-system', **kwargs):
+ '''
+ Ensure that a release with the supplied name is in the desired state in the
+ Tiller installation. This state will handle change detection to determine
+ whether an installation or update needs to be made.
-def present(name, chart_name, namespace, version=None, values=None,
- tiller_namespace='kube-system', tiller_host=None,
- kube_config=None, gce_service_token=None):
- tiller_args = {
- 'tiller_namespace': tiller_namespace,
- 'tiller_host': tiller_host,
- 'kube_config': kube_config,
- 'gce_service_token': gce_service_token,
- }
- exists = __salt__['helm.release_exists'](name, namespace, **tiller_args)
- if not exists:
- err = __salt__['helm.release_create'](
- name, chart_name, namespace, version, values, **tiller_args)
- if err:
- return failure(name, err)
+ In the event the namespace to which a release is installed changes, the
+ state will first delete and purge the release and the re-install it into
+ the new namespace, since Helm does not support updating a release into a
+ new namespace.
+
+ name
+ The name of the release to ensure is present
+
+ chart_name
+ The name of the chart to install, including the repository name as
+ applicable (such as `stable/mysql`)
+
+ namespace
+ The namespace to which the release should be (re-)installed
+
+ version
+ The version of the chart to install. Defaults to the latest version
+
+ values_file
+ The path to a values file containing all the chart values that
+ should be applied to the release. Note that this should not be passed
+ if there are no chart value overrides required.
+
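+ A sketch of using this state in an SLS file (names are illustrative):
+
+ .. code-block:: yaml
+
+     ensure_zoo1_release:
+       helm_release.present:
+         - name: my-zookeeper
+         - chart_name: mirantisworkloads/zookeeper
+         - namespace: default
+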
+ '''
+ kwargs['tiller_namespace'] = tiller_namespace
+ old_release = __salt__['helm.get_release'](name, **kwargs)
+ if not old_release:
+ try:
+ result = __salt__['helm.release_create'](
+ name, chart_name, namespace, version, values_file, **kwargs
+ )
return {
+ 'name': name,
+ 'changes': {
'name': name,
- 'changes': {name: 'CREATED'},
- 'result': True,
- 'comment': 'Release "{}" was created'.format(name),
+ 'chart_name': chart_name,
+ 'namespace': namespace,
+ 'version': version,
+ 'values': _get_values_from_file(values_file),
+ 'stdout': result.get('stdout')
+ },
+ 'result': True,
+ 'comment': ('Release "%s" was created' % name +
+ '\nExecuted command: %s' % result['cmd'])
}
+ except CommandExecutionError as e:
+ msg = (("Failed to create new release: %s" % e.error) +
+ "\nExecuted command: %s" % e.cmd)
+ return _failure(name, msg)
- old_values = __salt__['helm.get_values'](name, **tiller_args)
- err = __salt__['helm.release_upgrade'](
- name, chart_name, namespace, version, values, **tiller_args)
- if err:
- return failure(name, err)
+ changes = {}
+ warnings = []
+ if old_release.get('chart') != chart_name.split("/")[-1]:
+ changes['chart'] = { 'old': old_release['chart'], 'new': chart_name }
- new_values = __salt__['helm.get_values'](name, **tiller_args)
- if new_values == old_values:
- return {
- 'name': name,
- 'changes': {},
- 'result': True,
- 'comment': 'Release "{}" already exists'.format(name),
- }
+ if old_release.get('version') != version:
+ changes['version'] = { 'old': old_release['version'], 'new': version }
- old_str = yaml.serialize(old_values, default_flow_style=False)
- new_str = yaml.serialize(new_values, default_flow_style=False)
- diff = difflib.unified_diff(
- old_str.split('\n'), new_str.split('\n'), lineterm='')
- return {
+ if old_release.get('namespace') != namespace:
+ changes['namespace'] = { 'old': old_release['namespace'], 'new': namespace }
+
+ if (not values_file and old_release.get("values") or
+ not old_release.get("values") and values_file):
+ changes['values'] = { 'old': old_release.get('values'), 'new': values_file }
+
+ values = _get_values_from_file(values_file)
+ diff = _get_yaml_diff(values, old_release.get('values'))
+
+ if diff:
+ diff_string = '\n'.join(diff)
+ if diff_string:
+ changes['values'] = diff_string
+
+ if not changes:
+ return {
'name': name,
- 'changes': {'values': '\n'.join(diff)},
'result': True,
- 'comment': 'Release "{}" was updated'.format(name),
- }
+ 'changes': {},
+ 'comment': 'Release "{}" is already in the desired state'.format(name)
+ }
+
+ module_fn = 'helm.release_upgrade'
+ if changes.get("namespace"):
+ LOG.debug("purging old release (%s) due to namespace change" % name)
+ try:
+ result = __salt__['helm.release_delete'](name, **kwargs)
+ except CommandExecutionError as e:
+ msg = ("Failed to delete release for namespace change: %s" % e.error +
+ "\nExecuted command: %s" % e.cmd)
+ return _failure(name, msg, changes)
+
+ module_fn = 'helm.release_create'
+ warnings.append('Release (%s) was replaced due to namespace change' % name)
+
+ try:
+ result = __salt__[module_fn](
+ name, chart_name, namespace, version, values_file, **kwargs
+ )
+ changes.update({ 'stdout': result.get('stdout') })
+ ret = {
+ 'name': name,
+ 'changes': changes,
+ 'result': True,
+ 'comment': 'Release "%s" was updated\nExecuted command: %s' % (name, result['cmd'])
+ }
+ if warnings:
+ ret['warnings'] = warnings
+
+ return ret
+ except CommandExecutionError as e:
+ msg = ("Failed to delete release for namespace change: %s" % e.error +
+ "\nExecuted command: %s" % e.cmd)
+ return _failure(name, msg, changes)
-def absent(name, namespace, tiller_namespace='kube-system', tiller_host=None,
- kube_config=None, gce_service_token=None):
- tiller_args = {
- 'tiller_namespace': tiller_namespace,
- 'tiller_host': tiller_host,
- 'kube_config': kube_config,
- 'gce_service_token': gce_service_token,
- }
- exists = __salt__['helm.release_exists'](name, namespace, **tiller_args)
+def absent(name, tiller_namespace='kube-system', **kwargs):
+ '''
+ Ensure that any release with the supplied release name is absent from the
+ tiller installation.
+
+ name
+ The name of the release to ensure is absent
+ '''
+ kwargs['tiller_namespace'] = tiller_namespace
+ exists = __salt__['helm.release_exists'](name, **kwargs)
if not exists:
return {
'name': name,
'changes': {},
'result': True,
- 'comment': 'Release "{}" doesn\'t exist'.format(name),
+ 'comment': 'Release "%s" doesn\'t exist' % name
}
- err = __salt__['helm.release_delete'](name, **tiller_args)
- if err:
- return failure(name, err)
- return {
+ try:
+ result = __salt__['helm.release_delete'](name, **kwargs)
+ return {
'name': name,
- 'changes': {name: 'DELETED'},
+ 'changes': { name: 'DELETED', 'stdout': result['stdout'] },
'result': True,
- 'comment': 'Release "{}" was deleted'.format(name),
- }
+ 'comment': 'Release "%s" was deleted\nExecuted command: %s' % (name, result['cmd'])
+ }
+ except CommandExecutionError as e:
+ return _failure(e.cmd, e.error)
+
diff --git a/_states/helm_repos.py b/_states/helm_repos.py
new file mode 100644
index 0000000..5c41c0b
--- /dev/null
+++ b/_states/helm_repos.py
@@ -0,0 +1,105 @@
+import re
+
+from salt.exceptions import CommandExecutionError
+
+def managed(name, present={}, absent=[], exclusive=False, helm_home=None):
+ '''
+ Ensure the supplied repositories are available to the helm client. If the
+ `exclusive` flag is set to a truthy value, any extra repositories in the
+ helm client will be removed.
+
+ name
+ The name of the state
+
+ present
+ A dict of repository names to urls to ensure are registered with the
+ Helm client
+
+ absent
+ A list of repository names to ensure are unregistered from the Helm client
+
+ exclusive
+ A boolean flag indicating whether the state should ensure only the
+ supplied repositories are available to the target minion.
+
+ helm_home
+ An optional path to the Helm home directory
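+
+ A sketch of using this state in an SLS file (repo name and url are illustrative):
+
+ .. code-block:: yaml
+
+     helm_repos_managed:
+       helm_repos.managed:
+         - present:
+             mirantisworkloads: https://mirantisworkloads.storage.googleapis.com/
+         - exclusive: true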
+ '''
+ ret = {'name': name,
+ 'changes': {},
+ 'result': True,
+ 'comment': ''}
+
+ try:
+ result = __salt__['helm.manage_repos'](
+ present=present,
+ absent=absent,
+ exclusive=exclusive,
+ helm_home=helm_home
+ )
+
+ if result['failed']:
+ ret['comment'] = 'Failed to add or remove some repositories'
+ ret['changes'] = result
+ ret['result'] = False
+ return ret
+
+ if result['added'] or result['removed']:
+ ret['comment'] = 'Repositories were added or removed'
+ ret['changes'] = result
+ return ret
+
+ ret['comment'] = ("Repositories were in the desired state: "
+ "%s" % [name for (name, url) in present.iteritems()])
+ return ret
+ except CommandExecutionError as e:
+ ret['result'] = False
+ ret['comment'] = "Failed to add some repositories: %s" % e
+ return ret
+
+def updated(name, helm_home=None):
+ '''
+ Ensure the local Helm repository cache is up to date with each of the
+ helm client's configured remote chart repositories. Because the `helm repo
+ update` command doesn't indicate whether any changes were made to the local
+ cache, this state only reports changes when the Helm client fails to
+ retrieve an update from one or more of the repositories, regardless of
+ whether the local Helm chart repository cache was actually updated.
+
+ name
+ The name of the state
+
+ helm_home
+ An optional path to the Helm home directory
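+
+ A sketch of using this state in an SLS file (path is illustrative):
+
+ .. code-block:: yaml
+
+     helm_repos_updated:
+       helm_repos.updated:
+         - helm_home: /srv/helm/home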
+ '''
+ ret = {'name': name,
+ 'changes': {},
+ 'result': True,
+ 'comment': 'Successfully synced repositories: ' }
+
+
+ try:
+ result = __salt__['helm.update_repos'](helm_home=helm_home)
+ cmd_str = "\nExecuted command: %s" % result['cmd']
+
+ success_repos = re.findall(
+ r'Successfully got an update from the \"([^\"]+)\"', result['stdout'])
+ failed_repos = re.findall(
+ r'Unable to get an update from the \"([^\"]+)\"', result['stdout'])
+
+ if failed_repos:
+ ret['result'] = False
+ ret['changes']['succeeded'] = success_repos
+ ret['changes']['failed'] = failed_repos
+ ret['comment'] = 'Failed to sync against some repositories' + cmd_str
+ else:
+ ret['comment'] += ", ".join(success_repos) + cmd_str
+
+ except CommandExecutionError as e:
+ ret['name'] = e.cmd
+ ret['result'] = False
+ ret['comment'] = ("Failed to update repos: %s" % e.error +
+ "\nExecuted command: %s" % e.cmd)
+ return ret
+
+ return ret
\ No newline at end of file
diff --git a/helm/client.sls b/helm/client.sls
index e37151f..e33031f 100644
--- a/helm/client.sls
+++ b/helm/client.sls
@@ -1,232 +1,6 @@
-{%- from "helm/map.jinja" import client with context %}
-{%- if client.enabled %}
-{%- set helm_tmp = "/tmp/helm-" + client.version %}
-{%- set helm_bin = "/usr/bin/helm-" + client.version %}
-{%- set kubectl_bin = "/usr/bin/kubectl" %}
-{%- set kube_config = "/srv/helm/kubeconfig.yaml" %}
+# NOTE: client.sls is kept for backward compatibility until the 2.0 release,
+# after which this state file will be removed.
+include:
+ - .releases_managed
-{%- if client.kubectl.config.gce_service_token %}
-{%- set gce_service_token = "/srv/helm/gce_token.json" %}
-{%- set gce_env_var = "- GOOGLE_APPLICATION_CREDENTIALS: \"{}\"".format(gce_service_token) %}
-{%- set gce_state_arg = "- gce_service_token: \"{}\"".format(gce_service_token) %}
-{%- set gce_require = "- file: \"{}\"".format(gce_service_token) %}
-{%- else %}
-{%- set gce_env_var = "" %}
-{%- set gce_state_arg = "" %}
-{%- set gce_require = "" %}
-{%- endif %}
-
-{%- set helm_home = "/srv/helm/home" %}
-{%- if client.tiller.host %}
-{%- set helm_run = "helm --host '{}'".format(client.tiller.host) %}
-{%- set tiller_arg = "- tiller_host: \"{}\"".format(client.tiller.host) %}
-{%- else %}
-{%- set helm_run = "helm --tiller-namespace '{}'".format(client.tiller.namespace) %}
-{%- set tiller_arg = "- tiller_namespace: \"{}\"".format(client.tiller.namespace) %}
-{%- endif %}
-
-{{ helm_tmp }}:
- file.directory:
- - user: root
- - group: root
- archive.extracted:
- - source: {{ client.download_url }}
- - source_hash: {{ client.download_hash }}
- - archive_format: tar
- {%- if grains['saltversioninfo'] < [2016, 11] %}
- - tar_options: v
- {%- else %}
- - options: v
- {%- endif %}
- - if_missing: {{ helm_tmp }}/linux-amd64/helm
- - require:
- - file: {{ helm_tmp }}
-
-{{ helm_bin }}:
- file.managed:
- - source: {{ helm_tmp }}/linux-amd64/helm
- - mode: 555
- - user: root
- - group: root
- - require:
- - archive: {{ helm_tmp }}
-
-/usr/bin/helm:
- file.symlink:
- - target: helm-{{ client.version }}
- - require:
- - file: {{ helm_bin }}
-
-prepare_client:
- cmd.run:
- - name: {{ helm_run }} init --client-only
- - env:
- - HELM_HOME: {{ helm_home }}
- - unless: test -d {{ helm_home }}
- - require:
- - file: /usr/bin/helm
-
-{{ kube_config }}:
- file.managed:
- - source: salt://helm/files/kubeconfig.yaml.j2
- - mode: 400
- - user: root
- - group: root
- - template: jinja
-
-{%- if client.kubectl.config.gce_service_token %}
-{{ gce_service_token }}:
- file.managed:
- - source: salt://helm/files/gce_token.json.j2
- - mode: 400
- - user: root
- - group: root
- - template: jinja
- - context:
- content: {{ client.kubectl.config.gce_service_token }}
-{%- endif %}
-
-helm_env_home_param:
- environ.setenv:
- - name: HELM_HOME
- - value: {{ helm_home }}
- - update_minion: True
-
-helm_env_kubeconfig_param:
- environ.setenv:
- - name: KUBECONFIG
- - value: {{ kube_config }}
- - update_minion: True
- - require:
- - environ: helm_env_home_param
-
-{%- if client.tiller.install %}
-install_tiller:
- cmd.run:
- - name: {{ helm_run }} init --upgrade
- - env:
- - HELM_HOME: {{ helm_home }}
- - KUBECONFIG: {{ kube_config }}
- {{ gce_env_var }}
- - unless: "{{ helm_run }} version --server --short | grep -E 'Server: v{{ client.version }}(\\+|$)'"
- - require:
- - cmd: prepare_client
- - file: {{ kube_config }}
- - environ: helm_env_kubeconfig_param
- {{ gce_require }}
-
-wait_for_tiller:
- cmd.run:
- - name: while ! {{ helm_run }} list; do sleep 3; done
- - timeout: 30
- - env:
- - HELM_HOME: {{ helm_home }}
- - KUBECONFIG: {{ kube_config }}
- {{ gce_env_var }}
- - require:
- - file: {{ kube_config }}
- {{ gce_require }}
- - onchanges:
- - cmd: install_tiller
-{%- endif %}
-
-{%- for repo_name, repo_url in client.repos.items() %}
-ensure_{{ repo_name }}_repo:
- cmd.run:
- - name: {{ helm_run }} repo add {{ repo_name }} {{ repo_url }}
- - env:
- - HELM_HOME: {{ helm_home }}
- - unless: {{ helm_run }} repo list | grep '^{{ repo_name }}[[:space:]]{{ repo_url|replace(".", "\.") }}'
- - require:
- - cmd: prepare_client
-{%- endfor %}
-
-{%- set namespaces = [] %}
-{%- for release_id, release in client.releases.items() %}
-{%- set release_name = release.get('name', release_id) %}
-{%- set namespace = release.get('namespace', 'default') %}
-{%- if release.get('enabled', True) %}
-ensure_{{ release_id }}_release:
- helm_release.present:
- - name: {{ release_name }}
- - chart_name: {{ release['chart'] }}
- - namespace: {{ namespace }}
- - kube_config: {{ kube_config }}
- {{ tiller_arg }}
- {{ gce_state_arg }}
- {%- if release.get('version') %}
- - version: {{ release['version'] }}
- {%- endif %}
- {%- if release.get('values') %}
- - values:
- {{ release['values']|yaml(False)|indent(8) }}
- {%- endif %}
- - require:
-{%- if client.tiller.install %}
- - cmd: wait_for_tiller
-{%- endif %}
- - cmd: ensure_{{ namespace }}_namespace
- {{ gce_require }}
- {%- do namespaces.append((namespace, None)) %}
-{%- else %}{# not release.enabled #}
-absent_{{ release_id }}_release:
- helm_release.absent:
- - name: {{ release_name }}
- - namespace: {{ namespace }}
- - kube_config: {{ kube_config }}
- {{ tiller_arg }}
- {{ gce_state_arg }}
- - require:
-{%- if client.tiller.install %}
- - cmd: wait_for_tiller
-{%- endif %}
- {{ gce_require }}
- - cmd: prepare_client
-{%- endif %}{# release.enabled #}
-{%- endfor %}{# release_id, release in client.releases #}
-
-{%- if client.kubectl.install %}
-extract_kubectl:
- archive.extracted:
- - name: {{ helm_tmp }}
- - source: {{ client.kubectl.download_url }}
- - source_hash: {{ client.kubectl.download_hash }}
- - archive_format: tar
- {%- if grains['saltversioninfo'] < [2016, 11] %}
- - tar_options: v
- {%- else %}
- - options: v
- {%- endif %}
- - if_missing: {{ helm_tmp }}/{{ client.kubectl.tarball_path }}
- - require:
- - file: {{ helm_tmp }}
-
-{{ kubectl_bin }}:
- file.managed:
- - source: {{ helm_tmp }}/{{ client.kubectl.tarball_path }}
- - mode: 555
- - user: root
- - group: root
- - require:
- - archive: extract_kubectl
-{%- endif %}{# client.kubectl.install #}
-
-{%- for namespace in dict(namespaces) %}
-ensure_{{ namespace }}_namespace:
- cmd.run:
- - name: kubectl create namespace {{ namespace }}
- - unless: kubectl get namespace {{ namespace }}
- - env:
- - KUBECONFIG: {{ kube_config }}
- {{ gce_env_var }}
- - require:
- - file: {{ kube_config }}
- - environ: helm_env_kubeconfig_param
- {{ gce_require }}
- {%- if client.kubectl.install %}
- - file: {{ kubectl_bin }}
- {%- endif %}
-{%- endfor %}
-
-{%- endif %}
diff --git a/helm/client_installed.sls b/helm/client_installed.sls
new file mode 100644
index 0000000..d9f7710
--- /dev/null
+++ b/helm/client_installed.sls
@@ -0,0 +1,49 @@
+{%- from slspath + "/map.jinja" import config, constants with context %}
+
+include:
+ - .kubectl_installed
+
+{%- set binary_source = "https://storage.googleapis.com/kubernetes-helm/helm-v" +
+ config.version + "-" + config.flavor + ".tar.gz" %}
+
+{%- set binary_source = config.get(
+ "download_url",
+ "https://storage.googleapis.com/kubernetes-helm" +
+ "/helm-v" + config.version + "-" + config.flavor + ".tar.gz"
+ ) %}
+
+{{ constants.helm.tmp }}:
+ file.directory:
+ - user: root
+ - group: root
+ archive.extracted:
+ - source: {{ binary_source }}
+ - source_hash: {{ config.download_hash }}
+ - archive_format: tar
+ {%- if grains['saltversioninfo'] < [2016, 11] %}
+ - tar_options: v
+ {%- else %}
+ - options: v
+ {%- endif %}
+ - onlyif:
+ - test "{{ config.version }}" -eq "canary" || test ! -e {{ constants.helm.tmp }}/{{ config.flavor }}/helm
+ - require:
+ - file: {{ constants.helm.tmp }}
+
+{{ config.bin }}:
+ file.copy:
+ - source: {{ constants.helm.tmp }}/{{ config.flavor }}/helm
+ - mode: 555
+ - user: root
+ - group: root
+ - force: true
+ - require:
+ - archive: {{ constants.helm.tmp }}
+ - unless: cmp -s {{ config.bin }} {{ constants.helm.tmp }}/{{ config.flavor }}/helm
+
+prepare_client:
+ cmd.run:
+ - name: {{ constants.helm.cmd }} init --client-only --home {{ config.helm_home }}
+ - unless: test -d {{ config.helm_home }}
+ - require:
+ - file: {{ config.bin }}
diff --git a/helm/files/kubeconfig.yaml.j2 b/helm/files/kubeconfig.yaml.j2
index 753362b..1e330ad 100644
--- a/helm/files/kubeconfig.yaml.j2
+++ b/helm/files/kubeconfig.yaml.j2
@@ -1,19 +1,40 @@
{%- from "helm/map.jinja" import client with context %}
{%- set config = client.kubectl.config %}
+{%- set cluster = config.get("cluster", None) %}
+{%- set cluster_name = config.get("cluster_name", "thecluster") %}
+{%- set user_name = config.get("user_name", "theuser") %}
+{%- set context_name = config.get('context_name', "\"\"") %}
+{%- set context = config.get("context", None) %}
+{%- set user = config.get("user", None) %}
apiVersion: v1
-clusters:
-- cluster:
- {{ config.cluster|yaml|indent(4) }}
- name: thecluster
-contexts:
-- context:
- cluster: thecluster
- user: theuser
- name: thecontext
-current-context: thecontext
+
+{%- if cluster is not none %}
+clusters:
+ - name: {{ cluster_name }}
+ cluster:
+ {{ cluster | yaml(False) |indent(6) }}
+{%- else %}
+clusters: []
+{%- endif %}
+
+{%- if context is not none %}
+contexts:
+ - name: {{ context_name }}
+ context:
+ cluster: {{ cluster_name }}
+ user: {{ user_name }}
+{%- else %}
+contexts: []
+{%- endif %}
+
+current-context: {{ context_name }}
kind: Config
preferences: {}
-users:
-- name: theuser
- user:
- {{ config.user|yaml|indent(4) }}
+
+{%- if user is not none %}
+users:
+  - name: {{ user_name }}
+    user:
+      {{ user | yaml(false) | indent(6) }}
+{%- else %}
+users: []
+{%- endif %}
\ No newline at end of file
diff --git a/helm/init.sls b/helm/init.sls
index 28c9200..8bf21e6 100644
--- a/helm/init.sls
+++ b/helm/init.sls
@@ -1,6 +1,4 @@
{%- if pillar.helm is defined %}
include:
-{%- if pillar.helm.client is defined %}
-- helm.client
-{%- endif %}
+ - .releases_managed
{%- endif %}
diff --git a/helm/kubectl_configured.sls b/helm/kubectl_configured.sls
new file mode 100644
index 0000000..d7b9fb0
--- /dev/null
+++ b/helm/kubectl_configured.sls
@@ -0,0 +1,34 @@
+{%- from slspath + "/map.jinja" import config, constants with context %}
+
+include:
+ - .kubectl_installed
+
+{{ config.kubectl.config_file }}:
+ file.managed:
+ - source: salt://helm/files/kubeconfig.yaml.j2
+ - mode: 400
+ - user: root
+ - group: root
+ - makedirs: true
+ - template: jinja
+ {%- if config.kubectl.install %}
+ - require:
+ - sls: {{ slspath }}.kubectl_installed
+ {%- endif %}
+
+{%- if config.kubectl.config.gce_service_token %}
+{{ constants.kubectl.gce_service_token_path }}:
+ file.managed:
+ - source: salt://helm/files/gce_token.json.j2
+ - mode: 400
+ - user: root
+ - group: root
+ - makedirs: true
+ - template: jinja
+ - context:
+ content: {{ config.kubectl.config.gce_service_token }}
+ {%- if config.kubectl.install %}
+ - require:
+ - sls: {{ slspath }}.kubectl_installed
+ {%- endif %}
+{%- endif %}{# gce_service_token #}
\ No newline at end of file
diff --git a/helm/kubectl_installed.sls b/helm/kubectl_installed.sls
new file mode 100644
index 0000000..64fe8e2
--- /dev/null
+++ b/helm/kubectl_installed.sls
@@ -0,0 +1,37 @@
+{%- from slspath + "/map.jinja" import config, constants with context %}
+{%- set extraction_path = constants.helm.tmp +
+ "/kubectl/v" + config.kubectl.version %}
+{%- set extracted_binary_path = extraction_path +
+ "/kubernetes/client/bin/kubectl" %}
+
+{%- set binary_source = config.kubectl.get(
+ "download_url",
+ "https://dl.k8s.io/v" + config.kubectl.version +
+ "/kubernetes-client-" + config.flavor + ".tar.gz"
+ ) %}
+
+{%- if config.kubectl.install %}
+extract_kubectl:
+ archive.extracted:
+ - name: {{ extraction_path }}
+ - source: {{ binary_source }}
+ - source_hash: {{ config.kubectl.download_hash }}
+ - archive_format: tar
+ {%- if grains['saltversioninfo'] < [2016, 11] %}
+ - tar_options: v
+ {%- else %}
+ - options: v
+ {%- endif %}
+ - onlyif:
+ - test ! -e {{ extracted_binary_path }}
+
+{{ config.kubectl.bin }}:
+ file.managed:
+ - source: {{ extracted_binary_path }}
+ - mode: 555
+ - user: root
+ - group: root
+ - require:
+ - archive: extract_kubectl
+ - unless: cmp -s {{ config.kubectl.bin }} {{ extracted_binary_path }}
+{%- endif %}
\ No newline at end of file
diff --git a/helm/map.jinja b/helm/map.jinja
index 5aa27e2..82087aa 100644
--- a/helm/map.jinja
+++ b/helm/map.jinja
@@ -1,4 +1,3 @@
-
{%- set source_engine = salt['pillar.get']('helm:client:source:engine') %}
{%- load_yaml as base_defaults %}
@@ -26,4 +25,66 @@
{%- endif %}
{%- endload %}
-{%- set client = salt['grains.filter_by'](base_defaults, merge=salt['pillar.get']('helm:client')) %}
\ No newline at end of file
+{%- load_yaml as base_config %}
+helm:
+ client:
+ version: 2.6.2
+ flavor: linux-amd64
+ download_hash: sha256=ba807d6017b612a0c63c093a954c7d63918d3e324bdba335d67b7948439dbca8
+ bin: /usr/bin/helm
+ helm_home: /srv/helm/home
+ values_dir: /srv/helm/values
+ tiller:
+ install: true
+ namespace: kube-system
+ kubectl:
+ install: false
+ version: 1.6.7
+ download_hash: sha256=54947ef84181e89f9dbacedd54717cbed5cc7f9c36cb37bc8afc9097648e2c91
+ bin: /usr/bin/kubectl
+ config_file: /srv/helm/kubeconfig.yaml
+ config:
+ gce_service_token:
+{%- endload %}
+
+{%- set config = salt['pillar.get']('helm:client', base_config.helm.client, merge=true) %}
+{%- set client = salt['grains.filter_by'](base_defaults, merge=config) %}
+
+
+{%- set constants = {
+ "helm": {
+ "tmp": "/tmp/helm-v" + config.version,
+ "cmd": config.bin + " --home '{}' --tiller-namespace '{}'".format(
+ config.helm_home,
+ config.tiller.namespace
+ ),
+ "tiller_arg": "- tiller_namespace: \"{}\"".format(config.tiller.namespace),
+ "gce_state_arg": "",
+ },
+ "tiller": {
+ "gce_env_var": "",
+ },
+ "kubectl": {
+ "gce_service_token_path": "/srv/helm/gce_token.json",
+ }
+ }
+%}
+
+{%- if "host" in config.tiller %}
+ {%- do constants.helm.update({
+ "cmd": config.bin + " --host '{}'".format(config.tiller.host),
+ "tiller_arg": "- tiller_host: \"{}\"".format(config.tiller.host)
+ })
+ %}
+{%- endif %}
+
+{%- if config.kubectl.config.gce_service_token %}
+ {%- do constants.helm.update({
+ "gce_state_arg": "- gce_service_token: \"{}\"".format(constants.kubectl.gce_service_token_path),
+ })
+ %}
+ {%- do constants.tiller.update({
+ "gce_env_var": "- GOOGLE_APPLICATION_CREDENTIALS: \"{}\"".format(constants.kubectl.gce_service_token_path)
+ })
+ %}
+{%- endif %}
\ No newline at end of file
diff --git a/helm/releases_managed.sls b/helm/releases_managed.sls
new file mode 100644
index 0000000..b0a84a6
--- /dev/null
+++ b/helm/releases_managed.sls
@@ -0,0 +1,83 @@
+{%- from slspath + "/map.jinja" import config, constants with context %}
+
+include:
+ - .client_installed
+ - .tiller_installed
+ - .kubectl_configured
+ - .repos_managed
+
+{%- if "releases" in config %}
+{%- for release_id, release in config.releases.items() %}
+{%- set release_name = release.get('name', release_id) %}
+{%- set namespace = release.get('namespace', 'default') %}
+{%- set values_file = config.values_dir + "/" + release_name + ".yaml" %}
+
+{%- if release.get('enabled', True) %}
+
+{%- if release.get("values") %}
+{{ values_file }}:
+ file.managed:
+ - makedirs: True
+ - contents: |
+ {{ release['values'] | yaml(false) | indent(8) }}
+{%- else %}
+{{ values_file }}:
+ file.absent
+{%- endif %}
+
+ensure_{{ release_id }}_release:
+ helm_release.present:
+ - name: {{ release_name }}
+ - chart_name: {{ release['chart'] }}
+ - namespace: {{ namespace }}
+ - kube_config: {{ config.kubectl.config_file }}
+ - helm_home: {{ config.helm_home }}
+ {{ constants.helm.tiller_arg }}
+ {{ constants.helm.gce_state_arg }}
+ {%- if release.get('version') %}
+ - version: {{ release['version'] }}
+ {%- endif %}
+ {%- if release.get("values") %}
+ - values_file: {{ values_file }}
+ {%- endif %}
+ - require:
+ {%- if config.tiller.install %}
+ - sls: {{ slspath }}.tiller_installed
+ {%- endif %}
+ - sls: {{ slspath }}.client_installed
+ - sls: {{ slspath }}.kubectl_configured
+ #
+ # note: intentionally don't fail if one or more repos fail to synchronize,
+ # since there should be a local repo cache anyways.
+ #
+
+{%- else %}{# not release.enabled #}
+
+{%- if release.get("values") %}
+{{ values_file }}:
+ file.absent
+{%- endif %}
+
+
+absent_{{ release_id }}_release:
+ helm_release.absent:
+ - name: {{ release_name }}
+ - namespace: {{ namespace }}
+ - kube_config: {{ config.kubectl.config_file }}
+ - helm_home: {{ config.helm_home }}
+ {{ constants.helm.tiller_arg }}
+ {{ constants.helm.gce_state_arg }}
+ - require:
+ {%- if config.tiller.install %}
+ - sls: {{ slspath }}.tiller_installed
+ {%- endif %}
+ - sls: {{ slspath }}.client_installed
+ - sls: {{ slspath }}.kubectl_configured
+ #
+ # note: intentionally don't fail if one or more repos fail to synchronize,
+ # since there should be a local repo cache anyways.
+ #
+
+{%- endif %}{# release.enabled #}
+{%- endfor %}{# release_id, release in config.releases #}
+{%- endif %}{# "releases" in config #}
diff --git a/helm/repos_managed.sls b/helm/repos_managed.sls
new file mode 100644
index 0000000..7cee1d2
--- /dev/null
+++ b/helm/repos_managed.sls
@@ -0,0 +1,21 @@
+{%- from slspath + "/map.jinja" import config, constants with context %}
+
+include:
+ - .client_installed
+
+{%- if "repos" in config %}
+repos_managed:
+ helm_repos.managed:
+ - present:
+ {{ config.repos | yaml(false) | indent(8) }}
+ - exclusive: true
+ - helm_home: {{ config.helm_home }}
+ - require:
+ - sls: {{ slspath }}.client_installed
+{%- endif %}
+
+repos_updated:
+ helm_repos.updated:
+ - helm_home: {{ config.helm_home }}
+ - require:
+ - sls: {{ slspath }}.client_installed
\ No newline at end of file
diff --git a/helm/tiller_installed.sls b/helm/tiller_installed.sls
new file mode 100644
index 0000000..e9c7f67
--- /dev/null
+++ b/helm/tiller_installed.sls
@@ -0,0 +1,31 @@
+{%- from slspath + "/map.jinja" import config, constants with context %}
+
+include:
+ - .client_installed
+ - .kubectl_configured
+
+{%- if config.tiller.install %}
+install_tiller:
+ cmd.run:
+ - name: {{ constants.helm.cmd }} init --upgrade
+ - env:
+ - KUBECONFIG: {{ config.kubectl.config_file }}
+ {{ constants.tiller.gce_env_var }}
+ - unless: "{{ constants.helm.cmd }} version --server --short | grep -E 'Server: v{{ config.version }}(\\+|$)'"
+ - require:
+ - sls: {{ slspath }}.client_installed
+ - sls: {{ slspath }}.kubectl_configured
+
+wait_for_tiller:
+ cmd.run:
+ - name: while ! {{ constants.helm.cmd }} list; do sleep 3; done
+ - timeout: 30
+ - env:
+ - KUBECONFIG: {{ config.kubectl.config_file }}
+ {{ constants.tiller.gce_env_var }}
+ - require:
+ - sls: {{ slspath }}.client_installed
+ - sls: {{ slspath }}.kubectl_configured
+ - onchanges:
+ - cmd: install_tiller
+{%- endif %}
diff --git a/metadata/service/client.yml b/metadata/service/client.yml
index ccd42f7..b807be4 100644
--- a/metadata/service/client.yml
+++ b/metadata/service/client.yml
@@ -5,22 +5,204 @@
parameters:
helm:
client:
- enabled: true
+
+ #
+ # The version of the Helm client to install
+ #
version: 2.4.2
- download_url: https://storage.googleapis.com/kubernetes-helm/helm-v${helm:client:version}-linux-amd64.tar.gz
+
+ #
+ # The hash for the helm client binary. You must calculate the hash for the
+ # version and flavor of the binary you install (per the helm:client:flavor
+ # configuration value)
+ # Defaults to the SHA 256 hash for the helm-v2.4.2-linux-amd64.tar.gz binary
+ #
+ # The binary is downloaded from:
+ #
+ # https://storage.googleapis.com/kubernetes-helm/helm-v[[ client.version ]]-[[ client.flavor ]].tar.gz
+ #
+ # Here is an example command you can use to calculate the sha256 hash for
+ # the binary:
+ #
+ # ```
+ # shasum -a 256 /path/to/helm-v[[ client.version ]]-[[ client.flavor ]].tar.gz
+ # ```
+ #
download_hash: sha256=96f74ff04ec7eb38e5f53aba73132bfe4d6b81168f20574dad25a9bcaceec81b
+
+ #
+ # Optional alternative download URL from which to retrieve the tarred
+ # Helm binary. If specified, this URL will be used instead of the url
+ # computed from the configured helm:client:flavor and helm:client:version
+ # keys.
+ #
+ # download_url: https://storage.googleapis.com/kubernetes-helm/helm-v2.6.2-linux-amd64.tar.gz
+
+ #
+ # The flavor of the helm or kubectl binary to install, as informed by the
+ # target minion's OS. For available flavor names, peruse the listing of
+ # Helm binaries exposed at:
+ #
+ # https://storage.googleapis.com/kubernetes-helm
+ #
+ # Defaults to linux-amd64
+ #
+ # flavor: linux-amd64
+
+ #
+ # The path to which the Helm binary should be installed. Defaults to
+ # /usr/bin/helm
+ #
+ # bin: /usr/bin/helm
+
+ #
+ # The path this formula should use as helm home. Defaults to /srv/helm/home;
+ # it is recommended to set this to /root/.helm if users will be calling
+ # helm from the command line directly on the target minion
+ #
+ # helm_home: /srv/helm/home
+
+ #
+ # The path where this formula places configuration values files on the
+ # target minion. Defaults to /srv/helm/values
+ #
+ # values_dir: /srv/helm/values
+
+ #
+ # Configurations to manage the cluster's Tiller installation
+ #
tiller:
+ #
+ # Whether Tiller should be deployed to the kubernetes cluster as part of
+ # this formula. Defaults to true.
+ #
install: true
- namespace: kube-system
- host:
+
+ #
+ # The namespace to which Tiller should be installed (only used if
+ # `helm:client:tiller:install` is set to true).
+ # Defaults to `kube-system`
+ #
+ namespace: kube-system
+
+ #
+ # The host IP or name and port for an existing tiller installation that
+ # should be used by the Helm client. Defaults to Helm's default if
+ # unspecified.
+ #
+ # host:
+
+ #
+ # Configurations defined to manage the target minion's kubectl installation
+ #
kubectl:
+ #
+ # Whether kubectl should be installed as part of this formula.
+ # Defaults to false
+ #
install: false
- download_url: https://dl.k8s.io/v1.6.7/kubernetes-client-linux-amd64.tar.gz
+
+ #
+ # The hash for the kubectl binary version to install. You must calculate
+ # the hash for the version and flavor of the binary you install (per the
+ # helm:client:flavor configuration value)
+ #
+ #
+ # The binary is downloaded from:
+ #
+ # https://dl.k8s.io/v[[ client.kubectl.version ]]/kubernetes-client-[[ client.flavor ]].tar.gz
+ #
+ #
+ # Defaults to the SHA 256 hash for the Linux distribution of version 1.6.7
+ #
+ # Here is an example command you can use to calculate the sha256 hash for
+ # the binary:
+ #
+ # ```
+ # shasum -a 256 /path/to/kubernetes-client-[[ client.flavor ]].tar.gz
+ # ```
+ #
download_hash: sha256=54947ef84181e89f9dbacedd54717cbed5cc7f9c36cb37bc8afc9097648e2c91
- tarball_path: kubernetes/client/bin/kubectl
+
+ #
+ # The version of the kubectl binary to install.
+ # Defaults to 1.6.7
+ #
+ # version: 1.6.7
+
+ #
+ # Optional alternative download URL from which to retrieve the tarred
+ # kubectl binary. If specified, this URL will be used instead of the url
+ # computed from the configured helm:client:flavor and
+ # helm:client:kubectl:version keys.
+ #
+ # download_url: https://dl.k8s.io/v1.6.7/kubernetes-client-linux-amd64.tar.gz
+
+ #
+ # The path to which the kubectl binary should be installed. Defaults to
+ # /usr/bin/kubectl
+ #
+ # bin: /usr/bin/kubectl
+
+ #
+ # Configuration parameters that should be applied to the kubectl
+ # installation's kubeconfig. Note that this will only be applied to the
+ # kubectl installation managed by this formula.
+ #
+ # While the kubectl tool can be configured to connect to multiple
+ # clusters and allow switching between cluster contexts, this kubectl
+ # configuration managed by this formula will only be configured with
+ # the cluster context details used by this formula.
+ #
config:
- cluster: {}
+ cluster:
+ cluster_name: kube-cluster
+ user_name: kube-user
user: {}
gce_service_token:
+
+ #
+ # The mapping of repository names to urls that should be registered and
+ # kept up-to-date with the helm client
+ #
repos: {}
+ # mirantisworkloads: https://mirantisworkloads.storage.googleapis.com/
+ # incubator: https://kubernetes-charts-incubator.storage.googleapis.com/
+
+ #
+ # The listing of releases that should be managed by the formula. Note that
+ # if configured, the releases listed under this `helm:client:releases` key
+ # will be used as an authoritative, exclusive listing of the releases that
+ # should be configured and deployed to the Tiller installation; any
+ # release existing in the tiller cluster that is not configured here
+ # **will be deleted**
+ #
releases: {}
+ # zoo1:
+
+ #
+ # The name of the release
+ #
+ # name: my-zookeeper
+
+ #
+ # The repository name and chart name combination for the chart to
+ # release
+ #
+ # chart: mirantisworkloads/zookeeper
+
+ #
+ # The version of the helm chart to install
+ #
+ # version: 1.2.0
+
+ #
+ # The namespace to which the release should be deployed
+ #
+ # namespace: helm-example-namespace
+
+ #
+ # Configuration values that should be supplied to the chart.
+ #
+ # values:
+ # logLevel: INFO
\ No newline at end of file
diff --git a/tests/pillar/single.sls b/tests/pillar/single.sls
index e8f824c..d2ae52c 100644
--- a/tests/pillar/single.sls
+++ b/tests/pillar/single.sls
@@ -2,18 +2,11 @@
client:
enabled: true
version: 2.6.0
- download_url: https://storage.googleapis.com/kubernetes-helm/helm-v2.6.0-linux-amd64.tar.gz
download_hash: sha256=506e477a9eb61730a2fb1af035357d35f9581a4ffbc093b59e2c2af7ea3beb41
- bind:
- address: 0.0.0.0
tiller:
install: false
host: 10.11.12.13:14151
kubectl:
- install: true # installs kubectl 1.6.7 by default
- download_url: https://dl.k8s.io/v1.6.7/kubernetes-client-linux-amd64.tar.gz
- download_hash: sha256=54947ef84181e89f9dbacedd54717cbed5cc7f9c36cb37bc8afc9097648e2c91
- tarball_path: kubernetes/client/bin/kubectl
config:
cluster: # directly translated to cluster definition in kubeconfig
server: https://kubernetes.example.com