StackLight (#29)
* Split plugins
* Fix
* Fix config
* Reverse merge dicts
* Remote collecting
* Use Node instead of URL in plugin http_write
Configuring the write_http plugin with a <URL> block is deprecated; a <Node>
block should be used instead.
This commit makes that switch, which removes this message from the collectd logs:
write_http plugin: Legacy <URL> block found. Please use <Node> instead.
* Fix Salt mine remote_check support
* Make the RabbitMQ collectd plugin more robust
The plugin crashed when it ran before the RabbitMQ server had been
provisioned with queues, exchanges, and so on.
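A minimal sketch of the hardening, assuming overview is the decoded JSON
returned by the RabbitMQ management API /api/overview endpoint: every lookup
now falls back to a default, so a broker that has no queues or exchanges yet
no longer raises KeyError.

    def overview_stats(overview):
        # Tolerate missing sections on a freshly provisioned broker.
        objects = overview.get('object_totals', {})
        return {
            'queues': objects.get('queues', 0),
            'consumers': objects.get('consumers', 0),
            'connections': objects.get('connections', 0),
            'exchanges': objects.get('exchanges', 0),
            'channels': objects.get('channels', 0),
            'messages': overview.get('queue_totals', {}).get('messages', 0),
            'running_nodes': len(overview.get('contexts', [])),
        }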
* Revert "Use Node instead of URL in plugin http_write"
* Install the python-simplejson package
This package is required to use collectd Python plugins.
* Make hostname configurable (#9)
* Fix include statements in the Python template
* Docs fix
* Docs fixes
* Improve Elasticsearch collectd plugin
This change modifies the Elasticsearch plugin to retrieve the cluster
metrics only from the node that is the elected master. This avoids
sending and storing duplicate metrics in InfluxDB.
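A condensed sketch of the election check (see elasticsearch_cluster.py in the
diff below), assuming the standard Elasticsearch HTTP API on localhost:9200:
the local node ID is compared with the elected master before any
cluster-level metric is read.

    import requests

    BASE = 'http://127.0.0.1:9200/'

    def is_elected_master(session=requests):
        # ID of the node this collectd instance sits next to.
        local = session.get(BASE + '_nodes/_local').json()
        node_id = list(local.get('nodes', {}).keys())[0]
        # ID of the currently elected master, from the cluster state.
        state = session.get(BASE + '_cluster/state/master_node').json()
        return state.get('master_node') == node_id

    if is_elected_master():
        health = requests.get(BASE + '_cluster/health').json()
        # ... dispatch the cluster metrics from 'health' ...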
* Fix configuration of the local endpoint checks
As the name implies, these checks are executed locally.
* Fix monitoring of the collectd process itself
* Add vrrp Python plugin
* Fix Python plugins launching external processes
Without this change, Python plugins that run external processes never
get the return code. See the collectd code [1] for details.
[1] https://github.com/collectd/collectd/blob/master/contrib/python/getsigchld.py
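A standalone sketch of the failure mode on a Linux host: with SIGCHLD ignored
(roughly the state plugins inherit from the collectd daemon), subprocess
cannot recover the child's real exit status, so a failing command looks
successful. Restoring the default handler, which is what Base.restore_sigchld()
does and what plugins running external programs register as their init
callback, brings the return code back, at the price of breaking collectd's
exec plugin.

    import signal
    import subprocess

    # Approximate the environment inside collectd: SIGCHLD is ignored.
    signal.signal(signal.SIGCHLD, signal.SIG_IGN)
    p = subprocess.Popen(['false'])
    print(p.wait())   # typically prints 0: the real exit status (1) is lost

    # The fix applied by restore_sigchld() in the init callback.
    signal.signal(signal.SIGCHLD, signal.SIG_DFL)
    p = subprocess.Popen(['false'])
    print(p.wait())   # prints 1, as expected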
* Make haproxy emit backend_servers_percent metrics
This commit changes the haproxy plugin to emit backend_servers_percent
metrics, reusing the code from StackLight MOS.
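A minimal sketch of the added computation, assuming the per-backend counters
have already been tallied from the haproxy stats socket: for each backend,
the share of servers in each state is emitted as a percentage of all servers
counted for that backend.

    def backend_servers_percent(states, tracked=('up', 'down')):
        # 'states' maps a server state to a count for a single backend.
        count = sum(states.get(s, 0) for s in tracked)
        if count == 0:
            return dict((s, 0) for s in tracked)
        return dict((s, 100.0 * states.get(s, 0) / count) for s in tracked)

    # backend_servers_percent({'up': 3, 'down': 1})
    # -> {'up': 75.0, 'down': 25.0}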
* Add GlusterFS Python plugin
* Collect more metrics from GlusterFS
* Fix the execution of the gluster command
* Rename glusterfs_peer metric to glusterfs_peer_state
* Extend GlusterFS metrics
This change collects volume-based metrics from GlusterFS.
* Add nginx check plugin
* Add Contrail Python modules
Now using normalized attribute names
* Revert to camel-case attribute names
* Implement the remote collectd service
This change refactors the collectd formula so that it can install
a second collectd instance dedicated to running the remote plugins.
* Support remote_collector in cluster mode
diff --git a/collectd/_common.sls b/collectd/_common.sls
new file mode 100644
index 0000000..7725520
--- /dev/null
+++ b/collectd/_common.sls
@@ -0,0 +1,34 @@
+{%- from "collectd/map.jinja" import client with context %}
+
+{%- if grains.os == 'Ubuntu' and (grains.osrelease in ['10.04', '12.04']) %}
+
+collectd_repo:
+ pkgrepo.managed:
+ - human_name: Collectd
+ - ppa: nikicat/collectd
+ - file: /etc/apt/sources.list.d/collectd.list
+ - require_in:
+ - pkg: collectd_client_packages
+
+collectd_amqp_packages:
+ pkg.installed:
+ - names:
+ - librabbitmq0
+
+{%- endif %}
+
+collectd_client_packages:
+ pkg.installed:
+ - names: {{ client.pkgs }}
+
+
+/usr/lib/collectd-python:
+ file.recurse:
+ - source: salt://collectd/files/plugin
+
+collectd_client_grains_dir:
+ file.directory:
+ - name: /etc/salt/grains.d
+ - mode: 700
+ - makedirs: true
+ - user: root
diff --git a/collectd/_service.sls b/collectd/_service.sls
new file mode 100644
index 0000000..cef78e2
--- /dev/null
+++ b/collectd/_service.sls
@@ -0,0 +1,145 @@
+{%- if client.enabled %}
+
+{{ client.service }}_client_conf_dir:
+ file.directory:
+ - name: {{ client.config_dir }}
+ - user: root
+ - mode: 750
+ - makedirs: true
+
+{{ client.service }}_client_conf_dir_clean:
+ file.directory:
+ - name: {{ client.config_dir }}
+ - clean: true
+
+{%- for plugin_name, plugin in plugins.iteritems() %}
+
+{%- if plugin.get('plugin', 'native') not in ['python'] %}
+
+{{ client.config_dir }}/{{ plugin_name }}.conf:
+ file.managed:
+ {%- if plugin.template is defined %}
+ - source: salt://{{ plugin.template }}
+ - template: jinja
+ - defaults:
+ plugin: {{ plugin|yaml }}
+ {%- else %}
+ - contents: "<LoadPlugin {{ plugin.plugin }}>\n Globals false\n</LoadPlugin>\n"
+ {%- endif %}
+ - user: root
+ - mode: 660
+ - require:
+ - file: {{ client.service }}_client_conf_dir
+ - require_in:
+ - file: {{ client.service }}_client_conf_dir_clean
+
+{%- endif %}
+
+{%- endfor %}
+
+{%- if client.file_logging %}
+
+{{ client.config_dir }}/00_collectd_logfile.conf:
+ file.managed:
+ - source: salt://collectd/files/collectd_logfile.conf
+ - template: jinja
+ - defaults:
+ service_name: {{ client.service }}
+ - user: root
+ - group: root
+ - mode: 660
+ - require:
+ - file: {{ client.service }}_client_conf_dir
+ - require_in:
+ - file: {{ client.service }}_client_conf_dir_clean
+
+{%- endif %}
+
+{{ client.config_dir }}/collectd_python.conf:
+ file.managed:
+ - source: salt://collectd/files/collectd_python.conf
+ - template: jinja
+ - user: root
+ - group: root
+ - mode: 660
+ - defaults:
+ plugin: {{ plugins|yaml }}
+ - require:
+ - file: {{ client.service }}_client_conf_dir
+ - require_in:
+ - file: {{ client.service }}_client_conf_dir_clean
+
+{{ client.config_file }}:
+ file.managed:
+ - source: salt://collectd/files/collectd.conf
+ - template: jinja
+ - user: root
+ - group: root
+ - mode: 640
+ - defaults:
+ plugin: {{ plugins|yaml }}
+ client: {{ client|yaml }}
+ - require:
+ - file: {{ client.service }}_client_conf_dir
+ - require_in:
+ - file: {{ client.service }}_client_conf_dir_clean
+
+{%- set network_backend = {} %}
+{%- for backend_name, backend in client.backend.iteritems() %}
+
+{%- if backend.engine not in ['network'] %}
+
+{{ client.config_dir }}/collectd_writer_{{ backend_name }}.conf:
+ file.managed:
+ - source: salt://collectd/files/backend/{{ backend.engine }}.conf
+ - template: jinja
+ - user: root
+ - group: root
+ - mode: 660
+ - defaults:
+ backend: {{ backend|yaml }}
+ - require:
+ - file: {{ client.service }}_client_conf_dir
+ - require_in:
+ - file: {{ client.service }}_client_conf_dir_clean
+
+{%- else %}
+
+{%- set network_backend = salt['grains.filter_by']({'default': network_backend}, merge={backend_name: backend}) %}
+
+{%- endif %}
+
+{%- endfor %}
+
+{%- if network_backend|length > 0 %}
+
+{{ client.config_dir }}/collectd_writer_network.conf:
+ file.managed:
+ - source: salt://collectd/files/backend/network.conf
+ - template: jinja
+ - user: root
+ - group: root
+ - mode: 660
+ - defaults:
+ backend: {{ backend|yaml }}
+ - require:
+ - file: {{ client.service }}_client_conf_dir
+ - require_in:
+ - file: {{ client.service }}_client_conf_dir_clean
+
+{%- endif %}
+
+
+{{ client.service }}_service:
+{%- if client.automatic_starting %}
+ service.running:
+ - enable: true
+ - watch:
+ - file: {{ client.config_file }}
+ - file: {{ client.config_dir }}/*
+{%- else %}
+ service.disabled:
+{%- endif %}
+ - name: {{ client.service }}
+
+{%- endif %}
diff --git a/collectd/client.sls b/collectd/client.sls
index d81c660..eca7071 100644
--- a/collectd/client.sls
+++ b/collectd/client.sls
@@ -1,26 +1,8 @@
{%- from "collectd/map.jinja" import client with context %}
{%- if client.enabled %}
-{%- if grains.os == 'Ubuntu' and (grains.osrelease in ['10.04', '12.04']) %}
-
-collectd_repo:
- pkgrepo.managed:
- - human_name: Collectd
- - ppa: nikicat/collectd
- - file: /etc/apt/sources.list.d/collectd.list
- - require_in:
- - pkg: collectd_client_packages
-
-collectd_amqp_packages:
- pkg.installed:
- - names:
- - librabbitmq0
-
-{%- endif %}
-
-collectd_client_packages:
- pkg.installed:
- - names: {{ client.pkgs }}
+include:
+- collectd._common
/etc/collectd:
file.directory:
@@ -30,38 +12,19 @@
- require:
- pkg: collectd_client_packages
-collectd_client_conf_dir:
- file.directory:
- - name: {{ client.config_dir }}
- - user: root
- - mode: 750
- - makedirs: true
- - require:
- - pkg: collectd_client_packages
+{%- set service_grains = {'collectd': {'remote_plugin': {}, 'local_plugin': {}}} %}
-collectd_client_conf_dir_clean:
- file.directory:
- - name: {{ client.config_dir }}
- - clean: true
-
-collectd_client_grains_dir:
- file.directory:
- - name: /etc/salt/grains.d
- - mode: 700
- - makedirs: true
- - user: root
-
-/usr/lib/collectd-python:
- file.recurse:
- - source: salt://collectd/files/plugin
-
-{%- set service_grains = {'collectd': {'plugin': {}}} %}
{%- for service_name, service in pillar.items() %}
{%- if service.get('_support', {}).get('collectd', {}).get('enabled', False) %}
+
{%- set grains_fragment_file = service_name+'/meta/collectd.yml' %}
-{%- macro load_grains_file() %}{% include grains_fragment_file %}{% endmacro %}
+{%- macro load_grains_file() %}{% include grains_fragment_file ignore missing %}{% endmacro %}
{%- set grains_yaml = load_grains_file()|load_yaml %}
-{%- set _dummy = service_grains.collectd.plugin.update(grains_yaml.plugin) %}
+
+{%- if grains_yaml is mapping %}
+{%- set service_grains = salt['grains.filter_by']({'default': service_grains}, merge={'collectd': grains_yaml}) %}
+{%- endif %}
+
{%- endif %}
{%- endfor %}
@@ -86,135 +49,7 @@
- watch:
- file: collectd_client_grain
-{%- for plugin_name, plugin in service_grains.collectd.get('plugin', {}).iteritems() %}
-
-{%- if (plugin.get('execution', 'local') == 'local' or client.remote_collector) and plugin.get('plugin', 'native') not in ['python'] %}
-
-{{ client.config_dir }}/{{ plugin_name }}.conf:
- file.managed:
- {%- if plugin.template is defined %}
- - source: salt://{{ plugin.template }}
- - template: jinja
- - defaults:
- plugin: {{ plugin|yaml }}
- {%- else %}
- - contents: "<LoadPlugin {{ plugin.plugin }}>\n Globals false\n</LoadPlugin>\n"
- {%- endif %}
- - user: root
- - mode: 660
- - require:
- - file: collectd_client_conf_dir
- - require_in:
- - file: collectd_client_conf_dir_clean
- - watch_in:
- - service: collectd_service
-
-{%- endif %}
-
-{%- endfor %}
-
-{%- if client.file_logging %}
-
-/etc/collectd/conf.d/00_collectd_logfile.conf:
- file.managed:
- - source: salt://collectd/files/collectd_logfile.conf
- - user: root
- - group: root
- - mode: 660
- - watch_in:
- - service: collectd_service
- - require:
- - file: collectd_client_conf_dir
- - require_in:
- - file: collectd_client_conf_dir_clean
-
-{%- endif %}
-
-/etc/collectd/conf.d/collectd_python.conf:
- file.managed:
- - source: salt://collectd/files/collectd_python.conf
- - template: jinja
- - user: root
- - group: root
- - mode: 660
- - defaults:
- plugin: {{ service_grains.collectd.plugin|yaml }}
- - watch_in:
- - service: collectd_service
- - require:
- - file: collectd_client_conf_dir
- - require_in:
- - file: collectd_client_conf_dir_clean
-
-/etc/collectd/filters.conf:
- file.managed:
- - source: salt://collectd/files/filters.conf
- - template: jinja
- - user: root
- - group: root
- - mode: 660
- - watch_in:
- - service: collectd_service
- - require:
- - file: collectd_client_conf_dir
- - require_in:
- - file: collectd_client_conf_dir_clean
-
-/etc/collectd/thresholds.conf:
- file.managed:
- - source: salt://collectd/files/thresholds.conf
- - template: jinja
- - user: root
- - group: root
- - mode: 660
- - watch_in:
- - service: collectd_service
- - require:
- - file: collectd_client_conf_dir
- - require_in:
- - file: collectd_client_conf_dir_clean
-
-{{ client.config_file }}:
- file.managed:
- - source: salt://collectd/files/collectd.conf
- - template: jinja
- - user: root
- - group: root
- - mode: 640
- - defaults:
- service_grains: {{ service_grains|yaml }}
- - require:
- - file: collectd_client_conf_dir
- - require_in:
- - file: collectd_client_conf_dir_clean
- - watch_in:
- - service: collectd_service
-
-{%- for backend_name, backend in client.backend.iteritems() %}
-
-{{ client.config_dir }}/collectd_writer_{{ backend_name }}.conf:
- file.managed:
- - source: salt://collectd/files/backend/{{ backend.engine }}.conf
- - template: jinja
- - user: root
- - group: root
- - mode: 660
- - defaults:
- backend_name: "{{ backend_name }}"
- - require:
- - file: collectd_client_conf_dir
- - require_in:
- - file: collectd_client_conf_dir_clean
- - watch_in:
- - service: collectd_service
-
-{%- endfor %}
-
-collectd_service:
- service.running:
- - name: collectd
- - enable: true
- - require:
- - pkg: collectd_client_packages
+{%- set plugins = service_grains.collectd.local_plugin %}
+{%- include "collectd/_service.sls" %}
{%- endif %}
diff --git a/collectd/files/backend/amqp.conf b/collectd/files/backend/amqp.conf
index 606f446..ed2f968 100644
--- a/collectd/files/backend/amqp.conf
+++ b/collectd/files/backend/amqp.conf
@@ -1,4 +1,3 @@
-{%- set backend = salt['pillar.get']('collectd:client:backend:'+backend_name) %}
<LoadPlugin amqp>
Globals false
</LoadPlugin>
diff --git a/collectd/files/backend/carbon.conf b/collectd/files/backend/carbon.conf
index d39f4a4..7758f97 100644
--- a/collectd/files/backend/carbon.conf
+++ b/collectd/files/backend/carbon.conf
@@ -1,4 +1,3 @@
-{%- set backend = salt['pillar.get']('collectd:client:backend:'+backend_name) %}
<LoadPlugin write_graphite>
Globals false
</LoadPlugin>
diff --git a/collectd/files/backend/http.conf b/collectd/files/backend/http.conf
index 66401be..752df40 100644
--- a/collectd/files/backend/http.conf
+++ b/collectd/files/backend/http.conf
@@ -1,4 +1,3 @@
-{%- set backend = salt['pillar.get']('collectd:client:backend:'+backend_name) %}
<LoadPlugin write_http>
Globals false
</LoadPlugin>
diff --git a/collectd/files/backend/network.conf b/collectd/files/backend/network.conf
index 08fcd63..e97c530 100644
--- a/collectd/files/backend/network.conf
+++ b/collectd/files/backend/network.conf
@@ -1,9 +1,8 @@
-{%- from "collectd/map.jinja" import client with context %}
<LoadPlugin network>
Globals false
</LoadPlugin>
-{%- for backend_name in client.backend.iteritems() %}
+{%- for _, backend in client.backend.iteritems() %}
<Plugin network>
{%- if backend.mode == 'client' %}
Server "{{ backend.host }}" "{{ backend.port }}"
diff --git a/collectd/files/collectd.conf b/collectd/files/collectd.conf
index 03175e2..d815eb1 100644
--- a/collectd/files/collectd.conf
+++ b/collectd/files/collectd.conf
@@ -1,4 +1,3 @@
-{%- from "collectd/map.jinja" import client with context %}
{%- from "linux/map.jinja" import system with context %}
# Config file for collectd(1).
@@ -15,7 +14,11 @@
# Global settings for the daemon. #
##############################################################################
+{%- if client.use_fqdn %}
Hostname "{{ system.name }}.{{ system.domain }}"
+{%- else %}
+Hostname "{{ system.name }}"
+{%- endif %}
FQDNLookup false
#BaseDir "/var/lib/collectd"
@@ -42,10 +45,12 @@
# accessed. #
##############################################################################
+{%- if client.syslog_logging %}
LoadPlugin syslog
<Plugin syslog>
LogLevel info
</Plugin>
+{%- endif %}
##############################################################################
# LoadPlugin section #
@@ -879,10 +884,10 @@
#</Plugin>
{%- if client.file_logging %}
-Include "/etc/collectd/conf.d/00_collectd_logfile.conf"
+Include "{{ client.config_dir }}/00_collectd_logfile.conf"
{%- endif %}
-{%- for plugin_name, plugin in service_grains.collectd.get('plugin', {}).iteritems() %}
-{%- if (plugin.get('execution', 'local') == 'local' or client.remote_collector) and plugin.get('plugin', 'native') not in ['python'] %}
+{%- for plugin_name, plugin in plugin.iteritems() %}
+{%- if plugin.get('plugin', 'native') not in ['python'] %}
Include "{{ client.config_dir }}/{{ plugin_name }}.conf"
{%- endif %}
{%- endfor %}
@@ -890,6 +895,3 @@
{%- for backend_name, backend in client.backend.iteritems() %}
Include "{{ client.config_dir }}/collectd_writer_{{ backend_name }}.conf"
{%- endfor %}
-
-Include "/etc/collectd/filters.conf"
-Include "/etc/collectd/thresholds.conf"
diff --git a/collectd/files/collectd_check_local_endpoint.conf b/collectd/files/collectd_check_local_endpoint.conf
index 4144e96..b154564 100644
--- a/collectd/files/collectd_check_local_endpoint.conf
+++ b/collectd/files/collectd_check_local_endpoint.conf
@@ -1,6 +1,4 @@
-
{%- if plugin.get('endpoint', {})|length > 0 %}
-
Import "check_local_endpoint"
<Module "check_local_endpoint">
@@ -13,5 +11,4 @@
Url "{{ endpoint_name }}" "{{ endpoint.url }}"
{%- endfor %}
</Module>
-
-{%- endif %}
\ No newline at end of file
+{%- endif %}
diff --git a/collectd/files/collectd_logfile.conf b/collectd/files/collectd_logfile.conf
index 55bfbff..6d4fb6e 100644
--- a/collectd/files/collectd_logfile.conf
+++ b/collectd/files/collectd_logfile.conf
@@ -4,7 +4,7 @@
<Plugin logfile>
LogLevel warning
- File "/var/log/collectd.log"
+ File "/var/log/{{ service_name }}.log"
Timestamp true
PrintSeverity false
</Plugin>
diff --git a/collectd/files/collectd_python.conf b/collectd/files/collectd_python.conf
index a30ee36..53ad6f4 100644
--- a/collectd/files/collectd_python.conf
+++ b/collectd/files/collectd_python.conf
@@ -1,4 +1,3 @@
-{%- from "collectd/map.jinja" import client with context %}
<LoadPlugin python>
Globals false
</LoadPlugin>
@@ -7,12 +6,12 @@
ModulePath "/usr/lib/collectd-python"
LogTraces false
Interactive false
- {%- if plugin is mapping %}
+
{%- for plugin_name, plugin in plugin.iteritems() %}
- {%- if (plugin.get('execution', 'local') == 'local' or client.remote_collector) and plugin.get('plugin', 'native') == 'python' %}
- {%- include plugin.template %}
+ {%- if plugin.get('plugin', 'native') == 'python' %}
+ {% include plugin.template %}
{%- endif %}
+
{%- endfor %}
- {%- endif %}
</Plugin>
diff --git a/collectd/files/collectd_systemd.service b/collectd/files/collectd_systemd.service
new file mode 100644
index 0000000..ddb0a62
--- /dev/null
+++ b/collectd/files/collectd_systemd.service
@@ -0,0 +1,22 @@
+[Unit]
+Description=Statistics collection and monitoring daemon
+After=local-fs.target network.target
+Requires=local-fs.target network.target
+ConditionPathExists={{ config_file }}
+Documentation=man:collectd(1)
+Documentation=man:collectd.conf(5)
+Documentation=https://collectd.org
+
+[Service]
+Type=notify
+NotifyAccess=main
+EnvironmentFile=-/etc/default/{{ service_name }}
+ExecStartPre=/usr/sbin/collectd -t -C {{ config_file }}
+ExecStart=/usr/sbin/collectd -C {{ config_file }}
+Restart=always
+RestartSec=10
+
+{%- if automatic_starting %}
+[Install]
+WantedBy=multi-user.target
+{%- endif %}
diff --git a/collectd/files/collectd_upstart.service b/collectd/files/collectd_upstart.service
new file mode 100644
index 0000000..f728999
--- /dev/null
+++ b/collectd/files/collectd_upstart.service
@@ -0,0 +1,12 @@
+# {{ service_name }}
+
+description "{{ service_name }}"
+
+{%- if automatic_starting %}
+start on runlevel [2345]
+stop on runlevel [!2345]
+{%- endif %}
+
+respawn
+
+exec /usr/bin/collectd -f -C {{ config_file }}
diff --git a/collectd/files/plugin/collectd_apache_check.py b/collectd/files/plugin/collectd_apache_check.py
index 63ef855..790a60f 100644
--- a/collectd/files/plugin/collectd_apache_check.py
+++ b/collectd/files/plugin/collectd_apache_check.py
@@ -50,10 +50,6 @@
plugin = ApacheCheckPlugin(collectd)
-def init_callback():
- plugin.restore_sigchld()
-
-
def config_callback(conf):
plugin.config_callback(conf)
@@ -61,6 +57,5 @@
def read_callback():
plugin.read_callback()
-collectd.register_init(init_callback)
collectd.register_config(config_callback)
collectd.register_read(read_callback)
diff --git a/collectd/files/plugin/collectd_base.py b/collectd/files/plugin/collectd_base.py
index f042628..28cea12 100644
--- a/collectd/files/plugin/collectd_base.py
+++ b/collectd/files/plugin/collectd_base.py
@@ -17,7 +17,6 @@
import json
import signal
import subprocess
-import sys
import time
import traceback
@@ -169,8 +168,8 @@
("foobar\n", "")
- None if the command couldn't be executed or returned a non-zero
- status code
+ (None, None) if the command couldn't be executed or returned a
+ non-zero status code
"""
start_time = time.time()
try:
@@ -186,14 +185,14 @@
except Exception as e:
self.logger.error("Cannot execute command '%s': %s : %s" %
(cmd, str(e), traceback.format_exc()))
- return None
+ return (None, None)
returncode = proc.returncode
if returncode != 0:
self.logger.error("Command '%s' failed (return code %d): %s" %
(cmd, returncode, stderr))
- return None
+ return (None, None)
if self.debug:
elapsedtime = time.time() - start_time
self.logger.info("Command '%s' returned %s in %0.3fs" %
@@ -222,18 +221,16 @@
@staticmethod
def restore_sigchld():
- """Restores the SIGCHLD handler for Python <= v2.6.
+ """Restores the SIGCHLD handler.
This should be provided to collectd as the init callback by plugins
- that execute external programs.
+ that execute external programs and want to check the return code.
Note that it will BREAK the exec plugin!!!
- See https://github.com/deniszh/collectd-iostat-python/issues/2 for
- details.
+ See contrib/python/getsigchld.py in the collectd project for details.
"""
- if sys.version_info[0] == 2 and sys.version_info[1] <= 6:
- signal.signal(signal.SIGCHLD, signal.SIG_DFL)
+ signal.signal(signal.SIGCHLD, signal.SIG_DFL)
def notification_callback(self, notification):
if not self.depends_on_resource:
diff --git a/collectd/files/plugin/collectd_contrail_apis.py b/collectd/files/plugin/collectd_contrail_apis.py
new file mode 100644
index 0000000..ad2260e
--- /dev/null
+++ b/collectd/files/plugin/collectd_contrail_apis.py
@@ -0,0 +1,134 @@
+#!/usr/bin/python
+#
+# Copyright 2016 Mirantis, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import collectd
+import requests
+import xml.dom.minidom
+import xml
+
+import collectd_base as base
+
+
+NAME = 'contrail'
+# Default sampling interval
+INTERVAL = 60
+
+
+def check_state(item, state):
+ return item.getElementsByTagName(
+ "state")[0].childNodes[0].toxml() == state
+
+
+class ContrailApiPlugin(base.Base):
+
+ def __init__(self, *args, **kwargs):
+ super(ContrailApiPlugin, self).__init__(*args, **kwargs)
+ self.plugin = NAME
+ self.session = requests.Session()
+ self.session.mount(
+ 'http://',
+ requests.adapters.HTTPAdapter(max_retries=self.max_retries)
+ )
+ self.session.mount(
+ 'https://',
+ requests.adapters.HTTPAdapter(max_retries=self.max_retries)
+ )
+ self.urls = {}
+ self.xml_element = {}
+ self.result_type = {}
+ self.state = {}
+
+ def config_callback(self, config):
+ super(ContrailApiPlugin, self).config_callback(config)
+ for node in config.children:
+ self.logger.debug("Got config request for '{}': {} {}".format(
+ node.key.lower(), node.values[0], node.values[1])
+ )
+ if node.key.lower() == "url":
+ self.urls[node.values[0]] = node.values[1]
+ elif node.key.lower() == 'xmlelement':
+ self.xml_element[node.values[0]] = node.values[1]
+ elif node.key.lower() == 'resulttype':
+ self.result_type[node.values[0]] = node.values[1]
+ elif node.key.lower() == 'state':
+ self.state[node.values[0]] = node.values[1]
+
+ def itermetrics(self):
+ for name, url in self.urls.items():
+ self.logger.debug("Requesting {} URL {}".format(
+ name, url)
+ )
+ try:
+ r = self.session.get(url, timeout=self.timeout)
+ except Exception as e:
+ msg = "Got exception for '{}': {}".format(name, e)
+ raise base.CheckException(msg)
+ else:
+ if r.status_code != 200:
+ self.logger.error(
+ ("{} ({}) responded with code {} "
+ "").format(name, url,
+ r.status_code))
+ yield {'type_instance': name, 'values': self.FAIL}
+ else:
+ try:
+ self.logger.debug(
+ "Got response from {}: '{}'"
+ "".format(url, r.text))
+ px = xml.dom.minidom.parseString(r.text)
+ itemlist = px.getElementsByTagName(
+ self.xml_element[name]
+ )
+ if name not in self.result_type:
+ count = 0
+ state = self.state.get('name')
+ for i in itemlist:
+ if state is None or check_state(i, state):
+ count = count + 1
+ self.logger.debug(
+ "Got count for {}: '{}'".format(name, count))
+ yield {'type_instance': name, 'values': count}
+ else:
+ rval = itemlist[0].getElementsByTagName(
+ self.result_type[name]
+ )[0].childNodes[0].toxml()
+ self.logger.debug(
+ "Got val for {}: '{}'".format(name, rval))
+ yield {'type_instance': name, 'values': rval}
+ except Exception as e:
+ msg = ("Got exception while parsing "
+ "response for '{}': {}").format(name, e)
+ raise base.CheckException(msg)
+
+
+plugin = ContrailApiPlugin(collectd)
+
+
+def config_callback(conf):
+ plugin.config_callback(conf)
+
+
+def notification_callback(notification):
+ plugin.notification_callback(notification)
+
+
+def read_callback():
+ plugin.conditional_read_callback()
+
+collectd.register_config(config_callback)
+collectd.register_notification(notification_callback)
+collectd.register_read(read_callback, INTERVAL)
diff --git a/collectd/files/plugin/collectd_glusterfs.py b/collectd/files/plugin/collectd_glusterfs.py
new file mode 100644
index 0000000..fbcfc1f
--- /dev/null
+++ b/collectd/files/plugin/collectd_glusterfs.py
@@ -0,0 +1,222 @@
+#!/usr/bin/python
+# Copyright 2016 Mirantis, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import collectd
+import re
+
+import collectd_base as base
+
+NAME = 'glusterfs'
+GLUSTER_BINARY = '/usr/sbin/gluster'
+
+peer_re = re.compile(r'^Hostname: (?P<peer>.+)$', re.MULTILINE)
+state_re = re.compile(r'^State: (?P<state>.+)$', re.MULTILINE)
+
+vol_status_re = re.compile(r'\n\s*\n', re.MULTILINE)
+vol_block_re = re.compile(r'^-+', re.MULTILINE)
+volume_re = re.compile(r'^Status of volume:\s+(?P<volume>.+)', re.MULTILINE)
+brick_server_re = re.compile(r'^Brick\s*:\s*Brick\s*(?P<peer>[^:]+)',
+ re.MULTILINE)
+disk_free_re = re.compile(
+ r'^Disk Space Free\s*:\s+(?P<disk_free>[.\d]+)(?P<unit>\S+)',
+ re.MULTILINE)
+disk_total_re = re.compile(
+ r'^Total Disk Space\s*:\s+(?P<disk_total>[.\d]+)(?P<unit>\S+)',
+ re.MULTILINE)
+inode_free_re = re.compile(r'^Free Inodes\s*:\s+(?P<inode_free>\d+)',
+ re.MULTILINE)
+inode_count_re = re.compile(r'^Inode Count\s*:\s+(?P<inode_count>\d+)',
+ re.MULTILINE)
+
+
+def convert_to_bytes(v, unit):
+ try:
+ i = ('B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB').index(unit)
+ except ValueError:
+ i = 1
+ return float(v) * (1024 ** i)
+
+
+class GlusterfsPlugin(base.Base):
+
+ def __init__(self, *args, **kwargs):
+ super(GlusterfsPlugin, self).__init__(*args, **kwargs)
+ self.plugin = NAME
+
+ def itermetrics(self):
+ # Collect peers' metrics
+ out, err = self.execute([GLUSTER_BINARY, 'peer', 'status'],
+ shell=False)
+ if not out:
+ raise base.CheckException("Failed to execute 'gluster peer'")
+
+ total = 0
+ total_by_state = {
+ 'up': 0,
+ 'down': 0
+ }
+
+ for line in out.split('\n\n'):
+ peer_m = peer_re.search(line)
+ state_m = state_re.search(line)
+ if peer_m and state_m:
+ total += 1
+ if state_m.group('state') == 'Peer in Cluster (Connected)':
+ v = 1
+ total_by_state['up'] += 1
+ else:
+ v = 0
+ total_by_state['down'] += 1
+ yield {
+ 'type_instance': 'peer_state',
+ 'values': v,
+ 'meta': {
+ 'peer': peer_m.group('peer')
+ }
+ }
+
+ for state, count in total_by_state.items():
+ yield {
+ 'type_instance': 'peers_count',
+ 'values': count,
+ 'meta': {
+ 'state': state
+ }
+ }
+ yield {
+ 'type_instance': 'peers_percent',
+ 'values': 100.0 * count / total,
+ 'meta': {
+ 'state': state
+ }
+ }
+
+ # Collect volumes' metrics
+ out, err = self.execute(
+ [GLUSTER_BINARY, 'volume', 'status', 'all', 'detail'],
+ shell=False)
+ if not out:
+ raise base.CheckException("Failed to execute 'gluster volume'")
+
+ for vol_block in vol_status_re.split(out):
+ volume_m = volume_re.search(vol_block)
+ if not volume_m:
+ continue
+ volume = volume_m.group('volume')
+ for line in vol_block_re.split(vol_block):
+ peer_m = brick_server_re.search(line)
+ if not peer_m:
+ continue
+ volume = volume_m.group('volume')
+ peer = peer_m.group('peer')
+ disk_free_m = disk_free_re.search(line)
+ disk_total_m = disk_total_re.search(line)
+ inode_free_m = inode_free_re.search(line)
+ inode_count_m = inode_count_re.search(line)
+ if disk_free_m and disk_total_m:
+ free = convert_to_bytes(
+ disk_free_m.group('disk_free'),
+ disk_free_m.group('unit'))
+ total = convert_to_bytes(
+ disk_total_m.group('disk_total'),
+ disk_total_m.group('unit'))
+ used = total - free
+ yield {
+ 'type_instance': 'space_free',
+ 'values': free,
+ 'meta': {
+ 'volume': volume,
+ 'peer': peer,
+ }
+ }
+ yield {
+ 'type_instance': 'space_percent_free',
+ 'values': free * 100.0 / total,
+ 'meta': {
+ 'volume': volume,
+ 'peer': peer,
+ }
+ }
+ yield {
+ 'type_instance': 'space_used',
+ 'values': used,
+ 'meta': {
+ 'volume': volume,
+ 'peer': peer,
+ }
+ }
+ yield {
+ 'type_instance': 'space_percent_used',
+ 'values': used * 100.0 / total,
+ 'meta': {
+ 'volume': volume,
+ 'peer': peer,
+ }
+ }
+ if inode_free_m and inode_count_m:
+ free = int(inode_free_m.group('inode_free'))
+ total = int(inode_count_m.group('inode_count'))
+ used = total - free
+ yield {
+ 'type_instance': 'inodes_free',
+ 'values': free,
+ 'meta': {
+ 'volume': volume,
+ 'peer': peer,
+ }
+ }
+ yield {
+ 'type_instance': 'inodes_percent_free',
+ 'values': free * 100.0 / total,
+ 'meta': {
+ 'volume': volume,
+ 'peer': peer,
+ }
+ }
+ yield {
+ 'type_instance': 'inodes_used',
+ 'values': used,
+ 'meta': {
+ 'volume': volume,
+ 'peer': peer,
+ }
+ }
+ yield {
+ 'type_instance': 'inodes_percent_used',
+ 'values': used * 100.0 / total,
+ 'meta': {
+ 'volume': volume,
+ 'peer': peer,
+ }
+ }
+
+
+plugin = GlusterfsPlugin(collectd)
+
+
+def init_callback():
+ plugin.restore_sigchld()
+
+
+def config_callback(conf):
+ plugin.config_callback(conf)
+
+
+def read_callback():
+ plugin.read_callback()
+
+collectd.register_init(init_callback)
+collectd.register_config(config_callback)
+collectd.register_read(read_callback)
diff --git a/collectd/files/plugin/collectd_libvirt_check.py b/collectd/files/plugin/collectd_libvirt_check.py
index 4660609..d0df216 100644
--- a/collectd/files/plugin/collectd_libvirt_check.py
+++ b/collectd/files/plugin/collectd_libvirt_check.py
@@ -49,10 +49,6 @@
plugin = LibvirtCheckPlugin(collectd)
-def init_callback():
- plugin.restore_sigchld()
-
-
def config_callback(conf):
plugin.config_callback(conf)
@@ -60,6 +56,5 @@
def read_callback():
plugin.read_callback()
-collectd.register_init(init_callback)
collectd.register_config(config_callback)
collectd.register_read(read_callback)
diff --git a/collectd/files/plugin/collectd_memcached_check.py b/collectd/files/plugin/collectd_memcached_check.py
index fb44aeb..5d0dd26 100644
--- a/collectd/files/plugin/collectd_memcached_check.py
+++ b/collectd/files/plugin/collectd_memcached_check.py
@@ -60,10 +60,6 @@
plugin = MemcachedCheckPlugin(collectd)
-def init_callback():
- plugin.restore_sigchld()
-
-
def config_callback(conf):
plugin.config_callback(conf)
@@ -71,6 +67,5 @@
def read_callback():
plugin.read_callback()
-collectd.register_init(init_callback)
collectd.register_config(config_callback)
collectd.register_read(read_callback)
diff --git a/collectd/files/plugin/collectd_mysql_check.py b/collectd/files/plugin/collectd_mysql_check.py
index a42414c..3f59896 100644
--- a/collectd/files/plugin/collectd_mysql_check.py
+++ b/collectd/files/plugin/collectd_mysql_check.py
@@ -103,10 +103,6 @@
plugin = MySQLCheckPlugin(collectd)
-def init_callback():
- plugin.restore_sigchld()
-
-
def config_callback(conf):
plugin.config_callback(conf)
@@ -114,6 +110,5 @@
def read_callback():
plugin.read_callback()
-collectd.register_init(init_callback)
collectd.register_config(config_callback)
collectd.register_read(read_callback)
diff --git a/collectd/files/plugin/collectd_nginx_check.py b/collectd/files/plugin/collectd_nginx_check.py
new file mode 100644
index 0000000..9b8210e
--- /dev/null
+++ b/collectd/files/plugin/collectd_nginx_check.py
@@ -0,0 +1,62 @@
+#!/usr/bin/python
+# Copyright 2016 Mirantis, Inc.
+#
+# Licensed under the nginx License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.nginx.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import collectd
+import collectd_base as base
+import requests
+
+NAME = 'nginx'
+
+
+class NginxCheckPlugin(base.Base):
+
+ def __init__(self, *args, **kwargs):
+ super(NginxCheckPlugin, self).__init__(*args, **kwargs)
+ self.plugin = NAME
+ self.url = None
+
+ def config_callback(self, conf):
+ super(NginxCheckPlugin, self).config_callback(conf)
+
+ for node in conf.children:
+ if node.key == 'Url':
+ self.url = node.values[0]
+ break
+
+ if self.url is None:
+ self.logger.error("{}: Missing Url parameter".format(NAME))
+
+ def read_callback(self):
+ try:
+ requests.get(self.url, timeout=self.timeout)
+ self.dispatch_check_metric(self.OK)
+ except Exception as err:
+ msg = "{}: Failed to check service: {}".format(NAME, err)
+ self.logger.error(msg)
+ self.dispatch_check_metric(self.FAIL, msg)
+
+
+plugin = NginxCheckPlugin(collectd)
+
+
+def config_callback(conf):
+ plugin.config_callback(conf)
+
+
+def read_callback():
+ plugin.read_callback()
+
+collectd.register_config(config_callback)
+collectd.register_read(read_callback)
diff --git a/collectd/files/plugin/collectd_vrrp.py b/collectd/files/plugin/collectd_vrrp.py
new file mode 100644
index 0000000..b020ec2
--- /dev/null
+++ b/collectd/files/plugin/collectd_vrrp.py
@@ -0,0 +1,84 @@
+#!/usr/bin/python
+# Copyright 2016 Mirantis, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import collectd
+
+import collectd_base as base
+
+from pyroute2 import IPRoute
+
+NAME = 'vrrp'
+
+
+class VrrpPlugin(base.Base):
+
+ def __init__(self, *args, **kwargs):
+ super(VrrpPlugin, self).__init__(*args, **kwargs)
+ self.plugin = NAME
+ self.ip_addresses = []
+ self.ipr = IPRoute()
+
+ def config_callback(self, conf):
+ """Parse the plugin configuration.
+
+ Example:
+
+ <Module "collectd_vrrp">
+ <IPAddress>
+ address "172.16.10.254"
+ label "Foo"
+ </IPAddress>
+ <IPAddress>
+ address "172.16.10.253"
+ </IPAddress>
+ </Module>
+ """
+ super(VrrpPlugin, self).config_callback(conf)
+
+ for node in conf.children:
+ if node.key == 'IPAddress':
+ item = {}
+ for child_node in node.children:
+ if child_node.key not in ('address', 'label'):
+ continue
+ item[child_node.key] = child_node.values[0]
+ if 'address' not in item:
+ self.logger.error("vrrp: Missing 'address' parameter")
+ self.ip_addresses.append(item)
+
+ if len(self.ip_addresses) == 0:
+ self.logger.error("vrrp: Missing 'IPAddress' parameter")
+
+ def itermetrics(self):
+ for ip_address in self.ip_addresses:
+ v = 1 if self.ipr.get_addr(address=ip_address['address']) else 0
+ data = {'values': v, 'meta': {'ip_address': ip_address['address']}}
+ if 'label' in ip_address:
+ data['meta']['label'] = ip_address['label']
+ yield data
+
+
+plugin = VrrpPlugin(collectd)
+
+
+def config_callback(conf):
+ plugin.config_callback(conf)
+
+
+def read_callback():
+ plugin.read_callback()
+
+collectd.register_config(config_callback)
+collectd.register_read(read_callback)
diff --git a/collectd/files/plugin/contrail_ifmap_elements_count.py b/collectd/files/plugin/contrail_ifmap_elements_count.py
new file mode 100644
index 0000000..a9831d2
--- /dev/null
+++ b/collectd/files/plugin/contrail_ifmap_elements_count.py
@@ -0,0 +1,79 @@
+#!/usr/bin/python
+#
+# Copyright 2016 Mirantis, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+#
+
+import time
+import signal
+import string
+import subprocess
+import sys
+
+plugin_name = "contrail"
+plugin_instance = "ifmap-elements-count"
+plugin_interval = 90
+plugin_type = 'gauge'
+
+command = "/usr/bin/ifmap-view visual visual 2>&1 | wc -l"
+
+
+def restore_sigchld():
+ signal.signal(signal.SIGCHLD, signal.SIG_DFL)
+
+
+def log_verbose(msg):
+ collectd.info('%s plugin [verbose]: %s' % (plugin_name, msg))
+
+
+def payload():
+ ifmap_view_number_of_elements = subprocess.check_output(
+ command, shell=True)
+ return ifmap_view_number_of_elements
+
+
+def configure_callback(conf):
+ for node in conf.children:
+ val = str(node.values[0])
+
+
+def payload_callback():
+ value = payload()
+ # log_verbose(
+ # 'Sending value: %s.%s=%s' % (
+ # plugin_name, '-'.join([val.plugin, val.type]), value))
+ val = collectd.Values(
+ plugin=plugin_name, # metric source
+ plugin_instance=plugin_instance,
+ type=plugin_type,
+ type_instance=plugin_name,
+ interval=plugin_interval,
+ meta={'0': True},
+ values=[value]
+ )
+
+ val.dispatch()
+
+
+if __name__ == '__main__':
+ print "Plugin: " + plugin_name
+ payload = payload()
+ print("%s" % (payload))
+ sys.exit(0)
+else:
+ import collectd
+
+ collectd.register_init(restore_sigchld)
+ collectd.register_config(configure_callback)
+ collectd.register_read(payload_callback, plugin_interval)
diff --git a/collectd/files/plugin/elasticsearch_cluster.py b/collectd/files/plugin/elasticsearch_cluster.py
index c3dcf37..e08d08a 100644
--- a/collectd/files/plugin/elasticsearch_cluster.py
+++ b/collectd/files/plugin/elasticsearch_cluster.py
@@ -35,6 +35,7 @@
self.plugin = NAME
self.address = '127.0.0.1'
self.port = 9200
+ self._node_id = None
self.session = requests.Session()
self.url = None
self.session.mount(
@@ -51,25 +52,41 @@
if node.key == 'Port':
self.port = node.values[0]
- self.url = "http://{address}:{port}/_cluster/health".format(
+ self.url = "http://{address}:{port}/".format(
**{
'address': self.address,
'port': int(self.port),
})
- def itermetrics(self):
+ def query_api(self, resource):
+ url = "{}{}".format(self.url, resource)
try:
- r = self.session.get(self.url)
+ r = self.session.get(url)
except Exception as e:
- msg = "Got exception for '{}': {}".format(self.url, e)
+ msg = "Got exception for '{}': {}".format(url, e)
raise base.CheckException(msg)
if r.status_code != 200:
- msg = "{} responded with code {}".format(
- self.url, r.status_code)
+ msg = "{} responded with code {}".format(url, r.status_code)
raise base.CheckException(msg)
- data = r.json()
+ return r.json()
+
+ @property
+ def node_id(self):
+ if self._node_id is None:
+ local_node = self.query_api('_nodes/_local')
+ self._node_id = local_node.get('nodes', {}).keys()[0]
+
+ return self._node_id
+
+ def itermetrics(self):
+ # Collect cluster metrics only from the elected master
+ master_node = self.query_api('_cluster/state/master_node')
+ if master_node.get('master_node', '') != self.node_id:
+ return
+
+ data = self.query_api('_cluster/health')
self.logger.debug("Got response from Elasticsearch: '%s'" % data)
yield {
@@ -92,10 +109,6 @@
plugin = ElasticsearchClusterHealthPlugin(collectd, 'elasticsearch')
-def init_callback():
- plugin.restore_sigchld()
-
-
def config_callback(conf):
plugin.config_callback(conf)
@@ -103,6 +116,5 @@
def read_callback():
plugin.read_callback()
-collectd.register_init(init_callback)
collectd.register_config(config_callback)
collectd.register_read(read_callback)
diff --git a/collectd/files/plugin/haproxy.py b/collectd/files/plugin/haproxy.py
index 17514e2..2299d6e 100644
--- a/collectd/files/plugin/haproxy.py
+++ b/collectd/files/plugin/haproxy.py
@@ -248,6 +248,7 @@
# NOLB/MAINT/MAINT(via)...
if status in STATUS_MAP:
backend_server_states[pxname][status] += 1
+ backend_server_states[pxname]['_count'] += 1
# Emit metric for the backend server
yield {
'type_instance': 'backend_server',
@@ -261,9 +262,24 @@
for pxname, states in backend_server_states.iteritems():
for s in STATUS_MAP.keys():
+ val = states.get(s, 0)
yield {
'type_instance': 'backend_servers',
- 'values': states.get(s, 0),
+ 'values': val,
+ 'meta': {
+ 'backend': pxname,
+ 'state': s.lower()
+ }
+ }
+
+ if backend_server_states[pxname]['_count'] == 0:
+ prct = 0
+ else:
+ prct = (100.0 * val) / \
+ backend_server_states[pxname]['_count']
+ yield {
+ 'type_instance': 'backend_servers_percent',
+ 'values': prct,
'meta': {
'backend': pxname,
'state': s.lower()
@@ -291,10 +307,6 @@
plugin = HAProxyPlugin(collectd)
-def init_callback():
- plugin.restore_sigchld()
-
-
def config_callback(conf):
plugin.config_callback(conf)
@@ -302,6 +314,5 @@
def read_callback():
plugin.read_callback()
-collectd.register_init(init_callback)
collectd.register_config(config_callback)
collectd.register_read(read_callback)
diff --git a/collectd/files/plugin/influxdb.py b/collectd/files/plugin/influxdb.py
index 43cd82d..4b7426b 100644
--- a/collectd/files/plugin/influxdb.py
+++ b/collectd/files/plugin/influxdb.py
@@ -129,10 +129,6 @@
plugin = InfluxDBClusterPlugin(collectd)
-def init_callback():
- plugin.restore_sigchld()
-
-
def config_callback(conf):
plugin.config_callback(conf)
@@ -140,6 +136,5 @@
def read_callback():
plugin.read_callback()
-collectd.register_init(init_callback)
collectd.register_config(config_callback)
collectd.register_read(read_callback)
diff --git a/collectd/files/plugin/rabbitmq_info.py b/collectd/files/plugin/rabbitmq_info.py
index c92ce0e..d78b6cb 100644
--- a/collectd/files/plugin/rabbitmq_info.py
+++ b/collectd/files/plugin/rabbitmq_info.py
@@ -79,14 +79,14 @@
self.api_overview_url, r.status_code)
raise base.CheckException(msg)
- objects = overview['object_totals']
- stats['queues'] = objects['queues']
- stats['consumers'] = objects['consumers']
- stats['connections'] = objects['connections']
- stats['exchanges'] = objects['exchanges']
- stats['channels'] = objects['channels']
- stats['messages'] = overview['queue_totals']['messages']
- stats['running_nodes'] = len(overview['contexts'])
+ objects = overview.get('object_totals', {})
+ stats['queues'] = objects.get('queues', 0)
+ stats['consumers'] = objects.get('consumers', 0)
+ stats['connections'] = objects.get('connections', 0)
+ stats['exchanges'] = objects.get('exchanges', 0)
+ stats['channels'] = objects.get('channels', 0)
+ stats['messages'] = overview.get('queue_totals', {}).get('messages', 0)
+ stats['running_nodes'] = len(overview.get('contexts', []))
for k, v in stats.iteritems():
yield {'type_instance': k, 'values': v}
diff --git a/collectd/init.sls b/collectd/init.sls
index b794478..febb675 100644
--- a/collectd/init.sls
+++ b/collectd/init.sls
@@ -1,4 +1,9 @@
{%- if pillar.collectd is defined %}
include:
+{%- if pillar.collectd.client is defined %}
- collectd.client
-{%- endif %}
\ No newline at end of file
+{%- endif %}
+{%- if pillar.collectd.remote_client is defined %}
+- collectd.remote_client
+{%- endif %}
+{%- endif %}
diff --git a/collectd/map.jinja b/collectd/map.jinja
index a0156c5..ace3a0b 100644
--- a/collectd/map.jinja
+++ b/collectd/map.jinja
@@ -7,16 +7,20 @@
'config_dir': '/etc/collectd.d',
'read_interval': 60,
'file_logging': True,
- 'remote_collector': False
+ 'syslog_logging': True,
+ 'use_fqdn': True,
+ 'automatic_starting': True,
},
'Debian': {
- 'pkgs': ['collectd-core', 'snmp', 'python-yaml', 'libpython2.7'],
+ 'pkgs': ['collectd-core', 'snmp', 'python-yaml', 'libpython2.7', 'python-simplejson'],
'service': 'collectd',
'config_file': '/etc/collectd/collectd.conf',
'config_dir': '/etc/collectd/conf.d',
'read_interval': 60,
'file_logging': True,
- 'remote_collector': False
+ 'syslog_logging': True,
+ 'use_fqdn': True,
+ 'automatic_starting': True,
},
'RedHat': {
'pkgs': ['collectd', 'collectd-ping', 'net-snmp', 'PyYAML'],
@@ -25,6 +29,21 @@
'config_dir': '/etc/collectd.d',
'read_interval': 60,
'file_logging': True,
- 'remote_collector': False
+ 'syslog_logging': True,
+ 'use_fqdn': True,
+ 'automatic_starting': True,
},
}, merge=salt['pillar.get']('collectd:client')) %}
+
+{% set remote_client = salt['grains.filter_by']({
+ 'default': {
+ 'service': 'remote_collectd',
+ 'config_file': '/etc/remote_collectd/collectd.conf',
+ 'config_dir': '/etc/remote_collectd/conf.d',
+ 'read_interval': 60,
+ 'file_logging': True,
+ 'syslog_logging': False,
+ 'use_fqdn': True,
+ 'automatic_starting': True,
+ }
+}, merge=salt['pillar.get']('collectd:remote_client')) %}
diff --git a/collectd/meta/collectd.yml b/collectd/meta/collectd.yml
index 98d6ee1..dc349b5 100644
--- a/collectd/meta/collectd.yml
+++ b/collectd/meta/collectd.yml
@@ -1,30 +1,33 @@
-plugin:
+{%- from "collectd/map.jinja" import client with context %}
+{%- from "collectd/map.jinja" import remote_client with context %}
+local_plugin:
collectd_processes:
plugin: processes
- execution: local
template: collectd/files/collectd_processes.conf
process:
- collectdmon: {}
- collectd_check_local_endpoint:
- plugin: python
- execution: local
- template: collectd/files/collectd_check_local_endpoint.conf
- endpoint: {}
-{%- if pillar.collectd.client.get('check', {}).curl is defined %}
+ {%- if pillar.collectd.get('client', {}).get('enabled', False) %}
+ collectd:
+ match: '(collectd.*{{ client.config_file }}|collectd$)'
+ {%- endif %}
+ {%- if pillar.collectd.get('remote_client', {}).get('enabled', False) %}
+ remote_collectd:
+ match: 'collectd.*{{ remote_client.config_file }}'
+ {%- endif %}
+ {%- if pillar.collectd.get('client', {}).get('check', {}).curl is defined %}
collectd_curl:
plugin: curl
execution: local
template: collectd/files/collectd_curl.conf
data: {{ pillar.collectd.client.check.curl|yaml }}
-{%- endif %}
-{%- if pillar.collectd.client.get('check', {}).ping is defined %}
+ {%- endif %}
+ {%- if pillar.collectd.get('client', {}).get('check', {}).ping is defined %}
collectd_ping:
plugin: ping
execution: local
template: collectd/files/collectd_ping.conf
data: {{ pillar.collectd.client.check.ping|yaml }}
-{%- endif %}
-{%- if pillar.get('external', {}).network_device is defined %}
+ {%- endif %}
+ {%- if pillar.get('external', {}).network_device is defined %}
collectd_network_device:
plugin: snmp
execution: local
@@ -45,4 +48,9 @@
- 1.3.6.1.2.1.31.1.1.1.7
- 1.3.6.1.2.1.31.1.1.1.11
host: {{ pillar.external.network_device|yaml }}
-{%- endif %}
\ No newline at end of file
+ {%- endif %}
+ collectd_check_local_endpoint:
+ plugin: python
+ execution: remote
+ template: collectd/files/collectd_check_local_endpoint.conf
+ endpoint: {}
diff --git a/collectd/meta/sphinx.yml b/collectd/meta/sphinx.yml
index 4959fff..e7fa0cb 100644
--- a/collectd/meta/sphinx.yml
+++ b/collectd/meta/sphinx.yml
@@ -1,11 +1,11 @@
{%- from "collectd/map.jinja" import client with context %}
-{%- set service_grains = {'collectd': {'plugin': {}}} %}
+{%- set service_grains = {'collectd': {'local_plugin': {}}} %}
{%- for service_name, service in pillar.items() %}
{%- if service.get('_support', {}).get('collectd', {}).get('enabled', False) %}
{%- set grains_fragment_file = service_name+'/meta/collectd.yml' %}
{%- macro load_grains_file() %}{% include grains_fragment_file %}{% endmacro %}
{%- set grains_yaml = load_grains_file()|load_yaml %}
-{%- set _dummy = service_grains.collectd.plugin.update(grains_yaml.plugin) %}
+{%- do service_grains.collectd.local_plugin.update(grains_yaml.get('local_plugin', {})) %}
{%- endif %}
{%- endfor %}
doc:
@@ -21,4 +21,4 @@
value: {{ backend.host }}:{{ backend.port }}
{%- endfor %}
plugins:
- value: {% for plugin_name, plugin in service_grains.collectd.plugin.iteritems() %} {{ plugin.plugin }}{% endfor %}
+ value: {% for plugin_name, plugin in service_grains.collectd.local_plugin.iteritems() %} {{ plugin_name }}{% endfor %}
diff --git a/collectd/remote_client.sls b/collectd/remote_client.sls
new file mode 100644
index 0000000..a7ffcc3
--- /dev/null
+++ b/collectd/remote_client.sls
@@ -0,0 +1,37 @@
+{%- from "collectd/map.jinja" import remote_client with context %}
+{%- if remote_client.enabled %}
+
+include:
+- collectd._common
+
+{# Collect all remote plugins from Salt mine #}
+{%- set plugins = {} %}
+{%- for node_name, node_grains in salt['mine.get']('*', 'grains.items').iteritems() %}
+{%- if node_grains.collectd is defined %}
+{%- set plugins = salt['grains.filter_by']({'default': plugins}, merge=node_grains.collectd.get('remote_plugin', {})) %}
+{%- endif %}
+{%- endfor %}
+
+{%- set client = remote_client %}
+{%- include "collectd/_service.sls" %}
+
+{{ remote_client.service }}_service_file:
+ file.managed:
+{%- if grains.get('init', None) == 'systemd' %}
+ - name: /etc/systemd/system/{{ remote_client.service }}.service
+ - source: salt://collectd/files/collectd_systemd.service
+{%- else %}
+ - name: /etc/init/{{ remote_client.service }}
+ - source: salt://collectd/files/collectd_upstart.service
+{%- endif %}
+ - user: root
+ - mode: 644
+ - defaults:
+ service_name: {{ remote_client.service }}
+ config_file: {{ remote_client.config_file }}
+ automatic_starting: {{ remote_client.automatic_starting }}
+ - template: jinja
+ - require_in:
+ - service: {{ remote_client.service }}_service
+
+{%- endif %}
diff --git a/metadata/service/client/init.yml b/metadata/service/client/init.yml
index cbf9f47..6ef1257 100644
--- a/metadata/service/client/init.yml
+++ b/metadata/service/client/init.yml
@@ -7,3 +7,4 @@
client:
enabled: true
read_interval: 60
+ use_fqdn: true
diff --git a/metadata/service/remote_client/cluster.yml b/metadata/service/remote_client/cluster.yml
new file mode 100644
index 0000000..b340e79
--- /dev/null
+++ b/metadata/service/remote_client/cluster.yml
@@ -0,0 +1,11 @@
+applications:
+- collectd
+classes:
+- service.collectd.support
+parameters:
+ collectd:
+ remote_client:
+ enabled: true
+ read_interval: 10
+ use_fqdn: false
+ automatic_starting: false
diff --git a/metadata/service/remote_client/single.yml b/metadata/service/remote_client/single.yml
new file mode 100644
index 0000000..81062b4
--- /dev/null
+++ b/metadata/service/remote_client/single.yml
@@ -0,0 +1,11 @@
+applications:
+- collectd
+classes:
+- service.collectd.support
+parameters:
+ collectd:
+ remote_client:
+ enabled: true
+ read_interval: 10
+ use_fqdn: false
+ automatic_starting: true