Stacklight (#71)
* Stacklight integration
* Round 2
* Variable service_name is missing for systemd file
* preserve_data and ticker_interval are not strings
preserve_data is a boolean, and ticker_interval is a number, so their values
shouldn't have quotes.
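A sketch of what the rendered Heka TOML should look like (the section name is illustrative); what matters is that the values are a TOML boolean and a TOML integer, not strings:

```toml
# hypothetical filter section; value types are the point here:
# preserve_data is a boolean, ticker_interval an integer
[example_filter]
preserve_data = true
ticker_interval = 60
```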
* Use "ignore missing" with the j2 include statement
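A minimal sketch of the pattern (the template path is illustrative): with `ignore missing`, Jinja2 renders nothing instead of raising an error when the included file does not exist.

```jinja
{# "ignore missing" makes the include a no-op if the file is absent #}
{% include "heka/files/extra.jinja" ignore missing %}
```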
* Added cache dir
* Use module_dir instead of module_directory
This fixes a bug where module_directory is used as the variable name instead of
module_dir.
* Use the proper module directory
The stacklight module dir is /usr/share/lma_collector/common, not
/usr/share/lma_collector_modules. This fixes it.
* Add the extra_fields.lua module
This commit adds the extra_fields Lua module. The extra fields table defined in
this module is empty right now. Eventually, this file will be a Jinja2 template
and the content of the extra fields table will be generated based on the user
configuration.
* Regex encoder fix
* Fix the decoder configuration
This commit uses proper decoder names in heka/meta/heka.yml. It also removes
the aggregator input for now, because it does not have an associated decoder.
* Make Heka send metrics to InfluxDB
* Add HTTP metrics filter to log_collector
* Add logs counter filter to log_collector
* Templatize extra_fields.lua file
* Make InfluxDB time precision configurable
* Configure Elasticsearch output through Pillar
* Use influxdb_time_precision for InfluxDB output
This uses influxdb_time_precision set on metric_collector for configuring the
time precision in the InfluxDB output. This is to use just one parameter for
both the InfluxDB accumulator filter and InfluxDB output.
* Increase maximum open files limit to 102400
* Add alarming support
* Revert "[WIP] Add alarming support"
* Remove the aggregator output for now
This removes the aggregator output for now, as the aggregator is not yet
functional. This avoids output errors in Heka.
* Do not place Heka logs in /var/log/upstart
With this commit all the Heka logs are sent to /var/log/<heka_service>.log.
Previously, stdout was sent to /var/log/<heka_service>.log and stderr was sent
to /var/log/upstart/<heka_service>.log, which was confusing to the operator.
* Remove http check input plugin
Because it is not used anymore.
* Add alarming support
* Make the aggregator load heka/meta/heka.yml
Currently _service.sls does not load aggregator metadata from
heka/meta/heka.yml. This commit fixes that.
* Use filter_by to merge node grains data
* Make the output/tcp.toml template extendable
* Add an aggregator.toml output template
This template extends the tcp.toml output template.
* Add generic timezone support to decoders
This change adds a new parameter 'adjust_timezone' for the sandbox
decoder. This parameter should be set to true when the data to be
decoded doesn't contain the proper timezone information.
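A sketch of how such a parameter is typically passed to a Heka sandbox decoder (the section and file names are illustrative, not taken from the code):

```toml
[generic_log_decoder]
type = "SandboxDecoder"
filename = "lua_modules/decoders/generic_log.lua"

[generic_log_decoder.config]
# set to true when the decoded timestamps carry no timezone information
adjust_timezone = true
```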
* Add a run_lua_tests.sh script
This script will be used to run the Lua tests (yet to be added).
To run the script:
cd tests
./run_lua_tests.sh
* Copy Lua tests from fuel-plugin-lma-collector
* Fix the afd tests
* Fix the gse tests
* Add aggregator config to support metadata
* Fix the definition of the remote_collector service
This change removes unneeded plugins and adds the ones that are
otherwise required.
* Fix state dependency
* Add monitoring of the Heka processes
* Set influxdb_time_precision in aggregator class
* Disable the heka service completely
Without this patch `service heka status` reports that the heka service is
running. For example:
    root@ctl01:/etc/init.d# /etc/init.d/heka status
     * hekad is running
* Define the highest_severity policy
* Generate the gse_policies Lua module
* Generate gse topology module for each alarm cluster
* Generate gse filter toml for each cluster alarm
* Adapt GSE Lua code
* Remove gse cluster_field parameter
This parameter is not needed anymore. Heka's message_matchers are now used to
match input messages.
* Support dimensions in gse metrics
* Do not rely on pacemaker_local_resource_active
* Define the majority_of_members policy
* Define the availability_of_members policy
* Configure outputs in support metadata
* Fix bug in map.jinja
Fix a bug in map.jinja where the filter_by for the metric_collector modified
the influxdb_defaults dict re-used for the remote_collector. The filter_by
function does deep merges, so some caution is required.
* Cleaning useless default map keys
* Make remote collector send only afd metrics to influx
* Add aggregator output to remote collector
* Extend collectd decoder to support vrrp metrics
* Update map.jinja
* Update collectd decoder to parse ntpd metrics
* Redefine alerting property
The alerting property can be one of 'disabled', 'enabled', or
'enabled_with_notification'.
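In pillar data this might look like the following (the key placement under the service is an assumption):

```yaml
heka:
  metric_collector:
    # one of: disabled, enabled, enabled_with_notification
    alerting: enabled_with_notification
```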
* Fix the gse_policies structure
The structure of the generated gse_policies.lua file is not correct. This
commit fixes that.
* Add Nagios output for metric_collector
The patch embeds the Lua sandbox encoder for Nagios.
* Add Nagios output for the aggregator
* Send only alarm-related data to mine
* Fix the grains_for_mine function
* Fix flake8 in heka_alarming.py
* Configure Hekad poolsize by pillar data
The poolsize must be increased depending on the number of filters.
Typically, the metric_collector on controller nodes and the aggregator on
monitoring node(s) should probably use poolsize=200.
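A hypothetical pillar sketch, assuming the `heka:<service>` layout used elsewhere in this formula:

```yaml
heka:
  metric_collector:
    poolsize: 200
  aggregator:
    poolsize: 200
```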
* Make Heka service watch Lua dir
In this way the service will restart when the content of
/usr/share/lma_collector changes.
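The watch can be sketched as a Salt requisite (the state IDs and the exact file state are illustrative):

```yaml
metric_collector_service:
  service.running:
    - name: metric_collector
    # restart the service whenever the Lua tree changes
    - watch:
      - file: /usr/share/lma_collector
```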
* Enable collection of notifications
* Add missing hostname variable in GSE code
* Add a log decoder for Galera
* Simplify message matchers
This removes the "Field[aggregator] == NIL" part in the Heka message matchers.
We used to use a scribbler decoder to tag input messages coming in through the
aggregator input. We now have a dedicated Heka "aggregator" instance, so this
mechanism is not necessary anymore.
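Illustratively, a matcher that previously needed the extra clause can now be reduced (the matcher text is a sketch, not a quote from the code):

```toml
# before: messages relayed by the aggregator input had to be excluded
# message_matcher = "Type == 'metric' && Fields[aggregator] == NIL"
# after: the dedicated aggregator instance makes the clause unnecessary
message_matcher = "Type == 'metric'"
```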
* Update collectd decoder for nginx metrics
* Return an err message when set_member_status fails
With this commit an explicit error message is displayed in the Heka logs when
set_member_status fails because the cluster has "group_by" set to "hostname"
and an input message with no "hostname" field is received.
This addresses a comment from @SwannCroiset in #51.
* Add contrail log parsers
* Fix the heka grains for the aggregator/remote_collector
Previously, the heka salt grains of the node running the
aggregator/remote_collector collected all the metric_collector alarms from all
nodes (/etc/salt/grains.d/heka). The resulting mine data was then wrong for
the monitoring node. While that situation fortunately has no impact on the
metric_collector alarm configurations, the Nagios service, which leverages the
mine data, got a wrong list of alarms for the monitoring node.
This patch fixes the issue with minimal changes, but it appears that the logic
behind the _service.sls state is not optimal and has become hard to
understand. This state is executed several times with different contexts for
every heka 'server' type, and it is not idempotent: the content of the
/etc/salt/grains.d/heka file differs between the 'local' servers
((metric|log)_collector) and the 'remote' servers
(remote_collector|aggregator).
* Fix issue in lma_alarm.lua template
* Add a log decoder for GlusterFS
* Fix collectd Lua decoder for system metrics
The regression has been introduced by 74ad71d41.
* Update collectd decoder for disk metrics
The disk plugin shipped with collectd 5.5 (installed on Xenial) provides new
metrics: disk_io_time and disk_weighted_io_time.
* Use a dimension key for the Nagios host displaying alarm clusters
* Add redis log parser
* Add zookeeper log parser
* Add cassandra log parser
* Set actual swap_size in collectd decoder
Salt does not create Swap-related grains, but the "ps" module has
a "swap_memory" function that can be used to get Swap data. This commit
uses that function to set swap_size in the collectd decoder.
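Salt's `ps.swap_memory` function relies on psutil; a minimal stdlib-only sketch of the same lookup, reading `/proc/meminfo`-style text directly (the helper name is ours, not Salt's):

```python
def swap_total_bytes(meminfo_text):
    """Return the total swap size in bytes from /proc/meminfo content."""
    for line in meminfo_text.splitlines():
        if line.startswith("SwapTotal:"):
            # /proc/meminfo reports sizes in kB
            return int(line.split()[1]) * 1024
    return 0

sample = "MemTotal:  8167848 kB\nSwapTotal: 2097148 kB\n"
print(swap_total_bytes(sample))  # 2147479552
```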
* Send annotations to InfluxDB
* Add ifmap log parser
* Support remote_collector and aggregator in cluster
When deployed in a cluster, the remote_collector and aggregator
services are only started when the node holds the virtual IP address.
* Add an os_telemetry_collector service
os_telemetry_collector implements reading Ceilometer samples from RabbitMQ
and pushing them to InfluxDB (samples) and Elasticsearch (resources).
* heka server role, backward compat
diff --git a/tests/run_lua_tests.sh b/tests/run_lua_tests.sh
new file mode 100755
index 0000000..7a6b472
--- /dev/null
+++ b/tests/run_lua_tests.sh
@@ -0,0 +1,86 @@
+#!/usr/bin/env bash
+
+## Functions
+
+log_info() {
+ echo "[INFO] $*"
+}
+
+log_err() {
+ echo "[ERROR] $*" >&2
+}
+
+_atexit() {
+ RETVAL=$?
+ trap true INT TERM EXIT
+
+ if [ $RETVAL -ne 0 ]; then
+ log_err "Execution failed"
+ else
+ log_info "Execution successful"
+ fi
+ return $RETVAL
+}
+
+## Main
+
+[ -n "$DEBUG" ] && set -x
+
+LUA_VERSION=$(lua -v 2>&1)
+if [[ $? -ne 0 ]]; then
+ log_err "No lua interpreter present"
+ exit 1
+fi
+if [[ ! $LUA_VERSION =~ "Lua 5.1" ]]; then
+ log_err "Lua version 5.1 is required"
+ exit 1
+fi
+
+lua5.1 -e "require('lpeg')" > /dev/null 2>&1
+if [[ $? -ne 0 ]]; then
+ log_err "lua-lpeg is required (run apt-get install lua-lpeg)"
+ exit 1
+fi
+
+lua5.1 -e "require('cjson')" > /dev/null 2>&1
+if [[ $? -ne 0 ]]; then
+ log_err "lua-cjson is required (run apt-get install lua-cjson)"
+ exit 1
+fi
+
+for pgm in cmake wget curl; do
+ which $pgm > /dev/null 2>&1
+ if [[ $? -ne 0 ]]; then
+ log_err "$pgm is required (run apt-get install $pgm)"
+ exit 1
+ fi
+done
+
+if [[ ! -f /usr/lib/x86_64-linux-gnu/liblua5.1.so ]]; then
+ log_err "package liblua5.1-0-dev is not installed (run apt-get install liblua5.1-0-dev)"
+ exit 1
+fi
+
+set -e
+
+curl -s -o lua/mocks/annotation.lua "https://raw.githubusercontent.com/mozilla-services/heka/versions/0.10/sandbox/lua/modules/annotation.lua"
+curl -s -o lua/mocks/anomaly.lua "https://raw.githubusercontent.com/mozilla-services/heka/versions/0.10/sandbox/lua/modules/anomaly.lua"
+curl -s -o lua/mocks/date_time.lua "https://raw.githubusercontent.com/mozilla-services/lua_sandbox/97331863d3e05d25131b786e3e9199e805b9b4ba/modules/date_time.lua"
+curl -s -o lua/mocks/inspect.lua "https://raw.githubusercontent.com/kikito/inspect.lua/master/inspect.lua"
+
+CBUF_COMMIT="bb6dd9f88f148813315b5a660b7e2ba47f958b31"
+CBUF_TARBALL_URL="https://github.com/mozilla-services/lua_circular_buffer/archive/${CBUF_COMMIT}.tar.gz"
+CBUF_DIR="/tmp/lua_circular_buffer-${CBUF_COMMIT}"
+CBUF_SO="${CBUF_DIR}/release/circular_buffer.so"
+if [[ ! -f "${CBUF_SO}" ]]; then
+ rm -rf ${CBUF_DIR}
+ wget -qO - ${CBUF_TARBALL_URL} | tar -zxvf - -C /tmp
+ (cd ${CBUF_DIR} && mkdir release && cd release && cmake -DCMAKE_BUILD_TYPE=release .. && make)
+ cp ${CBUF_SO} ./
+fi
+
+for t in lua/test_*.lua; do
+ lua5.1 $t
+done
+
+trap _atexit INT TERM EXIT