Stacklight (#71)

* Stacklight integration

* Round 2

* Variable service_name is missing for systemd file

* preserve_data and ticker_interval are not strings

preserve_data is a boolean, and ticker_interval is a number, so their values
shouldn't have quotes.
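
A hypothetical pillar sketch of the corrected types (the filter name and
surrounding keys are assumptions, not taken from this repository):

    heka:
      metric_collector:
        filter:
          influxdb_accumulator:
            preserve_data: true    # boolean, unquoted
            ticker_interval: 10    # number, unquoted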

* Use "ignore missing" with the j2 include statement
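
For reference, Jinja2's include statement supports an "ignore missing"
modifier that silently skips templates that do not exist; a minimal sketch
(the template path is hypothetical):

    {% include "heka/files/extra.toml" ignore missing %}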

* Added cache dir

* Use module_dir instead of module_directory

This fixes a bug where module_directory is used as the variable name instead of
module_dir.

* Use the proper module directory

The stacklight module dir is /usr/share/lma_collector/common, not
/usr/share/lma_collector_modules. This fixes it.

* Add the extra_fields.lua module

This commit adds the extra_fields Lua module. The extra fields table defined in
this module is empty right now. Eventually, this file will be a Jinja2 template
and the content of the extra fields table will be generated based on the user
configuration.

* Regex encoder fix

* Fix the decoder configuration

This commit uses proper decoder names in heka/meta/heka.yml. It also removes
the aggregator input for now, because it does not have an associated decoder.

* Make Heka send metrics to InfluxDB

* Add HTTP metrics filter to log_collector

* Add logs counter filter to log_collector

* Templatize extra_fields.lua file

* Make InfluxDB time precision configurable

* Configure Elasticsearch output through Pillar

* Use influxdb_time_precision for InfluxDB output

This uses the influxdb_time_precision parameter set on metric_collector to
configure the time precision of the InfluxDB output, so that a single
parameter drives both the InfluxDB accumulator filter and the InfluxDB output.
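
A hypothetical pillar sketch (the 'ms' value is just one of InfluxDB's
supported precisions):

    heka:
      metric_collector:
        influxdb_time_precision: ms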

* Increase maximum open files limit to 102400

* Add alarming support

* Revert "[WIP] Add alarming support"

* Remove the aggregator output for now

This removes the aggregator output for now, as the aggregator does not work
yet. This avoids output errors in Heka.

* Do not place Heka logs in /var/log/upstart

With this commit all the Heka logs are sent to /var/log/<heka_service>.log.
Previously, stdout was sent to /var/log/<heka_service>.log and stderr was sent
to /var/log/upstart/<heka_service>.log, which was confusing to the operator.

* Remove http check input plugin

This plugin is not used anymore.

* Add alarming support

* Make the aggregator load heka/meta/heka.yml

Currently _service.sls does not load aggregator metadata from
heka/meta/heka.yml. This commit fixes that.

* Use filter_by to merge node grains data

* Make the output/tcp.toml template extendable

* Add an aggregator.toml output template

This template extends the tcp.toml output template.

* Add generic timezone support to decoders

This change adds a new parameter 'adjust_timezone' for the sandbox
decoder. This parameter should be set to true when the data to be
decoded doesn't contain the proper timezone information.
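
A hypothetical pillar sketch of a decoder using the new parameter (the
decoder name and the other keys are assumptions):

    heka:
      log_collector:
        decoder:
          galera:
            engine: sandbox
            adjust_timezone: true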

* Add a run_lua_tests.sh script

This script will be used to run the Lua tests (yet to be added).

To run the script:

    cd tests
    ./run_lua_tests.sh

* Copy Lua tests from fuel-plugin-lma-collector

* Fix the afd tests

* Fix the gse tests

* Add aggregator config to support metadata

* Fix the definition of the remote_collector service

This change removes unneeded plugins and adds the ones that are
otherwise required.

* Fix state dependency

* Add monitoring of the Heka processes

* Set influxdb_time_precision in aggregator class

* Disable the heka service completely

Without this patch `service heka status` reports that the heka service is
running. For example:

root@ctl01:/etc/init.d# /etc/init.d/heka status
 * hekad is running

* Define the highest_severity policy

* Generate the gse_policies Lua module

* Generate gse topology module for each alarm cluster

* Generate gse filter toml for each cluster alarm

* Adapt GSE Lua code

* Remove gse cluster_field parameter

This parameter is not needed anymore. Heka's message_matchers are now used to
match input messages.

* Support dimensions in gse metrics

* Do not rely on pacemaker_local_resource_active

* Define the majority_of_members policy

* Define the availability_of_members policy

* Configure outputs in support metadata

* Fix bug in map.jinja

Fix a bug in map.jinja where the filter_by for the metric_collector modified
the influxdb_defaults dict re-used for the remote_collector. The filter_by
function does deep merges, so some caution is required.

* Clean up useless default map keys

* Make remote collector send only afd metrics to influx

* Add aggregator output to remote collector

* Extend collectd decoder to support vrrp metrics

* Update map.jinja

* Update collectd decoder to parse ntpd metrics

* Redefine alerting property

The alerting property can be one of 'disabled', 'enabled', or
'enabled_with_notification'.
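
For example, a hypothetical pillar sketch selecting one of the three values
(the alarm name and surrounding keys are assumptions):

    heka:
      metric_collector:
        alarm:
          rabbitmq_check:
            alerting: enabled_with_notification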

* Fix the gse_policies structure

The structure of the generated gse_policies.lua file is not correct. This
commit fixes that.

* Add Nagios output for metric_collector

The patch embeds the Lua sandbox encoder for Nagios.

* Add Nagios output for the aggregator

* Send only alarm-related data to mine

* Fix the grains_for_mine function

* Fix flake8 in heka_alarming.py

* Configure Hekad poolsize by pillar data

The poolsize must be increased depending on the number of filters.
Typically, the metric_collector on controller nodes and the aggregator on
monitoring node(s) should probably use poolsize=200.
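
A hypothetical pillar sketch (the exact key placement is an assumption):

    heka:
      aggregator:
        poolsize: 200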

* Make Heka service watch Lua dir

In this way the service will restart when the content of
/usr/share/lma_collector changes.

* Enable collection of notifications

* Add missing hostname variable in GSE code

* Add a log decoder for Galera

* Simplify message matchers

This removes the "Field[aggregator] == NIL" part in the Heka message matchers.

We used to use a scribbler decoder to tag input messages coming in through the
aggregator input. We now have a dedicated Heka "aggregator" instance, so this
mechanism is not necessary anymore.
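
A hypothetical illustration of the simplification, written with Heka's
Fields[] message matcher syntax (the 'Type' value is an assumption):

    # before
    message_matcher = "Type == 'metric' && Fields[aggregator] == NIL"
    # after
    message_matcher = "Type == 'metric'"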

* Update collectd decoder for nginx metrics

* Return an err message when set_member_status fails

With this commit an explicit error message is displayed in the Heka logs when
set_member_status fails because the cluster has "group_by" set to "hostname"
and an input message with no "hostname" field is received.

This addresses a comment from @SwannCroiset in #51.

* Add contrail log parsers

* Fix the heka grains for the aggregator/remote_collector

Previously, the Heka Salt grains of the node running the
aggregator/remote_collector included all the metric_collector alarms from all
nodes (/etc/salt/grains.d/heka). The resulting mine data was then wrong for
the monitoring node. While that situation fortunately had no impact on the
metric_collector alarm configurations, the Nagios service that leverages mine
data got a wrong list of alarms for the monitoring node.

This patch fixes the issue with minimal changes, but it appears that the logic
behind the _service.sls state is not optimal and has become hard to
understand. This state is executed several times with different contexts for
every Heka 'server' type and is not idempotent: the content of
/etc/salt/grains.d/heka differs between the 'local' servers
((metric|log)_collector) and the 'remote' servers
(remote_collector|aggregator).

* Fix issue in lma_alarm.lua template

* Add a log decoder for GlusterFS

* Fix collectd Lua decoder for system metrics

The regression has been introduced by 74ad71d41.

* Update collectd decoder for disk metrics

The disk plugin shipped with collectd 5.5 (installed on Xenial) provides new
metrics: disk_io_time and disk_weighted_io_time.

* Use a dimension key for the Nagios host displaying alarm clusters

* Add redis log parser

* Add zookeeper log parser

* Add cassandra log parser

* Set actual swap_size in collectd decoder

Salt does not create Swap-related grains, but the "ps" module has
a "swap_memory" function that can be used to get Swap data. This commit
uses that function to set swap_size in the collectd decoder.

* Send annotations to InfluxDB

* Add ifmap log parser

* Support remote_collector and aggregator in cluster

When deployed in a cluster, the remote_collector and aggregator
services are only started when the node holds the virtual IP address.

* Add an os_telemetry_collector service

os_telemetry_collector implements reading Ceilometer samples
from RabbitMQ and pushing them to InfluxDB (samples) and
Elasticsearch (resources).

* heka server role, backward compat
diff --git a/README.rst b/README.rst
index 90730cd..75e0ed0 100644
--- a/README.rst
+++ b/README.rst
@@ -3,98 +3,33 @@
 Heka Formula
 ============
 
-Heka is an open source stream processing software system developed by Mozilla. Heka is a Swiss Army Knife type tool for data processing
+Heka is an open source stream processing software system developed by Mozilla. Heka is a Swiss Army Knife type tool for data processing.
 
 Sample pillars
 ==============
 
-Basic log shipper streaming decoded rsyslog's logfiles using amqp broker as transport.
-From every message there is one amqp message and it's also logged to  heka's logfile in RST format.
+Log collector service
 
 .. code-block:: yaml
 
-
     heka:
-      server:
+      log_collector:
         enabled: true
-        input:
-          rsyslog-syslog:
-            engine: logstreamer
-            log_directory: /var/log
-            file_match: syslog\.?(?P<Index>\d+)?(.gz)?
-            decoder: RsyslogDecoder
-            priority: ["^Index"]
-          rsyslog-auth:
-            engine: logstreamer
-            log_directory: /var/log
-            file_match: auth\.log\.?(?P<Index>\d+)?(.gz)?
-            decoder: RsyslogDecoder
-            priority: ["^Index"]
-        decoder:
-          rsyslog:
-            engine: rsyslog
-            template: %TIMESTAMP% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg:::drop-last-lf%\n
-            hostname_keep: TRUE
-            tz: Europe/Prague
         output:
-          rabbitmq:
-            engine: amqp
+          elasticsearch01:
+            engine: elasticsearch
             host: localhost
-            user: guest
-            password: guest
-            vhost: /logs
-            exchange: logs
-            exchange_type: fanout
-            encoder: ProtobufEncoder
-            use_framing: true
-          heka-logfile:
-            engine: logoutput
-            encoder: RstEncoder
+            port: 9200
+            encoder: es_json
             message_matcher: TRUE
-        encoder:
-          heka-logfile:
-            engine: RstEncoder
 
-
-Heka acting as message router and dashboard.
-Messages are consumed from amqp and sent to elasticsearch server.
-
+Metric collector service
 
 .. code-block:: yaml
 
-
     heka:
-      server:
+      metric_collector:
         enabled: true
-        input:
-          rabbitmq:
-            engine: amqp
-            host: localhost
-            user: guest
-            password: guest
-            vhost: /logs
-            exchange: logs
-            exchange_type: fanout
-            decoder: ProtoBufDecoder
-            splitter: HekaFramingSplitter
-          rsyslog-syslog:
-            engine: logstreamer
-            log_directory: /var/log
-            file_match: syslog\.?(?P<Index>\d+)?(.gz)?
-            decoder: RsyslogDecoder
-            priority: ["^Index"]
-          rsyslog-auth:
-            engine: logstreamer
-            log_directory: /var/log
-            file_match: auth\.log\.?(?P<Index>\d+)?(.gz)?
-            decoder: RsyslogDecoder
-            priority: ["^Index"]
-        decoder:
-          rsyslog:
-            engine: rsyslog
-            template: %TIMESTAMP% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg:::drop-last-lf%\n
-            hostname_keep: TRUE
-            tz: Europe/Prague
         output:
           elasticsearch01:
             engine: elasticsearch
@@ -105,11 +40,25 @@
           dashboard01:
             engine: dashboard
             ticker_interval: 30
-        encoder:
-          es-json:
-            engine: es-json
+
+Aggregator service
+
+.. code-block:: yaml
+
+    heka:
+      aggregator:
+        enabled: true
+        output:
+          elasticsearch01:
+            engine: elasticsearch
+            host: localhost
+            port: 9200
+            encoder: es_json
             message_matcher: TRUE
-            index: logfile-%{%Y.%m.%d}
+          dashboard01:
+            engine: dashboard
+            ticker_interval: 30
+
 
 Read more
 =========