Stacklight (#71)

* Stacklight integration

* Round 2

* Variable service_name is missing for systemd file

* preserve_data and ticker_interval are not strings

preserve_data is a boolean, and ticker_interval is a number, so their values
shouldn't have quotes.

* Use "ignore missing" with the j2 include statement

* Added cache dir

* Use module_dir instead of module_directory

This fixes a bug where module_directory is used as the variable name instead of
module_dir.

* Use the proper module directory

The stacklight module dir is /usr/share/lma_collector/common, not
/usr/share/lma_collector_modules. This fixes it.

* Add the extra_fields.lua module

This commit adds the extra_fields Lua module. The extra fields table defined in
this module is empty right now. Eventually, this file will be a Jinja2 template
and the content of the extra fields table will be generated based on the user
configuration.
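
For reference, a minimal sketch of what such an extra_fields.lua module looks
like (the mock used by the Lua tests lives in tests/lua/mocks/extra_fields.lua;
the file shipped by the formula may differ):

    -- Minimal sketch of an extra_fields.lua module with an empty table.
    local M = {}
    setfenv(1, M) -- remove external access to contain everything in the module

    -- Extra fields merged into outgoing messages. Empty for now; to be
    -- generated from the user configuration once this file is a Jinja2 template.
    tags = {}

    return M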

* Regex encoder fix

* Fix the decoder configuration

This commit uses proper decoder names in heka/meta/heka.yml. It also removes
the aggregator input for now, because it does not have an associated decoder.

* Make Heka send metrics to InfluxDB

* Add HTTP metrics filter to log_collector

* Add logs counter filter to log_collector

* Templatize extra_fields.lua file

* Make InfluxDB time precision configurable

* Configure Elasticsearch output through Pillar

* Use influxdb_time_precision for InfluxDB output

This uses the influxdb_time_precision parameter set on metric_collector to
configure the time precision in the InfluxDB output, so that a single parameter
drives both the InfluxDB accumulator filter and the InfluxDB output.

* Increase maximum open files limit to 102400

* Add alarming support

* Revert "[WIP] Add alarming support"

* Remove the aggregator output for now

This removes the aggregator output for now, as the aggregator is not yet
functional. This avoids output errors in Heka.

* Do not place Heka logs in /var/log/upstart

With this commit all the Heka logs are sent to /var/log/<heka_service>.log.
Previously, stdout was sent to /var/log/<heka_service>.log and stderr was sent
to /var/log/upstart/<heka_service>.log, which was confusing to the operator.

* Remove http check input plugin

Because it is not used anymore.

* Add alarming support

* Make the aggregator load heka/meta/heka.yml

Currently _service.sls does not load aggregator metadata from
heka/meta/heka.yml. This commit fixes that.

* Use filter_by to merge node grains data

* Make the output/tcp.toml template extendable

* Add an aggregator.toml output template

This template extends the tcp.toml output template.

* Add generic timezone support to decoders

This change adds a new parameter 'adjust_timezone' to the sandbox
decoder. This parameter should be set to true when the data to be
decoded doesn't contain proper timezone information.
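
As an illustration of the intent (not the exact decoder code), enabling
adjust_timezone amounts to shifting the parsed timestamp by the local UTC
offset. The sketch below assumes os.time/os.date are available to the sandbox;
read_config() is the standard Heka sandbox helper for reading plugin settings:

    -- Illustrative sketch: apply the 'adjust_timezone' flag to a timestamp
    -- parsed from data that carries no timezone information.
    local adjust_timezone = read_config('adjust_timezone') or false

    -- Offset between local time and UTC, in seconds.
    local function utc_offset()
        local now = os.time()
        return os.difftime(now, os.time(os.date('!*t', now)))
    end

    local function adjust(ts_ns)
        if not adjust_timezone then return ts_ns end
        -- Heka timestamps are in nanoseconds; convert local time to UTC.
        return ts_ns - utc_offset() * 1e9
    end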

* Add a run_lua_tests.sh script

This script will be used to run the Lua tests (yet to be added).

To run the script:

    cd tests
    ./run_lua_tests.sh

* Copy Lua tests from fuel-plugin-lma-collector

* Fix the afd tests

* Fix the gse tests

* Add aggregator config to support metadata

* Fix the definition of the remote_collector service

This change removes unneeded plugins and adds the ones that are
otherwise required.

* Fix state dependency

* Add monitoring of the Heka processes

* Set influxdb_time_precision in aggregator class

* Disable the heka service completely

Without this patch `service heka status` reports that the heka service is
running. For example:

root@ctl01:/etc/init.d# /etc/init.d/heka status
 * hekad is running

* Define the highest_severity policy

* Generate the gse_policies Lua module

* Generate gse topology module for each alarm cluster

* Generate gse filter toml for each cluster alarm

* Adapt GSE Lua code

* Remove gse cluster_field parameter

This parameter is not needed anymore. Heka's message_matchers are now used to
match input messages.

* Support dimensions in gse metrics

* Do not rely on pacemaker_local_resource_active

* Define the majority_of_members policy

* Define the availability_of_members policy

* Configure outputs in support metadata

* Fix bug in map.jinja

Fix a bug in map.jinja where the filter_by for the metric_collector modified
the influxdb_defaults dict re-used for the remote_collector. The filter_by
function does deep merges, so some caution is required.

* Clean up useless default map keys

* Make remote collector send only afd metrics to influx

* Add aggregator output to remote collector

* Extend collectd decoder to support vrrp metrics

* Update map.jinja

* Update collectd decoder to parse ntpd metrics

* Redefine alerting property

The alerting property can be one of 'disabled', 'enabled', or
'enabled_with_notification'.

* Fix the gse_policies structure

The structure of the generated gse_policies.lua file is not correct. This
commit fixes that.
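
For illustration, the generated gse_policies.lua module defines each policy as
an ordered list of status rules; a simplified sketch of the highest_severity
policy follows (the generated file contains one entry per status and may differ
in detail):

    -- Simplified sketch of a generated gse_policies.lua module.
    local consts = require('gse_constants')

    local M = {}
    setfenv(1, M) -- remove external access to contain everything in the module

    -- 'highest_severity': the cluster takes the worst status reported by
    -- any of its members.
    highest_severity = {
        {
            status = consts.DOWN,
            trigger = {
                logical_operator = 'or',
                rules = {
                    {
                        ['function'] = 'count',
                        arguments = { consts.DOWN },
                        relational_operator = '>',
                        threshold = 0,
                    },
                },
            },
        },
        -- ... one entry per status (CRIT, WARN, UNKW) ...
        {
            status = consts.OKAY, -- fallback when no other rule matches
        },
    }

    return M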

* Add Nagios output for metric_collector

The patch embeds the Lua sandbox encoder for Nagios.

* Add Nagios output for the aggregator

* Send only alarm-related data to mine

* Fix the grains_for_mine function

* Fix flake8 in heka_alarming.py

* Configure Hekad poolsize by pillar data

The poolsize must be increased depending on the number of filters.
Typically, the metric_collector on controller nodes and the aggregator on
monitoring node(s) should probably use poolsize=200.

* Make Heka service watch Lua dir

In this way the service will restart when the content of
/usr/share/lma_collector changes.

* Enable collection of notifications

* Add missing hostname variable in GSE code

* Add a log decoder for Galera

* Simplify message matchers

This removes the "Field[aggregator] == NIL" part in the Heka message matchers.

We used to use a scribbler decoder to tag input messages coming in through the
aggregator input. We now have a dedicated Heka "aggregator" instance, so this
mechanism is not necessary anymore.

* Update collectd decoder for nginx metrics

* Return an err message when set_member_status fails

With this commit an explicit error message is displayed in the Heka logs when
set_member_status fails because the cluster has "group_by" set to "hostname"
and an input message with no "hostname" field is received.

This addresses a comment from @SwannCroiset in #51.
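
Conceptually, the added guard looks like the sketch below (the function name
and exact wording are illustrative, not the actual GSE code):

    -- Illustrative sketch: fail with an explicit message when the cluster
    -- groups members by hostname but the incoming message has no hostname.
    local function check_member_update(cluster_name, group_by, hostname)
        if group_by == 'hostname' and hostname == nil then
            return nil, string.format(
                "Cannot set member status for cluster '%s': group_by is " ..
                "'hostname' but the message has no 'hostname' field",
                cluster_name)
        end
        return true
    end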

* Add contrail log parsers

* Fix the heka grains for the aggregator/remote_collector

Previously, the heka Salt grains of the node running the
aggregator/remote_collector collected all the metric_collector alarms from all
nodes (/etc/salt/grains.d/heka). The resulting mine data was therefore wrong
for the monitoring node. While that situation fortunately has no impact on the
metric_collector alarm configurations, the Nagios service, which leverages the
mine data, got a wrong list of alarms for the monitoring node.

This patch fixes the issue with minimal changes, but it appears that the logic
behind the _service.sls state is not optimal and has become hard to
understand. This state is executed several times with different contexts, once
per Heka 'server' type, and is not idempotent: the content of the
/etc/salt/grains.d/heka file differs between the 'local' servers
((metric|log)_collector) and the 'remote' servers (remote_collector|aggregator).

* Fix issue in lma_alarm.lua template

* Add a log decoder for GlusterFS

* Fix collectd Lua decoder for system metrics

The regression was introduced by 74ad71d41.

* Update collectd decoder for disk metrics

The disk plugin shipped with collectd 5.5 (installed on Xenial) provides new
metrics: disk_io_time and disk_weighted_io_time.

* Use a dimension key for the Nagios host displaying alarm clusters

* Add redis log parser

* Add zookeeper log parser

* Add cassandra log parser

* Set actual swap_size in collectd decoder

Salt does not create Swap-related grains, but the "ps" module has
a "swap_memory" function that can be used to get Swap data. This commit
uses that function to set swap_size in the collectd decoder.

* Send annotations to InfluxDB

* Add ifmap log parser

* Support remote_collector and aggregator in cluster

When deployed in a cluster, the remote_collector and aggregator
services are only started when the node holds the virtual IP address.

* Add an os_telemetry_collector service

os_telemetry_collector reads Ceilometer samples from RabbitMQ and pushes
them to InfluxDB (samples) and Elasticsearch (resources).

* Heka server role, backward compatibility
diff --git a/tests/lua/mocks/extra_fields.lua b/tests/lua/mocks/extra_fields.lua
new file mode 100644
index 0000000..1bba814
--- /dev/null
+++ b/tests/lua/mocks/extra_fields.lua
@@ -0,0 +1,23 @@
+-- Copyright 2015 Mirantis, Inc.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+--     http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+local M = {}
+setfenv(1, M) -- Remove external access to contain everything in the module
+
+environment_id = 42
+
+tags = {
+    environment_id=environment_id
+}
+
+return M
diff --git a/tests/lua/test_accumulator.lua b/tests/lua/test_accumulator.lua
new file mode 100644
index 0000000..837ffd5
--- /dev/null
+++ b/tests/lua/test_accumulator.lua
@@ -0,0 +1,68 @@
+-- Copyright 2016 Mirantis, Inc.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+--     http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
+EXPORT_ASSERT_TO_GLOBALS=true
+require('luaunit')
+require('os')
+package.path = package.path .. ";../heka/files/lua/common/?.lua;lua/mocks/?.lua"
+
+local accumulator = require('accumulator')
+
+TestAccumulator = {}
+
+    function TestAccumulator:test_flush_on_append()
+        local sentinel = false
+        local function test_cb(items)
+            assertEquals(#items, 3)
+            sentinel = true
+        end
+        local accum = accumulator.new(2, 5, test_cb)
+        accum:append(1)
+        assertEquals(sentinel, false)
+        accum:append(2)
+        assertEquals(sentinel, false)
+        accum:append(3)
+        assertEquals(sentinel, true)
+    end
+
+    function TestAccumulator:test_flush_interval_with_buffer()
+        local now = os.time()
+        local sentinel = false
+        local function test_cb(items)
+            assertEquals(#items, 1)
+            sentinel = true
+        end
+        local accum = accumulator.new(20, 1, test_cb)
+        accum:append(1)
+        assertEquals(sentinel, false)
+        accum:flush((now + 2) * 1e9)
+        assertEquals(sentinel, true)
+    end
+
+    function TestAccumulator:test_flush_interval_with_empty_buffer()
+        local now = os.time()
+        local sentinel = false
+        local function test_cb(items)
+            assertEquals(#items, 0)
+            sentinel = true
+        end
+        local accum = accumulator.new(20, 1, test_cb)
+        accum:flush((now + 2) * 1e9)
+        assertEquals(sentinel, true)
+    end
+
+lu = LuaUnit
+lu:setVerbosity( 1 )
+os.exit( lu:run() )
+
diff --git a/tests/lua/test_afd.lua b/tests/lua/test_afd.lua
new file mode 100644
index 0000000..a54bdfd
--- /dev/null
+++ b/tests/lua/test_afd.lua
@@ -0,0 +1,155 @@
+-- Copyright 2015 Mirantis, Inc.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+--     http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
+EXPORT_ASSERT_TO_GLOBALS=true
+require('luaunit')
+package.path = package.path .. ";../heka/files/lua/common/?.lua;lua/mocks/?.lua"
+
+-- mock the inject_message() function from the Heka sandbox library
+local last_injected_msg
+function inject_message(msg)
+    last_injected_msg = msg
+end
+
+local afd = require('afd')
+local consts = require('gse_constants')
+local extra = require('extra_fields')
+
+TestAfd = {}
+
+    function TestAfd:setUp()
+        afd.reset_alarms()
+    end
+
+    function TestAfd:test_add_to_alarms()
+        afd.add_to_alarms(consts.CRIT, 'last', 'metric_1', {}, {}, '==', 0, 0, nil, nil, "crit message")
+        local alarms = afd.get_alarms()
+        assertEquals(alarms[1].severity, 'CRITICAL')
+        assertEquals(alarms[1].metric, 'metric_1')
+        assertEquals(alarms[1].message, 'crit message')
+
+        afd.add_to_alarms(consts.WARN, 'last', 'metric_2', {}, {}, '>=', 10, 2, 5, 600, "warn message")
+        alarms = afd.get_alarms()
+        assertEquals(alarms[2].severity, 'WARN')
+        assertEquals(alarms[2].metric, 'metric_2')
+        assertEquals(alarms[2].message, 'warn message')
+    end
+
+    function TestAfd:test_inject_afd_metric_without_alarms()
+        afd.inject_afd_metric(consts.OKAY, 'node-1', 'foo', {}, false)
+
+        local alarms = afd.get_alarms()
+        assertEquals(#alarms, 0)
+        assertEquals(last_injected_msg.Type, 'afd_metric')
+        assertEquals(last_injected_msg.Fields.value, consts.OKAY)
+        assertEquals(last_injected_msg.Fields.hostname, 'node-1')
+        assertEquals(last_injected_msg.Payload, '{"alarms":[]}')
+    end
+
+    function TestAfd:test_inject_afd_metric_with_alarms()
+        afd.add_to_alarms(consts.CRIT, 'last', 'metric_1', {}, {}, '==', 0, 0, nil, nil, "important message")
+        afd.inject_afd_metric(consts.CRIT, 'node-1', 'foo', {}, false)
+
+        local alarms = afd.get_alarms()
+        assertEquals(#alarms, 0)
+        assertEquals(last_injected_msg.Type, 'afd_metric')
+        assertEquals(last_injected_msg.Fields.value, consts.CRIT)
+        assertEquals(last_injected_msg.Fields.hostname, 'node-1')
+        assertEquals(last_injected_msg.Fields.environment_id, extra.environment_id)
+        assert(last_injected_msg.Payload:match('"message":"important message"'))
+        assert(last_injected_msg.Payload:match('"severity":"CRITICAL"'))
+    end
+
+    function TestAfd:test_alarms_for_human_without_fields()
+        local alarms = afd.alarms_for_human({{
+            severity='WARNING',
+            ['function']='avg',
+            metric='load_longterm',
+            fields={},
+            tags={},
+            operator='>',
+            value=7,
+            threshold=5,
+            window=600,
+            periods=0,
+            message='load too high',
+        }})
+
+        assertEquals(#alarms, 1)
+        assertEquals(alarms[1], 'load too high (WARNING, rule=\'avg(load_longterm)>5\', current=7.00)')
+    end
+
+    function TestAfd:test_alarms_for_human_with_fields()
+        local alarms = afd.alarms_for_human({{
+            severity='CRITICAL',
+            ['function']='avg',
+            metric='fs_space_percent_free',
+            fields={fs='/'},
+            tags={},
+            operator='<=',
+            value=2,
+            threshold=5,
+            window=600,
+            periods=0,
+            message='free disk space too low'
+        }})
+
+        assertEquals(#alarms, 1)
+        assertEquals(alarms[1], 'free disk space too low (CRITICAL, rule=\'avg(fs_space_percent_free[fs="/"])<=5\', current=2.00)')
+    end
+
+    function TestAfd:test_alarms_for_human_with_hostname()
+        local alarms = afd.alarms_for_human({{
+            severity='WARNING',
+            ['function']='avg',
+            metric='load_longterm',
+            fields={},
+            tags={},
+            operator='>',
+            value=7,
+            threshold=5,
+            window=600,
+            periods=0,
+            message='load too high',
+            hostname='node-1'
+        }})
+
+        assertEquals(#alarms, 1)
+        assertEquals(alarms[1], 'load too high (WARNING, rule=\'avg(load_longterm)>5\', current=7.00, host=node-1)')
+    end
+
+    function TestAfd:test_alarms_for_human_with_hints()
+        local alarms = afd.alarms_for_human({{
+            severity='WARNING',
+            ['function']='avg',
+            metric='load_longterm',
+            fields={},
+            tags={dependency_level='hint',dependency_name='controller'},
+            operator='>',
+            value=7,
+            threshold=5,
+            window=600,
+            periods=0,
+            message='load too high',
+            hostname='node-1'
+        }})
+
+        assertEquals(#alarms, 2)
+        assertEquals(alarms[1], 'Other related alarms:')
+        assertEquals(alarms[2], 'load too high (WARNING, rule=\'avg(load_longterm)>5\', current=7.00, host=node-1)')
+    end
+
+lu = LuaUnit
+lu:setVerbosity( 1 )
+os.exit( lu:run() )
diff --git a/tests/lua/test_afd_alarm.lua b/tests/lua/test_afd_alarm.lua
new file mode 100644
index 0000000..35c877a
--- /dev/null
+++ b/tests/lua/test_afd_alarm.lua
@@ -0,0 +1,1080 @@
+-- Copyright 2015 Mirantis, Inc.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+--     http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
+EXPORT_ASSERT_TO_GLOBALS=true
+require('luaunit')
+package.path = package.path .. ";../heka/files/lua/common/?.lua;lua/mocks/?.lua"
+local lma_alarm = require('afd_alarms')
+local consts = require('gse_constants')
+
+local alarms = {
+    { -- 1
+        name = 'FS_all_no_field',
+        description = 'FS all no field',
+        enabled = true,
+        trigger = {
+            rules = {
+                {
+                    metric = 'fs_space_percent_free',
+                    window = 120,
+                    ['function'] = 'avg',
+                    relational_operator = '<=',
+                    threshold = 11,
+                },
+            },
+            logical_operator = 'and',
+        },
+        severity = 'warning',
+    },
+    { -- 2
+        name = 'RabbitMQ_Critical',
+        description = 'Number of messages in queue is critical',
+        enabled = true,
+        trigger = {
+            rules = {
+                {
+                    relational_operator = '>=',
+                    metric = 'rabbitmq_messages',
+                    fields = {},
+                    window = "300",
+                    periods = "0",
+                    ['function'] = 'min',
+                    threshold = "50",
+                },
+            },
+            logical_operator = 'or',
+        },
+        severity = 'critical',
+    },
+    { -- 3
+        name = 'CPU_Critical_Controller',
+        description = 'CPU is critical for the controller',
+        enabled = true,
+        trigger = {
+            rules = {
+                {
+                    metric = 'cpu_idle',
+                    window = 120,
+                    periods = 2,
+                    ['function'] = 'avg',
+                    relational_operator = '<=',
+                    threshold = 5,
+                },
+                {
+                    metric = 'cpu_wait',
+                    window = 120,
+                    periods = 1,
+                    ['function'] = 'avg',
+                    relational_operator = '>=',
+                    threshold = 20,
+                },
+            },
+            logical_operator = 'or',
+        },
+        severity = 'critical',
+    },
+    { -- 4
+        name = 'CPU_Warning_Controller',
+        description = 'CPU is warning for controller',
+        enabled = true,
+        trigger = {
+            rules = {
+                {
+                    metric = 'cpu_idle',
+                    window = 100,
+                    periods = 2,
+                    ['function'] = 'avg',
+                    relational_operator = '<=',
+                    threshold = 15,
+                },
+                {
+                    metric = 'cpu_wait',
+                    window = 60,
+                    periods = 0,
+                    ['function'] = 'avg',
+                    relational_operator = '>=',
+                    threshold = 25,
+                },
+            },
+            logical_operator = 'or',
+        },
+        severity = 'warning',
+    },
+    { -- 5
+        name = 'CPU_Critical_Controller_AND',
+        description = 'CPU is critical for controller',
+        enabled = true,
+        trigger = {
+            rules = {
+                {
+                    metric = 'cpu_idle',
+                    window = 120,
+                    periods = 2,
+                    ['function'] = 'avg',
+                    relational_operator = '<=',
+                    threshold = 3,
+                },
+                {
+                    metric = 'cpu_wait',
+                    window = 60,
+                    periods = 1,
+                    ['function'] = 'avg',
+                    relational_operator = '>=',
+                    threshold = 30,
+                },
+            },
+            logical_operator = 'and',
+        },
+        severity = 'critical',
+    },
+    { -- 6
+        name = 'FS_root',
+        description = 'FS root',
+        enabled = true,
+        trigger = {
+            rules = {
+                {
+                    metric = 'fs_space_percent_free',
+                    window = 120,
+                    ['function'] = 'avg',
+                    fields = { fs='/'},
+                    relational_operator = '<=',
+                    threshold = 10,
+                },
+            },
+            logical_operator = 'and',
+        },
+        severity = 'critical',
+    },
+    { -- 7
+        name = 'Backend_errors_5xx',
+        description = 'Errors 5xx on backends',
+        enabled = true,
+        trigger = {
+            rules = {
+                {
+                    metric = 'haproxy_backend_response_5xx',
+                    window = 30,
+                    periods = 1,
+                    ['function'] = 'diff',
+                    relational_operator = '>',
+                    threshold = 0,
+                },
+            },
+            logical_operator = 'or',
+        },
+        severity = 'warning',
+    },
+    { -- 8
+        name = 'nova_logs_errors_rate',
+        description = 'Rate of change for nova logs in error is too high',
+        enabled = true,
+        trigger = {
+            rules = {
+                {
+                    metric = 'log_messages',
+                    window = 60,
+                    periods = 4,
+                    ['function'] = 'roc',
+                    threshold = 1.5,
+                },
+            },
+        },
+        severity = 'warning',
+    },
+    { -- 9
+        name = 'heartbeat',
+        description = 'No metric!',
+        enabled = true,
+        trigger = {
+            rules = {
+                {
+                    metric = 'foo_heartbeat',
+                    window = 60,
+                    periods = 1,
+                    ['function'] = 'last',
+                    relational_operator = '==',
+                    threshold = 0,
+                },
+            },
+        },
+        severity = 'down',
+    },
+}
+
+afd_on_multivalue = {
+    name = 'keystone-high-http-response-times',
+    description = 'The 90 percentile response time for Keystone is too high',
+    enabled = true,
+    trigger = {
+        rules = {
+            {
+                metric = 'http_response_times',
+                window = 60,
+                periods = 1,
+                ['function'] = 'max',
+                threshold = 5,
+                fields = { http_method = 'POST' },
+                relational_operator = '>=',
+                value = 'upper_90',
+            },
+        },
+    },
+    severity = 'warning',
+}
+
+missing_value_afd_on_multivalue = {
+    name = 'keystone-high-http-response-times',
+    description = 'The 90 percentile response time for Keystone is too high',
+    enabled = true,
+    trigger = {
+        rules = {
+            {
+                metric = 'http_response_times',
+                window = 30,
+                periods = 2,
+                ['function'] = 'max',
+                threshold = 5,
+                fields = { http_method = 'POST' },
+                relational_operator = '>=',
+                -- value = 'upper_90',
+            },
+        },
+    },
+    severity = 'warning',
+}
+
+TestLMAAlarm = {}
+
+local current_time = 0
+
+function TestLMAAlarm:tearDown()
+    lma_alarm.reset_alarms()
+    current_time = 0
+end
+
+local function next_time(inc)
+    if not inc then inc = 10 end
+    current_time = current_time + (inc*1e9)
+    return current_time
+end
+
+function TestLMAAlarm:test_start_evaluation()
+    lma_alarm.load_alarm(alarms[3]) -- window=120 period=2
+    lma_alarm.set_start_time(current_time)
+    local alarm = lma_alarm.get_alarm('CPU_Critical_Controller')
+    assertEquals(alarm:is_evaluation_time(next_time(10)), false) -- 10 seconds
+    assertEquals(alarm:is_evaluation_time(next_time(50)), false) -- 60 seconds
+    assertEquals(alarm:is_evaluation_time(next_time(60)), false) -- 120 seconds
+    assertEquals(alarm:is_evaluation_time(next_time(120)), true) -- 240 seconds
+    assertEquals(alarm:is_evaluation_time(next_time(240)), true) -- later
+end
+
+function TestLMAAlarm:test_not_the_time()
+    lma_alarm.load_alarms(alarms)
+    lma_alarm.set_start_time(current_time)
+    local state, _ = lma_alarm.evaluate(next_time()) -- no alarm w/ window <= 10s
+    assertEquals(state, nil)
+end
+
+function TestLMAAlarm:test_lookup_fields_for_metric()
+    lma_alarm.load_alarms(alarms)
+    local fields_required = lma_alarm.get_metric_fields('fs_space_percent_free')
+    assertItemsEquals(fields_required, {"fs"})
+end
+
+function TestLMAAlarm:test_lookup_empty_fields_for_metric()
+    lma_alarm.load_alarms(alarms)
+    local fields_required = lma_alarm.get_metric_fields('cpu_idle')
+    assertItemsEquals(fields_required, {})
+    local fields_required = lma_alarm.get_metric_fields('fs_space_percent_free')
+    assertItemsEquals(fields_required, {'fs'})
+end
+
+function TestLMAAlarm:test_lookup_interested_alarms()
+    lma_alarm.load_alarms(alarms)
+    local alarms = lma_alarm.get_interested_alarms('foometric')
+    assertEquals(#alarms, 0)
+    local alarms = lma_alarm.get_interested_alarms('cpu_wait')
+    assertEquals(#alarms, 3)
+
+end
+
+function TestLMAAlarm:test_get_alarms()
+    lma_alarm.load_alarms(alarms)
+    local all_alarms = lma_alarm.get_alarms()
+    local num = 0
+    for _, _ in pairs(all_alarms) do
+        num = num + 1
+    end
+    assertEquals(num, #alarms)
+end
+
+function TestLMAAlarm:test_no_datapoint()
+    lma_alarm.load_alarms(alarms)
+    lma_alarm.set_start_time(current_time)
+    local t = next_time(300) -- at this time all alarms can be evaluated
+    local state, results = lma_alarm.evaluate(t)
+    assertEquals(state, consts.UNKW)
+    assert(#results > 0)
+    for _, result in ipairs(results) do
+        assertEquals(result.alert.message, 'No datapoint have been received ever')
+        assertNotEquals(result.alert.fields, nil)
+    end
+end
+
+function TestLMAAlarm:test_rules_logical_op_and_no_alert()
+    lma_alarm.load_alarms(alarms)
+    local alarm = lma_alarm.get_alarm('CPU_Critical_Controller_AND')
+    lma_alarm.set_start_time(current_time)
+    local t1 = next_time(60) -- 60s
+    local t2 = next_time(60) -- 120s
+    local t3 = next_time(60) -- 180s
+    local t4 = next_time(60) -- 240s
+    lma_alarm.add_value(t1, 'cpu_wait', 3)
+    lma_alarm.add_value(t2, 'cpu_wait', 10)
+    lma_alarm.add_value(t3, 'cpu_wait', 1)
+    lma_alarm.add_value(t4, 'cpu_wait', 10)
+
+    lma_alarm.add_value(t1, 'cpu_idle', 30)
+    lma_alarm.add_value(t2, 'cpu_idle', 10)
+    lma_alarm.add_value(t3, 'cpu_idle', 10)
+    lma_alarm.add_value(t4, 'cpu_idle', 20)
+    local state, result = alarm:evaluate(t4)
+    assertEquals(#result, 0)
+    assertEquals(state, consts.OKAY)
+end
+
+function TestLMAAlarm:test_rules_logical_missing_datapoint__op_and()
+    lma_alarm.load_alarm(alarms[5])
+    lma_alarm.set_start_time(current_time)
+    local t1 = next_time(60)
+    local t2 = next_time(60)
+    local t3 = next_time(60)
+    local t4 = next_time(60)
+    lma_alarm.add_value(t1, 'cpu_wait', 0) -- 60s
+    lma_alarm.add_value(t2, 'cpu_wait', 2) -- 120s
+    lma_alarm.add_value(t3, 'cpu_wait', 5) -- 180s
+    lma_alarm.add_value(t4, 'cpu_wait', 6) -- 240s
+    lma_alarm.add_value(t1, 'cpu_idle', 20) -- 60s
+    lma_alarm.add_value(t2, 'cpu_idle', 20) -- 120s
+    lma_alarm.add_value(t3, 'cpu_idle', 20) -- 180s
+    lma_alarm.add_value(t4, 'cpu_idle', 20) -- 240s
+    local state, result = lma_alarm.evaluate(t4) -- 240s we can evaluate
+    assertEquals(state, consts.OKAY)
+    assertEquals(#result, 0)
+    local state, result = lma_alarm.evaluate(next_time(60)) -- 60s w/o datapoint
+    assertEquals(state, consts.OKAY)
+    --  cpu_wait has no data within its observation period
+    local state, result = lma_alarm.evaluate(next_time(1)) -- 61s w/o datapoint
+    assertEquals(state, consts.UNKW)
+    assertEquals(#result, 1)
+    assertEquals(result[1].alert.metric, 'cpu_wait')
+    assert(result[1].alert.message:match('No datapoint have been received over the last'))
+
+    --  both cpu_idle and cpu_wait have no data within their observation periods
+    local state, result = lma_alarm.evaluate(next_time(180)) -- 241s w/o datapoint
+    assertEquals(state, consts.UNKW)
+    assertEquals(#result, 2)
+    assertEquals(result[1].alert.metric, 'cpu_idle')
+    assert(result[1].alert.message:match('No datapoint have been received over the last'))
+    assertEquals(result[2].alert.metric, 'cpu_wait')
+    assert(result[2].alert.message:match('No datapoint have been received over the last'))
+
+    -- datapoints come back for both metrics
+    lma_alarm.add_value(next_time(), 'cpu_idle', 20)
+    lma_alarm.add_value(next_time(), 'cpu_idle', 20)
+    lma_alarm.add_value(next_time(), 'cpu_wait', 20)
+    lma_alarm.add_value(next_time(), 'cpu_wait', 20)
+    local state, result = lma_alarm.evaluate(next_time()) -- 240s we can evaluate
+    assertEquals(state, consts.OKAY)
+    assertEquals(#result, 0)
+end
+
+function TestLMAAlarm:test_rules_logical_missing_datapoint__op_and_2()
+    lma_alarm.load_alarm(alarms[5])
+    lma_alarm.set_start_time(current_time)
+    local t1 = next_time(60)
+    local t2 = next_time(60)
+    local t3 = next_time(60)
+    local t4 = next_time(60)
+    lma_alarm.add_value(t1, 'cpu_wait', 0) -- 60s
+    lma_alarm.add_value(t2, 'cpu_wait', 2) -- 120s
+    lma_alarm.add_value(t3, 'cpu_wait', 5) -- 180s
+    lma_alarm.add_value(t4, 'cpu_wait', 6) -- 240s
+    lma_alarm.add_value(t1, 'cpu_idle', 20) -- 60s
+    lma_alarm.add_value(t2, 'cpu_idle', 20) -- 120s
+    lma_alarm.add_value(t3, 'cpu_idle', 20) -- 180s
+    lma_alarm.add_value(t4, 'cpu_idle', 20) -- 240s
+    local state, result = lma_alarm.evaluate(t4) -- 240s we can evaluate
+    assertEquals(state, consts.OKAY)
+    assertEquals(#result, 0)
+    local state, result = lma_alarm.evaluate(next_time(60)) -- 60s w/o datapoint
+    assertEquals(state, consts.OKAY)
+    --  cpu_wait has no data within its observation period
+    local state, result = lma_alarm.evaluate(next_time(1)) -- 61s w/o datapoint
+    assertEquals(state, consts.UNKW)
+    assertEquals(#result, 1)
+    assertEquals(result[1].alert.metric, 'cpu_wait')
+    assert(result[1].alert.message:match('No datapoint have been received over the last'))
+
+    lma_alarm.add_value(next_time(170), 'cpu_wait', 20)
+    --  cpu_idle has no data within its observation period
+    local state, result = lma_alarm.evaluate(next_time())
+    assertEquals(state, consts.UNKW)
+    assertEquals(#result, 1)
+    assertEquals(result[1].alert.metric, 'cpu_idle')
+    assert(result[1].alert.message:match('No datapoint have been received over the last'))
+
+    -- datapoints come back for both metrics
+    lma_alarm.add_value(next_time(), 'cpu_idle', 20)
+    lma_alarm.add_value(next_time(), 'cpu_idle', 20)
+    lma_alarm.add_value(next_time(), 'cpu_wait', 20)
+    lma_alarm.add_value(next_time(), 'cpu_wait', 20)
+    local state, result = lma_alarm.evaluate(next_time()) -- 240s we can evaluate
+    assertEquals(state, consts.OKAY)
+    assertEquals(#result, 0)
+end
+
+function TestLMAAlarm:test_rules_logical_op_and()
+    lma_alarm.load_alarm(alarms[5])
+    local cpu_critical_and = lma_alarm.get_alarm('CPU_Critical_Controller_AND')
+    lma_alarm.add_value(next_time(1), 'cpu_wait', 30)
+    lma_alarm.add_value(next_time(1), 'cpu_wait', 30)
+    lma_alarm.add_value(next_time(1), 'cpu_wait', 35)
+
+    lma_alarm.add_value(next_time(2), 'cpu_idle', 0)
+    lma_alarm.add_value(next_time(2), 'cpu_idle', 1)
+    lma_alarm.add_value(next_time(2), 'cpu_idle', 7)
+    lma_alarm.add_value(next_time(2), 'cpu_idle', 2)
+    local state, result = cpu_critical_and:evaluate(current_time)
+    assertEquals(state, consts.CRIT)
+    assertEquals(#result, 2) -- both rules match: avg(cpu_wait)>=30 and avg(cpu_idle)<=15
+
+    lma_alarm.add_value(next_time(120), 'cpu_idle', 70)
+    lma_alarm.add_value(next_time(), 'cpu_idle', 70)
+    lma_alarm.add_value(next_time(), 'cpu_idle', 70)
+    lma_alarm.add_value(next_time(), 'cpu_wait', 40)
+    lma_alarm.add_value(next_time(), 'cpu_wait', 38)
+    local state, result = cpu_critical_and:evaluate(current_time)
+    assertEquals(state, consts.OKAY)
+    assertEquals(#result, 0) -- avg(cpu_wait)>=30 matches but not avg(cpu_idle)<=15
+
+    lma_alarm.add_value(next_time(200), 'cpu_idle', 70)
+    lma_alarm.add_value(next_time(), 'cpu_idle', 70)
+    local state, result = cpu_critical_and:evaluate(current_time)
+    assertEquals(state, consts.UNKW)
+    assertEquals(#result, 1) -- no data for avg(cpu_wait)>=30 and avg(cpu_idle)<=3 doesn't match
+
+    next_time(240) -- spend enough time to invalidate datapoints of cpu_wait
+    lma_alarm.add_value(current_time, 'cpu_idle', 2)
+    lma_alarm.add_value(next_time(), 'cpu_idle', 2)
+    local state, result = cpu_critical_and:evaluate(current_time)
+    assertEquals(state, consts.UNKW)
+    assertEquals(#result, 2) -- no data for avg(cpu_wait)>=30 and avg(cpu_idle)<=3 matches
+end
+
+function TestLMAAlarm:test_rules_logical_op_or_one_alert()
+    lma_alarm.load_alarms(alarms)
+    local cpu_warn_and = lma_alarm.get_alarm('CPU_Warning_Controller')
+    lma_alarm.add_value(next_time(), 'cpu_wait', 15)
+    lma_alarm.add_value(next_time(), 'cpu_wait', 10)
+    lma_alarm.add_value(next_time(), 'cpu_wait', 20)
+
+    lma_alarm.add_value(next_time(), 'cpu_idle', 11)
+    lma_alarm.add_value(next_time(), 'cpu_idle', 8)
+    lma_alarm.add_value(next_time(), 'cpu_idle', 7)
+    local state, result = cpu_warn_and:evaluate(current_time)
+    assertEquals(state, consts.WARN)
+    assertEquals(#result, 1) -- avg(cpu_wait) IS NOT >=25 and avg(cpu_idle)<=2
+end
+
+function TestLMAAlarm:test_rules_logical_op_or_all_alert()
+    lma_alarm.load_alarm(alarms[4])
+    local cpu_warn_and = lma_alarm.get_alarm('CPU_Warning_Controller')
+    lma_alarm.add_value(next_time(), 'cpu_wait', 35)
+    lma_alarm.add_value(next_time(), 'cpu_wait', 20)
+    lma_alarm.add_value(next_time(), 'cpu_wait', 32)
+
+    lma_alarm.add_value(next_time(), 'cpu_idle', 3)
+    lma_alarm.add_value(next_time(), 'cpu_idle', 2.5)
+    lma_alarm.add_value(next_time(), 'cpu_idle', 1.5)
+    local state, result = cpu_warn_and:evaluate(current_time)
+    assertEquals(state, consts.WARN)
+    assertEquals(#result, 2) -- avg(cpu_wait) >=25 and avg(cpu_idle)<=3
+end
+
+function TestLMAAlarm:test_min()
+    lma_alarm.load_alarms(alarms)
+    lma_alarm.add_value(next_time(), 'rabbitmq_messages', 50)
+    lma_alarm.add_value(next_time(), 'rabbitmq_messages', 100)
+    lma_alarm.add_value(next_time(), 'rabbitmq_messages', 75)
+    lma_alarm.add_value(next_time(), 'rabbitmq_messages', 81)
+    local rabbitmq_critical = lma_alarm.get_alarm('RabbitMQ_Critical')
+    assertEquals(rabbitmq_critical.severity, consts.CRIT)
+    local state_crit, result = rabbitmq_critical:evaluate(current_time)
+    assertEquals(state_crit, consts.CRIT) -- min()>=50
+    assertEquals(#result, 1)
+    assertEquals(result[1].value, 50)
+end
+
+ function TestLMAAlarm:test_max()
+    local a = {
+        name = 'foo alert',
+        description = 'foo description',
+        trigger = {
+            rules = {
+                {
+                    metric = 'rabbitmq_queue_messages',
+                    window = 30,
+                    periods = 2,
+                    ['function'] = 'max',
+                    threshold = 200,
+                    relational_operator = '>=',
+                },
+            },
+        },
+        severity = 'warning',
+    }
+     lma_alarm.load_alarm(a)
+     lma_alarm.add_value(next_time(), 'rabbitmq_queue_messages', 0, {queue = 'queue-XX', hostname = 'node-x'})
+     lma_alarm.add_value(next_time(), 'rabbitmq_queue_messages', 260, {queue = 'queue-XX', hostname = 'node-x'})
+     lma_alarm.add_value(next_time(), 'rabbitmq_queue_messages', 200, {queue = 'queue-XX', hostname = 'node-x'})
+     lma_alarm.add_value(next_time(), 'rabbitmq_queue_messages', 152, {queue = 'queue-XX', hostname = 'node-x'})
+     lma_alarm.add_value(next_time(), 'rabbitmq_queue_messages', 152, {queue = 'nova', hostname = 'node-x'})
+     lma_alarm.add_value(next_time(), 'rabbitmq_queue_messages', 532, {queue = 'nova', hostname = 'node-x'})
+     local state_warn, result = lma_alarm.evaluate(current_time)
+     assertEquals(state_warn, consts.WARN)
+     assertEquals(#result, 1)
+     assertEquals(result[1].alert['function'], 'max')
+     assertEquals(result[1].alert.value, 532)
+ end
+
+function TestLMAAlarm:test_diff()
+    lma_alarm.load_alarms(alarms)
+    local errors_5xx = lma_alarm.get_alarm('Backend_errors_5xx')
+    assertEquals(errors_5xx.severity, consts.WARN)
+
+    -- with 5xx errors
+    lma_alarm.add_value(next_time(), 'haproxy_backend_response_5xx', 1)
+    lma_alarm.add_value(next_time(), 'haproxy_backend_response_5xx', 11) -- +10s
+    lma_alarm.add_value(next_time(), 'haproxy_backend_response_5xx', 21) -- +10s
+    local state, result = errors_5xx:evaluate(current_time)
+    assertEquals(state, consts.WARN)
+    assertEquals(#result, 1)
+    assertEquals(result[1].value, 20)
+
+    -- without 5xx errors
+    lma_alarm.add_value(next_time(), 'haproxy_backend_response_5xx', 21)
+    lma_alarm.add_value(next_time(), 'haproxy_backend_response_5xx', 21) -- +10s
+    lma_alarm.add_value(next_time(), 'haproxy_backend_response_5xx', 21) -- +10s
+    local state, result = errors_5xx:evaluate(current_time)
+    assertEquals(state, consts.OKAY)
+    assertEquals(#result, 0)
+
+    -- missing data
+    local state, result = errors_5xx:evaluate(next_time(60))
+    assertEquals(state, consts.UNKW)
+end
+
+function TestLMAAlarm:test_roc()
+    lma_alarm.load_alarms(alarms)
+    local errors_logs = lma_alarm.get_alarm('nova_logs_errors_rate')
+    assertEquals(errors_logs.severity, consts.WARN)
+    local m_values = {}
+
+    -- Test one error in the current window
+    m_values = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  -- historical window 1
+                 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  -- historical window 2
+                 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  -- historical window 3
+                 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  -- historical window 4
+                 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  -- previous window
+                 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0 } -- current window
+    for _,v in pairs(m_values) do
+        lma_alarm.add_value(next_time(5), 'log_messages', v, {service = 'nova', level = 'error'})
+    end
+    local state, _ = errors_logs:evaluate(current_time)
+    assertEquals(state, consts.WARN)
+
+    -- Test one error in the historical window
+    m_values = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  -- historical window 1
+                 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  -- historical window 2
+                 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,  -- historical window 3
+                 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  -- historical window 4
+                 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  -- previous window
+                 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 } -- current window
+    for _,v in pairs(m_values) do
+        lma_alarm.add_value(next_time(5), 'log_messages', v, {service = 'nova', level = 'error'})
+    end
+    local state, _ = errors_logs:evaluate(current_time)
+    assertEquals(state, consts.OKAY)
+
+    -- with rate errors
+    m_values = { 1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 2,  -- historical window 1
+                 1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 2,  -- historical window 2
+                 1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 2,  -- historical window 3
+                 1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 2,  -- historical window 4
+                 1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 2,  -- previous window
+                 1, 2, 1, 1, 1, 2, 1, 5, 5, 7, 1, 7 } -- current window
+    for _,v in pairs(m_values) do
+        lma_alarm.add_value(next_time(5), 'log_messages', v, {service = 'nova', level = 'error'})
+    end
+    local state, _ = errors_logs:evaluate(current_time)
+    assertEquals(state, consts.WARN)
+
+    -- without rate errors
+    m_values = { 1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 2,  -- historical window 1
+                 1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 2,  -- historical window 2
+                 1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 2,  -- historical window 3
+                 1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 2,  -- historical window 4
+                 1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 2,  -- previous window
+                 1, 2, 1, 1, 1, 2, 1, 3, 4, 3, 3, 4 } -- current window
+    for _,v in pairs(m_values) do
+        lma_alarm.add_value(next_time(5), 'log_messages', v, {service = 'nova', level = 'error'})
+    end
+    local state, _ = errors_logs:evaluate(current_time)
+    assertEquals(state, consts.OKAY)
+end
+
+function TestLMAAlarm:test_alarm_first_match()
+    lma_alarm.load_alarm(alarms[3]) --  cpu critical (window 240s)
+    lma_alarm.load_alarm(alarms[4]) --  cpu warning (window 120s)
+    lma_alarm.set_start_time(current_time)
+
+    next_time(240) -- both alarms can now be evaluated
+    lma_alarm.add_value(next_time(), 'cpu_idle', 15)
+    lma_alarm.add_value(next_time(), 'cpu_wait', 9)
+    local state, result = lma_alarm.evaluate(next_time())
+    assertEquals(state, consts.WARN) -- 2nd alarm raised
+    assertEquals(#result, 1) -- cpu_idle matches (<= 15) and cpu_wait doesn't match (>= 25)
+
+    next_time(240) -- both alarms can now be evaluated with new datapoints
+    lma_alarm.add_value(next_time(), 'cpu_wait', 15)
+    lma_alarm.add_value(next_time(), 'cpu_idle', 4)
+    local state, result = lma_alarm.evaluate(next_time())
+    assertEquals(state, consts.CRIT) -- first alarm raised
+    assertEquals(#result, 1) -- cpu_idle matches (<= 5) and cpu_wait doesn't match (>= 20)
+end
+
+function TestLMAAlarm:test_rules_fields()
+    lma_alarm.load_alarm(alarms[1]) -- FS_all_no_field
+    lma_alarm.load_alarm(alarms[6]) -- FS_root
+    lma_alarm.set_start_time(current_time)
+
+    local t = next_time()
+    lma_alarm.add_value(t, 'fs_space_percent_free', 6, {fs = '/'})
+    lma_alarm.add_value(t, 'fs_space_percent_free', 6 )
+    lma_alarm.add_value(next_time(), 'fs_space_percent_free', 12, {fs = '/'})
+    lma_alarm.add_value(next_time(), 'fs_space_percent_free', 17 )
+    lma_alarm.add_value(next_time(), 'fs_space_percent_free', 6, {fs = '/'})
+    lma_alarm.add_value(next_time(), 'fs_space_percent_free', 6, {fs = 'foo'})
+    lma_alarm.add_value(next_time(), 'fs_space_percent_free', 3, {fs = 'foo'})
+    local t = next_time()
+
+    local root_fs = lma_alarm.get_alarm('FS_root')
+    local state, result = root_fs:evaluate(t)
+    assertEquals(#result, 1)
+    assertItemsEquals(result[1].fields, {fs='/'})
+    assertEquals(result[1].value, 8)
+
+
+    local root_fs = lma_alarm.get_alarm('FS_all_no_field')
+    local state, result = root_fs:evaluate(t)
+    assertEquals(#result, 1)
+
+    assertItemsEquals(result[1].fields, {})
+    assertEquals(result[1].value, 8)
+end
+
+function TestLMAAlarm:test_last_fct()
+    lma_alarm.load_alarm(alarms[9])
+    lma_alarm.set_start_time(current_time)
+
+    lma_alarm.add_value(next_time(), 'foo_heartbeat', 1)
+    lma_alarm.add_value(next_time(), 'foo_heartbeat', 1)
+    lma_alarm.add_value(next_time(), 'foo_heartbeat', 0)
+    lma_alarm.add_value(next_time(), 'foo_heartbeat', 1)
+    lma_alarm.add_value(next_time(), 'foo_heartbeat', 0)
+    local state, result = lma_alarm.evaluate(next_time())
+    assertEquals(state, consts.DOWN)
+    next_time(61)
+    local state, result = lma_alarm.evaluate(next_time())
+    assertEquals(state, consts.UNKW)
+    lma_alarm.add_value(next_time(), 'foo_heartbeat', 0)
+    local state, result = lma_alarm.evaluate(next_time())
+    assertEquals(state, consts.DOWN)
+    lma_alarm.add_value(next_time(), 'foo_heartbeat', 1)
+    local state, result = lma_alarm.evaluate(next_time())
+    assertEquals(state, consts.OKAY)
+end
+
+function TestLMAAlarm:test_rule_with_multivalue()
+    lma_alarm.load_alarm(afd_on_multivalue)
+    lma_alarm.set_start_time(current_time)
+
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 0.4, foo = 1}, {http_method = 'POST'})
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 0.2, foo = 1}, {http_method = 'POST'})
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 6, foo = 1}, {http_method = 'POST'})
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 3, foo = 1}, {http_method = 'POST'})
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 4, foo = 1}, {http_method = 'POST'})
+    local state, result = lma_alarm.evaluate(next_time()) -- window 60 second
+    assertEquals(state, consts.WARN)
+    assertItemsEquals(result[1].alert.fields, {http_method='POST'})
+    assertEquals(result[1].alert.value, 6)
+end
+
+function TestLMAAlarm:test_nocrash_missing_value_with_multivalue_metric()
+    lma_alarm.load_alarm(missing_value_afd_on_multivalue)
+    lma_alarm.set_start_time(current_time)
+
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 0.4, foo = 1}, {http_method = 'POST'})
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 0.2, foo = 1}, {http_method = 'POST'})
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 6, foo = 1}, {http_method = 'POST'})
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 3, foo = 1}, {http_method = 'POST'})
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 4, foo = 1}, {http_method = 'POST'})
+    local state, result = lma_alarm.evaluate(next_time()) -- window 60 second
+    assertEquals(state, consts.UNKW)
+end
+
+function TestLMAAlarm:test_complex_field_matching_alarm_trigger()
+    local alert = {
+        name = 'keystone-high-http-response-times',
+        description = 'The 90 percentile response time for Keystone is too high',
+        enabled = true,
+        trigger = {
+            rules = {
+                {
+                    metric = 'http_response_times',
+                    window = 30,
+                    periods = 2,
+                    ['function'] = 'max',
+                    threshold = 5,
+                    fields = { http_method = 'POST || GET',
+                               http_status = '2xx || ==3xx'},
+                    relational_operator = '>=',
+                    value = 'upper_90',
+                },
+            },
+        },
+        severity = 'warning',
+    }
+    lma_alarm.load_alarm(alert)
+    lma_alarm.set_start_time(current_time)
+
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 0.4, foo = 1}, {http_method = 'POST', http_status = '2xx'})
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 0.2, foo = 1}, {http_method = 'POST', http_status = '2xx'})
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 6, foo = 1}, {http_method = 'POST', http_status = '3xx'})
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 999, foo = 1}, {http_method = 'POST', http_status = '5xx'})
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 3, foo = 1}, {http_method = 'GET', http_status = '2xx'})
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 4, foo = 1}, {http_method = 'POST', http_status = '2xx'})
+    local state, result = lma_alarm.evaluate(next_time()) -- window 60 second
+    assertEquals(state, consts.WARN)
+    assertEquals(result[1].alert.value, 6) -- the max
+    assertItemsEquals(result[1].alert.fields, {http_method='POST || GET', http_status='2xx || ==3xx'})
+end
+
+function TestLMAAlarm:test_complex_field_matching_alarm_ok()
+    local alert = {
+        name = 'keystone-high-http-response-times',
+        description = 'The 90 percentile response time for Keystone is too high',
+        enabled = true,
+        trigger = {
+            rules = {
+                {
+                    metric = 'http_response_times',
+                    window = 30,
+                    periods = 2,
+                    ['function'] = 'avg',
+                    threshold = 5,
+                    fields = { http_method = 'POST || GET',
+                               http_status = '2xx || 3xx'},
+                    relational_operator = '>=',
+                    value = 'upper_90',
+                },
+            },
+        },
+        severity = 'warning',
+    }
+
+    lma_alarm.load_alarm(alert)
+    lma_alarm.set_start_time(current_time)
+
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 0.4, foo = 1}, {http_method = 'POST', http_status = '2xx'})
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 0.2, foo = 1}, {http_method = 'POST', http_status = '2xx'})
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 6, foo = 1}, {http_method = 'POST', http_status = '2xx'})
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 3, foo = 1}, {http_method = 'GET', http_status = '2xx'})
+    lma_alarm.add_value(next_time(), 'http_response_times', {upper_90 = 4, foo = 1}, {http_method = 'POST', http_status = '2xx'})
+    local state, result = lma_alarm.evaluate(next_time()) -- window 60 second
+    assertEquals(state, consts.OKAY)
+end
+
+function TestLMAAlarm:test_group_by_required_field()
+    local alert = {
+        name = 'foo-alarm',
+        description = 'foo description',
+        enabled = true,
+        trigger = {
+            rules = {
+                {
+                    metric = 'foo_metric_name',
+                    window = 30,
+                    periods = 1,
+                    ['function'] = 'avg',
+                    fields = { foo = 'bar', bar = 'foo' },
+                    group_by = {'fs'},
+                    relational_operator = '<=',
+                    threshold = 5,
+                },
+            },
+        },
+        severity = 'warning',
+    }
+    lma_alarm.load_alarm(alert)
+    local fields = lma_alarm.get_metric_fields('foo_metric_name')
+    assertItemsEquals(fields, { "fs", "foo", "bar" })
+
+    local fields = lma_alarm.get_metric_fields('non_existant_metric')
+    assertItemsEquals(fields, {})
+end
+
+function TestLMAAlarm:test_group_by_one_field()
+    local alert = {
+        name = 'osd-filesystem-warning',
+        description = 'free space is too low',
+        enabled = true,
+        trigger = {
+            rules = {
+                {
+                    metric = 'fs_space_percent_free',
+                    window = 30,
+                    periods = 1,
+                    ['function'] = 'avg',
+                    fields = { fs = '=~ osd%-%d && !~ /var/log' },
+                    group_by = {'fs'},
+                    relational_operator = '<=',
+                    threshold = 5,
+                },
+            },
+        },
+        severity = 'warning',
+    }
+    lma_alarm.load_alarm(alert)
+    lma_alarm.set_start_time(current_time)
+
+    lma_alarm.add_value(next_time(), 'fs_space_percent_free', 5, {fs = 'osd-1'})
+    lma_alarm.add_value(current_time, 'fs_space_percent_free', 4, {fs = 'osd-2'})
+    lma_alarm.add_value(current_time, 'fs_space_percent_free', 80, {fs = 'osd-3'})
+    lma_alarm.add_value(next_time(), 'fs_space_percent_free', 4, {fs = 'osd-1'})
+    lma_alarm.add_value(current_time, 'fs_space_percent_free', 3, {fs = 'osd-2'})
+    lma_alarm.add_value(current_time, 'fs_space_percent_free', 80, {fs = 'osd-3'})
+    lma_alarm.add_value(next_time(), 'fs_space_percent_free', 4, {fs = 'osd-1'})
+    lma_alarm.add_value(current_time, 'fs_space_percent_free', 2, {fs = 'osd-2'})
+    lma_alarm.add_value(current_time, 'fs_space_percent_free', 80, {fs = 'osd-3'})
+    lma_alarm.add_value(current_time, 'fs_space_percent_free', 1, {fs = '/var/log/osd-3'})
+
+    local state, result = lma_alarm.evaluate(next_time()) -- window 60 second
+    assertEquals(#result, 2)
+    assertEquals(state, consts.WARN)
+
+    next_time(100) -- spend enough time to invalidate datapoints
+    lma_alarm.add_value(next_time(), 'fs_space_percent_free', 50, {fs = 'osd-1'})
+    lma_alarm.add_value(current_time, 'fs_space_percent_free', 50, {fs = 'osd-2'})
+    lma_alarm.add_value(current_time, 'fs_space_percent_free', 50, {fs = 'osd-3'})
+    lma_alarm.add_value(next_time(), 'fs_space_percent_free', 50, {fs = 'osd-1'})
+    lma_alarm.add_value(current_time, 'fs_space_percent_free', 50, {fs = 'osd-2'})
+    lma_alarm.add_value(current_time, 'fs_space_percent_free', 50, {fs = 'osd-3'})
+    local state, result = lma_alarm.evaluate(next_time()) -- window 60 second
+    assertEquals(#result, 0)
+    assertEquals(state, consts.OKAY)
+end
+
+function TestLMAAlarm:test_group_by_several_fields()
+    local alert = {
+        name = 'osd-filesystem-warning',
+        description = 'free space is too low',
+        enabled = true,
+        trigger = {
+            rules = {
+                {
+                    metric = 'fs_space_percent_free',
+                    window = 30,
+                    periods = 1,
+                    ['function'] = 'last',
+                    fields = {},
+                    group_by = {'fs', 'osd'},
+                    relational_operator = '<=',
+                    threshold = 5,
+                },
+            },
+        },
+        severity = 'warning',
+    }
+    lma_alarm.load_alarm(alert)
+    lma_alarm.set_start_time(current_time)
+
+    lma_alarm.add_value(next_time(), 'fs_space_percent_free', 5, {fs = '/foo', osd = '1'})
+    lma_alarm.add_value(current_time, 'fs_space_percent_free', 4, {fs = '/foo', osd = '2'})
+    lma_alarm.add_value(current_time, 'fs_space_percent_free', 80, {fs = '/foo', osd = '3'})
+
+    local state, result = lma_alarm.evaluate(next_time(20))
+    assertEquals(state, consts.WARN)
+    -- one item for {fs = '/foo', osd = '1'} and another one for {fs = '/foo', osd = '2'}
+    assertEquals(#result, 2)
+
+    next_time(100) -- spend enough time to invalidate datapoints
+
+    lma_alarm.add_value(next_time(), 'fs_space_percent_free', 5, {fs = '/foo', osd = '1'})
+    lma_alarm.add_value(current_time, 'fs_space_percent_free', 4, {fs = '/foo', osd = '2'})
+    lma_alarm.add_value(current_time, 'fs_space_percent_free', 80, {fs = '/foo', osd = '3'})
+    lma_alarm.add_value(current_time, 'fs_space_percent_free', 15, {fs = '/bar', osd = '1'})
+    lma_alarm.add_value(current_time, 'fs_space_percent_free', 14, {fs = '/bar', osd = '2'})
+    lma_alarm.add_value(current_time, 'fs_space_percent_free', 2, {fs = '/bar', osd = '3'})
+    local state, result = lma_alarm.evaluate(next_time(20))
+    assertEquals(state, consts.WARN)
+    -- one item for {fs = '/foo', osd = '1'}, another one for {fs = '/foo', osd = '2'}
+    -- and another one for {fs = '/bar', osd = '3'}
+    assertEquals(#result, 3)
+end
+
+function TestLMAAlarm:test_group_by_missing_field_is_unknown()
+    local alert = {
+        name = 'osd-filesystem-warning',
+        description = 'free space is too low',
+        enabled = true,
+        trigger = {
+            rules = {
+                {
+                    metric = 'fs_space_percent_free',
+                    window = 30,
+                    periods = 1,
+                    ['function'] = 'avg',
+                    fields = { fs = '=~ osd%-%d && !~ /var/log' },
+                    group_by = {'fs'},
+                    relational_operator = '<=',
+                    threshold = 5,
+                },
+            },
+        },
+        severity = 'warning',
+    }
+    lma_alarm.load_alarm(alert)
+    lma_alarm.set_start_time(current_time)
+
+    lma_alarm.add_value(next_time(), 'fs_space_percent_free', 5)
+    lma_alarm.add_value(next_time(), 'fs_space_percent_free', 4)
+    lma_alarm.add_value(next_time(), 'fs_space_percent_free', 4)
+
+    local state, result = lma_alarm.evaluate(next_time())
+    assertEquals(#result, 1)
+    assertEquals(state, consts.UNKW)
+end
+
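+-- no_data_policy defines what gets reported when no datapoint matches the alarm
+-- rules: 'okay' and 'critical' force the corresponding status, while 'skip'
+-- suppresses the evaluation entirely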
+function TestLMAAlarm:test_no_data_policy_okay()
+    local alarm = {
+        name = 'foo-alarm',
+        description = 'foo description',
+        enabled = true,
+        trigger = {
+            rules = {
+                {
+                    metric = 'foo_metric_name',
+                    window = 30,
+                    periods = 1,
+                    ['function'] = 'avg',
+                    fields = { foo = 'bar', bar = 'foo' },
+                    group_by = {'fs'},
+                    relational_operator = '<=',
+                    threshold = 5,
+                },
+            },
+        },
+        severity = 'warning',
+        no_data_policy = 'okay',
+    }
+    lma_alarm.load_alarm(alarm)
+    lma_alarm.set_start_time(current_time)
+
+    lma_alarm.add_value(next_time(100), 'another_metric', 5)
+
+    local state, result = lma_alarm.evaluate(next_time())
+    assertEquals(#result, 0)
+    assertEquals(state, consts.OKAY)
+end
+
+function TestLMAAlarm:test_no_data_policy_critical()
+    local alarm = {
+        name = 'foo-alarm',
+        description = 'foo description',
+        enabled = true,
+        trigger = {
+            rules = {
+                {
+                    metric = 'foo_metric_name',
+                    window = 30,
+                    periods = 1,
+                    ['function'] = 'avg',
+                    fields = { foo = 'bar', bar = 'foo' },
+                    group_by = {'fs'},
+                    relational_operator = '<=',
+                    threshold = 5,
+                },
+            },
+        },
+        severity = 'critical',
+        no_data_policy = 'critical',
+    }
+    lma_alarm.load_alarm(alarm)
+    lma_alarm.set_start_time(current_time)
+
+    lma_alarm.add_value(next_time(100), 'another_metric', 5)
+
+    local state, result = lma_alarm.evaluate(next_time())
+    assertEquals(#result, 1)
+    assertEquals(state, consts.CRIT)
+end
+
+function TestLMAAlarm:test_no_data_policy_skip()
+    local alarm = {
+        name = 'foo-alarm',
+        description = 'foo description',
+        enabled = true,
+        trigger = {
+            rules = {
+                {
+                    metric = 'foo_metric_name',
+                    window = 30,
+                    periods = 1,
+                    ['function'] = 'avg',
+                    fields = { foo = 'bar', bar = 'foo' },
+                    group_by = {'fs'},
+                    relational_operator = '<=',
+                    threshold = 5,
+                },
+            },
+        },
+        severity = 'critical',
+        no_data_policy = 'skip',
+    }
+    lma_alarm.load_alarm(alarm)
+    lma_alarm.set_start_time(current_time)
+
+    lma_alarm.add_value(next_time(100), 'another_metric', 5)
+
+    local state, result = lma_alarm.evaluate(next_time())
+    assertEquals(state, nil)
+end
+
+lu = LuaUnit
+lu:setVerbosity( 1 )
+os.exit( lu:run() )
diff --git a/tests/lua/test_gse.lua b/tests/lua/test_gse.lua
new file mode 100644
index 0000000..be892f5
--- /dev/null
+++ b/tests/lua/test_gse.lua
@@ -0,0 +1,230 @@
+-- Copyright 2015 Mirantis, Inc.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+--     http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
+EXPORT_ASSERT_TO_GLOBALS=true
+require('luaunit')
+package.path = package.path .. ";../heka/files/lua/common/?.lua;lua/mocks/?.lua"
+
+-- mock the inject_message() function from the Heka sandbox library
+local last_injected_msg
+function inject_message(msg)
+    last_injected_msg = msg
+end
+
+local cjson = require('cjson')
+local consts = require('gse_constants')
+
+local gse = require('gse')
+local gse_policy = require('gse_policy')
+
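+-- a policy set ordered from most to least severe: the first policy whose trigger
+-- matches determines the cluster status, and 'okay' (no trigger) is the catch-all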
+highest_policy = {
+    gse_policy.new({
+        status='down',
+        trigger={
+            logical_operator='or',
+            rules={{
+                ['function']='count',
+                arguments={'down'},
+                relational_operator='>',
+                threshold=0
+            }}
+        }
+    }),
+    gse_policy.new({
+        status='critical',
+        trigger={
+            logical_operator='or',
+            rules={{
+                ['function']='count',
+                arguments={'critical'},
+                relational_operator='>',
+                threshold=0
+            }}
+        }
+    }),
+    gse_policy.new({
+        status='warning',
+        trigger={
+            logical_operator='or',
+            rules={{
+                ['function']='count',
+                arguments={'warning'},
+                relational_operator='>',
+                threshold=0
+            }}
+        }
+    }),
+    gse_policy.new({status='okay'})
+}
+
+-- define clusters
+gse.add_cluster("heat", {'heat-api', 'controller'}, {'nova', 'glance', 'neutron', 'keystone', 'rabbitmq'}, 'member', highest_policy)
+gse.add_cluster("nova", {'nova-api', 'nova-ec2-api', 'nova-scheduler'}, {'glance', 'neutron', 'keystone', 'rabbitmq'}, 'member', highest_policy)
+gse.add_cluster("neutron", {'neutron-api'}, {'keystone', 'rabbitmq'}, 'member', highest_policy)
+gse.add_cluster("keystone", {'keystone-admin-api', 'keystone-public-api'}, {}, 'member', highest_policy)
+gse.add_cluster("glance", {'glance-api', 'glance-registry-api'}, {'keystone'}, 'member', highest_policy)
+gse.add_cluster("rabbitmq", {'rabbitmq-cluster', 'controller'}, {}, 'hostname', highest_policy)
+
+-- provision facts
+gse.set_member_status("neutron", "neutron-api", consts.DOWN, {{message="All neutron endpoints are down"}}, 'node-1')
+gse.set_member_status('keystone', 'keystone-admin-api', consts.OKAY, {}, 'node-1')
+gse.set_member_status('glance', "glance-api", consts.WARN, {{message="glance-api endpoint is down on node-1"}}, 'node-1')
+gse.set_member_status('glance', "glance-registry-api", consts.DOWN, {{message='glance-registry endpoints are down'}}, 'node-1')
+gse.set_member_status("rabbitmq", 'rabbitmq-cluster', consts.WARN, {{message="1 RabbitMQ node out of 3 is down"}}, 'node-2')
+gse.set_member_status("rabbitmq", 'rabbitmq-cluster', consts.OKAY, {}, 'node-1')
+gse.set_member_status("rabbitmq", 'rabbitmq-cluster', consts.OKAY, {}, 'node-3')
+gse.set_member_status('heat', "heat-api", consts.WARN, {{message='5xx errors detected'}}, 'node-1')
+gse.set_member_status('nova', "nova-api", consts.OKAY, {}, 'node-1')
+gse.set_member_status('nova', "nova-ec2-api", consts.OKAY, {}, 'node-1')
+gse.set_member_status('nova', "nova-scheduler", consts.OKAY, {}, 'node-1')
+gse.set_member_status('rabbitmq', "controller", consts.WARN, {{message='no space left'}}, 'node-1')
+gse.set_member_status('heat', "controller", consts.WARN, {{message='no space left'}}, 'node-1')
+
+for _, v in ipairs({'rabbitmq', 'keystone', 'glance', 'neutron', 'nova', 'heat'}) do
+    gse.resolve_status(v)
+end
+
+TestGse = {}
+
+    function TestGse:test_ordered_clusters()
+        local ordered_clusters = gse.get_ordered_clusters()
+        assertEquals(#ordered_clusters, 6)
+        assertEquals(ordered_clusters[1], 'rabbitmq')
+        assertEquals(ordered_clusters[2], 'keystone')
+        assertEquals(ordered_clusters[3], 'glance')
+        assertEquals(ordered_clusters[4], 'neutron')
+        assertEquals(ordered_clusters[5], 'nova')
+        assertEquals(ordered_clusters[6], 'heat')
+    end
+
+    function TestGse:test_01_rabbitmq_is_warning()
+        local status, alarms = gse.resolve_status('rabbitmq')
+        assertEquals(status, consts.WARN)
+        assertEquals(#alarms, 2)
+        assertEquals(alarms[1].hostname, 'node-1')
+        assertEquals(alarms[1].tags.dependency_name, 'controller')
+        assertEquals(alarms[1].tags.dependency_level, 'direct')
+        assertEquals(alarms[2].hostname, 'node-2')
+        assertEquals(alarms[2].tags.dependency_name, 'rabbitmq-cluster')
+        assertEquals(alarms[2].tags.dependency_level, 'direct')
+    end
+
+    function TestGse:test_02_keystone_is_okay()
+        local status, alarms = gse.resolve_status('keystone')
+        assertEquals(status, consts.OKAY)
+        assertEquals(#alarms, 0)
+    end
+
+    function TestGse:test_03_glance_is_down()
+        local status, alarms = gse.resolve_status('glance')
+        assertEquals(status, consts.DOWN)
+        assertEquals(#alarms, 2)
+        assert(alarms[1].hostname == nil)
+        assertEquals(alarms[1].tags.dependency_name, 'glance-api')
+        assertEquals(alarms[1].tags.dependency_level, 'direct')
+        assert(alarms[2].hostname == nil)
+        assertEquals(alarms[2].tags.dependency_name, 'glance-registry-api')
+        assertEquals(alarms[2].tags.dependency_level, 'direct')
+    end
+
+    function TestGse:test_04_neutron_is_down()
+        local status, alarms = gse.resolve_status('neutron')
+        assertEquals(status, consts.DOWN)
+        assertEquals(#alarms, 3)
+        assertEquals(alarms[1].tags.dependency_name, 'neutron-api')
+        assertEquals(alarms[1].tags.dependency_level, 'direct')
+        assert(alarms[1].hostname == nil)
+        assertEquals(alarms[2].tags.dependency_name, 'rabbitmq')
+        assertEquals(alarms[2].tags.dependency_level, 'hint')
+        assertEquals(alarms[2].hostname, 'node-1')
+        assertEquals(alarms[3].tags.dependency_name, 'rabbitmq')
+        assertEquals(alarms[3].tags.dependency_level, 'hint')
+        assertEquals(alarms[3].hostname, 'node-2')
+    end
+
+    function TestGse:test_05_nova_is_okay()
+        local status, alarms = gse.resolve_status('nova')
+        assertEquals(status, consts.OKAY)
+        assertEquals(#alarms, 0)
+    end
+
+    function TestGse:test_06_heat_is_warning_with_hints()
+        local status, alarms = gse.resolve_status('heat')
+        assertEquals(status, consts.WARN)
+        assertEquals(#alarms, 6)
+        assertEquals(alarms[1].tags.dependency_name, 'controller')
+        assertEquals(alarms[1].tags.dependency_level, 'direct')
+        assert(alarms[1].hostname == nil)
+        assertEquals(alarms[2].tags.dependency_name, 'heat-api')
+        assertEquals(alarms[2].tags.dependency_level, 'direct')
+        assert(alarms[2].hostname == nil)
+        assertEquals(alarms[3].tags.dependency_name, 'glance')
+        assertEquals(alarms[3].tags.dependency_level, 'hint')
+        assert(alarms[3].hostname == nil)
+        assertEquals(alarms[4].tags.dependency_name, 'glance')
+        assertEquals(alarms[4].tags.dependency_level, 'hint')
+        assert(alarms[4].hostname == nil)
+        assertEquals(alarms[5].tags.dependency_name, 'neutron')
+        assertEquals(alarms[5].tags.dependency_level, 'hint')
+        assert(alarms[5].hostname == nil)
+        assertEquals(alarms[6].tags.dependency_name, 'rabbitmq')
+        assertEquals(alarms[6].tags.dependency_level, 'hint')
+        assertEquals(alarms[6].hostname, 'node-2')
+    end
+
+    function TestGse:test_inject_cluster_metric_for_nova()
+        gse.inject_cluster_metric('nova', {key = "val"}, true)
+        local metric = last_injected_msg
+        assertEquals(metric.Type, 'gse_metric')
+        assertEquals(metric.Fields.member, 'nova')
+        assertEquals(metric.Fields.name, 'cluster_status')
+        assertEquals(metric.Fields.value, consts.OKAY)
+        assertEquals(metric.Fields.key, 'val')
+        assertEquals(metric.Payload, '{"alarms":[]}')
+    end
+
+    function TestGse:test_inject_cluster_metric_for_glance()
+        gse.inject_cluster_metric('glance', {key = "val"}, true)
+        local metric = last_injected_msg
+        assertEquals(metric.Type, 'gse_metric')
+        assertEquals(metric.Fields.member, 'glance')
+        assertEquals(metric.Fields.name, 'cluster_status')
+        assertEquals(metric.Fields.value, consts.DOWN)
+        assertEquals(metric.Fields.key, 'val')
+        assert(metric.Payload:match("glance%-registry endpoints are down"))
+        assert(metric.Payload:match("glance%-api endpoint is down on node%-1"))
+    end
+
+    function TestGse:test_inject_cluster_metric_for_heat()
+        gse.inject_cluster_metric('heat', {key = "val"}, true)
+        local metric = last_injected_msg
+        assertEquals(metric.Type, 'gse_metric')
+        assertEquals(metric.Fields.member, 'heat')
+        assertEquals(metric.Fields.name, 'cluster_status')
+        assertEquals(metric.Fields.value, consts.WARN)
+        assertEquals(metric.Fields.key, 'val')
+        assert(metric.Payload:match("5xx errors detected"))
+        assert(metric.Payload:match("1 RabbitMQ node out of 3 is down"))
+    end
+
+    function TestGse:test_reverse_index()
+        local clusters = gse.find_cluster_memberships('controller')
+        assertEquals(#clusters, 2)
+        assertEquals(clusters[1], 'heat')
+        assertEquals(clusters[2], 'rabbitmq')
+    end
+
+lu = LuaUnit
+lu:setVerbosity( 1 )
+os.exit( lu:run() )
diff --git a/tests/lua/test_gse_cluster_policy.lua b/tests/lua/test_gse_cluster_policy.lua
new file mode 100644
index 0000000..a87d8cf
--- /dev/null
+++ b/tests/lua/test_gse_cluster_policy.lua
@@ -0,0 +1,201 @@
+-- Copyright 2015 Mirantis, Inc.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+--     http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
+EXPORT_ASSERT_TO_GLOBALS=true
+require('luaunit')
+package.path = package.path .. ";../heka/files/lua/common/?.lua;lua/mocks/?.lua"
+
+local gse_policy = require('gse_policy')
+local consts = require('gse_constants')
+
+local test_policy_down = gse_policy.new({
+    status='down',
+    trigger={
+        logical_operator='or',
+        rules={{
+            ['function']='count',
+            arguments={'down'},
+            relational_operator='>',
+            threshold=0
+        }}
+    }
+})
+
+local test_policy_critical = gse_policy.new({
+    status='critical',
+    trigger={
+        logical_operator='and',
+        rules={{
+            ['function']='count',
+            arguments={'critical'},
+            relational_operator='>',
+            threshold=0
+        }, {
+            ['function']='percent',
+            arguments={'okay', 'warning'},
+            relational_operator='<',
+            threshold=50
+        }}
+    }
+})
+
+local test_policy_warning = gse_policy.new({
+        status='warning',
+        trigger={
+            logical_operator='or',
+            rules={{
+                ['function']='percent',
+                arguments={'okay'},
+                relational_operator='<',
+                threshold=50
+            }, {
+                ['function']='percent',
+                arguments={'warning'},
+                relational_operator='>',
+                threshold=30
+            }}
+        }
+})
+
+local test_policy_okay = gse_policy.new({
+        status='okay'
+})
+
+TestGsePolicy = {}
+
+    function TestGsePolicy:test_policy_down()
+        assertEquals(test_policy_down.status, consts.DOWN)
+        assertEquals(test_policy_down.logical_op, 'or')
+        assertEquals(#test_policy_down.rules, 1)
+        assertEquals(test_policy_down.rules[1]['function'], 'count')
+        assertEquals(#test_policy_down.rules[1].arguments, 1)
+        assertEquals(test_policy_down.rules[1].arguments[1], consts.DOWN)
+        assertEquals(test_policy_down.rules[1].relational_op, '>')
+        assertEquals(test_policy_down.rules[1].threshold, 0)
+        assertEquals(test_policy_down.require_percent, false)
+    end
+
+    function TestGsePolicy:test_policy_okay_evaluate_true()
+        local facts = {
+            [consts.OKAY]=5,
+            [consts.WARN]=0,
+            [consts.CRIT]=0,
+            [consts.DOWN]=0,
+            [consts.UNKW]=0,
+        }
+        assertEquals(test_policy_okay:evaluate(facts), true)
+    end
+
+    function TestGsePolicy:test_policy_okay_evaluate_true_again()
+        local facts = {
+            [consts.OKAY]=0,
+            [consts.WARN]=0,
+            [consts.CRIT]=0,
+            [consts.DOWN]=0,
+            [consts.UNKW]=0,
+        }
+        assertEquals(test_policy_okay:evaluate(facts), true)
+    end
+
+    function TestGsePolicy:test_policy_warn_evaluate_true()
+        local facts = {
+            [consts.OKAY]=2,
+            [consts.WARN]=2,
+            [consts.CRIT]=0,
+            [consts.DOWN]=0,
+            [consts.UNKW]=1,
+        }
+        assertEquals(test_policy_warning:evaluate(facts), true)
+    end
+
+    function TestGsePolicy:test_policy_warn_evaluate_false()
+        local facts = {
+            [consts.OKAY]=6,
+            [consts.WARN]=2,
+            [consts.CRIT]=0,
+            [consts.DOWN]=0,
+            [consts.UNKW]=1,
+        }
+        assertEquals(test_policy_warning:evaluate(facts), false)
+    end
+
+    function TestGsePolicy:test_policy_warn_evaluate_true_again()
+        local facts = {
+            [consts.OKAY]=3,
+            [consts.WARN]=2,
+            [consts.CRIT]=0,
+            [consts.DOWN]=0,
+            [consts.UNKW]=0,
+        }
+        assertEquals(test_policy_warning:evaluate(facts), true)
+    end
+
+    function TestGsePolicy:test_policy_crit_evaluate_true()
+        local facts = {
+            [consts.OKAY]=1,
+            [consts.WARN]=1,
+            [consts.CRIT]=3,
+            [consts.DOWN]=0,
+            [consts.UNKW]=0,
+        }
+        assertEquals(test_policy_critical:evaluate(facts), true)
+    end
+
+    function TestGsePolicy:test_policy_crit_evaluate_false()
+        local facts = {
+            [consts.OKAY]=4,
+            [consts.WARN]=1,
+            [consts.CRIT]=3,
+            [consts.DOWN]=0,
+            [consts.UNKW]=0,
+        }
+        assertEquals(test_policy_critical:evaluate(facts), false)
+    end
+
+    function TestGsePolicy:test_policy_crit_evaluate_false_again()
+        local facts = {
+            [consts.OKAY]=3,
+            [consts.WARN]=1,
+            [consts.CRIT]=0,
+            [consts.DOWN]=0,
+            [consts.UNKW]=0,
+        }
+        assertEquals(test_policy_critical:evaluate(facts), false)
+    end
+
+    function TestGsePolicy:test_policy_down_evaluate_true()
+        local facts = {
+            [consts.OKAY]=2,
+            [consts.WARN]=2,
+            [consts.CRIT]=0,
+            [consts.DOWN]=1,
+            [consts.UNKW]=0,
+        }
+        assertEquals(test_policy_down:evaluate(facts), true)
+    end
+
+    function TestGsePolicy:test_policy_down_evaluate_false()
+        local facts = {
+            [consts.OKAY]=2,
+            [consts.WARN]=3,
+            [consts.CRIT]=0,
+            [consts.DOWN]=0,
+            [consts.UNKW]=0,
+        }
+        assertEquals(test_policy_down:evaluate(facts), false)
+    end
+
+lu = LuaUnit
+lu:setVerbosity( 1 )
+os.exit( lu:run() )
diff --git a/tests/lua/test_gse_utils.lua b/tests/lua/test_gse_utils.lua
new file mode 100644
index 0000000..c9cbcba
--- /dev/null
+++ b/tests/lua/test_gse_utils.lua
@@ -0,0 +1,37 @@
+-- Copyright 2015 Mirantis, Inc.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+--     http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
+EXPORT_ASSERT_TO_GLOBALS=true
+require('luaunit')
+package.path = package.path .. ";../heka/files/lua/common/?.lua;lua/mocks/?.lua"
+
+local gse_utils = require('gse_utils')
+local consts = require('gse_constants')
+
+TestGseUtils = {}
+
+    function TestGseUtils:test_max_status()
+        local status = gse_utils.max_status(consts.DOWN, consts.WARN)
+        assertEquals(consts.DOWN, status)
+        local status = gse_utils.max_status(consts.OKAY, consts.WARN)
+        assertEquals(consts.WARN, status)
+        local status = gse_utils.max_status(consts.OKAY, consts.DOWN)
+        assertEquals(consts.DOWN, status)
+        local status = gse_utils.max_status(consts.UNKW, consts.DOWN)
+        assertEquals(consts.DOWN, status)
+    end
+
+lu = LuaUnit
+lu:setVerbosity( 1 )
+os.exit( lu:run() )
diff --git a/tests/lua/test_influxdb.lua b/tests/lua/test_influxdb.lua
new file mode 100644
index 0000000..160c6b5
--- /dev/null
+++ b/tests/lua/test_influxdb.lua
@@ -0,0 +1,52 @@
+-- Copyright 2016 Mirantis, Inc.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+--     http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
+EXPORT_ASSERT_TO_GLOBALS=true
+require('luaunit')
+require('os')
+package.path = package.path .. ";../heka/files/lua/common/?.lua;lua/mocks/?.lua"
+
+local influxdb = require('influxdb')
+
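+-- the encoder serializes datapoints with the InfluxDB line protocol; timestamps
+-- are passed in nanoseconds and converted to the configured precision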
+TestInfluxDB = {}
+
+    function TestInfluxDB:test_ms_precision_encoder()
+        local encoder = influxdb.new("ms")
+        assertEquals(encoder:encode_datapoint(1e9 * 1000, 'foo', 1), 'foo value=1.000000 1000000')
+        assertEquals(encoder:encode_datapoint(1e9 * 1000, 'foo', 'bar'), 'foo value="bar" 1000000')
+        assertEquals(encoder:encode_datapoint(1e9 * 1000, 'foo', 'b"ar'), 'foo value="b\\"ar" 1000000')
+        assertEquals(encoder:encode_datapoint(1e9 * 1000, 'foo', 1, {tag2="t2",tag1="t1"}), 'foo,tag1=t1,tag2=t2 value=1.000000 1000000')
+        assertEquals(encoder:encode_datapoint(1e9 * 1000, 'foo', {a=1, b=2}), 'foo a=1.000000,b=2.000000 1000000')
+    end
+
+    function TestInfluxDB:test_second_precision_encoder()
+        local encoder = influxdb.new("s")
+        assertEquals(encoder:encode_datapoint(1e9 * 1000, 'foo', 1), 'foo value=1.000000 1000')
+    end
+
+    function TestInfluxDB:test_us_precision_encoder()
+        local encoder = influxdb.new("us")
+        assertEquals(encoder:encode_datapoint(1e9 * 1000, 'foo', 1), 'foo value=1.000000 1000000000')
+    end
+
+    function TestInfluxDB:test_encoder_with_bad_input()
+        local encoder = influxdb.new()
+        assertEquals(encoder:encode_datapoint(1e9 * 1000, nil, 1), '')
+    end
+
+lu = LuaUnit
+lu:setVerbosity( 1 )
+os.exit( lu:run() )
+
+
diff --git a/tests/lua/test_lma_utils.lua b/tests/lua/test_lma_utils.lua
new file mode 100644
index 0000000..8b6d198
--- /dev/null
+++ b/tests/lua/test_lma_utils.lua
@@ -0,0 +1,96 @@
+-- Copyright 2015 Mirantis, Inc.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+--     http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
+EXPORT_ASSERT_TO_GLOBALS=true
+require('luaunit')
+package.path = package.path .. ";../heka/files/lua/common/?.lua;lua/mocks/?.lua"
+
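+-- mock the Heka sandbox functions inject_message() and inject_payload(); they
+-- raise an error when passed 'fail' so that the safe_* wrappers can be exercised
+-- on both the success and the failure paths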
+function inject_message(msg)
+    if msg == 'fail' then
+        error('fail')
+    end
+end
+
+function inject_payload(payload_type, payload_name, data)
+    if data == 'fail' then
+        error('fail')
+    end
+end
+
+local lma_utils = require('lma_utils')
+
+TestLmaUtils = {}
+
+    function TestLmaUtils:test_safe_json_encode_with_valid_data()
+        local ret = lma_utils.safe_json_encode({})
+        assertEquals(ret, '{}')
+    end
+
+    function TestLmaUtils:test_safe_inject_message_without_error()
+        local ret, msg = lma_utils.safe_inject_message({})
+        assertEquals(ret, 0)
+        assertEquals(msg, nil)
+    end
+
+    function TestLmaUtils:test_safe_inject_message_with_error()
+        local ret, msg = lma_utils.safe_inject_message('fail')
+        assertEquals(ret, -1)
+        assert(msg:match(': fail'))
+    end
+
+    function TestLmaUtils:test_safe_inject_payload_without_error()
+        local ret, msg = lma_utils.safe_inject_payload('txt', 'foo', {})
+        assertEquals(ret, 0)
+        assertEquals(msg, nil)
+    end
+
+    function TestLmaUtils:test_safe_inject_payload_with_error()
+        local ret, msg = lma_utils.safe_inject_payload('txt', 'foo', 'fail')
+        assertEquals(ret, -1)
+        assert(msg:match(': fail'))
+    end
+
+    function TestLmaUtils:test_truncate_with_small_string()
+        local ret = lma_utils.truncate('foo', 10, '<BR/>')
+        assertEquals(ret, 'foo')
+    end
+
+    function TestLmaUtils:test_truncate_with_large_string()
+        local ret = lma_utils.truncate('foo and long string', 10, '<BR/>')
+        assertEquals(ret, 'foo and lo')
+    end
+
+    function TestLmaUtils:test_truncate_with_one_delimiter()
+        local ret = lma_utils.truncate('foo<BR/>longstring', 10, '<BR/>')
+        assertEquals(ret, 'foo')
+    end
+
+    function TestLmaUtils:test_truncate_with_several_delimiters_1()
+        local ret = lma_utils.truncate('foo<BR/>bar<BR/>longstring', 10, '<BR/>')
+        assertEquals(ret, 'foo')
+    end
+
+    function TestLmaUtils:test_truncate_with_several_delimiters_2()
+        local ret = lma_utils.truncate('foo<BR/>ba<BR/>longstring', 10, '<BR/>')
+        assertEquals(ret, 'foo<BR/>ba')
+    end
+
+    function TestLmaUtils:test_truncate_with_several_delimiters_3()
+        local ret = lma_utils.truncate('foo<BR/>ba<BR/>long<BR/>string', 12, '<BR/>')
+        assertEquals(ret, 'foo<BR/>ba')
+    end
+
+lu = LuaUnit
+lu:setVerbosity( 1 )
+os.exit( lu:run() )
diff --git a/tests/lua/test_patterns.lua b/tests/lua/test_patterns.lua
new file mode 100644
index 0000000..86d9507
--- /dev/null
+++ b/tests/lua/test_patterns.lua
@@ -0,0 +1,122 @@
+-- Copyright 2015 Mirantis, Inc.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+--     http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
+EXPORT_ASSERT_TO_GLOBALS=true
+require('luaunit')
+require('os')
+package.path = package.path .. ";../heka/files/lua/common/?.lua;lua/mocks/?.lua"
+
+local patt = require('patterns')
+local l = require('lpeg')
+
+TestPatterns = {}
+
+    function TestPatterns:test_Uuid()
+        assertEquals(patt.Uuid:match('be6876f2-c1e6-42ea-ad95-792a5500f0fa'),
+                                     'be6876f2-c1e6-42ea-ad95-792a5500f0fa')
+        assertEquals(patt.Uuid:match('be6876f2c1e642eaad95792a5500f0fa'),
+                                     'be6876f2-c1e6-42ea-ad95-792a5500f0fa')
+        assertEquals(patt.Uuid:match('ze6876f2c1e642eaad95792a5500f0fa'),
+                                     nil)
+        assertEquals(patt.Uuid:match('be6876f2-c1e642eaad95792a5500f0fa'),
+                                     nil)
+    end
+
+    function TestPatterns:test_Timestamp()
+        -- note that Timestamp:match() returns the number of nanoseconds since
+        -- the Epoch, computed in the local timezone
+        local_epoch = os.time(os.date("!*t",0)) * 1e9
+        assertEquals(patt.Timestamp:match('1970-01-01 00:00:01+00:00'),
+                                          local_epoch + 1e9)
+        assertEquals(patt.Timestamp:match('1970-01-01 00:00:02'),
+                                          local_epoch + 2e9)
+        assertEquals(patt.Timestamp:match('1970-01-01 00:00:03'),
+                                          local_epoch + 3e9)
+        assertEquals(patt.Timestamp:match('1970-01-01T00:00:04-00:00'),
+                                          local_epoch + 4e9)
+        assertEquals(patt.Timestamp:match('1970-01-01 01:00:05+01:00'),
+                                          local_epoch + 5e9)
+        assertEquals(patt.Timestamp:match('1970-01-01 00:00:00.123456+00:00'),
+                                          local_epoch + 0.123456 * 1e9)
+        assertEquals(patt.Timestamp:match('1970-01-01 00:01'),
+                                          nil)
+    end
+
+    function TestPatterns:test_programname()
+        assertEquals(l.C(patt.programname):match('nova-api'), 'nova-api')
+        assertEquals(l.C(patt.programname):match('nova-api foo'), 'nova-api')
+    end
+
+    function TestPatterns:test_anywhere()
+        assertEquals(patt.anywhere(l.C(patt.dash)):match(' - '), '-')
+        assertEquals(patt.anywhere(patt.dash):match(' . '), nil)
+    end
+
+    function TestPatterns:test_openstack()
+        local_epoch = os.time(os.date("!*t",0)) * 1e9
+        assertEquals(patt.openstack:match(
+            '1970-01-01 00:00:02 3434 INFO oslo_service.periodic_task [-] Blabla...'),
+            {Timestamp = local_epoch + 2e9, Pid = '3434', SeverityLabel = 'INFO',
+             PythonModule = 'oslo_service.periodic_task', Message = '[-] Blabla...'})
+    end
+
+    function TestPatterns:test_openstack_request_context()
+        assertEquals(patt.openstack_request_context:match('[-]'), nil)
+        assertEquals(patt.openstack_request_context:match(
+            "[req-4db318af-54c9-466d-b365-fe17fe4adeed - - - - -]"),
+            {RequestId = '4db318af-54c9-466d-b365-fe17fe4adeed'})
+        assertEquals(patt.openstack_request_context:match(
+            "[req-4db318af-54c9-466d-b365-fe17fe4adeed 8206d40abcc3452d8a9c1ea629b4a8d0 112245730b1f4858ab62e3673e1ee9e2 - - -]"),
+            {RequestId = '4db318af-54c9-466d-b365-fe17fe4adeed',
+             UserId = '8206d40a-bcc3-452d-8a9c-1ea629b4a8d0',
+             TenantId = '11224573-0b1f-4858-ab62-e3673e1ee9e2'})
+    end
+
+    function TestPatterns:test_openstack_http()
+        assertEquals(patt.openstack_http:match(
+            '"OPTIONS / HTTP/1.0" status: 200 len: 497 time: 0.0006731'),
+            {http_method = 'OPTIONS', http_url = '/', http_version = '1.0',
+             http_status = '200', http_response_size = 497,
+             http_response_time = 0.0006731})
+        assertEquals(patt.openstack_http:match(
+            'foo "OPTIONS / HTTP/1.0" status: 200 len: 497 time: 0.0006731 bar'),
+            {http_method = 'OPTIONS', http_url = '/', http_version = '1.0',
+             http_status = '200', http_response_size = 497,
+             http_response_time = 0.0006731})
+    end
+
+    function TestPatterns:test_openstack_http_with_extra_space()
+        assertEquals(patt.openstack_http:match(
+            '"OPTIONS / HTTP/1.0" status: 200  len: 497 time: 0.0006731'),
+            {http_method = 'OPTIONS', http_url = '/', http_version = '1.0',
+             http_status = '200', http_response_size = 497,
+             http_response_time = 0.0006731})
+        assertEquals(patt.openstack_http:match(
+            'foo "OPTIONS / HTTP/1.0" status: 200  len: 497 time: 0.0006731 bar'),
+            {http_method = 'OPTIONS', http_url = '/', http_version = '1.0',
+             http_status = '200', http_response_size = 497,
+             http_response_time = 0.0006731})
+    end
+
+    function TestPatterns:test_ip_address()
+        assertEquals(patt.ip_address:match('192.168.1.2'),
+            {ip_address = '192.168.1.2'})
+        assertEquals(patt.ip_address:match('foo 192.168.1.2 bar'),
+            {ip_address = '192.168.1.2'})
+        assertEquals(patt.ip_address:match('192.1688.1.2'), nil)
+    end
+
+lu = LuaUnit
+lu:setVerbosity( 1 )
+os.exit( lu:run() )
diff --git a/tests/lua/test_table_utils.lua b/tests/lua/test_table_utils.lua
new file mode 100644
index 0000000..88a7e90
--- /dev/null
+++ b/tests/lua/test_table_utils.lua
@@ -0,0 +1,86 @@
+-- Copyright 2015 Mirantis, Inc.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+--     http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
+EXPORT_ASSERT_TO_GLOBALS=true
+require('luaunit')
+package.path = package.path .. ";../heka/files/lua/common/?.lua;lua/mocks/?.lua"
+
+local table_utils = require('table_utils')
+
+TestTableUtils = {}
+
+    function TestTableUtils:setUp()
+        self.array = { 'a', 'b', 'c' }
+        self.dict = { c='C', a='A', b='B' }
+    end
+
+    function TestTableUtils:test_item_pos_with_match()
+        assertEquals(table_utils.item_pos('b', self.array), 2)
+    end
+
+    function TestTableUtils:test_item_pos_without_match()
+        assertEquals(table_utils.item_pos('z', self.array), nil)
+    end
+
+    function TestTableUtils:test_item_find_with_match()
+        assertEquals(table_utils.item_find('b', self.array), true)
+    end
+
+    function TestTableUtils:test_item_find_without_match()
+        assertEquals(table_utils.item_find('z', self.array), false)
+    end
+
+    function TestTableUtils:test_deep_copy()
+        local copy = table_utils.deepcopy(self.array)
+        assertEquals(#copy, #self.array)
+        assertEquals(copy[1], self.array[1])
+        assertEquals(copy[2], self.array[2])
+        assertEquals(copy[3], self.array[3])
+        assert(copy ~= self.array)
+    end
+
+    function TestTableUtils:test_orderedPairs()
+        local t = {}
+        for k,v in table_utils.orderedPairs(self.dict) do
+            t[#t+1] = { k=k, v=v }
+        end
+        assertEquals(#t, 3)
+        assertEquals(t[1].k, 'a')
+        assertEquals(t[1].v, 'A')
+        assertEquals(t[2].k, 'b')
+        assertEquals(t[2].v, 'B')
+        assertEquals(t[3].k, 'c')
+        assertEquals(t[3].v, 'C')
+    end
+
+    function TestTableUtils:test_table_equal_with_equal_keys_and_values()
+        assertTrue(table_utils.table_equal({a = 'a', b = 'b'}, {a = 'a', b = 'b'}))
+    end
+
+    function TestTableUtils:test_table_equal_with_nonequal_values()
+        assertFalse(table_utils.table_equal({a = 'a', b = 'b'}, {a = 'a', b = 'c'}))
+    end
+
+    function TestTableUtils:test_table_equal_with_nonequal_keys_1()
+        assertFalse(table_utils.table_equal({a = 'a', b = 'b'}, {a = 'a', c = 'b'}))
+    end
+
+    function TestTableUtils:test_table_equal_with_nonequal_keys_2()
+        assertFalse(table_utils.table_equal({a = 'a', b = 'b'},
+                                            {a = 'a', b = 'b', c = 'c'}))
+    end
+
+lu = LuaUnit
+lu:setVerbosity( 1 )
+os.exit( lu:run() )
diff --git a/tests/lua/test_value_matching.lua b/tests/lua/test_value_matching.lua
new file mode 100644
index 0000000..3142fb5
--- /dev/null
+++ b/tests/lua/test_value_matching.lua
@@ -0,0 +1,229 @@
+-- Copyright 2015 Mirantis, Inc.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+--     http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
+EXPORT_ASSERT_TO_GLOBALS=true
+require('luaunit')
+package.path = package.path .. ";../heka/files/lua/common/?.lua;lua/mocks/?.lua"
+local M = require('value_matching')
+
+TestValueMatching = {}
+
+function TestValueMatching:test_simple_matching()
+    local tests = {
+        {'/var/log',        '/var/log'},
+        {'== /var/log',     '/var/log'},
+        {'==/var/log',      '/var/log'},
+        {'==/var/log ',      '/var/log'},
+        {'==\t/var/log',    '/var/log'},
+        {'=="/var/log"',    '/var/log'},
+        {'== "/var/log"',   '/var/log'},
+        {'== " /var/log"',  ' /var/log'},
+        {'== "/var/log "',  '/var/log '},
+        {'== " /var/log "', ' /var/log '},
+        {8,                 8},
+        {8,                '8'},
+        {"9",               "9"},
+        {"==10",            " 10"},
+        {"10",              10},
+        {"== 10",           " 10"},
+        {"== 10.0",         " 10.0"},
+        {"== -10.01",       " -10.01"},
+        {"== 10 ",          " 10 "},
+        {' <=11',           '-11'},
+        {"!= -12",          42},
+        {"!= 12",           42},
+        {" > 13",            42},
+        {">= 13",           13},
+        {">= -13",           42},
+        {"< 14",            -0},
+        {"<= 14 ",           0},
+        {"<= 14",           "14"},
+    }
+    local r
+    for _, v in ipairs(tests) do
+        local exp, value = v[1], v[2]
+        local m = M.new(exp)
+        r = m:matches(value)
+        assertTrue(r)
+    end
+end
+
+function TestValueMatching:test_simple_not_matching()
+    local tests = {
+        {'/var/log',       '/var/log/mysql'},
+        {'== "/var/log"'   , '/var/log '},
+        {'"/var/log"',     '/var/log '},
+        {'"/var/log "',    '/var/log'},
+        {'nova-api',       'nova-compute'},
+        {'== /var/log',    '/var/log/mysql'},
+        {'==/var/log',     '/var/log/mysql'},
+        {'!=/var/log',     '/var/log'},
+        {'!= /var/log',    '/var/log'},
+        {'>10',            '5'},
+        {'> 10',           '5 '},
+        {' <11',           '11'},
+        {' >=11',          '-11'},
+        {' >=11 && <= 42', '-11'},
+        {' >=11 || == 42', '-11'},
+    }
+
+    for _, v in ipairs(tests) do
+        local exp, value = v[1], v[2]
+        local m = M.new(exp)
+        r = m:matches(value)
+        assertFalse(r)
+    end
+end
+
+function TestValueMatching:test_string_matching()
+    local tests = {
+        {'== "foo.bar"', "foo.bar", true},
+        {'== foo.bar', "foo.bar", true},
+        {'== foo.bar ', "foo.bar", true},
+        {'== foo || bar', "bar", true},
+        {'== foo || bar', "foo", true},
+        {'== foo || bar', "??", false},
+        {'!= foo || != bar', "42", true},
+    }
+
+    for _, v in ipairs(tests) do
+        local exp, value, expected = v[1], v[2], v[3]
+        local m = M.new(exp)
+        r = m:matches(value)
+        assertEquals(r, expected)
+    end
+
+end
+
+function TestValueMatching:test_invalid_expression()
+    local tests = {
+        '&& 1 && 1',
+        ' && 1',
+        '|| == 1',
+        '&& != 12',
+        ' ',
+        '   ',
+        '\t',
+        '',
+        nil,
+    }
+    for _, exp in ipairs(tests) do
+        assertError(M.new, exp)
+    end
+end
+
+function TestValueMatching:test_range_matching()
+    local tests = {
+        {'>= 200 && < 300', 200, true},
+        {'>=200&&<300'    , 200, true},
+        {' >=200&&<300'   , 200, true},
+        {'>= 200 && < 300', 204, true},
+        {'>= 200 && < 300', 300, false},
+        {'>= 200 && < 300', 42,  false},
+        {'>= 200 && < 300', 0,  false},
+    }
+
+    for _, v in ipairs(tests) do
+        local exp, value, expected = v[1], v[2], v[3]
+        local m = M.new(exp)
+        r = m:matches(value)
+        assertEquals(r, expected)
+    end
+end
+
+function TestValueMatching:test_wrong_data()
+    local tests = {
+        {'>= 200 && < 300', "foo", false},
+        {'>= 200 && < 300', ""   , false},
+        {'== 200'         , "bar", false},
+        {'== foo'         , "10" , false},
+        {'!= foo'         , " 10", true},
+    }
+    for _, v in ipairs(tests) do
+        local exp, value, expected = v[1], v[2], v[3]
+        local m = M.new(exp)
+        r = m:matches(value)
+        assertEquals(r, expected)
+    end
+end
+
+function TestValueMatching:test_precedence()
+    local tests = {
+        {'>= 200 && < 300 || >500', "200", true},
+        {'>= 200 && < 300 || >500', "501", true},
+        {'>= 200 && < 300 || >=500', "500", true},
+        {'>400 || >= 200 && < 300', "500", true},
+        {'>=300 && <500 || >= 200 && < 300', "300", true},
+        {'>=300 && <500 || >= 200 && < 300', "500", false},
+    }
+
+    for _, v in ipairs(tests) do
+        local exp, value, expected = v[1], v[2], v[3]
+        local m = M.new(exp)
+        r = m:matches(value)
+        assertEquals(r, expected)
+    end
+end
+
+function TestValueMatching:test_pattern_matching()
+    local tests = {
+        {'=~ /var/lib/ceph/osd/ceph%-%d+', "/var/lib/ceph/osd/ceph-1", true},
+        {'=~ /var/lib/ceph/osd/ceph%-%d+', "/var/lib/ceph/osd/ceph-42", true},
+        {'=~ ^/var/lib/ceph/osd/ceph%-%d+$', "/var/lib/ceph/osd/ceph-42", true},
+        {'=~ "/var/lib/ceph/osd/ceph%-%d+"', "/var/lib/ceph/osd/ceph-42", true},
+        {'=~ "ceph%-%d+"', "/var/lib/ceph/osd/ceph-42", true},
+        {'=~ "/var/lib/ceph/osd/ceph%-%d+$"', "/var/lib/ceph/osd/ceph-42 ", false}, -- trailing space
+        {'=~ /var/lib/ceph/osd/ceph%-%d+', "/var/log", false},
+        {'=~ /var/lib/ceph/osd/ceph%-%d+ || foo', "/var/lib/ceph/osd/ceph-1", true},
+        {'=~ "foo||bar" || foo', "foo||bar", true},
+        {'=~ "foo||bar" || foo', "foo", true},
+        {'=~ "foo&&bar" || foo', "foo&&bar", true},
+        {'=~ "foo&&bar" || foo', "foo", true},
+        {'=~ bar && /var/lib/ceph/osd/ceph%-%d+', "/var/lib/ceph/osd/ceph-1", false},
+        {'=~ -', "-", true},
+        {'=~ %-', "-", true},
+        {'!~ /var/lib/ceph/osd/ceph', "/var/log", true},
+        {'!~ /var/lib/ceph/osd/ceph%-%d+', "/var/log", true},
+        {'!~ .+osd%-%d+', "/var/log", true},
+        {'!~ osd%-%d+', "/var/log", true},
+        --{'=~ [', "[", true},
+    }
+
+    for _, v in ipairs(tests) do
+        local exp, value, expected = v[1], v[2], v[3]
+        local m = M.new(exp)
+        r = m:matches(value)
+        assertEquals(r, expected)
+    end
+end
+
+function TestValueMatching:test_wrong_patterns_never_match()
+    -- These invalid patterns make Lua raise errors such as
+    -- "malformed pattern (missing ']')"; the matcher must treat them as non-matching
+    local tests = {
+        {'=~ [', "[", false},
+        {'!~ [', "[", false},
+    }
+
+    for _, v in ipairs(tests) do
+        local exp, value, expected = v[1], v[2], v[3]
+        local m = M.new(exp)
+        r = m:matches(value)
+        assertEquals(r, expected)
+    end
+end
+
+lu = LuaUnit
+lu:setVerbosity( 1 )
+os.exit( lu:run() )
diff --git a/tests/run_lua_tests.sh b/tests/run_lua_tests.sh
new file mode 100755
index 0000000..7a6b472
--- /dev/null
+++ b/tests/run_lua_tests.sh
@@ -0,0 +1,86 @@
+#!/usr/bin/env bash
+
+## Functions
+
+log_info() {
+    echo "[INFO] $*"
+}
+
+log_err() {
+    echo "[ERROR] $*" >&2
+}
+
+_atexit() {
+    RETVAL=$?
+    trap true INT TERM EXIT
+
+    if [ $RETVAL -ne 0 ]; then
+        log_err "Execution failed"
+    else
+        log_info "Execution successful"
+    fi
+    return $RETVAL
+}
+
+## Main
+
+[ -n "$DEBUG" ] && set -x
+
+LUA_VERSION=$(lua -v 2>&1)
+if [[ $? -ne 0 ]]; then
+    log_err "No lua interpreter present"
+    exit 1
+fi
+if [[ ! $LUA_VERSION =~ "Lua 5.1" ]]; then
+    log_err "Lua version 5.1 is required"
+    exit 1
+fi
+
+lua5.1 -e "require('lpeg')" > /dev/null 2>&1
+if [[ $? -ne 0 ]]; then
+    log_err "lua-lpeg is required (run apt-get install lua-lpeg)"
+    exit 1
+fi
+
+lua5.1 -e "require('cjson')" > /dev/null 2>&1
+if [[ $? -ne 0 ]]; then
+    log_err "lua-cjson is required (run apt-get install lua-cjson)"
+    exit 1
+fi
+
+for pgm in cmake wget curl; do
+    which $pgm > /dev/null 2>&1
+    if [[ $? -ne 0 ]]; then
+        log_err "$pgm is required (run apt-get install $pgm)"
+        exit 1
+    fi
+done
+
+if [[ ! -f /usr/lib/x86_64-linux-gnu/liblua5.1.so ]]; then
+    log_err "package liblua5.1-0-dev is not installed (run apt-get install liblua5.1-0-dev)"
+    exit 1
+fi
+
+trap _atexit INT TERM EXIT
+set -e
+
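+# fetch the third-party Lua modules required by the code under test into the
+# mocks directory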
+curl -s -o lua/mocks/annotation.lua "https://raw.githubusercontent.com/mozilla-services/heka/versions/0.10/sandbox/lua/modules/annotation.lua"
+curl -s -o lua/mocks/anomaly.lua "https://raw.githubusercontent.com/mozilla-services/heka/versions/0.10/sandbox/lua/modules/anomaly.lua"
+curl -s -o lua/mocks/date_time.lua "https://raw.githubusercontent.com/mozilla-services/lua_sandbox/97331863d3e05d25131b786e3e9199e805b9b4ba/modules/date_time.lua"
+curl -s -o lua/mocks/inspect.lua "https://raw.githubusercontent.com/kikito/inspect.lua/master/inspect.lua"
+
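+# build the lua_circular_buffer C module from source, since some of the modules
+# under test depend on it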
+CBUF_COMMIT="bb6dd9f88f148813315b5a660b7e2ba47f958b31"
+CBUF_TARBALL_URL="https://github.com/mozilla-services/lua_circular_buffer/archive/${CBUF_COMMIT}.tar.gz"
+CBUF_DIR="/tmp/lua_circular_buffer-${CBUF_COMMIT}"
+CBUF_SO="${CBUF_DIR}/release/circular_buffer.so"
+if [[ ! -f "${CBUF_SO}" ]]; then
+    rm -rf ${CBUF_DIR}
+    wget -qO - ${CBUF_TARBALL_URL} | tar -zxvf - -C /tmp
+    (cd ${CBUF_DIR} && mkdir release && cd release && cmake -DCMAKE_BUILD_TYPE=release .. && make)
+    cp ${CBUF_SO} ./
+fi
+
+for t in lua/test_*.lua; do
+    lua5.1 $t
+done
+