Use official 0.14.0 release

- 0.14.0 contains the features previously developed in our fork,
  so the forked sources and build artifacts are removed
- the official Docker image has a critical CVE, so we keep building
  our own image and install the release from requirements.txt
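
A quick smoke test for the rebuilt image (a sketch: the image tag is
illustrative, and query monitoring is disabled so that no exporter.cfg
needs to be mounted):

    docker build -t prometheus-es-exporter:0.14.0 .
    docker run --rm -p 9206:9206 \
        prometheus-es-exporter:0.14.0 --query-disable
    curl -s localhost:9206/metrics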

Related-PROD: PRODX-15814
Change-Id: I30bc06e8fecfb046dbdee498011cc9eaed6c6d75
diff --git a/Dockerfile b/Dockerfile
index c21cdf6..70b2e9d 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -1,15 +1,8 @@
 FROM python:3.9.2-alpine3.13
 
 WORKDIR /usr/src/app
-
-COPY setup.py /usr/src/app/
-RUN pip install .
-
-COPY prometheus_es_exporter/*.py /usr/src/app/prometheus_es_exporter/
-RUN pip install -e .
-
-COPY LICENSE /usr/src/app/
-COPY README.md /usr/src/app/
+COPY requirements.txt /usr/src/app/
+RUN pip install -r requirements.txt
 
 EXPOSE 9206
 
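Note: requirements.txt itself is not shown in this diff. A minimal sketch of
what it is expected to pin (the exact contents are an assumption):

    prometheus-es-exporter==0.14.0
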
diff --git a/LICENSE b/LICENSE
deleted file mode 100644
index 83af499..0000000
--- a/LICENSE
+++ /dev/null
@@ -1,21 +0,0 @@
-The MIT License (MIT)
-
-Copyright (c) 2016 Braedon Vickers
-
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
diff --git a/MANIFEST.in b/MANIFEST.in
deleted file mode 100644
index c1a7121..0000000
--- a/MANIFEST.in
+++ /dev/null
@@ -1,2 +0,0 @@
-include LICENSE
-include README.md
diff --git a/README.md b/README.md
deleted file mode 100644
index 2eb4a28..0000000
--- a/README.md
+++ /dev/null
@@ -1,72 +0,0 @@
-Prometheus Elasticsearch Exporter
-====
-This Prometheus exporter collects metrics from queries run on an Elasticsearch cluster's data, and metrics about the cluster itself.
-
-## Query Metrics
-The exporter periodically runs configured queries against the Elasticsearch cluster and exports the results as Prometheus gauge metrics.
-
-Values are parsed out of the Elasticsearch results automatically, with the path through the JSON to the value being used to construct metric names.
-
-Metrics are only extracted from aggregation results, with the exception of the query `hits.total` count (exposed as `hits`) and `took` time (exposed as `took_milliseconds`). The keys of any buckets are converted to labels, rather than being inserted into the metric name. See [tests/test_parser.py](tests/test_parser.py) for all the supported queries/metrics.
-
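-For illustration (exact names are best confirmed against the parser tests),
-the `query_terms` example in [exporter.cfg](exporter.cfg) produces metrics
-along these lines:
-```
-terms_hits
-terms_took_milliseconds
-terms_group1_terms_val_sum_value{group1_terms="<bucket key>"}
-```
-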
-## Cluster Metrics
-The exporter queries the Elasticsearch cluster's `_cluster/health`, `_nodes/stats`, and `_stats` endpoints whenever its metrics endpoint is called, and exports the results as Prometheus gauge metrics.
-
-Endpoint responses are parsed into metrics as generically as possible so that (hopefully) all versions of Elasticsearch (past and future) can be reasonably supported with the same code. This results in less than ideal metrics in some cases - e.g. redundancy between some metrics, no distinction between gauges and counters (everything's a gauge). If you spot something you think can be reasonably improved, let me know via a GitHub issue (or better yet, a PR).
-
-See [tests/test_cluster_health_parser.py](tests/test_cluster_health_parser.py), [tests/test_nodes_stats_parser.py](tests/test_nodes_stats_parser.py), and [tests/test_indices_stats_parser.py](tests/test_indices_stats_parser.py) for examples of responses and the metrics produced.
-
-# Installation
-The exporter requires Python 3 and Pip 3 to be installed.
-
-To install the latest published version via Pip, run:
-```
-> pip3 install prometheus-es-exporter
-```
-Note that you may need to add the start script location (see pip output) to your `PATH`.
-
-# Usage
-Once installed, you can run the exporter with the `prometheus-es-exporter` command.
-
-By default, it will bind to port 9206, query Elasticsearch on `localhost:9200` and run queries configured in a file `exporter.cfg` in the working directory. You can change these defaults as required by passing in options:
-```
-> prometheus-es-exporter -p <port> -e <elasticsearch nodes> -c <path to query config file>
-```
-Run with the `-h` flag to see details on all the available options.
-
-See the provided [exporter.cfg](exporter.cfg) file for query configuration examples and explanation.
-
-# Docker
-Docker images for released versions can be found on Docker Hub (note that no `latest` version is provided):
-```
-> sudo docker pull braedon/prometheus-es-exporter:<version>
-```
-To run a container successfully, you will need to mount a query config file to `/usr/src/app/exporter.cfg` and map container port 9206 to a port on the host. Any options placed after the image name (`prometheus-es-exporter`) will be passed to the process inside the container. For example, you will need to use this to configure the Elasticsearch node(s) using `-e`.
-```
-> sudo docker run --rm --name exporter \
-    -v <path to query config file>:/usr/src/app/exporter.cfg \
-    -p <host port>:9206 \
-    braedon/prometheus-es-exporter:<version> -e <elasticsearch nodes>
-```
-If you don't want to mount the query config file in at run time, you could extend an existing image with your own Dockerfile that copies the config file in at build time.
-
-# Development
-To install directly from the git repo, run the following in the root project directory:
-```
-> pip3 install .
-```
-The exporter can be installed in "editable" mode, using pip's `-e` flag. This allows you to test out changes without having to re-install.
-```
-> pip3 install -e .
-```
-To run tests (as usual, from the root project directory), use:
-```
-> python3 -m unittest
-```
-Note that these tests currently only cover the response parsing functionality - there are no automated system tests as of yet.
-
-To build a docker image directly from the git repo, run the following in the root project directory:
-```
-> sudo docker build -t <your repository name and tag> .
-```
-Send me a PR if you have a change you want to contribute!
diff --git a/build/lib/prometheus_es_exporter/__init__.py b/build/lib/prometheus_es_exporter/__init__.py
deleted file mode 100644
index 010bfa0..0000000
--- a/build/lib/prometheus_es_exporter/__init__.py
+++ /dev/null
@@ -1,438 +0,0 @@
-import argparse
-import configparser
-import json
-import logging
-import re
-import sched
-import signal
-import sys
-import time
-
-from collections import OrderedDict
-from elasticsearch import Elasticsearch
-from elasticsearch.exceptions import ConnectionTimeout, NotFoundError
-from functools import partial
-from jog import JogFormatter
-from prometheus_client import start_http_server, Gauge
-from prometheus_client.core import GaugeMetricFamily, REGISTRY
-
-from prometheus_es_exporter import cluster_health_parser
-from prometheus_es_exporter import indices_stats_parser
-from prometheus_es_exporter import nodes_stats_parser
-from prometheus_es_exporter.parser import parse_response
-
-gauges = {}
-
-metric_invalid_chars = re.compile(r'[^a-zA-Z0-9_:]')
-metric_invalid_start_chars = re.compile(r'^[^a-zA-Z_:]')
-label_invalid_chars = re.compile(r'[^a-zA-Z0-9_]')
-label_invalid_start_chars = re.compile(r'^[^a-zA-Z_]')
-label_start_double_under = re.compile(r'^__+')
-
-
-def format_label_key(label_key):
-    label_key = re.sub(label_invalid_chars, '_', label_key)
-    label_key = re.sub(label_invalid_start_chars, '_', label_key)
-    label_key = re.sub(label_start_double_under, '_', label_key)
-    return label_key
-
-
-def format_label_value(value_list):
-    return '_'.join(value_list)
-
-
-def format_metric_name(name_list):
-    metric = '_'.join(name_list)
-    metric = re.sub(metric_invalid_chars, '_', metric)
-    metric = re.sub(metric_invalid_start_chars, '_', metric)
-    return metric
-
-
-def group_metrics(metrics):
-    metric_dict = {}
-    for (name_list, label_dict, value) in metrics:
-        metric_name = format_metric_name(name_list)
-        label_dict = OrderedDict([(format_label_key(k), format_label_value(v))
-                                  for k, v in label_dict.items()])
-
-        if metric_name not in metric_dict:
-            metric_dict[metric_name] = (tuple(label_dict.keys()), {})
-
-        label_keys = metric_dict[metric_name][0]
-        label_values = tuple([label_dict[key]
-                              for key in label_keys])
-
-        metric_dict[metric_name][1][label_values] = value
-
-    return metric_dict
-
-
-def update_gauges(metrics):
-    metric_dict = group_metrics(metrics)
-
-    for metric_name, (label_keys, value_dict) in metric_dict.items():
-        if metric_name in gauges:
-            (old_label_values_set, gauge) = gauges[metric_name]
-        else:
-            old_label_values_set = set()
-            gauge = Gauge(metric_name, '', label_keys)
-
-        new_label_values_set = set(value_dict.keys())
-
-        for label_values in old_label_values_set - new_label_values_set:
-            gauge.remove(*label_values)
-
-        for label_values, value in value_dict.items():
-            if label_values:
-                gauge.labels(*label_values).set(value)
-            else:
-                gauge.set(value)
-
-        gauges[metric_name] = (new_label_values_set, gauge)
-
-
-def gauge_generator(metrics):
-    metric_dict = group_metrics(metrics)
-
-    for metric_name, (label_keys, value_dict) in metric_dict.items():
-        # If we have label keys we may have multiple different values,
-        # each with their own label values.
-        if label_keys:
-            gauge = GaugeMetricFamily(metric_name, '', labels=label_keys)
-
-            for label_values, value in value_dict.items():
-                gauge.add_metric(label_values, value)
-
-        # No label keys, so we must have only a single value.
-        else:
-            gauge = GaugeMetricFamily(metric_name, '', value=list(value_dict.values())[0])
-
-        yield gauge
-
-
-def zero_gauges(name):
-    for metric_name, values in gauges.items():
-        if name in metric_name:
-            label_values, gauge = values
-            for labels in label_values:
-                if labels:
-                    gauge.labels(*labels).set(0)
-                else:
-                    gauge.set(0)
-
-
-def run_query(es_client, name, indices, query, timeout):
-    try:
-        response = es_client.search(index=indices, body=query, request_timeout=timeout)
-
-        metrics = parse_response(response, [name])
-    except NotFoundError:
-        logging.warn('Indices [%s] not found. Zeroing metrics for query [%s].', indices, query)
-        zero_gauges(name)
-    except Exception:
-        logging.exception('Error while querying indices [%s], query [%s].', indices, query)
-    else:
-        update_gauges(metrics)
-
-
-def collector_up_gauge(name_list, description, succeeded=True):
-    metric_name = format_metric_name(name_list + ['up'])
-    description = 'Did the {} fetch succeed.'.format(description)
-    return GaugeMetricFamily(metric_name, description, value=int(succeeded))
-
-
-class ClusterHealthCollector(object):
-    def __init__(self, es_client, timeout, level):
-        self.metric_name_list = ['es', 'cluster_health']
-        self.description = 'Cluster Health'
-
-        self.es_client = es_client
-        self.timeout = timeout
-        self.level = level
-
-    def collect(self):
-        try:
-            response = self.es_client.cluster.health(level=self.level, request_timeout=self.timeout)
-
-            metrics = cluster_health_parser.parse_response(response, self.metric_name_list)
-        except ConnectionTimeout:
-            logging.warn('Timeout while fetching %s (timeout %ss).', self.description, self.timeout)
-            yield collector_up_gauge(self.metric_name_list, self.description, succeeded=False)
-        except Exception:
-            logging.exception('Error while fetching %s.', self.description)
-            yield collector_up_gauge(self.metric_name_list, self.description, succeeded=False)
-        else:
-            yield from gauge_generator(metrics)
-            yield collector_up_gauge(self.metric_name_list, self.description)
-
-
-class NodesStatsCollector(object):
-    def __init__(self, es_client, timeout, metrics=None):
-        self.metric_name_list = ['es', 'nodes_stats']
-        self.description = 'Nodes Stats'
-
-        self.es_client = es_client
-        self.timeout = timeout
-        self.metrics = metrics
-
-    def collect(self):
-        try:
-            response = self.es_client.nodes.stats(metric=self.metrics, request_timeout=self.timeout)
-
-            metrics = nodes_stats_parser.parse_response(response, self.metric_name_list)
-        except ConnectionTimeout:
-            logging.warn('Timeout while fetching %s (timeout %ss).', self.description, self.timeout)
-            yield collector_up_gauge(self.metric_name_list, self.description, succeeded=False)
-        except Exception:
-            logging.exception('Error while fetching %s.', self.description)
-            yield collector_up_gauge(self.metric_name_list, self.description, succeeded=False)
-        else:
-            yield from gauge_generator(metrics)
-            yield collector_up_gauge(self.metric_name_list, self.description)
-
-
-class IndicesStatsCollector(object):
-    def __init__(self, es_client, timeout, parse_indices=False, metrics=None, fields=None):
-        self.metric_name_list = ['es', 'indices_stats']
-        self.description = 'Indices Stats'
-
-        self.es_client = es_client
-        self.timeout = timeout
-        self.parse_indices = parse_indices
-        self.metrics = metrics
-        self.fields = fields
-
-    def collect(self):
-        try:
-            response = self.es_client.indices.stats(metric=self.metrics, fields=self.fields, request_timeout=self.timeout)
-
-            metrics = indices_stats_parser.parse_response(response, self.parse_indices, self.metric_name_list)
-        except ConnectionTimeout:
-            logging.warn('Timeout while fetching %s (timeout %ss).', self.description, self.timeout)
-            yield collector_up_gauge(self.metric_name_list, self.description, succeeded=False)
-        except Exception:
-            logging.exception('Error while fetching %s.', self.description)
-            yield collector_up_gauge(self.metric_name_list, self.description, succeeded=False)
-        else:
-            yield from gauge_generator(metrics)
-            yield collector_up_gauge(self.metric_name_list, self.description)
-
-
-def run_scheduler(scheduler, interval, func):
-    def scheduled_run(scheduled_time):
-        try:
-            func()
-        except Exception:
-            logging.exception('Error while running scheduled job.')
-
-        current_time = time.monotonic()
-        next_scheduled_time = scheduled_time + interval
-        while next_scheduled_time < current_time:
-            next_scheduled_time += interval
-
-        scheduler.enterabs(
-            next_scheduled_time,
-            1,
-            scheduled_run,
-            (next_scheduled_time,)
-        )
-
-    next_scheduled_time = time.monotonic()
-    scheduler.enterabs(
-        next_scheduled_time,
-        1,
-        scheduled_run,
-        (next_scheduled_time,)
-    )
-
-
-def shutdown():
-    logging.info('Shutting down')
-    sys.exit(1)
-
-
-def signal_handler(signum, frame):
-    shutdown()
-
-
-def csv_choice_arg_parser(choices, arg):
-    metrics = arg.split(',')
-
-    invalid_metrics = []
-    for metric in metrics:
-        if metric not in choices:
-            invalid_metrics.append(metric)
-
-    if invalid_metrics:
-        msg = 'invalid metric(s): "{}" in "{}" (choose from {})' \
-            .format(','.join(invalid_metrics), arg, ','.join(choices))
-        raise argparse.ArgumentTypeError(msg)
-
-    return metrics
-
-
-# https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html#_nodes_statistics
-NODES_STATS_METRICS_OPTIONS = [
-    'indices', 'fs', 'http', 'jvm', 'os',
-    'process', 'thread_pool', 'transport',
-    'breaker', 'discovery', 'ingest'
-]
-nodes_stats_metrics_parser = partial(csv_choice_arg_parser, NODES_STATS_METRICS_OPTIONS)
-
-
-# https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html#node-indices-stats
-INDICES_STATS_METRICS_OPTIONS = [
-    'completion', 'docs', 'fielddata',
-    'flush', 'get', 'indexing', 'merge',
-    'query_cache', 'recovery', 'refresh',
-    'request_cache', 'search', 'segments',
-    'store', 'suggest', 'translog', 'warmer'
-]
-indices_stats_metrics_parser = partial(csv_choice_arg_parser, INDICES_STATS_METRICS_OPTIONS)
-
-
-def indices_stats_fields_parser(arg):
-    if arg == '*':
-        return arg
-    else:
-        return arg.split(',')
-
-
-def main():
-    signal.signal(signal.SIGTERM, signal_handler)
-
-    parser = argparse.ArgumentParser(description='Export ES query results to Prometheus.')
-    parser.add_argument('-e', '--es-cluster', default='localhost',
-                        help='addresses of nodes in an Elasticsearch cluster to run queries on. Nodes should be separated by commas e.g. es1,es2. Ports can be provided if non-standard (9200) e.g. es1:9999 (default: localhost)')
-    parser.add_argument('--ca-certs',
-                        help='path to a CA certificate bundle. Can be absolute, or relative to the current working directory. If not specified, SSL certificate verification is disabled.')
-    parser.add_argument('-p', '--port', type=int, default=9206,
-                        help='port to serve the metrics endpoint on. (default: 9206)')
-    parser.add_argument('--basic-user',
-                        help='User for authentication. (default: no user)')
-    parser.add_argument('--basic-password',
-                        help='Password for authentication. (default: no password)')
-    parser.add_argument('--query-disable', action='store_true',
-                        help='disable query monitoring. Config file does not need to be present if query monitoring is disabled.')
-    parser.add_argument('-c', '--config-file', default='exporter.cfg',
-                        help='path to query config file. Can be absolute, or relative to the current working directory. (default: exporter.cfg)')
-    parser.add_argument('--cluster-health-disable', action='store_true',
-                        help='disable cluster health monitoring.')
-    parser.add_argument('--cluster-health-timeout', type=float, default=10.0,
-                        help='request timeout for cluster health monitoring, in seconds. (default: 10)')
-    parser.add_argument('--cluster-health-level', default='indices', choices=['cluster', 'indices', 'shards'],
-                        help='level of detail for cluster health monitoring. (default: indices)')
-    parser.add_argument('--nodes-stats-disable', action='store_true',
-                        help='disable nodes stats monitoring.')
-    parser.add_argument('--nodes-stats-timeout', type=float, default=10.0,
-                        help='request timeout for nodes stats monitoring, in seconds. (default: 10)')
-    parser.add_argument('--nodes-stats-metrics', type=nodes_stats_metrics_parser,
-                        help='limit nodes stats to specific metrics. Metrics should be separated by commas e.g. indices,fs.')
-    parser.add_argument('--indices-stats-disable', action='store_true',
-                        help='disable indices stats monitoring.')
-    parser.add_argument('--indices-stats-timeout', type=float, default=10.0,
-                        help='request timeout for indices stats monitoring, in seconds. (default: 10)')
-    parser.add_argument('--indices-stats-mode', default='cluster', choices=['cluster', 'indices'],
-                        help='detail mode for indices stats monitoring. (default: cluster)')
-    parser.add_argument('--indices-stats-metrics', type=indices_stats_metrics_parser,
-                        help='limit indices stats to specific metrics. Metrics should be separated by commas e.g. indices,fs.')
-    parser.add_argument('--indices-stats-fields', type=indices_stats_fields_parser,
-                        help='include fielddata info for specific fields. Fields should be separated by commas e.g. field1,field2. Use \'*\' for all.')
-    parser.add_argument('-j', '--json-logging', action='store_true',
-                        help='turn on json logging.')
-    parser.add_argument('--log-level', default='INFO', choices=['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'],
-                        help='detail level to log. (default: INFO)')
-    parser.add_argument('-v', '--verbose', action='store_true',
-                        help='turn on verbose (DEBUG) logging. Overrides --log-level.')
-    args = parser.parse_args()
-
-    if args.basic_user and args.basic_password is None:
-        parser.error('Username provided with no password.')
-    elif args.basic_user is None and args.basic_password:
-        parser.error('Password provided with no username.')
-    elif args.basic_user:
-        http_auth = (args.basic_user, args.basic_password)
-    else:
-        http_auth = None
-
-    log_handler = logging.StreamHandler()
-    log_format = '[%(asctime)s] %(name)s.%(levelname)s %(threadName)s %(message)s'
-    formatter = JogFormatter(log_format) if args.json_logging else logging.Formatter(log_format)
-    log_handler.setFormatter(formatter)
-
-    log_level = getattr(logging, args.log_level)
-    logging.basicConfig(
-        handlers=[log_handler],
-        level=logging.DEBUG if args.verbose else log_level
-    )
-    logging.captureWarnings(True)
-
-    port = args.port
-    es_cluster = args.es_cluster.split(',')
-
-    if args.ca_certs:
-        es_client = Elasticsearch(es_cluster, verify_certs=True, ca_certs=args.ca_certs, http_auth=http_auth)
-    else:
-        es_client = Elasticsearch(es_cluster, verify_certs=False, http_auth=http_auth)
-
-    scheduler = None
-
-    if not args.query_disable:
-        scheduler = sched.scheduler()
-
-        config = configparser.ConfigParser()
-        config.read_file(open(args.config_file))
-
-        query_prefix = 'query_'
-        queries = {}
-        for section in config.sections():
-            if section.startswith(query_prefix):
-                query_name = section[len(query_prefix):]
-                query_interval = config.getfloat(section, 'QueryIntervalSecs', fallback=15)
-                query_timeout = config.getfloat(section, 'QueryTimeoutSecs', fallback=10)
-                query_indices = config.get(section, 'QueryIndices', fallback='_all')
-                query = json.loads(config.get(section, 'QueryJson'))
-
-                queries[query_name] = (query_interval, query_timeout, query_indices, query)
-
-        if queries:
-            for name, (interval, timeout, indices, query) in queries.items():
-                func = partial(run_query, es_client, name, indices, query, timeout)
-                run_scheduler(scheduler, interval, func)
-        else:
-            logging.warn('No queries found in config file %s', args.config_file)
-
-    if not args.cluster_health_disable:
-        REGISTRY.register(ClusterHealthCollector(es_client,
-                                                 args.cluster_health_timeout,
-                                                 args.cluster_health_level))
-
-    if not args.nodes_stats_disable:
-        REGISTRY.register(NodesStatsCollector(es_client,
-                                              args.nodes_stats_timeout,
-                                              metrics=args.nodes_stats_metrics))
-
-    if not args.indices_stats_disable:
-        parse_indices = args.indices_stats_mode == 'indices'
-        REGISTRY.register(IndicesStatsCollector(es_client,
-                                                args.indices_stats_timeout,
-                                                parse_indices=parse_indices,
-                                                metrics=args.indices_stats_metrics,
-                                                fields=args.indices_stats_fields))
-
-    logging.info('Starting server...')
-    start_http_server(port)
-    logging.info('Server started on port %s', port)
-
-    try:
-        if scheduler:
-            scheduler.run()
-        else:
-            while True:
-                time.sleep(5)
-    except KeyboardInterrupt:
-        pass
-
-    shutdown()
diff --git a/build/lib/prometheus_es_exporter/__main__.py b/build/lib/prometheus_es_exporter/__main__.py
deleted file mode 100644
index 4356e3d..0000000
--- a/build/lib/prometheus_es_exporter/__main__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from prometheus_es_exporter import main
-
-if __name__ == '__main__':
-    main()
diff --git a/build/lib/prometheus_es_exporter/cluster_health_parser.py b/build/lib/prometheus_es_exporter/cluster_health_parser.py
deleted file mode 100644
index 7dd1336..0000000
--- a/build/lib/prometheus_es_exporter/cluster_health_parser.py
+++ /dev/null
@@ -1,62 +0,0 @@
-from collections import OrderedDict
-from .utils import merge_dicts_ordered
-
-singular_forms = {
-    'indices': 'index',
-    'shards': 'shard'
-}
-
-
-def parse_block(block, metric=None, labels=None):
-    if metric is None:
-        metric = []
-    if labels is None:
-        labels = OrderedDict()
-
-    result = []
-
-    # Green is 0, so if we add statuses of multiple blocks together
-    # (e.g. all the indices) we don't need to know how many there were
-    # to know if things are good.
-    # i.e. 0 means all green, > 0 means something isn't green.
-    status = block['status']
-    if status == 'green':
-        status_int = 0
-    elif status == 'yellow':
-        status_int = 1
-    elif status == 'red':
-        status_int = 2
-    result.append((metric + ['status'], labels, status_int))
-
-    for key, value in block.items():
-        if isinstance(value, bool):
-            result.append((metric + [key], labels, int(value)))
-        elif isinstance(value, (int, float)):
-            result.append((metric + [key], labels, value))
-        elif isinstance(value, dict):
-            if key in singular_forms:
-                singular_key = singular_forms[key]
-            else:
-                singular_key = key
-            for n_key, n_value in value.items():
-                result.extend(parse_block(n_value, metric=metric + [key], labels=merge_dicts_ordered(labels, {singular_key: [n_key]})))
-
-    return result
-
-
-def parse_response(response, metric=None):
-    if metric is None:
-        metric = []
-
-    result = []
-
-    # Create a shallow copy as we are going to modify it
-    response = response.copy()
-
-    if not response['timed_out']:
-        # Delete this field as we don't want to parse it as metric
-        del response['timed_out']
-
-        result.extend(parse_block(response, metric=metric))
-
-    return result
diff --git a/build/lib/prometheus_es_exporter/indices_stats_parser.py b/build/lib/prometheus_es_exporter/indices_stats_parser.py
deleted file mode 100644
index e660a2e..0000000
--- a/build/lib/prometheus_es_exporter/indices_stats_parser.py
+++ /dev/null
@@ -1,61 +0,0 @@
-from collections import OrderedDict
-from .utils import merge_dicts_ordered
-
-singular_forms = {
-    'fields': 'field'
-}
-excluded_keys = []
-bucket_dict_keys = [
-    'fields'
-]
-bucket_list_keys = {}
-
-
-def parse_block(block, metric=None, labels=None):
-    if metric is None:
-        metric = []
-    if labels is None:
-        labels = OrderedDict()
-
-    result = []
-
-    for key, value in block.items():
-        if key not in excluded_keys:
-            if isinstance(value, bool):
-                result.append((metric + [key], labels, int(value)))
-            elif isinstance(value, (int, float)):
-                result.append((metric + [key], labels, value))
-            elif isinstance(value, dict):
-                if key in bucket_dict_keys:
-                    if key in singular_forms:
-                        singular_key = singular_forms[key]
-                    else:
-                        singular_key = key
-                    for n_key, n_value in value.items():
-                        result.extend(parse_block(n_value, metric=metric + [key], labels=merge_dicts_ordered(labels, {singular_key: [n_key]})))
-                else:
-                    result.extend(parse_block(value, metric=metric + [key], labels=labels))
-            elif isinstance(value, list) and key in bucket_list_keys:
-                bucket_name_key = bucket_list_keys[key]
-
-                for n_value in value:
-                    bucket_name = n_value[bucket_name_key]
-                    result.extend(parse_block(n_value, metric=metric + [key], labels=merge_dicts_ordered(labels, {bucket_name_key: [bucket_name]})))
-
-    return result
-
-
-def parse_response(response, parse_indices=False, metric=None):
-    if metric is None:
-        metric = []
-
-    result = []
-
-    if '_shards' not in response or not response['_shards']['failed']:
-        if parse_indices:
-            for key, value in response['indices'].items():
-                result.extend(parse_block(value, metric=metric, labels=OrderedDict({'index': [key]})))
-        else:
-            result.extend(parse_block(response['_all'], metric=metric, labels=OrderedDict({'index': ['_all']})))
-
-    return result
diff --git a/build/lib/prometheus_es_exporter/nodes_stats_parser.py b/build/lib/prometheus_es_exporter/nodes_stats_parser.py
deleted file mode 100644
index 9f8d4f4..0000000
--- a/build/lib/prometheus_es_exporter/nodes_stats_parser.py
+++ /dev/null
@@ -1,79 +0,0 @@
-from collections import OrderedDict
-from .utils import merge_dicts_ordered
-
-singular_forms = {
-    'pools': 'pool',
-    'collectors': 'collector',
-    'buffer_pools': 'buffer_pool',
-}
-excluded_keys = [
-    'timestamp',
-]
-bucket_dict_keys = [
-    'pools',
-    'collectors',
-    'buffer_pools',
-    'thread_pool',
-]
-bucket_list_keys = {
-    'data': 'path',
-    'devices': 'device_name'
-}
-
-
-def parse_block(block, metric=None, labels=None):
-    if metric is None:
-        metric = []
-    if labels is None:
-        labels = OrderedDict()
-
-    result = []
-
-    for key, value in block.items():
-        if key not in excluded_keys:
-            if isinstance(value, bool):
-                result.append((metric + [key], labels, int(value)))
-            elif isinstance(value, (int, float)):
-                result.append((metric + [key], labels, value))
-            elif isinstance(value, dict):
-                if key in bucket_dict_keys:
-                    if key in singular_forms:
-                        singular_key = singular_forms[key]
-                    else:
-                        singular_key = key
-                    for n_key, n_value in value.items():
-                        result.extend(parse_block(n_value, metric=metric + [key], labels=merge_dicts_ordered(labels, {singular_key: [n_key]})))
-                else:
-                    result.extend(parse_block(value, metric=metric + [key], labels=labels))
-            elif isinstance(value, list) and key in bucket_list_keys:
-                bucket_name_key = bucket_list_keys[key]
-
-                for n_value in value:
-                    bucket_name = n_value[bucket_name_key]
-                    result.extend(parse_block(n_value, metric=metric + [key], labels=merge_dicts_ordered(labels, {bucket_name_key: [bucket_name]})))
-
-    return result
-
-
-def parse_node(node, metric=None, labels=None):
-    if metric is None:
-        metric = []
-    if labels is None:
-        labels = OrderedDict()
-
-    labels = merge_dicts_ordered(labels, node_name=[node['name']])
-
-    return parse_block(node, metric=metric, labels=labels)
-
-
-def parse_response(response, metric=None):
-    if metric is None:
-        metric = []
-
-    result = []
-
-    if '_nodes' not in response or not response['_nodes']['failed']:
-        for key, value in response['nodes'].items():
-            result.extend(parse_node(value, metric=metric, labels=OrderedDict({'node_id': [key]})))
-
-    return result
diff --git a/build/lib/prometheus_es_exporter/parser.py b/build/lib/prometheus_es_exporter/parser.py
deleted file mode 100644
index 9ec59fe..0000000
--- a/build/lib/prometheus_es_exporter/parser.py
+++ /dev/null
@@ -1,90 +0,0 @@
-from collections import OrderedDict
-
-
-def parse_buckets(agg_key, buckets, metric=None, labels=None):
-    if metric is None:
-        metric = []
-    if labels is None:
-        labels = OrderedDict()
-
-    result = []
-
-    for index, bucket in enumerate(buckets):
-        labels_next = labels.copy()
-
-        if 'key' in bucket.keys():
-            bucket_key = str(bucket['key'])
-            if agg_key in labels_next.keys():
-                labels_next[agg_key] = labels_next[agg_key] + [bucket_key]
-            else:
-                labels_next[agg_key] = [bucket_key]
-            del bucket['key']
-        else:
-            bucket_key = 'filter_' + str(index)
-            if agg_key in labels_next.keys():
-                labels_next[agg_key] = labels_next[agg_key] + [bucket_key]
-            else:
-                labels_next[agg_key] = [bucket_key]
-
-        result.extend(parse_agg(bucket_key, bucket, metric=metric, labels=labels_next))
-
-    return result
-
-
-def parse_buckets_fixed(agg_key, buckets, metric=None, labels=None):
-    if metric is None:
-        metric = []
-    if labels is None:
-        labels = OrderedDict()
-
-    result = []
-
-    for bucket_key, bucket in buckets.items():
-        labels_next = labels.copy()
-
-        if agg_key in labels_next.keys():
-            labels_next[agg_key] = labels_next[agg_key] + [bucket_key]
-        else:
-            labels_next[agg_key] = [bucket_key]
-
-        result.extend(parse_agg(bucket_key, bucket, metric=metric, labels=labels_next))
-
-    return result
-
-
-def parse_agg(agg_key, agg, metric=None, labels=None):
-    if metric is None:
-        metric = []
-    if labels is None:
-        labels = OrderedDict()
-
-    result = []
-
-    for key, value in agg.items():
-        if key == 'buckets' and isinstance(value, list):
-            result.extend(parse_buckets(agg_key, value, metric=metric, labels=labels))
-        elif key == 'buckets' and isinstance(value, dict):
-            result.extend(parse_buckets_fixed(agg_key, value, metric=metric, labels=labels))
-        elif isinstance(value, dict):
-            result.extend(parse_agg(key, value, metric=metric + [key], labels=labels))
-        else:
-            result.append((metric + [key], labels, value))
-
-    return result
-
-
-def parse_response(response, metric=None):
-    if metric is None:
-        metric = []
-
-    result = []
-
-    if not response['timed_out']:
-        result.append((metric + ['hits'], {}, response['hits']['total']))
-        result.append((metric + ['took', 'milliseconds'], {}, response['took']))
-
-        if 'aggregations' in response.keys():
-            for key, value in response['aggregations'].items():
-                result.extend(parse_agg(key, value, metric=metric + [key]))
-
-    return result
diff --git a/build/lib/prometheus_es_exporter/utils.py b/build/lib/prometheus_es_exporter/utils.py
deleted file mode 100644
index 06b55cd..0000000
--- a/build/lib/prometheus_es_exporter/utils.py
+++ /dev/null
@@ -1,18 +0,0 @@
-from collections import OrderedDict
-
-
-def merge_dicts_ordered(*dict_args, **extra_entries):
-    """
-    Given an arbitrary number of dictionaries, merge them into a
-    single new dictionary. Later dictionaries take precedence if
-    a key is shared by multiple dictionaries.
-
-    Extra entries can also be provided via kwargs. These entries
-    have the highest precedence.
-    """
-    res = OrderedDict()
-
-    for d in dict_args + (extra_entries,):
-        res.update(d)
-
-    return res
diff --git a/dist/prometheus_es_exporter-0.5.1-py3.6.egg b/dist/prometheus_es_exporter-0.5.1-py3.6.egg
deleted file mode 100644
index d8ee176..0000000
--- a/dist/prometheus_es_exporter-0.5.1-py3.6.egg
+++ /dev/null
Binary files differ
diff --git a/exporter.cfg b/exporter.cfg
deleted file mode 100644
index 48226b9..0000000
--- a/exporter.cfg
+++ /dev/null
@@ -1,47 +0,0 @@
-# This section defines default settings for how queries should be run.
-# All settings can be overridden for any given query in its own section.
-# The values shown in this example are also the fallback values used if
-# a setting is not specified in the DEFAULT section or a query's section.
-[DEFAULT]
-# How often to run queries.
-QueryIntervalSecs = 15
-# How long to wait for a query to return before timing out.
-QueryTimeoutSecs = 10
-# The indices to run the query on.
-# Any way of specifying indices supported by your Elasticsearch version can be used.
-QueryIndices = _all
-
-# Queries are defined in sections beginning with 'query_'.
-# Characters following this prefix will be used as a prefix for all metrics
-# generated for this query.
-[query_all]
-# Settings that are not specified are inherited from the DEFAULT section.
-# The search query to run.
-QueryJson = {
-        "size": 0,
-        "query": {
-            "match_all": {}
-        }
-    }
-
-[query_terms]
-# The DEFAULT settings can be overridden.
-QueryIntervalSecs = 20
-QueryTimeoutSecs = 15
-QueryIndices = <logstash-{now/d}>
-QueryJson = {
-        "size": 0,
-        "query": {
-            "match_all": {}
-        },
-        "aggs": {
-            "group1_terms": {
-                "terms": {"field": "group1"},
-                "aggs": {
-                    "val_sum": {
-                        "sum": {"field": "val"}
-                    }
-                }
-            }
-        }
-    }
diff --git a/prometheus_es_exporter.egg-info/PKG-INFO b/prometheus_es_exporter.egg-info/PKG-INFO
deleted file mode 100644
index 158c716..0000000
--- a/prometheus_es_exporter.egg-info/PKG-INFO
+++ /dev/null
@@ -1,20 +0,0 @@
-Metadata-Version: 1.1
-Name: prometheus-es-exporter
-Version: 0.5.1
-Summary: Elasticsearch query Prometheus exporter
-Home-page: https://github.com/Braedon/prometheus-es-exporter
-Author: Braedon Vickers
-Author-email: braedon.vickers@gmail.com
-License: MIT
-Description: UNKNOWN
-Keywords: monitoring prometheus exporter elasticsearch
-Platform: UNKNOWN
-Classifier: Development Status :: 4 - Beta
-Classifier: Intended Audience :: Developers
-Classifier: Intended Audience :: System Administrators
-Classifier: Topic :: System :: Monitoring
-Classifier: License :: OSI Approved :: MIT License
-Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.4
-Classifier: Programming Language :: Python :: 3.5
-Classifier: Programming Language :: Python :: 3.6
diff --git a/prometheus_es_exporter.egg-info/SOURCES.txt b/prometheus_es_exporter.egg-info/SOURCES.txt
deleted file mode 100644
index 7c0c586..0000000
--- a/prometheus_es_exporter.egg-info/SOURCES.txt
+++ /dev/null
@@ -1,17 +0,0 @@
-LICENSE
-MANIFEST.in
-README.md
-setup.py
-prometheus_es_exporter/__init__.py
-prometheus_es_exporter/__main__.py
-prometheus_es_exporter/cluster_health_parser.py
-prometheus_es_exporter/indices_stats_parser.py
-prometheus_es_exporter/nodes_stats_parser.py
-prometheus_es_exporter/parser.py
-prometheus_es_exporter/utils.py
-prometheus_es_exporter.egg-info/PKG-INFO
-prometheus_es_exporter.egg-info/SOURCES.txt
-prometheus_es_exporter.egg-info/dependency_links.txt
-prometheus_es_exporter.egg-info/entry_points.txt
-prometheus_es_exporter.egg-info/requires.txt
-prometheus_es_exporter.egg-info/top_level.txt
\ No newline at end of file
diff --git a/prometheus_es_exporter.egg-info/dependency_links.txt b/prometheus_es_exporter.egg-info/dependency_links.txt
deleted file mode 100644
index 8b13789..0000000
--- a/prometheus_es_exporter.egg-info/dependency_links.txt
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/prometheus_es_exporter.egg-info/entry_points.txt b/prometheus_es_exporter.egg-info/entry_points.txt
deleted file mode 100644
index 0ddf2ce..0000000
--- a/prometheus_es_exporter.egg-info/entry_points.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-[console_scripts]
-prometheus-es-exporter = prometheus_es_exporter:main
-
diff --git a/prometheus_es_exporter.egg-info/requires.txt b/prometheus_es_exporter.egg-info/requires.txt
deleted file mode 100644
index a471007..0000000
--- a/prometheus_es_exporter.egg-info/requires.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-elasticsearch
-jog
-prometheus-client
diff --git a/prometheus_es_exporter.egg-info/top_level.txt b/prometheus_es_exporter.egg-info/top_level.txt
deleted file mode 100644
index a338379..0000000
--- a/prometheus_es_exporter.egg-info/top_level.txt
+++ /dev/null
@@ -1 +0,0 @@
-prometheus_es_exporter
diff --git a/prometheus_es_exporter/__init__.py b/prometheus_es_exporter/__init__.py
deleted file mode 100644
index 2b5f15b..0000000
--- a/prometheus_es_exporter/__init__.py
+++ /dev/null
@@ -1,456 +0,0 @@
-import argparse
-import configparser
-import json
-import logging
-import re
-import sched
-import signal
-import sys
-import time
-
-from collections import OrderedDict
-from elasticsearch import Elasticsearch
-from elasticsearch.exceptions import (ConnectionTimeout,
-                                      ElasticsearchException, NotFoundError)
-from functools import partial
-from jog import JogFormatter
-from prometheus_client import start_http_server, Gauge
-from prometheus_client.core import GaugeMetricFamily, REGISTRY
-
-from prometheus_es_exporter import cluster_health_parser
-from prometheus_es_exporter import indices_stats_parser
-from prometheus_es_exporter import nodes_stats_parser
-from prometheus_es_exporter.parser import parse_response
-
-gauges = {}
-
-metric_invalid_chars = re.compile(r'[^a-zA-Z0-9_:]')
-metric_invalid_start_chars = re.compile(r'^[^a-zA-Z_:]')
-label_invalid_chars = re.compile(r'[^a-zA-Z0-9_]')
-label_invalid_start_chars = re.compile(r'^[^a-zA-Z_]')
-label_start_double_under = re.compile(r'^__+')
-
-
-def format_label_key(label_key):
-    label_key = re.sub(label_invalid_chars, '_', label_key)
-    label_key = re.sub(label_invalid_start_chars, '_', label_key)
-    label_key = re.sub(label_start_double_under, '_', label_key)
-    return label_key
-
-
-def format_label_value(value_list):
-    return '_'.join(value_list)
-
-
-def format_metric_name(name_list):
-    metric = '_'.join(name_list)
-    metric = re.sub(metric_invalid_chars, '_', metric)
-    metric = re.sub(metric_invalid_start_chars, '_', metric)
-    return metric
-
-
-def group_metrics(metrics):
-    metric_dict = {}
-    for (name_list, label_dict, value) in metrics:
-        metric_name = format_metric_name(name_list)
-        label_dict = OrderedDict([(format_label_key(k), format_label_value(v))
-                                  for k, v in label_dict.items()])
-
-        if metric_name not in metric_dict:
-            metric_dict[metric_name] = (tuple(label_dict.keys()), {})
-
-        label_keys = metric_dict[metric_name][0]
-        label_values = tuple([label_dict[key]
-                              for key in label_keys])
-
-        metric_dict[metric_name][1][label_values] = value
-
-    return metric_dict
-
-
-def unregister_collector_by_metric(registry, metric_name):
-    collector = registry._names_to_collectors[metric_name]
-    registry.unregister(collector)
-
-
-def update_gauges(metrics):
-    metric_dict = group_metrics(metrics)
-
-    for metric_name, (label_keys, value_dict) in metric_dict.items():
-        if metric_name in gauges:
-            (old_label_values_set, gauge) = gauges[metric_name]
-        else:
-            old_label_values_set = set()
-            try:
-                gauge = Gauge(metric_name, '', label_keys, registry=REGISTRY)
-            except ValueError:
-                unregister_collector_by_metric(REGISTRY, metric_name)
-                gauge = Gauge(metric_name, '', label_keys, registry=REGISTRY)
-
-        new_label_values_set = set(value_dict.keys())
-
-        for label_values in old_label_values_set - new_label_values_set:
-            gauge.remove(*label_values)
-
-        for label_values, value in value_dict.items():
-            if label_values:
-                gauge.labels(*label_values).set(value)
-            else:
-                gauge.set(value)
-
-        gauges[metric_name] = (new_label_values_set, gauge)
-
-
-def gauge_generator(metrics):
-    metric_dict = group_metrics(metrics)
-
-    for metric_name, (label_keys, value_dict) in metric_dict.items():
-        # If we have label keys we may have multiple different values,
-        # each with their own label values.
-        if label_keys:
-            gauge = GaugeMetricFamily(metric_name, '', labels=label_keys)
-
-            for label_values, value in value_dict.items():
-                gauge.add_metric(label_values, value)
-
-        # No label keys, so we must have only a single value.
-        else:
-            gauge = GaugeMetricFamily(metric_name, '', value=list(value_dict.values())[0])
-
-        yield gauge
-
-
-def zero_gauges(query_name):
-    for metric_name, values in gauges.items():
-        if metric_name.startswith(query_name):
-            label_values, gauge = values
-            for labels in label_values:
-                if labels:
-                    gauge.labels(*labels).set(0)
-                else:
-                    gauge.set(0)
-
-
-def drop_gauges(query_name):
-    # Take a copy of the keys, as entries are deleted during iteration.
-    for metric_name in list(gauges.keys()):
-        if metric_name.startswith(query_name):
-            del gauges[metric_name]
-
-
-def run_query(es_client, name, indices, query, timeout):
-    try:
-        response = es_client.search(index=indices, body=query, request_timeout=timeout)
-
-        metrics = parse_response(response, [name])
-    except ElasticsearchException as e:
-        if isinstance(e, NotFoundError):
-            logging.warn('Indices [%s] not found. Zeroing metrics for query [%s].', indices, name)
-            zero_gauges(name)
-        else:
-            logging.exception('Error while querying indices [%s], query [%s]. Dropping related metrics.', indices, query)
-            drop_gauges(name)
-        return
-    update_gauges(metrics)
-
-
-def collector_up_gauge(name_list, description, succeeded=True):
-    metric_name = format_metric_name(name_list + ['up'])
-    description = 'Did the {} fetch succeed.'.format(description)
-    return GaugeMetricFamily(metric_name, description, value=int(succeeded))
-
-
-class ClusterHealthCollector(object):
-    def __init__(self, es_client, timeout, level):
-        self.metric_name_list = ['es', 'cluster_health']
-        self.description = 'Cluster Health'
-
-        self.es_client = es_client
-        self.timeout = timeout
-        self.level = level
-
-    def collect(self):
-        try:
-            response = self.es_client.cluster.health(level=self.level, request_timeout=self.timeout)
-
-            metrics = cluster_health_parser.parse_response(response, self.metric_name_list)
-        except ConnectionTimeout:
-            logging.warn('Timeout while fetching %s (timeout %ss).', self.description, self.timeout)
-            yield collector_up_gauge(self.metric_name_list, self.description, succeeded=False)
-        except Exception:
-            logging.exception('Error while fetching %s.', self.description)
-            yield collector_up_gauge(self.metric_name_list, self.description, succeeded=False)
-        else:
-            yield from gauge_generator(metrics)
-            yield collector_up_gauge(self.metric_name_list, self.description)
-
-
-class NodesStatsCollector(object):
-    def __init__(self, es_client, timeout, metrics=None):
-        self.metric_name_list = ['es', 'nodes_stats']
-        self.description = 'Nodes Stats'
-
-        self.es_client = es_client
-        self.timeout = timeout
-        self.metrics = metrics
-
-    def collect(self):
-        try:
-            response = self.es_client.nodes.stats(metric=self.metrics, request_timeout=self.timeout)
-
-            metrics = nodes_stats_parser.parse_response(response, self.metric_name_list)
-        except ConnectionTimeout:
-            logging.warn('Timeout while fetching %s (timeout %ss).', self.description, self.timeout)
-            yield collector_up_gauge(self.metric_name_list, self.description, succeeded=False)
-        except Exception:
-            logging.exception('Error while fetching %s.', self.description)
-            yield collector_up_gauge(self.metric_name_list, self.description, succeeded=False)
-        else:
-            yield from gauge_generator(metrics)
-            yield collector_up_gauge(self.metric_name_list, self.description)
-
-
-class IndicesStatsCollector(object):
-    def __init__(self, es_client, timeout, parse_indices=False, metrics=None, fields=None):
-        self.metric_name_list = ['es', 'indices_stats']
-        self.description = 'Indices Stats'
-
-        self.es_client = es_client
-        self.timeout = timeout
-        self.parse_indices = parse_indices
-        self.metrics = metrics
-        self.fields = fields
-
-    def collect(self):
-        try:
-            response = self.es_client.indices.stats(metric=self.metrics, fields=self.fields, request_timeout=self.timeout)
-
-            metrics = indices_stats_parser.parse_response(response, self.parse_indices, self.metric_name_list)
-        except ConnectionTimeout:
-            logging.warn('Timeout while fetching %s (timeout %ss).', self.description, self.timeout)
-            yield collector_up_gauge(self.metric_name_list, self.description, succeeded=False)
-        except Exception:
-            logging.exception('Error while fetching %s.', self.description)
-            yield collector_up_gauge(self.metric_name_list, self.description, succeeded=False)
-        else:
-            yield from gauge_generator(metrics)
-            yield collector_up_gauge(self.metric_name_list, self.description)
-
-
-def run_scheduler(scheduler, interval, func):
-    def scheduled_run(scheduled_time):
-        try:
-            func()
-        except Exception:
-            logging.exception('Error while running scheduled job.')
-
-        current_time = time.monotonic()
-        next_scheduled_time = scheduled_time + interval
-        while next_scheduled_time < current_time:
-            next_scheduled_time += interval
-
-        scheduler.enterabs(
-            next_scheduled_time,
-            1,
-            scheduled_run,
-            (next_scheduled_time,)
-        )
-
-    next_scheduled_time = time.monotonic()
-    scheduler.enterabs(
-        next_scheduled_time,
-        1,
-        scheduled_run,
-        (next_scheduled_time,)
-    )
-
-
-def shutdown():
-    logging.info('Shutting down')
-    sys.exit(1)
-
-
-def signal_handler(signum, frame):
-    shutdown()
-
-
-def csv_choice_arg_parser(choices, arg):
-    metrics = arg.split(',')
-
-    invalid_metrics = []
-    for metric in metrics:
-        if metric not in choices:
-            invalid_metrics.append(metric)
-
-    if invalid_metrics:
-        msg = 'invalid metric(s): "{}" in "{}" (choose from {})' \
-            .format(','.join(invalid_metrics), arg, ','.join(choices))
-        raise argparse.ArgumentTypeError(msg)
-
-    return metrics
-
-
-# https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html#_nodes_statistics
-NODES_STATS_METRICS_OPTIONS = [
-    'indices', 'fs', 'http', 'jvm', 'os',
-    'process', 'thread_pool', 'transport',
-    'breaker', 'discovery', 'ingest'
-]
-nodes_stats_metrics_parser = partial(csv_choice_arg_parser, NODES_STATS_METRICS_OPTIONS)
-
-
-# https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html#node-indices-stats
-INDICES_STATS_METRICS_OPTIONS = [
-    'completion', 'docs', 'fielddata',
-    'flush', 'get', 'indexing', 'merge',
-    'query_cache', 'recovery', 'refresh',
-    'request_cache', 'search', 'segments',
-    'store', 'suggest', 'translog', 'warmer'
-]
-indices_stats_metrics_parser = partial(csv_choice_arg_parser, INDICES_STATS_METRICS_OPTIONS)
-
-
-def indices_stats_fields_parser(arg):
-    if arg == '*':
-        return arg
-    else:
-        return arg.split(',')
-
-
-def main():
-    signal.signal(signal.SIGTERM, signal_handler)
-
-    parser = argparse.ArgumentParser(description='Export ES query results to Prometheus.')
-    parser.add_argument('-e', '--es-cluster', default='localhost',
-                        help='addresses of nodes in an Elasticsearch cluster to run queries on. Nodes should be separated by commas e.g. es1,es2. Ports can be provided if non-standard (9200) e.g. es1:9999 (default: localhost)')
-    parser.add_argument('--ca-certs',
-                        help='path to a CA certificate bundle. Can be absolute, or relative to the current working directory. If not specified, SSL certificate verification is disabled.')
-    parser.add_argument('-p', '--port', type=int, default=9206,
-                        help='port to serve the metrics endpoint on. (default: 9206)')
-    parser.add_argument('--basic-user',
-                        help='User for authentication. (default: no user)')
-    parser.add_argument('--basic-password',
-                        help='Password for authentication. (default: no password)')
-    parser.add_argument('--query-disable', action='store_true',
-                        help='disable query monitoring. Config file does not need to be present if query monitoring is disabled.')
-    parser.add_argument('-c', '--config-file', default='exporter.cfg',
-                        help='path to query config file. Can be absolute, or relative to the current working directory. (default: exporter.cfg)')
-    parser.add_argument('--cluster-health-disable', action='store_true',
-                        help='disable cluster health monitoring.')
-    parser.add_argument('--cluster-health-timeout', type=float, default=10.0,
-                        help='request timeout for cluster health monitoring, in seconds. (default: 10)')
-    parser.add_argument('--cluster-health-level', default='indices', choices=['cluster', 'indices', 'shards'],
-                        help='level of detail for cluster health monitoring. (default: indices)')
-    parser.add_argument('--nodes-stats-disable', action='store_true',
-                        help='disable nodes stats monitoring.')
-    parser.add_argument('--nodes-stats-timeout', type=float, default=10.0,
-                        help='request timeout for nodes stats monitoring, in seconds. (default: 10)')
-    parser.add_argument('--nodes-stats-metrics', type=nodes_stats_metrics_parser,
-                        help='limit nodes stats to specific metrics. Metrics should be separated by commas e.g. indices,fs.')
-    parser.add_argument('--indices-stats-disable', action='store_true',
-                        help='disable indices stats monitoring.')
-    parser.add_argument('--indices-stats-timeout', type=float, default=10.0,
-                        help='request timeout for indices stats monitoring, in seconds. (default: 10)')
-    parser.add_argument('--indices-stats-mode', default='cluster', choices=['cluster', 'indices'],
-                        help='detail mode for indices stats monitoring. (default: cluster)')
-    parser.add_argument('--indices-stats-metrics', type=indices_stats_metrics_parser,
-                        help='limit indices stats to specific metrics. Metrics should be separated by commas e.g. indices,fs.')
-    parser.add_argument('--indices-stats-fields', type=indices_stats_fields_parser,
-                        help='include fielddata info for specific fields. Fields should be separated by commas e.g. field1,field2. Use \'*\' for all.')
-    parser.add_argument('-j', '--json-logging', action='store_true',
-                        help='turn on json logging.')
-    parser.add_argument('--log-level', default='INFO', choices=['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'],
-                        help='detail level to log. (default: INFO)')
-    parser.add_argument('-v', '--verbose', action='store_true',
-                        help='turn on verbose (DEBUG) logging. Overrides --log-level.')
-    args = parser.parse_args()
-
-    if args.basic_user and args.basic_password is None:
-        parser.error('Username provided with no password.')
-    elif args.basic_user is None and args.basic_password:
-        parser.error('Password provided with no username.')
-    elif args.basic_user:
-        http_auth = (args.basic_user, args.basic_password)
-    else:
-        http_auth = None
-
-    log_handler = logging.StreamHandler()
-    log_format = '[%(asctime)s] %(name)s.%(levelname)s %(threadName)s %(message)s'
-    formatter = JogFormatter(log_format) if args.json_logging else logging.Formatter(log_format)
-    log_handler.setFormatter(formatter)
-
-    log_level = getattr(logging, args.log_level)
-    logging.basicConfig(
-        handlers=[log_handler],
-        level=logging.DEBUG if args.verbose else log_level
-    )
-    logging.captureWarnings(True)
-
-    port = args.port
-    es_cluster = args.es_cluster.split(',')
-
-    if args.ca_certs:
-        es_client = Elasticsearch(es_cluster, verify_certs=True, ca_certs=args.ca_certs, http_auth=http_auth)
-    else:
-        es_client = Elasticsearch(es_cluster, verify_certs=False, http_auth=http_auth)
-
-    scheduler = None
-
-    if not args.query_disable:
-        scheduler = sched.scheduler()
-
-        config = configparser.ConfigParser()
-        config.read_file(open(args.config_file))
-
-        query_prefix = 'query_'
-        queries = {}
-        for section in config.sections():
-            if section.startswith(query_prefix):
-                query_name = section[len(query_prefix):]
-                query_interval = config.getfloat(section, 'QueryIntervalSecs', fallback=15)
-                query_timeout = config.getfloat(section, 'QueryTimeoutSecs', fallback=10)
-                query_indices = config.get(section, 'QueryIndices', fallback='_all')
-                query = json.loads(config.get(section, 'QueryJson'))
-
-                queries[query_name] = (query_interval, query_timeout, query_indices, query)
-
-        if queries:
-            for name, (interval, timeout, indices, query) in queries.items():
-                func = partial(run_query, es_client, name, indices, query, timeout)
-                run_scheduler(scheduler, interval, func)
-        else:
-            logging.warning('No queries found in config file %s', args.config_file)
-
-    if not args.cluster_health_disable:
-        REGISTRY.register(ClusterHealthCollector(es_client,
-                                                 args.cluster_health_timeout,
-                                                 args.cluster_health_level))
-
-    if not args.nodes_stats_disable:
-        REGISTRY.register(NodesStatsCollector(es_client,
-                                              args.nodes_stats_timeout,
-                                              metrics=args.nodes_stats_metrics))
-
-    if not args.indices_stats_disable:
-        parse_indices = args.indices_stats_mode == 'indices'
-        REGISTRY.register(IndicesStatsCollector(es_client,
-                                                args.indices_stats_timeout,
-                                                parse_indices=parse_indices,
-                                                metrics=args.indices_stats_metrics,
-                                                fields=args.indices_stats_fields))
-
-    logging.info('Starting server...')
-    start_http_server(port)
-    logging.info('Server started on port %s', port)
-
-    try:
-        if scheduler:
-            scheduler.run()
-        else:
-            while True:
-                time.sleep(5)
-    except KeyboardInterrupt:
-        pass
-
-    shutdown()
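
The deleted main() above reads per-query settings from INI-style sections prefixed with query_. A minimal sketch of that format and how the loop consumes it; the section name query_docs and the match_all query body are illustrative, not taken from any shipped exporter.cfg:

    # Hypothetical exporter.cfg contents, matching the keys the deleted loop reads.
    import configparser
    import json
    import textwrap

    CONFIG = textwrap.dedent("""\
        [query_docs]
        QueryIntervalSecs = 30
        QueryTimeoutSecs = 10
        QueryIndices = foo
        QueryJson = {"size": 0, "query": {"match_all": {}}}
        """)

    config = configparser.ConfigParser()
    config.read_string(CONFIG)

    for section in config.sections():
        if section.startswith('query_'):
            name = section[len('query_'):]  # -> 'docs'
            interval = config.getfloat(section, 'QueryIntervalSecs', fallback=15)
            timeout = config.getfloat(section, 'QueryTimeoutSecs', fallback=10)
            indices = config.get(section, 'QueryIndices', fallback='_all')
            query = json.loads(config.get(section, 'QueryJson'))
            print(name, interval, timeout, indices, query)

Each parsed query is scheduled via run_query at its own interval; an empty result triggers the warning branch above.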
diff --git a/prometheus_es_exporter/__main__.py b/prometheus_es_exporter/__main__.py
deleted file mode 100644
index 4356e3d..0000000
--- a/prometheus_es_exporter/__main__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from prometheus_es_exporter import main
-
-if __name__ == '__main__':
-    main()
diff --git a/prometheus_es_exporter/cluster_health_parser.py b/prometheus_es_exporter/cluster_health_parser.py
deleted file mode 100644
index 7dd1336..0000000
--- a/prometheus_es_exporter/cluster_health_parser.py
+++ /dev/null
@@ -1,62 +0,0 @@
-from collections import OrderedDict
-from .utils import merge_dicts_ordered
-
-singular_forms = {
-    'indices': 'index',
-    'shards': 'shard'
-}
-
-
-def parse_block(block, metric=None, labels=None):
-    if metric is None:
-        metric = []
-    if labels is None:
-        labels = OrderedDict()
-
-    result = []
-
-    # Green is 0, so if we add statuses of multiple blocks together
-    # (e.g. all the indices) we don't need to know how many there were
-    # to know if things are good.
-    # i.e. 0 means all green, > 0 means something isn't green.
-    status = block['status']
-    if status == 'green':
-        status_int = 0
-    elif status == 'yellow':
-        status_int = 1
-    elif status == 'red':
-        status_int = 2
-    result.append((metric + ['status'], labels, status_int))
-
-    for key, value in block.items():
-        if isinstance(value, bool):
-            result.append((metric + [key], labels, int(value)))
-        elif isinstance(value, (int, float)):
-            result.append((metric + [key], labels, value))
-        elif isinstance(value, dict):
-            if key in singular_forms:
-                singular_key = singular_forms[key]
-            else:
-                singular_key = key
-            for n_key, n_value in value.items():
-                result.extend(parse_block(n_value, metric=metric + [key], labels=merge_dicts_ordered(labels, {singular_key: [n_key]})))
-
-    return result
-
-
-def parse_response(response, metric=None):
-    if metric is None:
-        metric = []
-
-    result = []
-
-    # Create a shallow copy as we are going to modify it
-    response = response.copy()
-
-    if not response['timed_out']:
-        # Delete this field as we don't want to parse it as a metric
-        del response['timed_out']
-
-        result.extend(parse_block(response, metric=metric))
-
-    return result
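
A small, hedged check of the status mapping and timed_out handling above (fields trimmed to a minimum; runnable against the pre-removal package):

    from collections import OrderedDict

    from prometheus_es_exporter.cluster_health_parser import parse_response

    response = {'timed_out': False, 'status': 'yellow', 'number_of_nodes': 1}

    # timed_out is dropped, yellow maps to 1, and numeric fields pass through
    # as (metric_path, labels, value) tuples.
    assert parse_response(response) == [
        (['status'], OrderedDict(), 1),
        (['number_of_nodes'], OrderedDict(), 1),
    ]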
diff --git a/prometheus_es_exporter/indices_stats_parser.py b/prometheus_es_exporter/indices_stats_parser.py
deleted file mode 100644
index e660a2e..0000000
--- a/prometheus_es_exporter/indices_stats_parser.py
+++ /dev/null
@@ -1,61 +0,0 @@
-from collections import OrderedDict
-from .utils import merge_dicts_ordered
-
-singular_forms = {
-    'fields': 'field'
-}
-excluded_keys = []
-bucket_dict_keys = [
-    'fields'
-]
-bucket_list_keys = {}
-
-
-def parse_block(block, metric=None, labels=None):
-    if metric is None:
-        metric = []
-    if labels is None:
-        labels = OrderedDict()
-
-    result = []
-
-    for key, value in block.items():
-        if key not in excluded_keys:
-            if isinstance(value, bool):
-                result.append((metric + [key], labels, int(value)))
-            elif isinstance(value, (int, float)):
-                result.append((metric + [key], labels, value))
-            elif isinstance(value, dict):
-                if key in bucket_dict_keys:
-                    if key in singular_forms:
-                        singular_key = singular_forms[key]
-                    else:
-                        singular_key = key
-                    for n_key, n_value in value.items():
-                        result.extend(parse_block(n_value, metric=metric + [key], labels=merge_dicts_ordered(labels, {singular_key: [n_key]})))
-                else:
-                    result.extend(parse_block(value, metric=metric + [key], labels=labels))
-            elif isinstance(value, list) and key in bucket_list_keys:
-                bucket_name_key = bucket_list_keys[key]
-
-                for n_value in value:
-                    bucket_name = n_value[bucket_name_key]
-                    result.extend(parse_block(n_value, metric=metric + [key], labels=merge_dicts_ordered(labels, {bucket_name_key: [bucket_name]})))
-
-    return result
-
-
-def parse_response(response, parse_indices=False, metric=None):
-    if metric is None:
-        metric = []
-
-    result = []
-
-    if '_shards' not in response or not response['_shards']['failed']:
-        if parse_indices:
-            for key, value in response['indices'].items():
-                result.extend(parse_block(value, metric=metric, labels=OrderedDict({'index': [key]})))
-        else:
-            result.extend(parse_block(response['_all'], metric=metric, labels=OrderedDict({'index': ['_all']})))
-
-    return result
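
parse_indices only selects which block is walked and how the index label is set; a hedged illustration with a fabricated minimal response:

    from prometheus_es_exporter.indices_stats_parser import parse_response

    response = {
        '_shards': {'failed': 0},
        '_all': {'docs': {'count': 3}},
        'indices': {'foo': {'docs': {'count': 3}}},
    }

    # cluster mode: one walk over _all, labelled index="_all"
    assert parse_response(response, parse_indices=False) == [
        (['docs', 'count'], {'index': ['_all']}, 3),
    ]

    # indices mode: one walk per index, labelled with the index name
    assert parse_response(response, parse_indices=True) == [
        (['docs', 'count'], {'index': ['foo']}, 3),
    ]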
diff --git a/prometheus_es_exporter/nodes_stats_parser.py b/prometheus_es_exporter/nodes_stats_parser.py
deleted file mode 100644
index 9f8d4f4..0000000
--- a/prometheus_es_exporter/nodes_stats_parser.py
+++ /dev/null
@@ -1,79 +0,0 @@
-from collections import OrderedDict
-from .utils import merge_dicts_ordered
-
-singular_forms = {
-    'pools': 'pool',
-    'collectors': 'collector',
-    'buffer_pools': 'buffer_pool',
-}
-excluded_keys = [
-    'timestamp',
-]
-bucket_dict_keys = [
-    'pools',
-    'collectors',
-    'buffer_pools',
-    'thread_pool',
-]
-bucket_list_keys = {
-    'data': 'path',
-    'devices': 'device_name'
-}
-
-
-def parse_block(block, metric=None, labels=None):
-    if metric is None:
-        metric = []
-    if labels is None:
-        labels = OrderedDict()
-
-    result = []
-
-    for key, value in block.items():
-        if key not in excluded_keys:
-            if isinstance(value, bool):
-                result.append((metric + [key], labels, int(value)))
-            elif isinstance(value, (int, float)):
-                result.append((metric + [key], labels, value))
-            elif isinstance(value, dict):
-                if key in bucket_dict_keys:
-                    if key in singular_forms:
-                        singular_key = singular_forms[key]
-                    else:
-                        singular_key = key
-                    for n_key, n_value in value.items():
-                        result.extend(parse_block(n_value, metric=metric + [key], labels=merge_dicts_ordered(labels, {singular_key: [n_key]})))
-                else:
-                    result.extend(parse_block(value, metric=metric + [key], labels=labels))
-            elif isinstance(value, list) and key in bucket_list_keys:
-                bucket_name_key = bucket_list_keys[key]
-
-                for n_value in value:
-                    bucket_name = n_value[bucket_name_key]
-                    result.extend(parse_block(n_value, metric=metric + [key], labels=merge_dicts_ordered(labels, {bucket_name_key: [bucket_name]})))
-
-    return result
-
-
-def parse_node(node, metric=None, labels=None):
-    if metric is None:
-        metric = []
-    if labels is None:
-        labels = OrderedDict()
-
-    labels = merge_dicts_ordered(labels, node_name=[node['name']])
-
-    return parse_block(node, metric=metric, labels=labels)
-
-
-def parse_response(response, metric=None):
-    if metric is None:
-        metric = []
-
-    result = []
-
-    if '_nodes' not in response or not response['_nodes']['failed']:
-        for key, value in response['nodes'].items():
-            result.extend(parse_node(value, metric=metric, labels=OrderedDict({'node_id': [key]})))
-
-    return result
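
The node-level wrapper above adds node_id and node_name labels before handing off to the generic block walk; a hedged example with a fabricated single-node response:

    from prometheus_es_exporter.nodes_stats_parser import parse_response

    response = {
        '_nodes': {'failed': 0},
        'nodes': {
            'abc123': {
                'name': 'node-1',            # string: becomes the node_name label
                'timestamp': 1500000000000,  # in excluded_keys, so dropped
                'jvm': {'mem': {'heap_used_in_bytes': 1024}},
            },
        },
    }

    assert parse_response(response) == [
        (['jvm', 'mem', 'heap_used_in_bytes'],
         {'node_id': ['abc123'], 'node_name': ['node-1']},
         1024),
    ]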
diff --git a/prometheus_es_exporter/parser.py b/prometheus_es_exporter/parser.py
deleted file mode 100644
index f978cf2..0000000
--- a/prometheus_es_exporter/parser.py
+++ /dev/null
@@ -1,93 +0,0 @@
-from collections import OrderedDict
-
-
-def parse_buckets(agg_key, buckets, metric=None, labels=None):
-    if metric is None:
-        metric = []
-    if labels is None:
-        labels = OrderedDict()
-
-    result = []
-
-    for index, bucket in enumerate(buckets):
-        labels_next = labels.copy()
-
-        if 'key' in bucket.keys():
-            bucket_key = str(bucket['key'])
-            if agg_key in labels_next.keys():
-                labels_next[agg_key] = labels_next[agg_key] + [bucket_key]
-            else:
-                labels_next[agg_key] = [bucket_key]
-            del bucket['key']
-        else:
-            bucket_key = 'filter_' + str(index)
-            if agg_key in labels_next.keys():
-                labels_next[agg_key] = labels_next[agg_key] + [bucket_key]
-            else:
-                labels_next[agg_key] = [bucket_key]
-
-        result.extend(parse_agg(bucket_key, bucket, metric=metric, labels=labels_next))
-
-    return result
-
-
-def parse_buckets_fixed(agg_key, buckets, metric=None, labels=None):
-    if metric is None:
-        metric = []
-    if labels is None:
-        labels = OrderedDict()
-
-    result = []
-
-    for bucket_key, bucket in buckets.items():
-        labels_next = labels.copy()
-
-        if agg_key in labels_next.keys():
-            labels_next[agg_key] = labels_next[agg_key] + [bucket_key]
-        else:
-            labels_next[agg_key] = [bucket_key]
-
-        result.extend(parse_agg(bucket_key, bucket, metric=metric, labels=labels_next))
-
-    return result
-
-
-def parse_agg(agg_key, agg, metric=None, labels=None):
-    if metric is None:
-        metric = []
-    if labels is None:
-        labels = OrderedDict()
-
-    result = []
-
-    for key, value in agg.items():
-        if key == 'buckets' and isinstance(value, list):
-            result.extend(parse_buckets(agg_key, value, metric=metric, labels=labels))
-        elif key == 'buckets' and isinstance(value, dict):
-            result.extend(parse_buckets_fixed(agg_key, value, metric=metric, labels=labels))
-        elif isinstance(value, dict):
-            result.extend(parse_agg(key, value, metric=metric + [key], labels=labels))
-        else:
-            result.append((metric + [key], labels, value))
-
-    return result
-
-
-def parse_response(response, metric=None):
-    if metric is None:
-        metric = []
-
-    result = []
-
-    if not response['timed_out']:
-        hits_total = response['hits']['total']
-        if isinstance(hits_total, dict):
-            hits_total = hits_total.get('value', 0)
-        result.append((metric + ['hits'], {}, hits_total))
-        result.append((metric + ['took', 'milliseconds'], {}, response['took']))
-
-        if 'aggregations' in response.keys():
-            for key, value in response['aggregations'].items():
-                result.extend(parse_agg(key, value, metric=metric + [key]))
-
-    return result
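
How the deleted query parser above flattens a search response: bucket keys become label values while other keys extend the metric path. A hedged example with a fabricated terms aggregation:

    from prometheus_es_exporter.parser import parse_response

    response = {
        'timed_out': False,
        'took': 4,
        'hits': {'total': 3},
        'aggregations': {
            'group1': {
                'buckets': [
                    {'key': 'a', 'doc_count': 2},
                    {'key': 'b', 'doc_count': 1},
                ],
            },
        },
    }

    # Rendered as Prometheus metrics this is hits=3, took_milliseconds=4,
    # and group1_doc_count{group1="a"}=2 / {group1="b"}=1.
    assert parse_response(response) == [
        (['hits'], {}, 3),
        (['took', 'milliseconds'], {}, 4),
        (['group1', 'doc_count'], {'group1': ['a']}, 2),
        (['group1', 'doc_count'], {'group1': ['b']}, 1),
    ]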
diff --git a/prometheus_es_exporter/utils.py b/prometheus_es_exporter/utils.py
deleted file mode 100644
index 06b55cd..0000000
--- a/prometheus_es_exporter/utils.py
+++ /dev/null
@@ -1,18 +0,0 @@
-from collections import OrderedDict
-
-
-def merge_dicts_ordered(*dict_args, **extra_entries):
-    """
-    Given an arbitrary number of dictionaries, merge them into a
-    single new dictionary. Later dictionaries take precedence if
-    a key is shared by multiple dictionaries.
-
-    Extra entries can also be provided via kwargs. These entries
-    have the highest precedence.
-    """
-    res = OrderedDict()
-
-    for d in dict_args + (extra_entries,):
-        res.update(d)
-
-    return res
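
A quick illustration of the precedence rules documented in the docstring above:

    from collections import OrderedDict

    from prometheus_es_exporter.utils import merge_dicts_ordered

    d1 = OrderedDict([('a', 1), ('b', 2)])
    d2 = {'b': 3}

    # Later dicts win on shared keys; keyword entries win over everything,
    # and first-insertion order is preserved.
    assert merge_dicts_ordered(d1, d2, b=4, c=5) == OrderedDict(
        [('a', 1), ('b', 4), ('c', 5)]
    )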
diff --git a/requirements.txt b/requirements.txt
new file mode 100644
index 0000000..c61bad2
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1,10 @@
+certifi==2021.5.30
+click==8.0.1
+click-config-file==0.6.0
+configobj==5.0.6
+elasticsearch==7.14.0a1
+jog==0.1.1
+prometheus-client==0.11.0
+prometheus-es-exporter==0.14.0
+six==1.16.0
+urllib3==1.26.6
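
The pins above replace setup.py's open-ended install_requires. A hedged smoke test to confirm a built image resolved exactly these versions (package names taken from the file above; importlib.metadata ships with the image's Python 3.9):

    from importlib.metadata import version

    for pkg in ('elasticsearch', 'prometheus-client', 'prometheus-es-exporter'):
        print(pkg, version(pkg))  # expect 7.14.0a1, 0.11.0, 0.14.0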
diff --git a/run-func-tests.sh b/run-func-tests.sh
deleted file mode 100755
index ba1f8ce..0000000
--- a/run-func-tests.sh
+++ /dev/null
@@ -1,4 +0,0 @@
-#!/bin/bash -x
-set -e
-
-exit 0
diff --git a/run-tests.sh b/run-tests.sh
deleted file mode 100755
index 09f844a..0000000
--- a/run-tests.sh
+++ /dev/null
@@ -1,6 +0,0 @@
-#!/bin/bash -x
-set -e
-
-pytest -v
-
-exit 0
diff --git a/setup.py b/setup.py
deleted file mode 100644
index 733b5af..0000000
--- a/setup.py
+++ /dev/null
@@ -1,34 +0,0 @@
-from setuptools import setup, find_packages
-
-setup(
-    name='prometheus-es-exporter',
-    version='0.5.1',
-    description='Elasticsearch query Prometheus exporter',
-    url='https://github.com/Braedon/prometheus-es-exporter',
-    author='Braedon Vickers',
-    author_email='braedon.vickers@gmail.com',
-    license='MIT',
-    classifiers=[
-        'Development Status :: 4 - Beta',
-        'Intended Audience :: Developers',
-        'Intended Audience :: System Administrators',
-        'Topic :: System :: Monitoring',
-        'License :: OSI Approved :: MIT License',
-        'Programming Language :: Python :: 3',
-        'Programming Language :: Python :: 3.4',
-        'Programming Language :: Python :: 3.5',
-        'Programming Language :: Python :: 3.6',
-    ],
-    keywords='monitoring prometheus exporter elasticsearch',
-    packages=find_packages(exclude=['tests']),
-    install_requires=[
-        'elasticsearch',
-        'jog',
-        'prometheus-client'
-    ],
-    entry_points={
-        'console_scripts': [
-            'prometheus-es-exporter=prometheus_es_exporter:main',
-        ],
-    },
-)
diff --git a/test-requirements.txt b/test-requirements.txt
deleted file mode 100644
index de162a9..0000000
--- a/test-requirements.txt
+++ /dev/null
@@ -1,10 +0,0 @@
-# The order of packages is significant, because pip processes them in the order
-# of appearance. Changing the order has an impact on the overall integration
-# process, which may cause wedges in the gate later.
-
-flake8-docstrings==0.2.1.post1 # MIT
-flake8-import-order>=0.17.1 #LGPLv3
-bandit>=1.1.0 # Apache-2.0
-sphinx!=1.6.6,!=1.6.7,>=1.6.2 # BSD
-pytest==4.0.1
-pytest-mock==1.10.0
diff --git a/tests/__init__.py b/tests/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/tests/__init__.py
+++ /dev/null
diff --git a/tests/test_cluster_health_parser.py b/tests/test_cluster_health_parser.py
deleted file mode 100644
index b5c4ddd..0000000
--- a/tests/test_cluster_health_parser.py
+++ /dev/null
@@ -1,147 +0,0 @@
-import unittest
-
-from prometheus_es_exporter.cluster_health_parser import parse_response
-from tests.utils import convert_result
-
-
-# Sample responses generated by querying the endpoint on an Elasticsearch
-# server populated with the following data (http command = Httpie utility):
-# > http -v POST localhost:9200/foo/bar/1 val:=1 group1=a group2=a
-# > http -v POST localhost:9200/foo/bar/2 val:=2 group1=a group2=b
-# > http -v POST localhost:9200/foo/bar/3 val:=3 group1=b group2=b
-class Test(unittest.TestCase):
-    maxDiff = None
-
-    def test_endpoint(self):
-        # Endpoint: /_cluster/health?pretty&level=shards
-        response = {
-            'cluster_name': 'elasticsearch',
-            'status': 'yellow',
-            'timed_out': False,
-            'number_of_nodes': 1,
-            'number_of_data_nodes': 1,
-            'active_primary_shards': 5,
-            'active_shards': 5,
-            'relocating_shards': 0,
-            'initializing_shards': 0,
-            'unassigned_shards': 5,
-            'delayed_unassigned_shards': 0,
-            'number_of_pending_tasks': 0,
-            'number_of_in_flight_fetch': 0,
-            'task_max_waiting_in_queue_millis': 0,
-            'active_shards_percent_as_number': 50.0,
-            'indices': {
-                'foo': {
-                    'status': 'yellow',
-                    'number_of_shards': 5,
-                    'number_of_replicas': 1,
-                    'active_primary_shards': 5,
-                    'active_shards': 5,
-                    'relocating_shards': 0,
-                    'initializing_shards': 0,
-                    'unassigned_shards': 5,
-                    'shards': {
-                        '0': {
-                            'status': 'yellow',
-                            'primary_active': True,
-                            'active_shards': 1,
-                            'relocating_shards': 0,
-                            'initializing_shards': 0,
-                            'unassigned_shards': 1
-                        },
-                        '1': {
-                            'status': 'yellow',
-                            'primary_active': True,
-                            'active_shards': 1,
-                            'relocating_shards': 0,
-                            'initializing_shards': 0,
-                            'unassigned_shards': 1
-                        },
-                        '2': {
-                            'status': 'yellow',
-                            'primary_active': True,
-                            'active_shards': 1,
-                            'relocating_shards': 0,
-                            'initializing_shards': 0,
-                            'unassigned_shards': 1
-                        },
-                        '3': {
-                            'status': 'yellow',
-                            'primary_active': True,
-                            'active_shards': 1,
-                            'relocating_shards': 0,
-                            'initializing_shards': 0,
-                            'unassigned_shards': 1
-                        },
-                        '4': {
-                            'status': 'yellow',
-                            'primary_active': True,
-                            'active_shards': 1,
-                            'relocating_shards': 0,
-                            'initializing_shards': 0,
-                            'unassigned_shards': 1
-                        }
-                    }
-                }
-            }
-        }
-
-        expected = {
-            'status': 1,
-            'number_of_nodes': 1,
-            'number_of_data_nodes': 1,
-            'active_primary_shards': 5,
-            'active_shards': 5,
-            'relocating_shards': 0,
-            'initializing_shards': 0,
-            'unassigned_shards': 5,
-            'delayed_unassigned_shards': 0,
-            'number_of_pending_tasks': 0,
-            'number_of_in_flight_fetch': 0,
-            'task_max_waiting_in_queue_millis': 0,
-            'active_shards_percent_as_number': 50.0,
-            'indices_status{index="foo"}': 1,
-            'indices_number_of_shards{index="foo"}': 5,
-            'indices_number_of_replicas{index="foo"}': 1,
-            'indices_active_primary_shards{index="foo"}': 5,
-            'indices_active_shards{index="foo"}': 5,
-            'indices_relocating_shards{index="foo"}': 0,
-            'indices_initializing_shards{index="foo"}': 0,
-            'indices_unassigned_shards{index="foo"}': 5,
-            'indices_shards_status{index="foo",shard="0"}': 1,
-            'indices_shards_primary_active{index="foo",shard="0"}': 1,
-            'indices_shards_active_shards{index="foo",shard="0"}': 1,
-            'indices_shards_relocating_shards{index="foo",shard="0"}': 0,
-            'indices_shards_initializing_shards{index="foo",shard="0"}': 0,
-            'indices_shards_unassigned_shards{index="foo",shard="0"}': 1,
-            'indices_shards_status{index="foo",shard="1"}': 1,
-            'indices_shards_primary_active{index="foo",shard="1"}': 1,
-            'indices_shards_active_shards{index="foo",shard="1"}': 1,
-            'indices_shards_relocating_shards{index="foo",shard="1"}': 0,
-            'indices_shards_initializing_shards{index="foo",shard="1"}': 0,
-            'indices_shards_unassigned_shards{index="foo",shard="1"}': 1,
-            'indices_shards_status{index="foo",shard="2"}': 1,
-            'indices_shards_primary_active{index="foo",shard="2"}': 1,
-            'indices_shards_active_shards{index="foo",shard="2"}': 1,
-            'indices_shards_relocating_shards{index="foo",shard="2"}': 0,
-            'indices_shards_initializing_shards{index="foo",shard="2"}': 0,
-            'indices_shards_unassigned_shards{index="foo",shard="2"}': 1,
-            'indices_shards_status{index="foo",shard="3"}': 1,
-            'indices_shards_primary_active{index="foo",shard="3"}': 1,
-            'indices_shards_active_shards{index="foo",shard="3"}': 1,
-            'indices_shards_relocating_shards{index="foo",shard="3"}': 0,
-            'indices_shards_initializing_shards{index="foo",shard="3"}': 0,
-            'indices_shards_unassigned_shards{index="foo",shard="3"}': 1,
-            'indices_shards_status{index="foo",shard="4"}': 1,
-            'indices_shards_primary_active{index="foo",shard="4"}': 1,
-            'indices_shards_active_shards{index="foo",shard="4"}': 1,
-            'indices_shards_relocating_shards{index="foo",shard="4"}': 0,
-            'indices_shards_initializing_shards{index="foo",shard="4"}': 0,
-            'indices_shards_unassigned_shards{index="foo",shard="4"}': 1,
-        }
-        result = convert_result(parse_response(response))
-        self.assertEqual(expected, result)
-
-
-if __name__ == '__main__':
-    unittest.main()
diff --git a/tests/test_indices_stats_parser.py b/tests/test_indices_stats_parser.py
deleted file mode 100644
index ae41c62..0000000
--- a/tests/test_indices_stats_parser.py
+++ /dev/null
@@ -1,883 +0,0 @@
-import unittest
-
-from prometheus_es_exporter.indices_stats_parser import parse_response
-from tests.utils import convert_result
-
-
-# Sample responses generated by querying the endpoint on an Elasticsearch
-# server populated with the following data (http command = Httpie utility):
-# > http -v POST localhost:9200/foo/bar/1 val:=1 group1=a group2=a
-# > http -v POST localhost:9200/foo/bar/2 val:=2 group1=a group2=b
-# > http -v POST localhost:9200/foo/bar/3 val:=3 group1=b group2=b
-# Some details are instance specific, so mileage may vary!
-class Test(unittest.TestCase):
-    maxDiff = None
-
-    # Endpoint: /_stats?pretty
-    response = {
-        '_shards': {
-            'total': 10,
-            'successful': 5,
-            'failed': 0
-        },
-        '_all': {
-            'primaries': {
-                'docs': {
-                    'count': 3,
-                    'deleted': 0
-                },
-                'store': {
-                    'size_in_bytes': 12690,
-                    'throttle_time_in_millis': 0
-                },
-                'indexing': {
-                    'index_total': 3,
-                    'index_time_in_millis': 45,
-                    'index_current': 0,
-                    'index_failed': 0,
-                    'delete_total': 0,
-                    'delete_time_in_millis': 0,
-                    'delete_current': 0,
-                    'noop_update_total': 0,
-                    'is_throttled': False,
-                    'throttle_time_in_millis': 0
-                },
-                'get': {
-                    'total': 0,
-                    'time_in_millis': 0,
-                    'exists_total': 0,
-                    'exists_time_in_millis': 0,
-                    'missing_total': 0,
-                    'missing_time_in_millis': 0,
-                    'current': 0
-                },
-                'search': {
-                    'open_contexts': 0,
-                    'query_total': 0,
-                    'query_time_in_millis': 0,
-                    'query_current': 0,
-                    'fetch_total': 0,
-                    'fetch_time_in_millis': 0,
-                    'fetch_current': 0,
-                    'scroll_total': 0,
-                    'scroll_time_in_millis': 0,
-                    'scroll_current': 0,
-                    'suggest_total': 0,
-                    'suggest_time_in_millis': 0,
-                    'suggest_current': 0
-                },
-                'merges': {
-                    'current': 0,
-                    'current_docs': 0,
-                    'current_size_in_bytes': 0,
-                    'total': 0,
-                    'total_time_in_millis': 0,
-                    'total_docs': 0,
-                    'total_size_in_bytes': 0,
-                    'total_stopped_time_in_millis': 0,
-                    'total_throttled_time_in_millis': 0,
-                    'total_auto_throttle_in_bytes': 104857600
-                },
-                'refresh': {
-                    'total': 3,
-                    'total_time_in_millis': 107
-                },
-                'flush': {
-                    'total': 0,
-                    'total_time_in_millis': 0
-                },
-                'warmer': {
-                    'current': 0,
-                    'total': 8,
-                    'total_time_in_millis': 6
-                },
-                'query_cache': {
-                    'memory_size_in_bytes': 0,
-                    'total_count': 0,
-                    'hit_count': 0,
-                    'miss_count': 0,
-                    'cache_size': 0,
-                    'cache_count': 0,
-                    'evictions': 0
-                },
-                'fielddata': {
-                    'memory_size_in_bytes': 0,
-                    'evictions': 0,
-                    'fields': {
-                        'group1': {
-                            'memory_size_in_bytes': 1024
-                        },
-                        'group2': {
-                            'memory_size_in_bytes': 2048
-                        }
-                    }
-                },
-                'completion': {
-                    'size_in_bytes': 0
-                },
-                'segments': {
-                    'count': 3,
-                    'memory_in_bytes': 7908,
-                    'terms_memory_in_bytes': 5976,
-                    'stored_fields_memory_in_bytes': 936,
-                    'term_vectors_memory_in_bytes': 0,
-                    'norms_memory_in_bytes': 576,
-                    'points_memory_in_bytes': 144,
-                    'doc_values_memory_in_bytes': 276,
-                    'index_writer_memory_in_bytes': 0,
-                    'version_map_memory_in_bytes': 0,
-                    'fixed_bit_set_memory_in_bytes': 0,
-                    'max_unsafe_auto_id_timestamp': -1,
-                    'file_sizes': {}
-                },
-                'translog': {
-                    'operations': 3,
-                    'size_in_bytes': 491
-                },
-                'request_cache': {
-                    'memory_size_in_bytes': 0,
-                    'evictions': 0,
-                    'hit_count': 0,
-                    'miss_count': 0
-                },
-                'recovery': {
-                    'current_as_source': 0,
-                    'current_as_target': 0,
-                    'throttle_time_in_millis': 0
-                }
-            },
-            'total': {
-                'docs': {
-                    'count': 3,
-                    'deleted': 0
-                },
-                'store': {
-                    'size_in_bytes': 12690,
-                    'throttle_time_in_millis': 0
-                },
-                'indexing': {
-                    'index_total': 3,
-                    'index_time_in_millis': 45,
-                    'index_current': 0,
-                    'index_failed': 0,
-                    'delete_total': 0,
-                    'delete_time_in_millis': 0,
-                    'delete_current': 0,
-                    'noop_update_total': 0,
-                    'is_throttled': False,
-                    'throttle_time_in_millis': 0
-                },
-                'get': {
-                    'total': 0,
-                    'time_in_millis': 0,
-                    'exists_total': 0,
-                    'exists_time_in_millis': 0,
-                    'missing_total': 0,
-                    'missing_time_in_millis': 0,
-                    'current': 0
-                },
-                'search': {
-                    'open_contexts': 0,
-                    'query_total': 0,
-                    'query_time_in_millis': 0,
-                    'query_current': 0,
-                    'fetch_total': 0,
-                    'fetch_time_in_millis': 0,
-                    'fetch_current': 0,
-                    'scroll_total': 0,
-                    'scroll_time_in_millis': 0,
-                    'scroll_current': 0,
-                    'suggest_total': 0,
-                    'suggest_time_in_millis': 0,
-                    'suggest_current': 0
-                },
-                'merges': {
-                    'current': 0,
-                    'current_docs': 0,
-                    'current_size_in_bytes': 0,
-                    'total': 0,
-                    'total_time_in_millis': 0,
-                    'total_docs': 0,
-                    'total_size_in_bytes': 0,
-                    'total_stopped_time_in_millis': 0,
-                    'total_throttled_time_in_millis': 0,
-                    'total_auto_throttle_in_bytes': 104857600
-                },
-                'refresh': {
-                    'total': 3,
-                    'total_time_in_millis': 107
-                },
-                'flush': {
-                    'total': 0,
-                    'total_time_in_millis': 0
-                },
-                'warmer': {
-                    'current': 0,
-                    'total': 8,
-                    'total_time_in_millis': 6
-                },
-                'query_cache': {
-                    'memory_size_in_bytes': 0,
-                    'total_count': 0,
-                    'hit_count': 0,
-                    'miss_count': 0,
-                    'cache_size': 0,
-                    'cache_count': 0,
-                    'evictions': 0
-                },
-                'fielddata': {
-                    'memory_size_in_bytes': 0,
-                    'evictions': 0,
-                    'fields': {
-                        'group1': {
-                            'memory_size_in_bytes': 1024
-                        },
-                        'group2': {
-                            'memory_size_in_bytes': 2048
-                        }
-                    }
-                },
-                'completion': {
-                    'size_in_bytes': 0
-                },
-                'segments': {
-                    'count': 3,
-                    'memory_in_bytes': 7908,
-                    'terms_memory_in_bytes': 5976,
-                    'stored_fields_memory_in_bytes': 936,
-                    'term_vectors_memory_in_bytes': 0,
-                    'norms_memory_in_bytes': 576,
-                    'points_memory_in_bytes': 144,
-                    'doc_values_memory_in_bytes': 276,
-                    'index_writer_memory_in_bytes': 0,
-                    'version_map_memory_in_bytes': 0,
-                    'fixed_bit_set_memory_in_bytes': 0,
-                    'max_unsafe_auto_id_timestamp': -1,
-                    'file_sizes': {}
-                },
-                'translog': {
-                    'operations': 3,
-                    'size_in_bytes': 491
-                },
-                'request_cache': {
-                    'memory_size_in_bytes': 0,
-                    'evictions': 0,
-                    'hit_count': 0,
-                    'miss_count': 0
-                },
-                'recovery': {
-                    'current_as_source': 0,
-                    'current_as_target': 0,
-                    'throttle_time_in_millis': 0
-                }
-            }
-        },
-        'indices': {
-            'foo': {
-                'primaries': {
-                    'docs': {
-                        'count': 3,
-                        'deleted': 0
-                    },
-                    'store': {
-                        'size_in_bytes': 12690,
-                        'throttle_time_in_millis': 0
-                    },
-                    'indexing': {
-                        'index_total': 3,
-                        'index_time_in_millis': 45,
-                        'index_current': 0,
-                        'index_failed': 0,
-                        'delete_total': 0,
-                        'delete_time_in_millis': 0,
-                        'delete_current': 0,
-                        'noop_update_total': 0,
-                        'is_throttled': False,
-                        'throttle_time_in_millis': 0
-                    },
-                    'get': {
-                        'total': 0,
-                        'time_in_millis': 0,
-                        'exists_total': 0,
-                        'exists_time_in_millis': 0,
-                        'missing_total': 0,
-                        'missing_time_in_millis': 0,
-                        'current': 0
-                    },
-                    'search': {
-                        'open_contexts': 0,
-                        'query_total': 0,
-                        'query_time_in_millis': 0,
-                        'query_current': 0,
-                        'fetch_total': 0,
-                        'fetch_time_in_millis': 0,
-                        'fetch_current': 0,
-                        'scroll_total': 0,
-                        'scroll_time_in_millis': 0,
-                        'scroll_current': 0,
-                        'suggest_total': 0,
-                        'suggest_time_in_millis': 0,
-                        'suggest_current': 0
-                    },
-                    'merges': {
-                        'current': 0,
-                        'current_docs': 0,
-                        'current_size_in_bytes': 0,
-                        'total': 0,
-                        'total_time_in_millis': 0,
-                        'total_docs': 0,
-                        'total_size_in_bytes': 0,
-                        'total_stopped_time_in_millis': 0,
-                        'total_throttled_time_in_millis': 0,
-                        'total_auto_throttle_in_bytes': 104857600
-                    },
-                    'refresh': {
-                        'total': 3,
-                        'total_time_in_millis': 107
-                    },
-                    'flush': {
-                        'total': 0,
-                        'total_time_in_millis': 0
-                    },
-                    'warmer': {
-                        'current': 0,
-                        'total': 8,
-                        'total_time_in_millis': 6
-                    },
-                    'query_cache': {
-                        'memory_size_in_bytes': 0,
-                        'total_count': 0,
-                        'hit_count': 0,
-                        'miss_count': 0,
-                        'cache_size': 0,
-                        'cache_count': 0,
-                        'evictions': 0
-                    },
-                    'fielddata': {
-                        'memory_size_in_bytes': 0,
-                        'evictions': 0,
-                        'fields': {
-                            'group1': {
-                                'memory_size_in_bytes': 1024
-                            },
-                            'group2': {
-                                'memory_size_in_bytes': 2048
-                            }
-                        }
-                    },
-                    'completion': {
-                        'size_in_bytes': 0
-                    },
-                    'segments': {
-                        'count': 3,
-                        'memory_in_bytes': 7908,
-                        'terms_memory_in_bytes': 5976,
-                        'stored_fields_memory_in_bytes': 936,
-                        'term_vectors_memory_in_bytes': 0,
-                        'norms_memory_in_bytes': 576,
-                        'points_memory_in_bytes': 144,
-                        'doc_values_memory_in_bytes': 276,
-                        'index_writer_memory_in_bytes': 0,
-                        'version_map_memory_in_bytes': 0,
-                        'fixed_bit_set_memory_in_bytes': 0,
-                        'max_unsafe_auto_id_timestamp': -1,
-                        'file_sizes': {}
-                    },
-                    'translog': {
-                        'operations': 3,
-                        'size_in_bytes': 491
-                    },
-                    'request_cache': {
-                        'memory_size_in_bytes': 0,
-                        'evictions': 0,
-                        'hit_count': 0,
-                        'miss_count': 0
-                    },
-                    'recovery': {
-                        'current_as_source': 0,
-                        'current_as_target': 0,
-                        'throttle_time_in_millis': 0
-                    }
-                },
-                'total': {
-                    'docs': {
-                        'count': 3,
-                        'deleted': 0
-                    },
-                    'store': {
-                        'size_in_bytes': 12690,
-                        'throttle_time_in_millis': 0
-                    },
-                    'indexing': {
-                        'index_total': 3,
-                        'index_time_in_millis': 45,
-                        'index_current': 0,
-                        'index_failed': 0,
-                        'delete_total': 0,
-                        'delete_time_in_millis': 0,
-                        'delete_current': 0,
-                        'noop_update_total': 0,
-                        'is_throttled': False,
-                        'throttle_time_in_millis': 0
-                    },
-                    'get': {
-                        'total': 0,
-                        'time_in_millis': 0,
-                        'exists_total': 0,
-                        'exists_time_in_millis': 0,
-                        'missing_total': 0,
-                        'missing_time_in_millis': 0,
-                        'current': 0
-                    },
-                    'search': {
-                        'open_contexts': 0,
-                        'query_total': 0,
-                        'query_time_in_millis': 0,
-                        'query_current': 0,
-                        'fetch_total': 0,
-                        'fetch_time_in_millis': 0,
-                        'fetch_current': 0,
-                        'scroll_total': 0,
-                        'scroll_time_in_millis': 0,
-                        'scroll_current': 0,
-                        'suggest_total': 0,
-                        'suggest_time_in_millis': 0,
-                        'suggest_current': 0
-                    },
-                    'merges': {
-                        'current': 0,
-                        'current_docs': 0,
-                        'current_size_in_bytes': 0,
-                        'total': 0,
-                        'total_time_in_millis': 0,
-                        'total_docs': 0,
-                        'total_size_in_bytes': 0,
-                        'total_stopped_time_in_millis': 0,
-                        'total_throttled_time_in_millis': 0,
-                        'total_auto_throttle_in_bytes': 104857600
-                    },
-                    'refresh': {
-                        'total': 3,
-                        'total_time_in_millis': 107
-                    },
-                    'flush': {
-                        'total': 0,
-                        'total_time_in_millis': 0
-                    },
-                    'warmer': {
-                        'current': 0,
-                        'total': 8,
-                        'total_time_in_millis': 6
-                    },
-                    'query_cache': {
-                        'memory_size_in_bytes': 0,
-                        'total_count': 0,
-                        'hit_count': 0,
-                        'miss_count': 0,
-                        'cache_size': 0,
-                        'cache_count': 0,
-                        'evictions': 0
-                    },
-                    'fielddata': {
-                        'memory_size_in_bytes': 0,
-                        'evictions': 0,
-                        'fields': {
-                            'group1': {
-                                'memory_size_in_bytes': 1024
-                            },
-                            'group2': {
-                                'memory_size_in_bytes': 2048
-                            }
-                        }
-                    },
-                    'completion': {
-                        'size_in_bytes': 0
-                    },
-                    'segments': {
-                        'count': 3,
-                        'memory_in_bytes': 7908,
-                        'terms_memory_in_bytes': 5976,
-                        'stored_fields_memory_in_bytes': 936,
-                        'term_vectors_memory_in_bytes': 0,
-                        'norms_memory_in_bytes': 576,
-                        'points_memory_in_bytes': 144,
-                        'doc_values_memory_in_bytes': 276,
-                        'index_writer_memory_in_bytes': 0,
-                        'version_map_memory_in_bytes': 0,
-                        'fixed_bit_set_memory_in_bytes': 0,
-                        'max_unsafe_auto_id_timestamp': -1,
-                        'file_sizes': {}
-                    },
-                    'translog': {
-                        'operations': 3,
-                        'size_in_bytes': 491
-                    },
-                    'request_cache': {
-                        'memory_size_in_bytes': 0,
-                        'evictions': 0,
-                        'hit_count': 0,
-                        'miss_count': 0
-                    },
-                    'recovery': {
-                        'current_as_source': 0,
-                        'current_as_target': 0,
-                        'throttle_time_in_millis': 0
-                    }
-                }
-            }
-        }
-    }
-
-    def test_endpoint_cluster(self):
-
-        expected = {
-            'primaries_docs_count{index="_all"}': 3,
-            'primaries_docs_deleted{index="_all"}': 0,
-            'primaries_store_size_in_bytes{index="_all"}': 12690,
-            'primaries_store_throttle_time_in_millis{index="_all"}': 0,
-            'primaries_indexing_index_total{index="_all"}': 3,
-            'primaries_indexing_index_time_in_millis{index="_all"}': 45,
-            'primaries_indexing_index_current{index="_all"}': 0,
-            'primaries_indexing_index_failed{index="_all"}': 0,
-            'primaries_indexing_delete_total{index="_all"}': 0,
-            'primaries_indexing_delete_time_in_millis{index="_all"}': 0,
-            'primaries_indexing_delete_current{index="_all"}': 0,
-            'primaries_indexing_noop_update_total{index="_all"}': 0,
-            'primaries_indexing_is_throttled{index="_all"}': 0,
-            'primaries_indexing_throttle_time_in_millis{index="_all"}': 0,
-            'primaries_get_total{index="_all"}': 0,
-            'primaries_get_time_in_millis{index="_all"}': 0,
-            'primaries_get_exists_total{index="_all"}': 0,
-            'primaries_get_exists_time_in_millis{index="_all"}': 0,
-            'primaries_get_missing_total{index="_all"}': 0,
-            'primaries_get_missing_time_in_millis{index="_all"}': 0,
-            'primaries_get_current{index="_all"}': 0,
-            'primaries_search_open_contexts{index="_all"}': 0,
-            'primaries_search_query_total{index="_all"}': 0,
-            'primaries_search_query_time_in_millis{index="_all"}': 0,
-            'primaries_search_query_current{index="_all"}': 0,
-            'primaries_search_fetch_total{index="_all"}': 0,
-            'primaries_search_fetch_time_in_millis{index="_all"}': 0,
-            'primaries_search_fetch_current{index="_all"}': 0,
-            'primaries_search_scroll_total{index="_all"}': 0,
-            'primaries_search_scroll_time_in_millis{index="_all"}': 0,
-            'primaries_search_scroll_current{index="_all"}': 0,
-            'primaries_search_suggest_total{index="_all"}': 0,
-            'primaries_search_suggest_time_in_millis{index="_all"}': 0,
-            'primaries_search_suggest_current{index="_all"}': 0,
-            'primaries_merges_current{index="_all"}': 0,
-            'primaries_merges_current_docs{index="_all"}': 0,
-            'primaries_merges_current_size_in_bytes{index="_all"}': 0,
-            'primaries_merges_total{index="_all"}': 0,
-            'primaries_merges_total_time_in_millis{index="_all"}': 0,
-            'primaries_merges_total_docs{index="_all"}': 0,
-            'primaries_merges_total_size_in_bytes{index="_all"}': 0,
-            'primaries_merges_total_stopped_time_in_millis{index="_all"}': 0,
-            'primaries_merges_total_throttled_time_in_millis{index="_all"}': 0,
-            'primaries_merges_total_auto_throttle_in_bytes{index="_all"}': 104857600,
-            'primaries_refresh_total{index="_all"}': 3,
-            'primaries_refresh_total_time_in_millis{index="_all"}': 107,
-            'primaries_flush_total{index="_all"}': 0,
-            'primaries_flush_total_time_in_millis{index="_all"}': 0,
-            'primaries_warmer_current{index="_all"}': 0,
-            'primaries_warmer_total{index="_all"}': 8,
-            'primaries_warmer_total_time_in_millis{index="_all"}': 6,
-            'primaries_query_cache_memory_size_in_bytes{index="_all"}': 0,
-            'primaries_query_cache_total_count{index="_all"}': 0,
-            'primaries_query_cache_hit_count{index="_all"}': 0,
-            'primaries_query_cache_miss_count{index="_all"}': 0,
-            'primaries_query_cache_cache_size{index="_all"}': 0,
-            'primaries_query_cache_cache_count{index="_all"}': 0,
-            'primaries_query_cache_evictions{index="_all"}': 0,
-            'primaries_fielddata_memory_size_in_bytes{index="_all"}': 0,
-            'primaries_fielddata_evictions{index="_all"}': 0,
-            'primaries_fielddata_fields_memory_size_in_bytes{index="_all",field="group1"}': 1024,
-            'primaries_fielddata_fields_memory_size_in_bytes{index="_all",field="group2"}': 2048,
-            'primaries_completion_size_in_bytes{index="_all"}': 0,
-            'primaries_segments_count{index="_all"}': 3,
-            'primaries_segments_memory_in_bytes{index="_all"}': 7908,
-            'primaries_segments_terms_memory_in_bytes{index="_all"}': 5976,
-            'primaries_segments_stored_fields_memory_in_bytes{index="_all"}': 936,
-            'primaries_segments_term_vectors_memory_in_bytes{index="_all"}': 0,
-            'primaries_segments_norms_memory_in_bytes{index="_all"}': 576,
-            'primaries_segments_points_memory_in_bytes{index="_all"}': 144,
-            'primaries_segments_doc_values_memory_in_bytes{index="_all"}': 276,
-            'primaries_segments_index_writer_memory_in_bytes{index="_all"}': 0,
-            'primaries_segments_version_map_memory_in_bytes{index="_all"}': 0,
-            'primaries_segments_fixed_bit_set_memory_in_bytes{index="_all"}': 0,
-            'primaries_segments_max_unsafe_auto_id_timestamp{index="_all"}': -1,
-            'primaries_translog_operations{index="_all"}': 3,
-            'primaries_translog_size_in_bytes{index="_all"}': 491,
-            'primaries_request_cache_memory_size_in_bytes{index="_all"}': 0,
-            'primaries_request_cache_evictions{index="_all"}': 0,
-            'primaries_request_cache_hit_count{index="_all"}': 0,
-            'primaries_request_cache_miss_count{index="_all"}': 0,
-            'primaries_recovery_current_as_source{index="_all"}': 0,
-            'primaries_recovery_current_as_target{index="_all"}': 0,
-            'primaries_recovery_throttle_time_in_millis{index="_all"}': 0,
-            'total_docs_count{index="_all"}': 3,
-            'total_docs_deleted{index="_all"}': 0,
-            'total_store_size_in_bytes{index="_all"}': 12690,
-            'total_store_throttle_time_in_millis{index="_all"}': 0,
-            'total_indexing_index_total{index="_all"}': 3,
-            'total_indexing_index_time_in_millis{index="_all"}': 45,
-            'total_indexing_index_current{index="_all"}': 0,
-            'total_indexing_index_failed{index="_all"}': 0,
-            'total_indexing_delete_total{index="_all"}': 0,
-            'total_indexing_delete_time_in_millis{index="_all"}': 0,
-            'total_indexing_delete_current{index="_all"}': 0,
-            'total_indexing_noop_update_total{index="_all"}': 0,
-            'total_indexing_is_throttled{index="_all"}': 0,
-            'total_indexing_throttle_time_in_millis{index="_all"}': 0,
-            'total_get_total{index="_all"}': 0,
-            'total_get_time_in_millis{index="_all"}': 0,
-            'total_get_exists_total{index="_all"}': 0,
-            'total_get_exists_time_in_millis{index="_all"}': 0,
-            'total_get_missing_total{index="_all"}': 0,
-            'total_get_missing_time_in_millis{index="_all"}': 0,
-            'total_get_current{index="_all"}': 0,
-            'total_search_open_contexts{index="_all"}': 0,
-            'total_search_query_total{index="_all"}': 0,
-            'total_search_query_time_in_millis{index="_all"}': 0,
-            'total_search_query_current{index="_all"}': 0,
-            'total_search_fetch_total{index="_all"}': 0,
-            'total_search_fetch_time_in_millis{index="_all"}': 0,
-            'total_search_fetch_current{index="_all"}': 0,
-            'total_search_scroll_total{index="_all"}': 0,
-            'total_search_scroll_time_in_millis{index="_all"}': 0,
-            'total_search_scroll_current{index="_all"}': 0,
-            'total_search_suggest_total{index="_all"}': 0,
-            'total_search_suggest_time_in_millis{index="_all"}': 0,
-            'total_search_suggest_current{index="_all"}': 0,
-            'total_merges_current{index="_all"}': 0,
-            'total_merges_current_docs{index="_all"}': 0,
-            'total_merges_current_size_in_bytes{index="_all"}': 0,
-            'total_merges_total{index="_all"}': 0,
-            'total_merges_total_time_in_millis{index="_all"}': 0,
-            'total_merges_total_docs{index="_all"}': 0,
-            'total_merges_total_size_in_bytes{index="_all"}': 0,
-            'total_merges_total_stopped_time_in_millis{index="_all"}': 0,
-            'total_merges_total_throttled_time_in_millis{index="_all"}': 0,
-            'total_merges_total_auto_throttle_in_bytes{index="_all"}': 104857600,
-            'total_refresh_total{index="_all"}': 3,
-            'total_refresh_total_time_in_millis{index="_all"}': 107,
-            'total_flush_total{index="_all"}': 0,
-            'total_flush_total_time_in_millis{index="_all"}': 0,
-            'total_warmer_current{index="_all"}': 0,
-            'total_warmer_total{index="_all"}': 8,
-            'total_warmer_total_time_in_millis{index="_all"}': 6,
-            'total_query_cache_memory_size_in_bytes{index="_all"}': 0,
-            'total_query_cache_total_count{index="_all"}': 0,
-            'total_query_cache_hit_count{index="_all"}': 0,
-            'total_query_cache_miss_count{index="_all"}': 0,
-            'total_query_cache_cache_size{index="_all"}': 0,
-            'total_query_cache_cache_count{index="_all"}': 0,
-            'total_query_cache_evictions{index="_all"}': 0,
-            'total_fielddata_memory_size_in_bytes{index="_all"}': 0,
-            'total_fielddata_evictions{index="_all"}': 0,
-            'total_fielddata_fields_memory_size_in_bytes{index="_all",field="group1"}': 1024,
-            'total_fielddata_fields_memory_size_in_bytes{index="_all",field="group2"}': 2048,
-            'total_completion_size_in_bytes{index="_all"}': 0,
-            'total_segments_count{index="_all"}': 3,
-            'total_segments_memory_in_bytes{index="_all"}': 7908,
-            'total_segments_terms_memory_in_bytes{index="_all"}': 5976,
-            'total_segments_stored_fields_memory_in_bytes{index="_all"}': 936,
-            'total_segments_term_vectors_memory_in_bytes{index="_all"}': 0,
-            'total_segments_norms_memory_in_bytes{index="_all"}': 576,
-            'total_segments_points_memory_in_bytes{index="_all"}': 144,
-            'total_segments_doc_values_memory_in_bytes{index="_all"}': 276,
-            'total_segments_index_writer_memory_in_bytes{index="_all"}': 0,
-            'total_segments_version_map_memory_in_bytes{index="_all"}': 0,
-            'total_segments_fixed_bit_set_memory_in_bytes{index="_all"}': 0,
-            'total_segments_max_unsafe_auto_id_timestamp{index="_all"}': -1,
-            'total_translog_operations{index="_all"}': 3,
-            'total_translog_size_in_bytes{index="_all"}': 491,
-            'total_request_cache_memory_size_in_bytes{index="_all"}': 0,
-            'total_request_cache_evictions{index="_all"}': 0,
-            'total_request_cache_hit_count{index="_all"}': 0,
-            'total_request_cache_miss_count{index="_all"}': 0,
-            'total_recovery_current_as_source{index="_all"}': 0,
-            'total_recovery_current_as_target{index="_all"}': 0,
-            'total_recovery_throttle_time_in_millis{index="_all"}': 0,
-        }
-        result = convert_result(parse_response(self.response, parse_indices=False))
-        self.assertEqual(expected, result)
-
-    def test_endpoint_indices(self):
-
-        expected = {
-            'primaries_docs_count{index="foo"}': 3,
-            'primaries_docs_deleted{index="foo"}': 0,
-            'primaries_store_size_in_bytes{index="foo"}': 12690,
-            'primaries_store_throttle_time_in_millis{index="foo"}': 0,
-            'primaries_indexing_index_total{index="foo"}': 3,
-            'primaries_indexing_index_time_in_millis{index="foo"}': 45,
-            'primaries_indexing_index_current{index="foo"}': 0,
-            'primaries_indexing_index_failed{index="foo"}': 0,
-            'primaries_indexing_delete_total{index="foo"}': 0,
-            'primaries_indexing_delete_time_in_millis{index="foo"}': 0,
-            'primaries_indexing_delete_current{index="foo"}': 0,
-            'primaries_indexing_noop_update_total{index="foo"}': 0,
-            'primaries_indexing_is_throttled{index="foo"}': 0,
-            'primaries_indexing_throttle_time_in_millis{index="foo"}': 0,
-            'primaries_get_total{index="foo"}': 0,
-            'primaries_get_time_in_millis{index="foo"}': 0,
-            'primaries_get_exists_total{index="foo"}': 0,
-            'primaries_get_exists_time_in_millis{index="foo"}': 0,
-            'primaries_get_missing_total{index="foo"}': 0,
-            'primaries_get_missing_time_in_millis{index="foo"}': 0,
-            'primaries_get_current{index="foo"}': 0,
-            'primaries_search_open_contexts{index="foo"}': 0,
-            'primaries_search_query_total{index="foo"}': 0,
-            'primaries_search_query_time_in_millis{index="foo"}': 0,
-            'primaries_search_query_current{index="foo"}': 0,
-            'primaries_search_fetch_total{index="foo"}': 0,
-            'primaries_search_fetch_time_in_millis{index="foo"}': 0,
-            'primaries_search_fetch_current{index="foo"}': 0,
-            'primaries_search_scroll_total{index="foo"}': 0,
-            'primaries_search_scroll_time_in_millis{index="foo"}': 0,
-            'primaries_search_scroll_current{index="foo"}': 0,
-            'primaries_search_suggest_total{index="foo"}': 0,
-            'primaries_search_suggest_time_in_millis{index="foo"}': 0,
-            'primaries_search_suggest_current{index="foo"}': 0,
-            'primaries_merges_current{index="foo"}': 0,
-            'primaries_merges_current_docs{index="foo"}': 0,
-            'primaries_merges_current_size_in_bytes{index="foo"}': 0,
-            'primaries_merges_total{index="foo"}': 0,
-            'primaries_merges_total_time_in_millis{index="foo"}': 0,
-            'primaries_merges_total_docs{index="foo"}': 0,
-            'primaries_merges_total_size_in_bytes{index="foo"}': 0,
-            'primaries_merges_total_stopped_time_in_millis{index="foo"}': 0,
-            'primaries_merges_total_throttled_time_in_millis{index="foo"}': 0,
-            'primaries_merges_total_auto_throttle_in_bytes{index="foo"}': 104857600,
-            'primaries_refresh_total{index="foo"}': 3,
-            'primaries_refresh_total_time_in_millis{index="foo"}': 107,
-            'primaries_flush_total{index="foo"}': 0,
-            'primaries_flush_total_time_in_millis{index="foo"}': 0,
-            'primaries_warmer_current{index="foo"}': 0,
-            'primaries_warmer_total{index="foo"}': 8,
-            'primaries_warmer_total_time_in_millis{index="foo"}': 6,
-            'primaries_query_cache_memory_size_in_bytes{index="foo"}': 0,
-            'primaries_query_cache_total_count{index="foo"}': 0,
-            'primaries_query_cache_hit_count{index="foo"}': 0,
-            'primaries_query_cache_miss_count{index="foo"}': 0,
-            'primaries_query_cache_cache_size{index="foo"}': 0,
-            'primaries_query_cache_cache_count{index="foo"}': 0,
-            'primaries_query_cache_evictions{index="foo"}': 0,
-            'primaries_fielddata_memory_size_in_bytes{index="foo"}': 0,
-            'primaries_fielddata_evictions{index="foo"}': 0,
-            'primaries_fielddata_fields_memory_size_in_bytes{index="foo",field="group1"}': 1024,
-            'primaries_fielddata_fields_memory_size_in_bytes{index="foo",field="group2"}': 2048,
-            'primaries_completion_size_in_bytes{index="foo"}': 0,
-            'primaries_segments_count{index="foo"}': 3,
-            'primaries_segments_memory_in_bytes{index="foo"}': 7908,
-            'primaries_segments_terms_memory_in_bytes{index="foo"}': 5976,
-            'primaries_segments_stored_fields_memory_in_bytes{index="foo"}': 936,
-            'primaries_segments_term_vectors_memory_in_bytes{index="foo"}': 0,
-            'primaries_segments_norms_memory_in_bytes{index="foo"}': 576,
-            'primaries_segments_points_memory_in_bytes{index="foo"}': 144,
-            'primaries_segments_doc_values_memory_in_bytes{index="foo"}': 276,
-            'primaries_segments_index_writer_memory_in_bytes{index="foo"}': 0,
-            'primaries_segments_version_map_memory_in_bytes{index="foo"}': 0,
-            'primaries_segments_fixed_bit_set_memory_in_bytes{index="foo"}': 0,
-            'primaries_segments_max_unsafe_auto_id_timestamp{index="foo"}': -1,
-            'primaries_translog_operations{index="foo"}': 3,
-            'primaries_translog_size_in_bytes{index="foo"}': 491,
-            'primaries_request_cache_memory_size_in_bytes{index="foo"}': 0,
-            'primaries_request_cache_evictions{index="foo"}': 0,
-            'primaries_request_cache_hit_count{index="foo"}': 0,
-            'primaries_request_cache_miss_count{index="foo"}': 0,
-            'primaries_recovery_current_as_source{index="foo"}': 0,
-            'primaries_recovery_current_as_target{index="foo"}': 0,
-            'primaries_recovery_throttle_time_in_millis{index="foo"}': 0,
-            'total_docs_count{index="foo"}': 3,
-            'total_docs_deleted{index="foo"}': 0,
-            'total_store_size_in_bytes{index="foo"}': 12690,
-            'total_store_throttle_time_in_millis{index="foo"}': 0,
-            'total_indexing_index_total{index="foo"}': 3,
-            'total_indexing_index_time_in_millis{index="foo"}': 45,
-            'total_indexing_index_current{index="foo"}': 0,
-            'total_indexing_index_failed{index="foo"}': 0,
-            'total_indexing_delete_total{index="foo"}': 0,
-            'total_indexing_delete_time_in_millis{index="foo"}': 0,
-            'total_indexing_delete_current{index="foo"}': 0,
-            'total_indexing_noop_update_total{index="foo"}': 0,
-            'total_indexing_is_throttled{index="foo"}': 0,
-            'total_indexing_throttle_time_in_millis{index="foo"}': 0,
-            'total_get_total{index="foo"}': 0,
-            'total_get_time_in_millis{index="foo"}': 0,
-            'total_get_exists_total{index="foo"}': 0,
-            'total_get_exists_time_in_millis{index="foo"}': 0,
-            'total_get_missing_total{index="foo"}': 0,
-            'total_get_missing_time_in_millis{index="foo"}': 0,
-            'total_get_current{index="foo"}': 0,
-            'total_search_open_contexts{index="foo"}': 0,
-            'total_search_query_total{index="foo"}': 0,
-            'total_search_query_time_in_millis{index="foo"}': 0,
-            'total_search_query_current{index="foo"}': 0,
-            'total_search_fetch_total{index="foo"}': 0,
-            'total_search_fetch_time_in_millis{index="foo"}': 0,
-            'total_search_fetch_current{index="foo"}': 0,
-            'total_search_scroll_total{index="foo"}': 0,
-            'total_search_scroll_time_in_millis{index="foo"}': 0,
-            'total_search_scroll_current{index="foo"}': 0,
-            'total_search_suggest_total{index="foo"}': 0,
-            'total_search_suggest_time_in_millis{index="foo"}': 0,
-            'total_search_suggest_current{index="foo"}': 0,
-            'total_merges_current{index="foo"}': 0,
-            'total_merges_current_docs{index="foo"}': 0,
-            'total_merges_current_size_in_bytes{index="foo"}': 0,
-            'total_merges_total{index="foo"}': 0,
-            'total_merges_total_time_in_millis{index="foo"}': 0,
-            'total_merges_total_docs{index="foo"}': 0,
-            'total_merges_total_size_in_bytes{index="foo"}': 0,
-            'total_merges_total_stopped_time_in_millis{index="foo"}': 0,
-            'total_merges_total_throttled_time_in_millis{index="foo"}': 0,
-            'total_merges_total_auto_throttle_in_bytes{index="foo"}': 104857600,
-            'total_refresh_total{index="foo"}': 3,
-            'total_refresh_total_time_in_millis{index="foo"}': 107,
-            'total_flush_total{index="foo"}': 0,
-            'total_flush_total_time_in_millis{index="foo"}': 0,
-            'total_warmer_current{index="foo"}': 0,
-            'total_warmer_total{index="foo"}': 8,
-            'total_warmer_total_time_in_millis{index="foo"}': 6,
-            'total_query_cache_memory_size_in_bytes{index="foo"}': 0,
-            'total_query_cache_total_count{index="foo"}': 0,
-            'total_query_cache_hit_count{index="foo"}': 0,
-            'total_query_cache_miss_count{index="foo"}': 0,
-            'total_query_cache_cache_size{index="foo"}': 0,
-            'total_query_cache_cache_count{index="foo"}': 0,
-            'total_query_cache_evictions{index="foo"}': 0,
-            'total_fielddata_memory_size_in_bytes{index="foo"}': 0,
-            'total_fielddata_evictions{index="foo"}': 0,
-            'total_fielddata_fields_memory_size_in_bytes{index="foo",field="group1"}': 1024,
-            'total_fielddata_fields_memory_size_in_bytes{index="foo",field="group2"}': 2048,
-            'total_completion_size_in_bytes{index="foo"}': 0,
-            'total_segments_count{index="foo"}': 3,
-            'total_segments_memory_in_bytes{index="foo"}': 7908,
-            'total_segments_terms_memory_in_bytes{index="foo"}': 5976,
-            'total_segments_stored_fields_memory_in_bytes{index="foo"}': 936,
-            'total_segments_term_vectors_memory_in_bytes{index="foo"}': 0,
-            'total_segments_norms_memory_in_bytes{index="foo"}': 576,
-            'total_segments_points_memory_in_bytes{index="foo"}': 144,
-            'total_segments_doc_values_memory_in_bytes{index="foo"}': 276,
-            'total_segments_index_writer_memory_in_bytes{index="foo"}': 0,
-            'total_segments_version_map_memory_in_bytes{index="foo"}': 0,
-            'total_segments_fixed_bit_set_memory_in_bytes{index="foo"}': 0,
-            'total_segments_max_unsafe_auto_id_timestamp{index="foo"}': -1,
-            'total_translog_operations{index="foo"}': 3,
-            'total_translog_size_in_bytes{index="foo"}': 491,
-            'total_request_cache_memory_size_in_bytes{index="foo"}': 0,
-            'total_request_cache_evictions{index="foo"}': 0,
-            'total_request_cache_hit_count{index="foo"}': 0,
-            'total_request_cache_miss_count{index="foo"}': 0,
-            'total_recovery_current_as_source{index="foo"}': 0,
-            'total_recovery_current_as_target{index="foo"}': 0,
-            'total_recovery_throttle_time_in_millis{index="foo"}': 0,
-        }
-        result = convert_result(parse_response(self.response, parse_indices=True))
-        self.assertEqual(expected, result)
-
-
-if __name__ == '__main__':
-    unittest.main()
diff --git a/tests/test_nodes_stats_parser.py b/tests/test_nodes_stats_parser.py
deleted file mode 100644
index 3432b7c..0000000
--- a/tests/test_nodes_stats_parser.py
+++ /dev/null
@@ -1,748 +0,0 @@
-import unittest
-
-from prometheus_es_exporter.nodes_stats_parser import parse_response
-from tests.utils import convert_result
-
-
-# Sample responses generated by querying the endpoint on an Elasticsearch
-# server populated with the following data (http command = HTTPie utility):
-# > http -v POST localhost:9200/foo/bar/1 val:=1 group1=a group2=a
-# > http -v POST localhost:9200/foo/bar/2 val:=2 group1=a group2=b
-# > http -v POST localhost:9200/foo/bar/3 val:=3 group1=b group2=b
-# Some details are instance-specific, so mileage may vary!
-class Test(unittest.TestCase):
-    maxDiff = None
-
-    def test_endpoint(self):
-        # Endpoint: /_nodes/stats?pretty
-        response = {
-            '_nodes': {
-                'total': 1,
-                'successful': 1,
-                'failed': 0
-            },
-            'cluster_name': 'elasticsearch',
-            'nodes': {
-                'bRcKq5zUTAuwNf4qvnXzIQ': {
-                    'timestamp': 1484861642281,
-                    'name': 'bRcKq5z',
-                    'transport_address': '127.0.0.1:9300',
-                    'host': '127.0.0.1',
-                    'ip': '127.0.0.1:9300',
-                    'roles': [
-                        'master',
-                        'data',
-                        'ingest'
-                    ],
-                    'indices': {
-                        'docs': {
-                            'count': 3,
-                            'deleted': 0
-                        },
-                        'store': {
-                            'size_in_bytes': 12972,
-                            'throttle_time_in_millis': 0
-                        },
-                        'indexing': {
-                            'index_total': 3,
-                            'index_time_in_millis': 95,
-                            'index_current': 0,
-                            'index_failed': 0,
-                            'delete_total': 0,
-                            'delete_time_in_millis': 0,
-                            'delete_current': 0,
-                            'noop_update_total': 0,
-                            'is_throttled': False,
-                            'throttle_time_in_millis': 0
-                        },
-                        'get': {
-                            'total': 0,
-                            'time_in_millis': 0,
-                            'exists_total': 0,
-                            'exists_time_in_millis': 0,
-                            'missing_total': 0,
-                            'missing_time_in_millis': 0,
-                            'current': 0
-                        },
-                        'search': {
-                            'open_contexts': 0,
-                            'query_total': 0,
-                            'query_time_in_millis': 0,
-                            'query_current': 0,
-                            'fetch_total': 0,
-                            'fetch_time_in_millis': 0,
-                            'fetch_current': 0,
-                            'scroll_total': 0,
-                            'scroll_time_in_millis': 0,
-                            'scroll_current': 0,
-                            'suggest_total': 0,
-                            'suggest_time_in_millis': 0,
-                            'suggest_current': 0
-                        },
-                        'merges': {
-                            'current': 0,
-                            'current_docs': 0,
-                            'current_size_in_bytes': 0,
-                            'total': 0,
-                            'total_time_in_millis': 0,
-                            'total_docs': 0,
-                            'total_size_in_bytes': 0,
-                            'total_stopped_time_in_millis': 0,
-                            'total_throttled_time_in_millis': 0,
-                            'total_auto_throttle_in_bytes': 104857600
-                        },
-                        'refresh': {
-                            'total': 6,
-                            'total_time_in_millis': 304
-                        },
-                        'flush': {
-                            'total': 3,
-                            'total_time_in_millis': 72
-                        },
-                        'warmer': {
-                            'current': 0,
-                            'total': 14,
-                            'total_time_in_millis': 19
-                        },
-                        'query_cache': {
-                            'memory_size_in_bytes': 0,
-                            'total_count': 0,
-                            'hit_count': 0,
-                            'miss_count': 0,
-                            'cache_size': 0,
-                            'cache_count': 0,
-                            'evictions': 0
-                        },
-                        'fielddata': {
-                            'memory_size_in_bytes': 0,
-                            'evictions': 0
-                        },
-                        'completion': {
-                            'size_in_bytes': 0
-                        },
-                        'segments': {
-                            'count': 3,
-                            'memory_in_bytes': 7908,
-                            'terms_memory_in_bytes': 5976,
-                            'stored_fields_memory_in_bytes': 936,
-                            'term_vectors_memory_in_bytes': 0,
-                            'norms_memory_in_bytes': 576,
-                            'points_memory_in_bytes': 144,
-                            'doc_values_memory_in_bytes': 276,
-                            'index_writer_memory_in_bytes': 0,
-                            'version_map_memory_in_bytes': 0,
-                            'fixed_bit_set_memory_in_bytes': 0,
-                            'max_unsafe_auto_id_timestamp': -1,
-                            'file_sizes': {}
-                        },
-                        'translog': {
-                            'operations': 0,
-                            'size_in_bytes': 215
-                        },
-                        'request_cache': {
-                            'memory_size_in_bytes': 0,
-                            'evictions': 0,
-                            'hit_count': 0,
-                            'miss_count': 0
-                        },
-                        'recovery': {
-                            'current_as_source': 0,
-                            'current_as_target': 0,
-                            'throttle_time_in_millis': 0
-                        }
-                    },
-                    'os': {
-                        'timestamp': 1484861642359,
-                        'cpu': {
-                            'percent': 53,
-                            'load_average': {
-                                '1m': 2.53,
-                                '5m': 2.3,
-                                '15m': 2.23
-                            }
-                        },
-                        'mem': {
-                            'total_in_bytes': 16703762432,
-                            'free_in_bytes': 164323328,
-                            'used_in_bytes': 16539439104,
-                            'free_percent': 1,
-                            'used_percent': 99
-                        },
-                        'swap': {
-                            'total_in_bytes': 17054035968,
-                            'free_in_bytes': 12281872384,
-                            'used_in_bytes': 4772163584
-                        }
-                    },
-                    'process': {
-                        'timestamp': 1484861642360,
-                        'open_file_descriptors': 180,
-                        'max_file_descriptors': 1048576,
-                        'cpu': {
-                            'percent': 0,
-                            'total_in_millis': 28270
-                        },
-                        'mem': {
-                            'total_virtual_in_bytes': 5947977728
-                        }
-                    },
-                    'jvm': {
-                        'timestamp': 1484861642361,
-                        'uptime_in_millis': 614767,
-                        'mem': {
-                            'heap_used_in_bytes': 233688144,
-                            'heap_used_percent': 11,
-                            'heap_committed_in_bytes': 2112618496,
-                            'heap_max_in_bytes': 2112618496,
-                            'non_heap_used_in_bytes': 67167936,
-                            'non_heap_committed_in_bytes': 71741440,
-                            'pools': {
-                                'young': {
-                                    'used_in_bytes': 189809608,
-                                    'max_in_bytes': 279183360,
-                                    'peak_used_in_bytes': 279183360,
-                                    'peak_max_in_bytes': 279183360
-                                },
-                                'survivor': {
-                                    'used_in_bytes': 34865136,
-                                    'max_in_bytes': 34865152,
-                                    'peak_used_in_bytes': 34865136,
-                                    'peak_max_in_bytes': 34865152
-                                },
-                                'old': {
-                                    'used_in_bytes': 9013400,
-                                    'max_in_bytes': 1798569984,
-                                    'peak_used_in_bytes': 9013400,
-                                    'peak_max_in_bytes': 1798569984
-                                }
-                            }
-                        },
-                        'threads': {
-                            'count': 40,
-                            'peak_count': 46
-                        },
-                        'gc': {
-                            'collectors': {
-                                'young': {
-                                    'collection_count': 2,
-                                    'collection_time_in_millis': 189
-                                },
-                                'old': {
-                                    'collection_count': 1,
-                                    'collection_time_in_millis': 143
-                                }
-                            }
-                        },
-                        'buffer_pools': {
-                            'direct': {
-                                'count': 29,
-                                'used_in_bytes': 87069546,
-                                'total_capacity_in_bytes': 87069545
-                            },
-                            'mapped': {
-                                'count': 3,
-                                'used_in_bytes': 9658,
-                                'total_capacity_in_bytes': 9658
-                            }
-                        },
-                        'classes': {
-                            'current_loaded_count': 10236,
-                            'total_loaded_count': 10236,
-                            'total_unloaded_count': 0
-                        }
-                    },
-                    'thread_pool': {
-                        'bulk': {
-                            'threads': 0,
-                            'queue': 0,
-                            'active': 0,
-                            'rejected': 0,
-                            'largest': 0,
-                            'completed': 0
-                        },
-                        'fetch_shard_started': {
-                            'threads': 0,
-                            'queue': 0,
-                            'active': 0,
-                            'rejected': 0,
-                            'largest': 0,
-                            'completed': 0
-                        },
-                        'fetch_shard_store': {
-                            'threads': 0,
-                            'queue': 0,
-                            'active': 0,
-                            'rejected': 0,
-                            'largest': 0,
-                            'completed': 0
-                        },
-                        'flush': {
-                            'threads': 2,
-                            'queue': 0,
-                            'active': 0,
-                            'rejected': 0,
-                            'largest': 2,
-                            'completed': 6
-                        },
-                        'force_merge': {
-                            'threads': 0,
-                            'queue': 0,
-                            'active': 0,
-                            'rejected': 0,
-                            'largest': 0,
-                            'completed': 0
-                        },
-                        'generic': {
-                            'threads': 4,
-                            'queue': 0,
-                            'active': 0,
-                            'rejected': 0,
-                            'largest': 4,
-                            'completed': 73
-                        },
-                        'get': {
-                            'threads': 0,
-                            'queue': 0,
-                            'active': 0,
-                            'rejected': 0,
-                            'largest': 0,
-                            'completed': 0
-                        },
-                        'index': {
-                            'threads': 3,
-                            'queue': 0,
-                            'active': 0,
-                            'rejected': 0,
-                            'largest': 3,
-                            'completed': 3
-                        },
-                        'listener': {
-                            'threads': 0,
-                            'queue': 0,
-                            'active': 0,
-                            'rejected': 0,
-                            'largest': 0,
-                            'completed': 0
-                        },
-                        'management': {
-                            'threads': 3,
-                            'queue': 0,
-                            'active': 1,
-                            'rejected': 0,
-                            'largest': 3,
-                            'completed': 77
-                        },
-                        'refresh': {
-                            'threads': 1,
-                            'queue': 0,
-                            'active': 0,
-                            'rejected': 0,
-                            'largest': 1,
-                            'completed': 588
-                        },
-                        'search': {
-                            'threads': 0,
-                            'queue': 0,
-                            'active': 0,
-                            'rejected': 0,
-                            'largest': 0,
-                            'completed': 0
-                        },
-                        'snapshot': {
-                            'threads': 0,
-                            'queue': 0,
-                            'active': 0,
-                            'rejected': 0,
-                            'largest': 0,
-                            'completed': 0
-                        },
-                        'warmer': {
-                            'threads': 1,
-                            'queue': 0,
-                            'active': 0,
-                            'rejected': 0,
-                            'largest': 1,
-                            'completed': 9
-                        }
-                    },
-                    'fs': {
-                        'timestamp': 1484861642369,
-                        'total': {
-                            'total_in_bytes': 233134567424,
-                            'free_in_bytes': 92206276608,
-                            'available_in_bytes': 80292356096,
-                            'spins': 'true'
-                        },
-                        'data': [
-                            {
-                                'path': '/usr/share/elasticsearch/data/nodes/0',
-                                'mount': '/usr/share/elasticsearch/data (/dev/mapper/ubuntu--vg-root)',
-                                'type': 'ext4',
-                                'total_in_bytes': 233134567424,
-                                'free_in_bytes': 92206276608,
-                                'available_in_bytes': 80292356096,
-                                'spins': 'true'
-                            }
-                        ],
-                        'io_stats': {
-                            'devices': [
-                                {
-                                    'device_name': 'dm-0',
-                                    'operations': 22045,
-                                    'read_operations': 14349,
-                                    'write_operations': 7696,
-                                    'read_kilobytes': 294732,
-                                    'write_kilobytes': 113424
-                                }
-                            ],
-                            'total': {
-                                'operations': 22045,
-                                'read_operations': 14349,
-                                'write_operations': 7696,
-                                'read_kilobytes': 294732,
-                                'write_kilobytes': 113424
-                            }
-                        }
-                    },
-                    'transport': {
-                        'server_open': 0,
-                        'rx_count': 8,
-                        'rx_size_in_bytes': 3607,
-                        'tx_count': 8,
-                        'tx_size_in_bytes': 3607
-                    },
-                    'http': {
-                        'current_open': 1,
-                        'total_opened': 4
-                    },
-                    'breakers': {
-                        'request': {
-                            'limit_size_in_bytes': 1267571097,
-                            'limit_size': '1.1gb',
-                            'estimated_size_in_bytes': 0,
-                            'estimated_size': '0b',
-                            'overhead': 1.0,
-                            'tripped': 0
-                        },
-                        'fielddata': {
-                            'limit_size_in_bytes': 1267571097,
-                            'limit_size': '1.1gb',
-                            'estimated_size_in_bytes': 0,
-                            'estimated_size': '0b',
-                            'overhead': 1.03,
-                            'tripped': 0
-                        },
-                        'in_flight_requests': {
-                            'limit_size_in_bytes': 2112618496,
-                            'limit_size': '1.9gb',
-                            'estimated_size_in_bytes': 0,
-                            'estimated_size': '0b',
-                            'overhead': 1.0,
-                            'tripped': 0
-                        },
-                        'parent': {
-                            'limit_size_in_bytes': 1478832947,
-                            'limit_size': '1.3gb',
-                            'estimated_size_in_bytes': 0,
-                            'estimated_size': '0b',
-                            'overhead': 1.0,
-                            'tripped': 0
-                        }
-                    },
-                    'script': {
-                        'compilations': 0,
-                        'cache_evictions': 0
-                    },
-                    'discovery': {
-                        'cluster_state_queue': {
-                            'total': 0,
-                            'pending': 0,
-                            'committed': 0
-                        }
-                    },
-                    'ingest': {
-                        'total': {
-                            'count': 0,
-                            'time_in_millis': 0,
-                            'current': 0,
-                            'failed': 0
-                        },
-                        'pipelines': {}
-                    }
-                }
-            }
-        }
-
-        expected = {
-            'indices_docs_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 3,
-            'indices_docs_deleted{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_store_size_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 12972,
-            'indices_store_throttle_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_indexing_index_total{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 3,
-            'indices_indexing_index_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 95,
-            'indices_indexing_index_current{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_indexing_index_failed{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_indexing_delete_total{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_indexing_delete_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_indexing_delete_current{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_indexing_noop_update_total{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_indexing_is_throttled{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_indexing_throttle_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_get_total{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_get_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_get_exists_total{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_get_exists_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_get_missing_total{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_get_missing_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_get_current{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_search_open_contexts{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_search_query_total{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_search_query_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_search_query_current{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_search_fetch_total{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_search_fetch_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_search_fetch_current{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_search_scroll_total{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_search_scroll_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_search_scroll_current{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_search_suggest_total{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_search_suggest_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_search_suggest_current{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_merges_current{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_merges_current_docs{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_merges_current_size_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_merges_total{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_merges_total_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_merges_total_docs{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_merges_total_size_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_merges_total_stopped_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_merges_total_throttled_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_merges_total_auto_throttle_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 104857600,
-            'indices_refresh_total{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 6,
-            'indices_refresh_total_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 304,
-            'indices_flush_total{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 3,
-            'indices_flush_total_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 72,
-            'indices_warmer_total{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 14,
-            'indices_warmer_total_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 19,
-            'indices_warmer_current{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_query_cache_memory_size_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_query_cache_total_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_query_cache_hit_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_query_cache_miss_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_query_cache_cache_size{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_query_cache_cache_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_query_cache_evictions{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_fielddata_memory_size_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_fielddata_evictions{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_completion_size_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_segments_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 3,
-            'indices_segments_memory_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 7908,
-            'indices_segments_terms_memory_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 5976,
-            'indices_segments_stored_fields_memory_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 936,
-            'indices_segments_term_vectors_memory_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_segments_norms_memory_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 576,
-            'indices_segments_points_memory_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 144,
-            'indices_segments_doc_values_memory_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 276,
-            'indices_segments_index_writer_memory_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_segments_version_map_memory_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_segments_fixed_bit_set_memory_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_segments_max_unsafe_auto_id_timestamp{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': -1,
-            'indices_translog_operations{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_translog_size_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 215,
-            'indices_request_cache_memory_size_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_request_cache_evictions{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_request_cache_hit_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_request_cache_miss_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_recovery_current_as_source{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_recovery_current_as_target{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'indices_recovery_throttle_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'os_cpu_percent{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 53,
-            'os_cpu_load_average_1m{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 2.53,
-            'os_cpu_load_average_5m{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 2.3,
-            'os_cpu_load_average_15m{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 2.23,
-            'os_mem_total_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 16703762432,
-            'os_mem_free_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 164323328,
-            'os_mem_used_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 16539439104,
-            'os_mem_free_percent{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 1,
-            'os_mem_used_percent{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 99,
-            'os_swap_free_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 12281872384,
-            'os_swap_total_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 17054035968,
-            'os_swap_used_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 4772163584,
-            'process_open_file_descriptors{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 180,
-            'process_max_file_descriptors{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 1048576,
-            'process_cpu_percent{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'process_cpu_total_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 28270,
-            'process_mem_total_virtual_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 5947977728,
-            'jvm_uptime_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 614767,
-            'jvm_mem_heap_used_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 233688144,
-            'jvm_mem_heap_used_percent{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 11,
-            'jvm_mem_heap_committed_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 2112618496,
-            'jvm_mem_heap_max_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 2112618496,
-            'jvm_mem_non_heap_used_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 67167936,
-            'jvm_mem_non_heap_committed_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 71741440,
-            'jvm_mem_pools_used_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",pool="young"}': 189809608,
-            'jvm_mem_pools_max_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",pool="young"}': 279183360,
-            'jvm_mem_pools_peak_used_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",pool="young"}': 279183360,
-            'jvm_mem_pools_peak_max_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",pool="young"}': 279183360,
-            'jvm_mem_pools_used_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",pool="survivor"}': 34865136,
-            'jvm_mem_pools_max_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",pool="survivor"}': 34865152,
-            'jvm_mem_pools_peak_used_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",pool="survivor"}': 34865136,
-            'jvm_mem_pools_peak_max_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",pool="survivor"}': 34865152,
-            'jvm_mem_pools_used_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",pool="old"}': 9013400,
-            'jvm_mem_pools_max_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",pool="old"}': 1798569984,
-            'jvm_mem_pools_peak_used_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",pool="old"}': 9013400,
-            'jvm_mem_pools_peak_max_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",pool="old"}': 1798569984,
-            'jvm_threads_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 40,
-            'jvm_threads_peak_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 46,
-            'jvm_gc_collectors_collection_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",collector="young"}': 2,
-            'jvm_gc_collectors_collection_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",collector="young"}': 189,
-            'jvm_gc_collectors_collection_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",collector="old"}': 1,
-            'jvm_gc_collectors_collection_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",collector="old"}': 143,
-            'jvm_buffer_pools_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",buffer_pool="direct"}': 29,
-            'jvm_buffer_pools_used_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",buffer_pool="direct"}': 87069546,
-            'jvm_buffer_pools_total_capacity_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",buffer_pool="direct"}': 87069545,
-            'jvm_buffer_pools_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",buffer_pool="mapped"}': 3,
-            'jvm_buffer_pools_used_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",buffer_pool="mapped"}': 9658,
-            'jvm_buffer_pools_total_capacity_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",buffer_pool="mapped"}': 9658,
-            'jvm_classes_current_loaded_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 10236,
-            'jvm_classes_total_loaded_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 10236,
-            'jvm_classes_total_unloaded_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'thread_pool_threads{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="bulk"}': 0,
-            'thread_pool_queue{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="bulk"}': 0,
-            'thread_pool_active{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="bulk"}': 0,
-            'thread_pool_rejected{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="bulk"}': 0,
-            'thread_pool_largest{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="bulk"}': 0,
-            'thread_pool_completed{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="bulk"}': 0,
-            'thread_pool_threads{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="fetch_shard_started"}': 0,
-            'thread_pool_queue{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="fetch_shard_started"}': 0,
-            'thread_pool_active{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="fetch_shard_started"}': 0,
-            'thread_pool_rejected{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="fetch_shard_started"}': 0,
-            'thread_pool_largest{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="fetch_shard_started"}': 0,
-            'thread_pool_completed{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="fetch_shard_started"}': 0,
-            'thread_pool_threads{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="fetch_shard_store"}': 0,
-            'thread_pool_queue{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="fetch_shard_store"}': 0,
-            'thread_pool_active{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="fetch_shard_store"}': 0,
-            'thread_pool_rejected{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="fetch_shard_store"}': 0,
-            'thread_pool_completed{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="fetch_shard_store"}': 0,
-            'thread_pool_largest{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="fetch_shard_store"}': 0,
-            'thread_pool_threads{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="flush"}': 2,
-            'thread_pool_queue{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="flush"}': 0,
-            'thread_pool_active{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="flush"}': 0,
-            'thread_pool_rejected{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="flush"}': 0,
-            'thread_pool_largest{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="flush"}': 2,
-            'thread_pool_completed{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="flush"}': 6,
-            'thread_pool_threads{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="force_merge"}': 0,
-            'thread_pool_queue{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="force_merge"}': 0,
-            'thread_pool_active{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="force_merge"}': 0,
-            'thread_pool_rejected{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="force_merge"}': 0,
-            'thread_pool_largest{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="force_merge"}': 0,
-            'thread_pool_completed{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="force_merge"}': 0,
-            'thread_pool_threads{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="generic"}': 4,
-            'thread_pool_queue{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="generic"}': 0,
-            'thread_pool_active{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="generic"}': 0,
-            'thread_pool_rejected{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="generic"}': 0,
-            'thread_pool_largest{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="generic"}': 4,
-            'thread_pool_completed{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="generic"}': 73,
-            'thread_pool_threads{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="get"}': 0,
-            'thread_pool_queue{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="get"}': 0,
-            'thread_pool_active{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="get"}': 0,
-            'thread_pool_rejected{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="get"}': 0,
-            'thread_pool_largest{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="get"}': 0,
-            'thread_pool_completed{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="get"}': 0,
-            'thread_pool_threads{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="index"}': 3,
-            'thread_pool_queue{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="index"}': 0,
-            'thread_pool_active{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="index"}': 0,
-            'thread_pool_rejected{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="index"}': 0,
-            'thread_pool_largest{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="index"}': 3,
-            'thread_pool_completed{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="index"}': 3,
-            'thread_pool_threads{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="listener"}': 0,
-            'thread_pool_queue{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="listener"}': 0,
-            'thread_pool_active{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="listener"}': 0,
-            'thread_pool_rejected{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="listener"}': 0,
-            'thread_pool_largest{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="listener"}': 0,
-            'thread_pool_completed{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="listener"}': 0,
-            'thread_pool_threads{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="management"}': 3,
-            'thread_pool_queue{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="management"}': 0,
-            'thread_pool_active{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="management"}': 1,
-            'thread_pool_rejected{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="management"}': 0,
-            'thread_pool_largest{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="management"}': 3,
-            'thread_pool_completed{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="management"}': 77,
-            'thread_pool_threads{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="refresh"}': 1,
-            'thread_pool_queue{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="refresh"}': 0,
-            'thread_pool_active{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="refresh"}': 0,
-            'thread_pool_rejected{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="refresh"}': 0,
-            'thread_pool_largest{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="refresh"}': 1,
-            'thread_pool_completed{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="refresh"}': 588,
-            'thread_pool_threads{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="search"}': 0,
-            'thread_pool_queue{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="search"}': 0,
-            'thread_pool_active{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="search"}': 0,
-            'thread_pool_rejected{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="search"}': 0,
-            'thread_pool_largest{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="search"}': 0,
-            'thread_pool_completed{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="search"}': 0,
-            'thread_pool_threads{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="snapshot"}': 0,
-            'thread_pool_queue{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="snapshot"}': 0,
-            'thread_pool_active{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="snapshot"}': 0,
-            'thread_pool_rejected{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="snapshot"}': 0,
-            'thread_pool_largest{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="snapshot"}': 0,
-            'thread_pool_completed{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="snapshot"}': 0,
-            'thread_pool_threads{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="warmer"}': 1,
-            'thread_pool_rejected{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="warmer"}': 0,
-            'thread_pool_active{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="warmer"}': 0,
-            'thread_pool_queue{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="warmer"}': 0,
-            'thread_pool_largest{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="warmer"}': 1,
-            'thread_pool_completed{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",thread_pool="warmer"}': 9,
-            'fs_total_total_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 233134567424,
-            'fs_total_free_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 92206276608,
-            'fs_total_available_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 80292356096,
-            'fs_data_total_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",path="/usr/share/elasticsearch/data/nodes/0"}': 233134567424,
-            'fs_data_free_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",path="/usr/share/elasticsearch/data/nodes/0"}': 92206276608,
-            'fs_data_available_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",path="/usr/share/elasticsearch/data/nodes/0"}': 80292356096,
-            'fs_io_stats_devices_operations{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",device_name="dm-0"}': 22045,
-            'fs_io_stats_devices_read_operations{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",device_name="dm-0"}': 14349,
-            'fs_io_stats_devices_write_operations{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",device_name="dm-0"}': 7696,
-            'fs_io_stats_devices_read_kilobytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",device_name="dm-0"}': 294732,
-            'fs_io_stats_devices_write_kilobytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z",device_name="dm-0"}': 113424,
-            'fs_io_stats_total_operations{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 22045,
-            'fs_io_stats_total_read_operations{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 14349,
-            'fs_io_stats_total_write_operations{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 7696,
-            'fs_io_stats_total_read_kilobytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 294732,
-            'fs_io_stats_total_write_kilobytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 113424,
-            'transport_server_open{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'transport_rx_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 8,
-            'transport_rx_size_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 3607,
-            'transport_tx_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 8,
-            'transport_tx_size_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 3607,
-            'http_current_open{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 1,
-            'http_total_opened{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 4,
-            'breakers_request_limit_size_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 1267571097,
-            'breakers_request_estimated_size_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'breakers_request_overhead{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 1.0,
-            'breakers_request_tripped{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'breakers_fielddata_limit_size_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 1267571097,
-            'breakers_fielddata_estimated_size_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'breakers_fielddata_overhead{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 1.03,
-            'breakers_fielddata_tripped{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'breakers_in_flight_requests_limit_size_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 2112618496,
-            'breakers_in_flight_requests_estimated_size_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'breakers_in_flight_requests_overhead{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 1.0,
-            'breakers_in_flight_requests_tripped{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'breakers_parent_limit_size_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 1478832947,
-            'breakers_parent_estimated_size_in_bytes{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'breakers_parent_overhead{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 1.0,
-            'breakers_parent_tripped{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'script_compilations{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'script_cache_evictions{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'discovery_cluster_state_queue_total{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'discovery_cluster_state_queue_pending{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'discovery_cluster_state_queue_committed{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'ingest_total_count{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'ingest_total_time_in_millis{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'ingest_total_current{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-            'ingest_total_failed{node_id="bRcKq5zUTAuwNf4qvnXzIQ",node_name="bRcKq5z"}': 0,
-        }
-        result = convert_result(parse_response(response))
-        self.assertEqual(expected, result)
-
-
-if __name__ == '__main__':
-    unittest.main()
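
For context: the node-stats tests removed above assert that nested _nodes/stats JSON is flattened into underscore-joined gauge names carrying node_id/node_name labels. A minimal sketch of that flattening idea, where flatten is a hypothetical stand-in rather than the exporter's actual parse_response:

    def flatten(obj, prefix=''):
        # Join nested dict keys with '_' to build flat metric names;
        # only numeric leaves become metric values.
        metrics = {}
        for key, value in obj.items():
            name = prefix + '_' + key if prefix else key
            if isinstance(value, dict):
                metrics.update(flatten(value, name))
            elif isinstance(value, (int, float)) and not isinstance(value, bool):
                metrics[name] = value
        return metrics

    # flatten({'fs': {'total': {'free_in_bytes': 92206276608}}})
    # -> {'fs_total_free_in_bytes': 92206276608}
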
diff --git a/tests/test_parser.py b/tests/test_parser.py
deleted file mode 100644
index ed5de99..0000000
--- a/tests/test_parser.py
+++ /dev/null
@@ -1,778 +0,0 @@
-import unittest
-
-from prometheus_es_exporter.parser import parse_response
-from tests.utils import convert_result
-
-
-# Sample responses generated by running the provided queries on an Elasticsearch
-# server populated with the following data (the http command is the HTTPie utility):
-# > http -v POST localhost:9200/foo/bar/1 val:=1 group1=a group2=a
-# > http -v POST localhost:9200/foo/bar/2 val:=2 group1=a group2=b
-# > http -v POST localhost:9200/foo/bar/3 val:=3 group1=b group2=b
-class Test(unittest.TestCase):
-    maxDiff = None
-
-    def test_query(self):
-        # Query:
-        # {
-        #     "size": 0,
-        #     "query": {
-        #         "match_all": {}
-        #     }
-        # }
-        response = {
-            "_shards": {
-                "failed": 0,
-                "successful": 5,
-                "total": 5
-            },
-            "hits": {
-                "hits": [],
-                "max_score": 0.0,
-                "total": 3
-            },
-            "timed_out": False,
-            "took": 1
-        }
-
-        expected = {
-            'hits': 3,
-            'took_milliseconds': 1
-        }
-        result = convert_result(parse_response(response))
-        self.assertEqual(expected, result)
-
-    # effectively tests other single-value metrics: max, min, sum, cardinality
-    def test_avg(self):
-        # Query:
-        # {
-        #     "size": 0,
-        #     "query": {
-        #         "match_all": {}
-        #     },
-        #     "aggs": {
-        #         "val_avg": {
-        #             "avg": {"field": "val"}
-        #         }
-        #     }
-        # }
-        response = {
-            "_shards": {
-                "failed": 0,
-                "successful": 5,
-                "total": 5
-            },
-            "aggregations": {
-                "val_avg": {
-                    "value": 2.0
-                }
-            },
-            "hits": {
-                "hits": [],
-                "max_score": 0.0,
-                "total": 3
-            },
-            "timed_out": False,
-            "took": 1
-        }
-
-        expected = {
-            'hits': 3,
-            'took_milliseconds': 1,
-            'val_avg_value': 2
-        }
-        result = convert_result(parse_response(response))
-        self.assertEqual(expected, result)
-
-    # effectively tests other multi-value metrics: percentile_ranks
-    def test_percentiles(self):
-        # Query:
-        # {
-        #     "size": 0,
-        #     "query": {
-        #         "match_all": {}
-        #     },
-        #     "aggs": {
-        #         "val_percentiles": {
-        #             "percentiles": {"field": "val"}
-        #         }
-        #     }
-        # }
-        response = {
-            "_shards": {
-                "failed": 0,
-                "successful": 5,
-                "total": 5
-            },
-            "aggregations": {
-                "val_percentiles": {
-                    "values": {
-                        "1.0": 1.02,
-                        "25.0": 1.5,
-                        "5.0": 1.1,
-                        "50.0": 2.0,
-                        "75.0": 2.5,
-                        "95.0": 2.9,
-                        "99.0": 2.98
-                    }
-                }
-            },
-            "hits": {
-                "hits": [],
-                "max_score": 0.0,
-                "total": 3
-            },
-            "timed_out": False,
-            "took": 1
-        }
-
-        expected = {
-            'hits': 3,
-            'took_milliseconds': 1,
-            'val_percentiles_values_1_0': 1.02,
-            'val_percentiles_values_5_0': 1.1,
-            'val_percentiles_values_25_0': 1.5,
-            'val_percentiles_values_50_0': 2.0,
-            'val_percentiles_values_75_0': 2.5,
-            'val_percentiles_values_95_0': 2.9,
-            'val_percentiles_values_99_0': 2.98
-        }
-        result = convert_result(parse_response(response))
-        self.assertEqual(expected, result)
-
-    def test_stats(self):
-        # Query:
-        # {
-        #     "size": 0,
-        #     "query": {
-        #         "match_all": {}
-        #     },
-        #     "aggs": {
-        #         "val_stats": {
-        #             "stats": {"field": "val"}
-        #         }
-        #     }
-        # }
-        response = {
-            "_shards": {
-                "failed": 0,
-                "successful": 5,
-                "total": 5
-            },
-            "aggregations": {
-                "val_stats": {
-                    "avg": 2.0,
-                    "count": 3,
-                    "max": 3.0,
-                    "min": 1.0,
-                    "sum": 6.0
-                }
-            },
-            "hits": {
-                "hits": [],
-                "max_score": 0.0,
-                "total": 3
-            },
-            "timed_out": False,
-            "took": 1
-        }
-
-        expected = {
-            'hits': 3,
-            'took_milliseconds': 1,
-            'val_stats_avg': 2.0,
-            'val_stats_count': 3,
-            'val_stats_max': 3.0,
-            'val_stats_min': 1.0,
-            'val_stats_sum': 6.0
-        }
-        result = convert_result(parse_response(response))
-        self.assertEqual(expected, result)
-
-    def test_extended_stats(self):
-        # Query:
-        # {
-        #     "size": 0,
-        #     "query": {
-        #         "match_all": {}
-        #     },
-        #     "aggs": {
-        #         "val_extended_stats": {
-        #             "extended_stats": {"field": "val"}
-        #         }
-        #     }
-        # }
-        response = {
-            "_shards": {
-                "failed": 0,
-                "successful": 5,
-                "total": 5
-            },
-            "aggregations": {
-                "val_extended_stats": {
-                    "avg": 2.0,
-                    "count": 3,
-                    "max": 3.0,
-                    "min": 1.0,
-                    "std_deviation": 0.816496580927726,
-                    "std_deviation_bounds": {
-                        "lower": 0.36700683814454793,
-                        "upper": 3.632993161855452
-                    },
-                    "sum": 6.0,
-                    "sum_of_squares": 14.0,
-                    "variance": 0.6666666666666666
-                }
-            },
-            "hits": {
-                "hits": [],
-                "max_score": 0.0,
-                "total": 3
-            },
-            "timed_out": False,
-            "took": 1
-        }
-
-        expected = {
-            'hits': 3,
-            'took_milliseconds': 1,
-            'val_extended_stats_avg': 2.0,
-            'val_extended_stats_count': 3,
-            'val_extended_stats_max': 3.0,
-            'val_extended_stats_min': 1.0,
-            'val_extended_stats_sum': 6.0,
-            'val_extended_stats_std_deviation': 0.816496580927726,
-            'val_extended_stats_std_deviation_bounds_lower': 0.36700683814454793,
-            'val_extended_stats_std_deviation_bounds_upper': 3.632993161855452,
-            'val_extended_stats_sum_of_squares': 14.0,
-            'val_extended_stats_variance': 0.6666666666666666
-        }
-        result = convert_result(parse_response(response))
-        self.assertEqual(expected, result)
-
-    def test_filter(self):
-        # Query:
-        # {
-        #     "size": 0,
-        #     "query": {
-        #         "match_all": {}
-        #     },
-        #     "aggs": {
-        #         "group1_filter": {
-        #             "filter": {"term": {"group1": "a"}},
-        #             "aggs": {
-        #                 "val_sum": {
-        #                     "sum": {"field": "val"}
-        #                 }
-        #             }
-        #         }
-        #     }
-        # }
-        response = {
-            "_shards": {
-                "failed": 0,
-                "successful": 5,
-                "total": 5
-            },
-            "aggregations": {
-                "group1_filter": {
-                    "doc_count": 2,
-                    "val_sum": {
-                        "value": 3.0
-                    }
-                }
-            },
-            "hits": {
-                "hits": [],
-                "max_score": 0.0,
-                "total": 3
-            },
-            "timed_out": False,
-            "took": 1
-        }
-
-        expected = {
-            'hits': 3,
-            'took_milliseconds': 1,
-            'group1_filter_doc_count': 2,
-            'group1_filter_val_sum_value': 3.0
-        }
-        result = convert_result(parse_response(response))
-        self.assertEqual(expected, result)
-
-    def test_filters(self):
-        # Query:
-        # {
-        #     "size": 0,
-        #     "query": {
-        #         "match_all": {}
-        #     },
-        #     "aggs": {
-        #         "group_filter": {
-        #             "filters": {
-        #                 "filters": {
-        #                     "group_a": {"term": {"group1": "a"}},
-        #                     "group_b": {"term": {"group1": "b"}}
-        #                 }
-        #             },
-        #             "aggs": {
-        #                 "val_sum": {
-        #                     "sum": {"field": "val"}
-        #                 }
-        #             }
-        #         }
-        #     }
-        # }
-        response = {
-            "_shards": {
-                "failed": 0,
-                "successful": 5,
-                "total": 5
-            },
-            "aggregations": {
-                "group_filter": {
-                    "buckets": {
-                        "group_a": {
-                            "doc_count": 2,
-                            "val_sum": {
-                                "value": 3.0
-                            }
-                        },
-                        "group_b": {
-                            "doc_count": 1,
-                            "val_sum": {
-                                "value": 3.0
-                            }
-                        }
-                    }
-                }
-            },
-            "hits": {
-                "hits": [],
-                "max_score": 0.0,
-                "total": 3
-            },
-            "timed_out": False,
-            "took": 1
-        }
-
-        expected = {
-            'hits': 3,
-            'took_milliseconds': 1,
-            'group_filter_doc_count{group_filter="group_a"}': 2,
-            'group_filter_doc_count{group_filter="group_b"}': 1,
-            'group_filter_val_sum_value{group_filter="group_a"}': 3.0,
-            'group_filter_val_sum_value{group_filter="group_b"}': 3.0
-        }
-        result = convert_result(parse_response(response))
-        self.assertEqual(expected, result)
-
-    def test_filters_anonymous(self):
-        # Query:
-        # {
-        #     "size": 0,
-        #     "query": {
-        #         "match_all": {}
-        #     },
-        #     "aggs": {
-        #         "group_filter": {
-        #             "filters": {
-        #                 "filters": [
-        #                     {"term": {"group1": "a"}},
-        #                     {"term": {"group1": "b"}}
-        #                 ]
-        #             },
-        #             "aggs": {
-        #                 "val_sum": {
-        #                     "sum": {"field": "val"}
-        #                 }
-        #             }
-        #         }
-        #     }
-        # }
-        response = {
-            "_shards": {
-                "failed": 0,
-                "successful": 5,
-                "total": 5
-            },
-            "aggregations": {
-                "group_filter": {
-                    "buckets": [
-                        {
-                            "doc_count": 2,
-                            "val_sum": {
-                                "value": 3.0
-                            }
-                        },
-                        {
-                            "doc_count": 1,
-                            "val_sum": {
-                                "value": 3.0
-                            }
-                        }
-                    ]
-                }
-            },
-            "hits": {
-                "hits": [],
-                "max_score": 0.0,
-                "total": 3
-            },
-            "timed_out": False,
-            "took": 1
-        }
-
-        expected = {
-            'hits': 3,
-            'took_milliseconds': 1,
-            'group_filter_doc_count{group_filter="filter_0"}': 2,
-            'group_filter_doc_count{group_filter="filter_1"}': 1,
-            'group_filter_val_sum_value{group_filter="filter_0"}': 3.0,
-            'group_filter_val_sum_value{group_filter="filter_1"}': 3.0
-        }
-        result = convert_result(parse_response(response))
-        self.assertEqual(expected, result)
-
-    def test_terms(self):
-        # Query:
-        # {
-        #     "size": 0,
-        #     "query": {
-        #         "match_all": {}
-        #     },
-        #     "aggs": {
-        #         "group1_term": {
-        #             "terms": {"field": "group1"},
-        #             "aggs": {
-        #                 "val_sum": {
-        #                     "sum": {"field": "val"}
-        #                 }
-        #             }
-        #         }
-        #     }
-        # }
-        response = {
-            "_shards": {
-                "failed": 0,
-                "successful": 5,
-                "total": 5
-            },
-            "aggregations": {
-                "group1_term": {
-                    "buckets": [
-                        {
-                            "doc_count": 2,
-                            "key": "a",
-                            "val_sum": {
-                                "value": 3.0
-                            }
-                        },
-                        {
-                            "doc_count": 1,
-                            "key": "b",
-                            "val_sum": {
-                                "value": 3.0
-                            }
-                        }
-                    ],
-                    "doc_count_error_upper_bound": 0,
-                    "sum_other_doc_count": 0
-                }
-            },
-            "hits": {
-                "hits": [],
-                "max_score": 0.0,
-                "total": 3
-            },
-            "timed_out": False,
-            "took": 2
-        }
-
-        expected = {
-            'hits': 3,
-            'took_milliseconds': 2,
-            'group1_term_doc_count_error_upper_bound': 0,
-            'group1_term_sum_other_doc_count': 0,
-            'group1_term_doc_count{group1_term="a"}': 2,
-            'group1_term_val_sum_value{group1_term="a"}': 3.0,
-            'group1_term_doc_count{group1_term="b"}': 1,
-            'group1_term_val_sum_value{group1_term="b"}': 3.0
-        }
-        result = convert_result(parse_response(response))
-        self.assertEqual(expected, result)
-
-    def test_terms_numeric(self):
-        # Query:
-        # {
-        #     "size": 0,
-        #     "query": {
-        #         "match_all": {}
-        #     },
-        #     "aggs": {
-        #         "val_terms": {
-        #             "terms": {"field": "val"},
-        #             "aggs": {
-        #                 "val_sum": {
-        #                     "sum": {"field": "val"}
-        #                 }
-        #             }
-        #         }
-        #     }
-        # }
-        response = {
-            "_shards": {
-                "total": 5,
-                "successful": 5,
-                "failed": 0
-            },
-            "aggregations": {
-                "val_terms": {
-                    "doc_count_error_upper_bound": 0,
-                    "sum_other_doc_count": 0,
-                    "buckets": [
-                        {
-                            "key": 1,
-                            "doc_count": 1,
-                            "val_sum": {
-                                "value": 1.0
-                            }
-                        },
-                        {
-                            "key": 2,
-                            "doc_count": 1,
-                            "val_sum": {
-                                "value": 2.0
-                            }
-                        },
-                        {
-                            "key": 3,
-                            "doc_count": 1,
-                            "val_sum": {
-                                "value": 3.0
-                            }
-                        }
-                    ]
-                }
-            },
-            "hits": {
-                "total": 3,
-                "max_score": 0.0,
-                "hits": []
-            },
-            "timed_out": False,
-            "took": 4
-        }
-
-        expected = {
-            'hits': 3,
-            'took_milliseconds': 4,
-            'val_terms_doc_count_error_upper_bound': 0,
-            'val_terms_sum_other_doc_count': 0,
-            'val_terms_doc_count{val_terms="1"}': 1,
-            'val_terms_val_sum_value{val_terms="1"}': 1.0,
-            'val_terms_doc_count{val_terms="2"}': 1,
-            'val_terms_val_sum_value{val_terms="2"}': 2.0,
-            'val_terms_doc_count{val_terms="3"}': 1,
-            'val_terms_val_sum_value{val_terms="3"}': 3.0
-        }
-        result = convert_result(parse_response(response))
-        self.assertEqual(expected, result)
-
-    def test_nested_terms(self):
-        # Query:
-        # {
-        #     "size": 0,
-        #     "query": {
-        #         "match_all": {}
-        #     },
-        #     "aggs": {
-        #         "group1_term": {
-        #             "terms": {"field": "group1"},
-        #             "aggs": {
-        #                 "val_sum": {
-        #                     "sum": {"field": "val"}
-        #                 },
-        #                 "group2_term": {
-        #                     "terms": {"field": "group2"},
-        #                     "aggs": {
-        #                         "val_sum": {
-        #                             "sum": {"field": "val"}
-        #                         }
-        #                     }
-        #                 }
-        #             }
-        #         }
-        #     }
-        # }
-        response = {
-            "_shards": {
-                "failed": 0,
-                "successful": 5,
-                "total": 5
-            },
-            "aggregations": {
-                "group1_term": {
-                    "buckets": [
-                        {
-                            "doc_count": 2,
-                            "group2_term": {
-                                "buckets": [
-                                    {
-                                        "doc_count": 1,
-                                        "key": "a",
-                                        "val_sum": {
-                                            "value": 1.0
-                                        }
-                                    },
-                                    {
-                                        "doc_count": 1,
-                                        "key": "b",
-                                        "val_sum": {
-                                            "value": 2.0
-                                        }
-                                    }
-                                ],
-                                "doc_count_error_upper_bound": 0,
-                                "sum_other_doc_count": 0
-                            },
-                            "key": "a",
-                            "val_sum": {
-                                "value": 3.0
-                            }
-                        },
-                        {
-                            "doc_count": 1,
-                            "group2_term": {
-                                "buckets": [
-                                    {
-                                        "doc_count": 1,
-                                        "key": "b",
-                                        "val_sum": {
-                                            "value": 3.0
-                                        }
-                                    }
-                                ],
-                                "doc_count_error_upper_bound": 0,
-                                "sum_other_doc_count": 0
-                            },
-                            "key": "b",
-                            "val_sum": {
-                                "value": 3.0
-                            }
-                        }
-                    ],
-                    "doc_count_error_upper_bound": 0,
-                    "sum_other_doc_count": 0
-                }
-            },
-            "hits": {
-                "hits": [],
-                "max_score": 0.0,
-                "total": 3
-            },
-            "timed_out": False,
-            "took": 2
-        }
-
-        expected = {
-            'hits': 3,
-            'took_milliseconds': 2,
-            'group1_term_doc_count_error_upper_bound': 0,
-            'group1_term_sum_other_doc_count': 0,
-            'group1_term_doc_count{group1_term="a"}': 2,
-            'group1_term_val_sum_value{group1_term="a"}': 3.0,
-            'group1_term_group2_term_doc_count_error_upper_bound{group1_term="a"}': 0,
-            'group1_term_group2_term_sum_other_doc_count{group1_term="a"}': 0,
-            'group1_term_group2_term_doc_count{group1_term="a",group2_term="a"}': 1,
-            'group1_term_group2_term_val_sum_value{group1_term="a",group2_term="a"}': 1.0,
-            'group1_term_group2_term_doc_count{group1_term="a",group2_term="b"}': 1,
-            'group1_term_group2_term_val_sum_value{group1_term="a",group2_term="b"}': 2.0,
-            'group1_term_doc_count{group1_term="b"}': 1,
-            'group1_term_val_sum_value{group1_term="b"}': 3.0,
-            'group1_term_group2_term_doc_count_error_upper_bound{group1_term="b"}': 0,
-            'group1_term_group2_term_sum_other_doc_count{group1_term="b"}': 0,
-            'group1_term_group2_term_doc_count{group1_term="b",group2_term="b"}': 1,
-            'group1_term_group2_term_val_sum_value{group1_term="b",group2_term="b"}': 3.0,
-        }
-        result = convert_result(parse_response(response))
-        self.assertEqual(expected, result)
-
-    # Tests handling of disallowed characters in labels and metric names
-    # The '-'s in the aggregation name aren't allowed in metric names or
-    # label keys, so need to be substituted.
-    # The number at the start of the aggregation name isn't allowed at
-    # the start of metric names or label keys.
-    # A double '_' at the start of the label key (post substitutions)
-    # is also not allowed.
-    def test_bad_chars(self):
-        # Query:
-        # {
-        #     "size": 0,
-        #     "query": {
-        #         "match_all": {}
-        #     },
-        #     "aggs": {
-        #         "1-group-filter-1": {
-        #             "filters": {
-        #                 "filters": {
-        #                     "group_a": {"term": {"group1": "a"}},
-        #                     "group_b": {"term": {"group1": "b"}}
-        #                 }
-        #             },
-        #             "aggs": {
-        #                 "val_sum": {
-        #                     "sum": {"field": "val"}
-        #                 }
-        #             }
-        #         }
-        #     }
-        # }
-        response = {
-            "_shards": {
-                "failed": 0,
-                "successful": 5,
-                "total": 5
-            },
-            "aggregations": {
-                "1-group-filter-1": {
-                    "buckets": {
-                        "group_a": {
-                            "doc_count": 2,
-                            "val_sum": {
-                                "value": 3.0
-                            }
-                        },
-                        "group_b": {
-                            "doc_count": 1,
-                            "val_sum": {
-                                "value": 3.0
-                            }
-                        }
-                    }
-                }
-            },
-            "hits": {
-                "hits": [],
-                "max_score": 0.0,
-                "total": 3
-            },
-            "timed_out": False,
-            "took": 1
-        }
-
-        expected = {
-            'hits': 3,
-            'took_milliseconds': 1,
-            '__group_filter_1_doc_count{_group_filter_1="group_a"}': 2,
-            '__group_filter_1_doc_count{_group_filter_1="group_b"}': 1,
-            '__group_filter_1_val_sum_value{_group_filter_1="group_a"}': 3.0,
-            '__group_filter_1_val_sum_value{_group_filter_1="group_b"}': 3.0
-        }
-        result = convert_result(parse_response(response))
-        self.assertEqual(expected, result)
-
-
-if __name__ == '__main__':
-    unittest.main()
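
The parser tests removed above pin down the bucket-to-label convention: a terms/filters bucket key becomes a label value keyed by the aggregation name instead of being folded into the metric name. A rough sketch of that convention, using a hypothetical helper rather than the exporter's parser:

    def bucket_metrics(agg_name, buckets):
        # Each bucket key becomes a label value, yielding keys like
        # 'group1_term_doc_count{group1_term="a"}' as asserted above.
        result = {}
        for bucket in buckets:
            label = '{' + agg_name + '="' + str(bucket['key']) + '"}'
            result[agg_name + '_doc_count' + label] = bucket['doc_count']
        return result

    # bucket_metrics('group1_term', [{'key': 'a', 'doc_count': 2}])
    # -> {'group1_term_doc_count{group1_term="a"}': 2}
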
diff --git a/tests/utils.py b/tests/utils.py
deleted file mode 100644
index c81177f..0000000
--- a/tests/utils.py
+++ /dev/null
@@ -1,36 +0,0 @@
-from prometheus_es_exporter import group_metrics
-
-
-def format_label(key, value):
-    return key + '="' + value + '"'
-
-
-def format_metrics(metric_name, label_keys, value_dict):
-    metrics = {}
-
-    for label_values, value in value_dict.items():
-        if len(label_keys) > 0:
-            labels = '{'
-            labels += ','.join([format_label(label_keys[i], label_values[i])
-                                for i in range(len(label_keys))])
-            labels += '}'
-        else:
-            labels = ''
-
-        metrics[metric_name + labels] = value
-
-    return metrics
-
-
-# Converts the parse_response() result into a pseudo-prometheus format
-# that is useful for comparing results in tests.
-# Uses the 'group_metrics()' function used by the exporter, so effectively
-# tests that function.
-def convert_result(result):
-    metric_dict = group_metrics(result)
-    return {
-        metric: value
-        for metric_name, (label_keys, value_dict) in metric_dict.items()
-        for metric, value in format_metrics(metric_name, label_keys, value_dict).items()
-    }