Backport cvp-sanity from master to 2019.2.0
Related-Prod: #PROD-29210(PROD:29210)

Squashed commit of the following:

commit 8c05e2703aa328d9e22bc09360ea30723dc0dd74
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Wed Apr 24 11:47:31 2019 +0300

    Add test steps to stdout if a test fails.

    Related-Prod:#PROD-29995(PROD:29995)
    Change-Id: Ie0a03d4d8896c7d7836cfd57736778f3896bcb87

commit a14488d565790992e8453d643a6fbea14bb25311
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Tue Apr 30 15:08:33 2019 +0300

    Fix incorrect counting of backends

    Split tests for cinder services into two tests

    Change-Id: I74137b4cc31a82718fc2a17f5abfd117aacf9963
    Fix-Issue:#PROD-29913(PROD:29913)

commit 10e2db4420d74db51259f55cc5b98482b53b116b
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Thu May 2 13:17:00 2019 +0300

    Run opencontrail tests for OpenStack-type deployment only

    Change-Id: I2b36bf33c4d3fde3fac37d669a4a2e8e449d4caf
    Fix-Prod: #PROD-27782(PROD:27782)

commit 1db3888a0df328e8c41f3f465c9ed28bb1f95763
Merge: 80514de 50a2167
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Wed May 1 20:43:52 2019 +0000

    Merge "test_oss launched if cicd node available"

commit 80514de4b630141ba42e6f4bb85bf5f6e0a15f72
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Thu Apr 25 12:33:28 2019 +0300

    Exclude kdt-nodes from test_mounts

    Change-Id: I1cb9c2521fff6e9cffe8d4d86c0abf149233c296
    Related-Prod: #PROD-29774(PROD:29774)

commit 864f2326856b128aacad5ccba13227938541ce78
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Mon Apr 29 15:48:12 2019 -0500

    [CVP] Add ext_net parameter

    Change-Id: Ie0d80d86b6d527f5593b9525cf22bc8343b84839
    Related-PROD: PROD-26972

commit dd17609d8f4e3a6a080b6cc1858139a0d3cf5057
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Fri Apr 26 15:29:10 2019 -0500

    [CVP] Fix parameter name in test_check_services

    Change-Id: I338bea5bb180ef9999d22b5acefc5af74f877ba3
    Related-PROD: PROD-29928

commit 10b360319fafb711391884af9f2b484a15412c0d
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Wed Apr 24 18:16:43 2019 -0500

    [CVP] Add sanity test to check vip address presence for groups

    Change-Id: I8b26a8e30de7eadf76254f35afb0e2621b73ea52
    Related-PROD: PROD-29845

commit 577453f143d140353d8d62f6bd2f51a4b7011888
Merge: bcb27cd 4a79efd
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Tue Apr 30 22:52:31 2019 +0000

    Merge "Added tests to check Drivetrain on K8s"

commit bcb27cd48482ba8daee5a2466482d2d9a30d0091
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Tue Apr 23 17:04:20 2019 -0500

    [CVP] Disable public url check for gerrit and jenkins on k8s envs

    Change-Id: Iab1637d234e8d597635758c886f7a40165928597
    Related-PROD: PROD-28324

commit 50a2167b35f743c27432e6ac6a4dc3634c3b6acb
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Thu Apr 25 12:20:52 2019 +0300

    test_oss launched if cicd node available

    Change-Id: Ief119e18851b5ec39103195ca183db1d82fc5eb8
    Related-Prod: #PROD-29775(PROD:29775)

commit 67aaec97464e5750388d760cb5d35672fd194419
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Mon Apr 15 18:05:13 2019 -0500

    [CVP] Do not skip test_jenkins_jobs_branch by default

    Change-Id: I2b636e089d77d17833f4839f55808369e1f1ebce
    Related-PROD: PROD-29505

commit 4a79efda8e8151760cd54f2cc4b0561aaf536bc0
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Wed Apr 24 11:12:55 2019 +0300

    Added tests to check Drivetrain on K8s

    Change-Id: I86b9bbccf771cee6d6d294bb76f0c3979e269e86
    Related-Prod: #PROD-29625(PROD:29625)

commit 4bfd2ee3f0e1b83ebb6928ea5a490be19b4c9166
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Wed Apr 10 21:56:58 2019 -0500

    [CVP] Refactor salt client class

    Change-Id: I91cfffe1c8d5df0224657ce9e36be9063b56f0b3
    Related-PROD: PROD-28981
    Related-PROD: PROD-28729
    Related-PROD: PROD-28624
    Related-PROD: PROD-29286

commit b7e866cfa45c2887c7b3671463774c3dc78cab26
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Wed Apr 10 13:49:56 2019 +0300

    Set requirement for existing cicd nodes in drivetrain tests

    Related-Task: #PROD-28514(PROD:28514)

    Change-Id: I95268fae93cb1fe0eed5276468d0e8e1512c92d2

commit 45ae6b65ca436867fcf5b6ac7144e9f837299ad3
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date:   Tue Mar 5 18:52:44 2019 +0300

    Added test to check mounted file systems on control plane VMs

    Problem: sometimes, after a KVM host is rebooted, its VMs come
    up with missing mounts (e.g. a ctl node can be missing its
    keystone volumes). We hit such an issue. It can be caused by a
    hardware / host system performance issue or by network
    misconfiguration, and can appear after e.g. HA testing or a
    KVM reboot. This test detects such inconsistent mounts.

    Created the test to check that mounted file systems are
    consistent on the virtual control plane nodes (e.g. on ctl,
    prx, etc nodes). The nodes like kvm, cmp, ceph OSD, nodes
    with docker (like k8s nodes, cid, mon) are skipped.

    To skip other nodes if needed, add the node or group in the
    config (skipped_nodes, skipped_groups).

    Change-Id: Iab5311060790bd2fdfc8587e4cb8fc63cc3a0a13
    Related-PROD: PROD-28247
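
The group-wide mount comparison described above can be sketched roughly like this (a minimal illustration with a hypothetical function name and data shape, not the actual test code):

```python
def find_inconsistent_mounts(mounts_by_node):
    """Given {node: set of mount points} for one node group, return
    {node: missing mounts} for nodes lacking mounts seen elsewhere."""
    # Union of all mount points observed across the group.
    all_mounts = set()
    for mounts in mounts_by_node.values():
        all_mounts |= mounts
    # A node is inconsistent if it misses any group-wide mount.
    return {node: all_mounts - mounts
            for node, mounts in mounts_by_node.items()
            if all_mounts - mounts}
```

An empty result means every node in the group mounts the same file systems.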

commit 835b0cb957748e49e21bafd43c0ca9da60707e92
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Wed Apr 10 17:10:20 2019 +0300

    [test_cinder_services] Verify backend existence before testing
    Added docstring to the method

    Change-Id: I511b9876e5a65f21a4cc823e616a29166b5b9cb4
    Fixes-bug:#PROD-28523(PROD:28523)

commit 16a8f414ac8cc8d43404995b2002d3a943f893ca
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Mon Apr 8 17:10:38 2019 +0300

    Move test_drivetrain_jenkins_job to the end of the drivetrain tests
    queue to avoid its failure

    Additional changes:
    * added docstring to test methods
    * fixed pep8
    Related-Bug: #PROD-27372(PROD:27372)

    Change-Id: I34c679d66e483c107e6dda583b3c2e1ceeca5ced

commit b91c3147e20eb00e5429beefbb8e9a2e157bd3c0
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Tue Mar 26 16:49:44 2019 -0500

    [CVP] Fix test_drivetrain_components_and_versions for new MCP updates logic

    Related-PROD: PROD-28954

    Change-Id: I9ea99b36115834da7d7110de8811730d11df4da4

commit cbf1f3ae648129b26fdd5183878ce7abab9cc794
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Tue Apr 9 20:02:10 2019 +0300

    [cvp-spt] change adm_tenant in the create_subnet function

    Change-Id: I8ebf04b658d5f17846c23f13670b7b63c1e9c771
    Fixes-Issue: #PROD-29311(PROD:29311)

commit d52b5fe2722ea50eac65c5f8f2a55bab9f1db583
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Thu Mar 28 11:11:35 2019 -0500

    [CVP] Fix test_jenkins_jobs_branch according to new MCP updates logic

    Related-PROD: PROD-28954

    Change-Id: I023fd6f57ac5f52642aa963cef5cbc9fc1a74264

commit ab919649a64e1a379e11d84d3c21604d027e9645
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Wed Mar 27 20:05:38 2019 +0200

    [test_check_services] Change logic to check services

    Change-Id: I1eb0ff077d497f95a0004bfd8ff4f25538acbfd6
    Fix-bug: #PROD-26431(PROD:26431)

commit 8fd295c1a4b037b9aad5c1fe485351d4f9ed457c
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Thu Mar 7 13:46:43 2019 +0200

    Add possibility to define list of services/modules/packages to skip

    Change-Id: Ice289221e6e99181682ddf9155f390c388e590ad
    Related-Prod: #PROD-27215(PROD:27215)

commit f139db45e7fe0cc9b62178dc8cd1f799344723a1
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date:   Tue Mar 5 11:18:48 2019 +0300

    Added test to check nova services, hosts are consistent

    Added a test to check that nova hosts are consistent across
    nova services, openstack hosts and hypervisors. While deploying
    clouds, we saw several cases where nova hosts were inconsistent
    after deployment (due to incorrect deployment steps): the
    hypervisor list was missing some computes that were present in
    nova services. This can lead to issues like "host is not mapped
    to any cell" or VM boot errors, so it is better to check that
    these nova lists are consistent.

    Related-PROD: PROD-28210

    Change-Id: I9705417817e6075455dc4ccf5e25f2ab3439108c
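
The consistency check described above boils down to a set comparison; a minimal sketch (hypothetical function name, not the actual test code):

```python
def compare_nova_hosts(service_hosts, hypervisor_hosts):
    """Compare compute hosts listed in nova services with the
    hypervisor list; both difference sets should be empty."""
    services, hypervisors = set(service_hosts), set(hypervisor_hosts)
    return {
        'missing_hypervisors': sorted(services - hypervisors),
        'unexpected_hypervisors': sorted(hypervisors - services),
    }
```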

commit 04ac2000016338fa283b9c34931ec3e96c595302
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Fri Mar 1 13:12:41 2019 +0200

    cvp-spt: the size of the image used to check Glance upload/download
    speed can be changed using an env var.
    It is set to 2000 MB by default (because of free space on cid* nodes)

    The vm2vm test gracefully skips if no image is found

    Change-Id: I3aa5f50bf75b48df528de8c4196ae51c23de4b9e
    Fixes-bug: #PROD-27763(PROD:27763)

commit 1ee3a651d10d6b32e1b34adef8c703e2036ffae1
Merge: 90ed2ea c4f520c
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Fri Mar 1 23:33:22 2019 +0000

    Merge "Remove accidentally added file"

commit c4f520c98136b8aa35d3ec02f93244bb090da5c3
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Tue Feb 26 17:58:48 2019 -0600

    Remove accidentally added file

    Related-PROD: PROD-28153

    Change-Id: Iee4c7b8fc59fd88fb148ed24776cae1af54998f1

commit 90ed2eadd9c1ce0f2f83d70b34a37144dc0da791
Merge: d006dbf 9b74486
Author: harhipova <harhipova@mirantis.com>
Date:   Fri Mar 1 15:39:05 2019 +0000

    Merge "Add ntp_skipped_nodes parameter, do not compare time on nodes with salt master"

commit d006dbf457567ab7e43d165e751cae5bf9fe64ff
Merge: 6661b23 24b71aa
Author: harhipova <harhipova@mirantis.com>
Date:   Fri Mar 1 15:38:22 2019 +0000

    Merge "Do not add node without virbr0* interfaces for comparison"

commit 6661b2332faad465a3e50bd6bf38f05731a95c9d
Merge: 5a0d02b 25215d9
Author: harhipova <harhipova@mirantis.com>
Date:   Fri Mar 1 15:37:37 2019 +0000

    Merge "Add more public url tests for UIs"

commit 24b71aa285748e8912fd780673f321f32e09a8c8
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Wed Feb 27 17:02:05 2019 -0600

    Do not add node without virbr0* interfaces for comparison

    Related-PROD: PROD-27217

    Change-Id: I704290f5b0708b96e03cbbb96674fc4355639723

commit 9b74486023b04708c9db2ee45ba4d0f0f6410c6b
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Tue Feb 26 17:33:43 2019 -0600

    Add ntp_skipped_nodes parameter, do not compare time on nodes with salt master

    Related-PROD: PROD-21993
    Related-PROD: PROD-27182

    Change-Id: Id8247d0b28301d098569f2ae3bd08ff7cfcad154

commit 5a0d02b3f0dfc8e525e2bd49736a352a1e101d06
Merge: e792be5 90fdfb5
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Fri Feb 22 18:10:30 2019 +0000

    Merge "Added test to check nodes status in MAAS"

commit 25215d9ededf612f3e9354e9a6232eea6b958bc6
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Thu Jan 31 16:35:57 2019 -0600

    Add more public url tests for UIs

    Related-PROD: PROD-23746

    Change-Id: Ie680d0d934cf36f4147b9d9a079f53469d26eccc

commit 90fdfb5e3cccbba22f8fe60a2fe119cab7308b37
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date:   Sun Jan 27 23:01:07 2019 +0300

    Added test to check nodes status in MAAS

    MAAS should have the nodes in 'Deployed' status. At the same
    time, a QA engineer can add some nodes to the skipped_nodes
    list to ignore checking them.

    Change-Id: I5407523f700fd76bb88cd5383c73cfce55cdd907
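
The status check described above can be sketched as follows (hypothetical names and data shape; the real test queries MAAS):

```python
def non_deployed_machines(machines, skipped_nodes=()):
    """machines: {hostname: status name} as reported by MAAS.
    Return hosts that are not in 'Deployed' status, honouring
    the skipped_nodes list from the config."""
    return sorted(host for host, status in machines.items()
                  if status != 'Deployed' and host not in skipped_nodes)
```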

commit e792be50fa47222389e2e55f7d46e01b59a88e52
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Wed Feb 13 13:28:11 2019 +0200

    Fix version parsing in test_drivetrain_components_and_versions

    Change-Id: I3f036a7e3c324be8c50d6c5d7071ee12a5b3127e
    Fixes-Bug: #PROD-27454(PROD:27454)
    Closes-Task: #PROD-27253(PROD:27253)

commit 6baf78783bad9dbdf1fb1928077507f5f9a70a1a
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date:   Fri Jan 25 19:09:30 2019 +0300

    Added test to check all packages are latest

    Added test to check that all packages on the nodes are latest and
    are not upgradable. Added the possibility to skip some packages
    in global_config if they should not be upgraded.

    Problem description:
    The 'test_check_package_versions' test checks that versions are
    consistent across the nodes of the same group, but there was no
    test that the actual versions are correct and come from the
    correct MCP release. I had a cloud with some packages installed
    from the wrong repositories rather than from the required MCP
    release, and the fix was to upgrade the packages. So there is a
    need for a test that checks that the packages are latest.
    At the same time, if certain packages should not be upgraded and
    their versions are correct even though Installed != Candidate,
    it is possible to skip those packages.

    Currently the test is skipped by default ("skip_test": True in
    global_config.yaml file). Set False to run the test.

    Change-Id: Iddfab8b3d7fb4e72870aa0791e9da95a66f0ccfd
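
The Installed vs Candidate comparison described above might look roughly like this (illustrative sketch; real data would come from apt on each node):

```python
def find_upgradable(packages, skip=()):
    """packages: {name: (installed, candidate)} in the spirit of
    `apt-cache policy` output. Return packages whose installed
    version differs from the candidate, honouring a skip list."""
    return sorted(name for name, (installed, candidate) in packages.items()
                  if installed != candidate and name not in skip)
```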

commit c48585ffded98bf907b98a69b61635829c48f2c4
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date:   Mon Feb 4 19:38:54 2019 +0300

    Added test to check ntpq peers state

    The existing test 'test_ntp_sync' checks that the time is equal
    across the nodes. Sometimes there can be an NTP issue while the
    time is still correct. For example, Contrail can report "NTP
    state unsynchronized" when none of the remote peers is chosen.
    So there is a need to run "ntpq -pn" on the nodes and check the
    peers' state.
    The new test gets the ntpq peers state and checks that a system
    peer is declared.

    Change-Id: Icb8799b2323a446a3ec3dc6db54fd1d9de0356e5
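
In `ntpq -pn` output the system peer is the line whose tally code is `*`, so the declared-peer check reduces to something like (illustrative, not the actual test code):

```python
def has_system_peer(ntpq_output):
    """Return True if `ntpq -pn` output declares a system peer,
    i.e. contains a peer line starting with the '*' tally code."""
    return any(line.startswith('*') for line in ntpq_output.splitlines())
```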

commit ae0e72af8b63a65fb9e1fcfb7a626532da4c14b1
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Tue Feb 12 13:57:26 2019 +0200

    Disabled test result for test_duplicate_ips. It reacts to ens3 networks

    Related-Bug: #PROD-27449(PROD:27449)
    Change-Id: Ia28dcf09a89b4a6bee8a746a7ce1a069b74ce8cf

commit 47e42daa5287c858daefbab8eeefe2d8f406feb5
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Tue Feb 12 11:49:26 2019 +0200

    Disabled test results for test_cinder_services. It affects the test_drivetrain job voting

    Fixes-Bug:#PROD-27436(PROD:27436)

    Change-Id: I0da3365d7f51a8863b10d9450321c7f5119b842e

commit f9a95caa34f0eb1043e2c9655d096d0d69a6d4c2
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Wed Jan 30 15:47:00 2019 +0200

    Add stacklight tests from stacklight-pytest repo

    Change-Id: I2d2ea6201b6495c35bed57d71450b30b0e0ff49f
    Relates-Task: #PROD-21318(PROD:21318)

commit f2660bdee650fa0240a3e9b34ca2b92f7d1d1e00
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Fri Feb 8 17:25:39 2019 +0200

    Retry test for docker services replicas

    Change-Id: Id4b983fe575516f33be4b401a005b23097c0fe96
    Fixes-Bug: #PROD-27372(PROD:27372)
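
The retry behaviour described above could be sketched like this (hypothetical names and timings, not the actual test code):

```python
import time

def wait_for_replicas(get_replicas, expected, retries=5, delay=10):
    """Poll docker service replica counts until every service
    reports the expected number, retrying to ride out transient
    container restarts."""
    for _ in range(retries):
        actual = get_replicas()  # e.g. parsed `docker service ls`
        if all(actual.get(svc) == count for svc, count in expected.items()):
            return True
        time.sleep(delay)
    return False
```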

commit 6f34fbbfcb424f99e2a6c81ac4eb73ac4e40ce6b
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Fri Feb 8 11:19:41 2019 +0200

    Change jenkins_jobs_branch test to check release branches

    Change-Id: I2d0551f6291f79dc20b2d031e4e669c4009d0aa3

commit 42ed43a37b96846cddb1d69985f1e15780c8a697
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date:   Sun Jan 27 23:58:35 2019 +0300

    Added timeout in iperf command for Vm2Vm test

    Added a timeout to the iperf command in the Vm2Vm test to gather
    better statistics: sometimes a 10s timeout is not enough when
    the network speed is unstable.

    Change-Id: I4912ccf8ba346a8b427cf6bd6181ce6e6c180fb2

commit 7c5f3fdef6477ac08dec4ace6630662b8adfe458
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date:   Tue Feb 5 18:01:33 2019 +0300

    Added test for K8S dashboard availability

    In MCP the K8S dashboard is enabled by default. Need to check that
    the dashboard is available.

    Change-Id: I5b94ecce46d5f43491c9cf65a15a50461214e9c4

commit b8ec40e14917ec3b69dfcfe6ddcf36500dbc4754
Merge: 6dc2b00 ac4a14e
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Thu Jan 31 18:37:51 2019 +0000

    Merge "Add a new test to check for duplicate IPs in an env"

commit 6dc2b00bc4b059968daa3d49775ec77e00b903ed
Merge: 09b1ae8 03af292
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Tue Jan 29 22:47:00 2019 +0000

    Merge "Small fix: test nodes in ElasticSearch will get > than 500 nodes"

commit ac4a14e24f5bc1f53096627ae7d4f4cb60183ea0
Author: Dmitriy Kruglov <dkruglov@mirantis.com>
Date:   Wed Jan 23 09:37:13 2019 +0100

    Add a new test to check for duplicate IPs in an env

    Change-Id: I08ad6b22f252a0f8ea5bc4a4edd2fe566826868b
    Closes-PROD: #PROD-24347

commit 09b1ae88229cb8055a6c291097b3f6b0e0eb63c8
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date:   Mon Jan 28 13:06:01 2019 +0300

    Added StackLight UI tests for public endpoints

    Change-Id: Ib60f278b77cc6673394c70b6b0ab16f74bc74366

commit df243ef14cbb7ab2707d2e7a2c292863f5010760
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date:   Thu Nov 8 18:17:17 2018 +0300

    Added UI tests for Alerta: internal and public addresses

    The related fix https://gerrit.mcp.mirantis.com/#/c/34661
    adds tests for public endpoints for the rest of
    StackLight UI.

    Change-Id: Ie94ea242b19e30b7ed7143e01444125182fb6305

commit ac850455f686f1092077e2c95c9ab0d466f099c6
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date:   Sun Jan 27 22:31:38 2019 +0300

    Added test to check minions status

    The CVP Sanity tests automatically skip a node if its minion
    does not respond within 1 sec (salt_timeout in the config).
    Sometimes all the tests pass while some KVM nodes, along with
    their control plane VMs, are down, and the CVP tests neither
    detect nor report this.
    The new test checks that all minions are up.

    Change-Id: Ib8495aeb043448b36aea85bb31ee2650d655075e
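
At its core, the check compares the minions registered on the master with those that answered test.ping within salt_timeout; a minimal sketch (hypothetical names, not the actual test code):

```python
def find_down_minions(registered, responded):
    """registered: minion ids known to the master (e.g. salt-key -L);
    responded: ids that answered test.ping within salt_timeout.
    Return the minions that appear to be down."""
    return sorted(set(registered) - set(responded))
```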

commit 03af292569edc29db72bbdf97a331eceab3dc05c
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date:   Mon Jan 28 15:55:02 2019 +0300

    Small fix: test nodes in ElasticSearch will get > than 500 nodes

    Some big production clouds have more than 500 nodes in total,
    so the test is not valid for such clouds: it fetches only
    500 nodes instead of all nodes of the cloud. Changed the
    request to fetch 1000 nodes.

    Change-Id: I58493fc55e1deb2c988d61e7c8a4f8ed971a60d4
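
The fix amounts to raising the result-size cap in the Elasticsearch request; a hedged sketch of such a request body (field and bucket names are illustrative, the real query lives in the test):

```python
def es_nodes_query(size=1000):
    """Build an Elasticsearch aggregation body that lists source
    hosts, with the bucket size raised from the old 500 cap to
    `size` so clouds with more than 500 nodes are fully covered."""
    return {
        'size': 0,  # we only need the aggregation buckets
        'aggs': {'nodes': {'terms': {'field': 'Hostname', 'size': size}}},
    }
```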

commit 16e93fb7375fdfb87901b4a074f17ef09e722e56
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Wed Jan 23 19:03:01 2019 +0200

    Renamed the tests folder to make it consistent with cvp-runner.groovy
    and the CVP jobs in cluster Jenkins.
    Returned the rsync service to inconsistency_rule

    Related-Task: #PROD-23604(PROD:23604)

    Change-Id: I94afe350bd1d9c184bafe8e9e270aeb4c6c24c50

commit 27a41d814cc9d4f5bbc7f780a3d9e6042a6aaa4c
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Thu Jan 17 17:40:40 2019 +0200

    Check kubectl on kubernetes:master only

    Change-Id: I8ae308eb903694feffe65b14f7f857dfaf6b689c
    Fixes-Bug: #PROD-26555(PROD:26555)

commit 55cc129f3e93a3801a4abf620b40c1e5d7c53fe7
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Tue Jan 8 14:22:18 2019 +0200

    Common Dockerfile for CVP-Sanity and CVP-SPT

    Related-Task: #PROD-26312(PROD:26312)

    Change-Id: I457a8d5c6ff73d944518f6b0c2c568f8286728a9

commit 753a03e19780b090776ce5e2c27d74c44c5750a3
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Tue Jan 15 17:35:25 2019 -0600

    [CVP] Add checks for docker_registry, docker_visualizer and cvp jobs version

    Related-PROD: PROD-21801

    Change-Id: I79c8c5eb0833aca6d077129e3ec81ff3afb06143

commit 7b70537c2b7bfe29d1dc84915a21da5238f120f0
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Tue Jan 15 18:40:29 2019 -0600

    [CVP] Get drivetrain_version parameter from reclass

    Related-PROD: PROD-21801

    Change-Id: I628480b053e7b03c09c55d5b997e9dc74aa98c90

commit aaa8e6e95861e4e3f51c4d28dc7fcb0ed8ab8578
Merge: c0a7f0c 30bd90c
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Fri Jan 11 16:45:10 2019 +0000

    Merge "Add assert for return code"

commit 30bd90c986234343aabf03ec5f174026d02d4988
Author: Tatyana Leontovich <tleontovich@mirantis.com>
Date:   Fri Jan 11 16:26:32 2019 +0200

    Add assert for return code

    * Add assertion that the response was successful
      before processing response.text()
    * Add a header to the request to avoid a 406 error

    Change-Id: If41598e8c1ef5d9bf36847a750008d1203b4ed84
    Closes-Prod: PROD-26423

commit c0a7f0c01adc6a391f662dc306902cde138658ce
Author: Tatyana Leontovich <tleontovich@mirantis.com>
Date:   Fri Jan 11 16:08:50 2019 +0200

    Remove rsync service from inconsistency_rule

    The rsync service exists on all kvm nodes, so
    remove it from inconsistency_rule to avoid false-negative results

    Change-Id: I25ce5db2990645992c8fa7fb6cc33f082903b295
    Closes-PROD: PROD-26431

commit 5d965b230b4b5348d425510dc4667ced0c7e8ec3
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Wed Jan 9 16:29:31 2019 -0600

    Fix ceph tests filename

    Change-Id: I67ffc9f4da27d8b64c0334f3a6ae3f8f05dcd3b2

commit 0763a040044b20f8229292b547798d4ed99ca7e3
Merge: b8d04d5 f77b50b
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date:   Wed Jan 9 21:47:45 2019 +0000

    Merge "Added Ceph health test"

commit b8d04d575b1cb8b05aace823de64d74e0d0e4c48
Author: Hanna Arhipova <harhipova@mirantis.com>
Date:   Fri Dec 28 13:19:17 2018 +0200

    Remove ldap_server component from test_drivetrain_components_and_versions

    Change-Id: Idc8e8581f828334db511b0ca2149ad812e71a6c3
    Fixes-Bug: #PROD-26151(PROD:26151)

commit f77b50bdb2b1fb2b747ac8e1b1262ee88fdfd2ed
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date:   Wed Dec 12 19:41:15 2018 +0300

    Added Ceph health test

    Change-Id: If2f318169e841cdd278c76c237f040b09d7b87ea

Change-Id: I9687de3dbdae9d5dc3deb94dbd2afcd5e7f0ec7d
diff --git a/test_set/cvp-sanity/README.md b/test_set/cvp-sanity/README.md
new file mode 100644
index 0000000..564f236
--- /dev/null
+++ b/test_set/cvp-sanity/README.md
@@ -0,0 +1,65 @@
+MCP sanity checks
+========================
+
+This is a Salt-based set of tests for basic verification of MCP deployments.
+
+How to start
+=======================
+
+1) Clone the repo to any node (the node must have HTTP access to the Salt Master):
+```bash
+   # root@cfg-01:~/# git clone https://github.com/Mirantis/cvp-sanity-checks
+   # cd cvp-sanity-checks
+```
+Use `git config --global http.proxy http://proxyuser:proxypwd@proxy.server.com:8080`
+if needed.
+
+2) Install virtualenv
+```bash
+   # curl -O https://pypi.python.org/packages/source/v/virtualenv/virtualenv-X.X.tar.gz
+   # tar xvfz virtualenv-X.X.tar.gz
+   # cd virtualenv-X.X
+   # sudo python setup.py install
+```
+or
+```bash
+   # apt-get install python-virtualenv
+```
+
+3) Create virtualenv and install requirements and package:
+
+```bash
+   # virtualenv --system-site-packages .venv
+   # source .venv/bin/activate
+   # pip install --proxy http://$PROXY:8678 -r requirements.txt
+   # python setup.py install
+   # python setup.py develop
+```
+
+4) Configure:
+```bash
+   # vim cvp-sanity/global_config.yaml
+```
+Salt credentials are mandatory for the tests.
+
+
+Other settings are optional (please keep them uncommented, with default values).
+
+
+Alternatively, you can specify these settings via env variables:
+```bash
+export SALT_URL=http://10.0.0.1:6969
+```
+For array-type settings please do:
+```bash
+export skipped_nodes='ctl01.example.com,ctl02.example.com'
+```
+
+5) Start tests:
+```bash
+   # pytest --tb=short -sv cvp-sanity/tests/
+```
+or
+```bash
+   # pytest -sv cvp-sanity/tests/ --ignore cvp-sanity/tests/test_mtu.py
+```
diff --git a/test_set/cvp-sanity/__init__.py b/test_set/cvp-sanity/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/test_set/cvp-sanity/__init__.py
diff --git a/test_set/cvp-sanity/conftest.py b/test_set/cvp-sanity/conftest.py
new file mode 100644
index 0000000..7c85d62
--- /dev/null
+++ b/test_set/cvp-sanity/conftest.py
@@ -0,0 +1,29 @@
+from fixtures.base import *
+
+
+@pytest.hookimpl(tryfirst=True, hookwrapper=True)
+def pytest_runtest_makereport(item, call):
+    outcome = yield
+
+    rep = outcome.get_result()
+    setattr(item, "rep_" + rep.when, rep)
+    rep.description = "{}".format(str(item.function.__doc__))
+    setattr(item, 'description', item.function.__doc__)
+
+
+@pytest.fixture(autouse=True)
+def show_test_steps(request):
+    yield
+    # request.node is an "item" because we use the default
+    # "function" scope
+    if request.node.description is None or request.node.description == "None":
+        return
+    try:
+        if request.node.rep_setup.failed:
+            print("setup failed. The following steps were attempted: \n  {steps}".format(steps=request.node.description))
+        elif request.node.rep_setup.passed:
+            if request.node.rep_call.failed:
+                print("test execution failed! The following steps were attempted: \n {steps}".format(steps=request.node.description))
+    except BaseException as e:
+        print("Error in show_test_steps fixture: {}".format(e))
+        pass
diff --git a/test_set/cvp-sanity/fixtures/__init__.py b/test_set/cvp-sanity/fixtures/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/test_set/cvp-sanity/fixtures/__init__.py
diff --git a/test_set/cvp-sanity/fixtures/base.py b/test_set/cvp-sanity/fixtures/base.py
new file mode 100644
index 0000000..8e3b130
--- /dev/null
+++ b/test_set/cvp-sanity/fixtures/base.py
@@ -0,0 +1,182 @@
+import pytest
+import atexit
+import utils
+
+
+@pytest.fixture(scope='session')
+def local_salt_client():
+    return utils.init_salt_client()
+
+nodes = utils.calculate_groups()
+
+
+@pytest.fixture(scope='session', params=nodes.values(), ids=nodes.keys())
+def nodes_in_group(request):
+    return request.param
+
+
+@pytest.fixture(scope='session')
+def ctl_nodes_pillar(local_salt_client):
+    '''Return controller node pillars (OS or k8s ctls).
+       This will help to identify nodes to use for UI curl tests.
+       If no platform is installed (no OS or k8s) we need to skip
+       the test (product team use case).
+    '''
+    salt_output = local_salt_client.test_ping(tgt='keystone:server')
+    if salt_output:
+        return "keystone:server"
+    else:
+        salt_output = local_salt_client.test_ping(tgt='etcd:server')
+        return "etcd:server" if salt_output else pytest.skip("Neither \
+            Openstack nor k8s is found. Skipping test")
+
+
+@pytest.fixture(scope='session')
+def check_openstack(local_salt_client):
+    salt_output = local_salt_client.test_ping(tgt='keystone:server')
+    if not salt_output:
+        pytest.skip("Openstack not found or keystone:server pillar \
+          are not found on this environment.")
+
+
+@pytest.fixture(scope='session')
+def check_drivetrain(local_salt_client):
+    salt_output = local_salt_client.test_ping(tgt='I@jenkins:client and not I@salt:master',
+                                              expr_form='compound')
+    if not salt_output:
+        pytest.skip("Drivetrain service or jenkins:client pillar \
+          are not found on this environment.")
+
+
+@pytest.fixture(scope='session')
+def check_prometheus(local_salt_client):
+    salt_output = local_salt_client.test_ping(tgt='prometheus:server')
+    if not salt_output:
+        pytest.skip("Prometheus service or prometheus:server pillar \
+          are not found on this environment.")
+
+
+@pytest.fixture(scope='session')
+def check_alerta(local_salt_client):
+    salt_output = local_salt_client.test_ping(tgt='prometheus:alerta')
+    if not salt_output:
+        pytest.skip("Alerta service or prometheus:alerta pillar \
+              are not found on this environment.")
+
+
+@pytest.fixture(scope='session')
+def check_kibana(local_salt_client):
+    salt_output = local_salt_client.test_ping(tgt='kibana:server')
+    if not salt_output:
+        pytest.skip("Kibana service or kibana:server pillar \
+          are not found on this environment.")
+
+
+@pytest.fixture(scope='session')
+def check_grafana(local_salt_client):
+    salt_output = local_salt_client.test_ping(tgt='grafana:client')
+    if not salt_output:
+        pytest.skip("Grafana service or grafana:client pillar \
+          are not found on this environment.")
+
+
+@pytest.fixture(scope='session')
+def check_cinder_backends(local_salt_client):
+    backends_cinder_available = local_salt_client.test_ping(tgt='cinder:controller')
+    if not backends_cinder_available or not any(backends_cinder_available.values()):
+        pytest.skip("Cinder service or cinder:controller:backend pillar \
+        are not found on this environment.")
+
+
+def pytest_namespace():
+    return {'contrail': None}
+
+
+@pytest.fixture(scope='module')
+def contrail(local_salt_client):
+    probe = local_salt_client.cmd(
+        tgt='opencontrail:control',
+        fun='pillar.get',
+        param='opencontrail:control:version',
+        expr_form='pillar')
+    if not probe:
+        pytest.skip("Contrail is not found on this environment")
+    versions = set(probe.values())
+    if len(versions) != 1:
+        pytest.fail('Contrail versions are not the same: {}'.format(probe))
+    pytest.contrail = str(versions.pop())[:1]
+
+
+@pytest.fixture(scope='session')
+def check_kdt(local_salt_client):
+    kdt_nodes_available = local_salt_client.test_ping(
+        tgt="I@gerrit:client and I@kubernetes:pool and not I@salt:master",
+        expr_form='compound'
+    )
+    if not kdt_nodes_available:
+        pytest.skip("No 'kdt' nodes found. Skipping this test...")
+    return kdt_nodes_available.keys()
+
+
+@pytest.fixture(scope='session')
+def check_kfg(local_salt_client):
+    kfg_nodes_available = local_salt_client.test_ping(
+        tgt="I@kubernetes:pool and I@salt:master",
+        expr_form='compound'
+    )
+    if not kfg_nodes_available:
+        pytest.skip("No cfg-under-Kubernetes nodes found. Skipping this test...")
+    return kfg_nodes_available.keys()
+
+
+@pytest.fixture(scope='session')
+def check_cicd(local_salt_client):
+    cicd_nodes_available = local_salt_client.test_ping(
+        tgt="I@gerrit:client and I@docker:swarm",
+        expr_form='compound'
+    )
+    if not cicd_nodes_available:
+        pytest.skip("No 'cid' nodes found. Skipping this test...")
+
+
+@pytest.fixture(autouse=True, scope='session')
+def print_node_version(local_salt_client):
+    """
+        Gets info about each node using salt command, info is represented as a dictionary with :
+        {node_name1: output1, node_name2: ...}
+
+        :print to output the table with results after completing all tests if nodes and salt output exist.
+                Prints nothing otherwise
+        :return None
+    """
+    try:
+        filename_with_versions = "/etc/image_version"
+        cat_image_version_file = "if [ -f '{name}' ]; then \
+                                        cat {name}; \
+                                    else \
+                                        echo BUILD_TIMESTAMP='no {name}'; \
+                                        echo BUILD_TIMESTAMP_RFC='no {name}'; \
+                                    fi ".format(name=filename_with_versions)
+
+        list_version = local_salt_client.cmd(
+            tgt='*',
+            param='echo "NODE_INFO=$(uname -sr)" && ' + cat_image_version_file,
+            expr_form='compound')
+        if not list_version:
+            yield; return
+        parsed = {k: v.split('\n') for k, v in list_version.items()}
+        columns = [name.split('=')[0] for name in parsed.values()[0]]
+
+        template = "{:<40} | {:<25} | {:<25} | {:<25}\n"
+
+        report_text = template.format("NODE", *columns)
+        for node, data in sorted(parsed.items()):
+            report_text += template.format(node, *[item.split("=")[1] for item in data])
+
+        def write_report():
+            print(report_text)
+        atexit.register(write_report)
+        yield
+    except Exception as e:
+        print("print_node_version:: some error occurred: {}".format(e))
+        yield
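The fixture above collects per-node `KEY=value` lines and renders them as a table at exit. A minimal standalone sketch of that parsing and formatting step (node names and output below are made up for illustration):

```python
# Sketch of the parsing done in print_node_version, assuming each node
# returns newline-separated KEY=value pairs (sample data is made up).
def render_node_report(list_version):
    parsed = {node: out.split('\n') for node, out in list_version.items()}
    columns = [pair.split('=')[0] for pair in next(iter(parsed.values()))]
    template = "{:<40} | {:<25} | {:<25} | {:<25}\n"
    report = template.format("NODE", *columns)
    for node, data in sorted(parsed.items()):
        report += template.format(node, *[pair.split('=', 1)[1] for pair in data])
    return report

sample = {
    "ctl01.local": "NODE_INFO=Linux 4.15.0\n"
                   "BUILD_TIMESTAMP=2019-04-24\n"
                   "BUILD_TIMESTAMP_RFC=Wed, 24 Apr 2019",
}
print(render_node_report(sample))
```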
diff --git a/test_set/cvp-sanity/global_config.yaml b/test_set/cvp-sanity/global_config.yaml
new file mode 100644
index 0000000..813b82d
--- /dev/null
+++ b/test_set/cvp-sanity/global_config.yaml
@@ -0,0 +1,102 @@
+---
+# MANDATORY: Credentials for Salt Master
+# SALT_URL should consist of url and port.
+# For example: http://10.0.0.1:6969
+# 6969 - default Salt Master port to listen
+# Can be found on cfg* node using
+# "salt-call pillar.get _param:salt_master_host"
+# and "salt-call pillar.get _param:salt_master_port"
+# or "salt-call pillar.get _param:jenkins_salt_api_url"
+# SALT_USERNAME by default: salt
+# It can be verified with "salt-call shadow.info salt"
+# SALT_PASSWORD you can find on cfg* node using
+# "salt-call pillar.get _param:salt_api_password"
+# or "grep -r salt_api_password /srv/salt/reclass/classes"
+SALT_URL: <salt_url>
+SALT_USERNAME: <salt_usr>
+SALT_PASSWORD: <salt_pwd>
+
+# How many seconds to wait for salt-minion to respond
+salt_timeout: 1
+
+# List of nodes (full fqdn) to skip in ALL tests
+# Use as env variable as
+# export skipped_nodes=mtr01.local,log02.local
+# TEMPORARY: please do not comment out this setting.
+skipped_nodes: [""]
+
+# List of groups (short name, e.g. dbs) to skip in group tests
+# Use as env variable as
+# export skipped_groups=mtr,log
+# TEMPORARY: please do not comment out this setting.
+skipped_groups: [""]
+
+# Groups can be defined using pillars.
+# Uncomment this section to enable this.
+# Otherwise groups will be discovered automatically
+# Tips:
+# 1) you don't need to separate kvm and kvm_glusterfs nodes
+# 2) Use I@pillar or mask like ctl* for targeting nodes
+
+groups: {
+         cmp: 'I@nova:compute',
+         ctl: 'I@keystone:server',
+         msg: 'I@rabbitmq:server',
+         dbs: 'I@galera:*',
+         prx: 'I@nginx:server',
+         mon: 'I@prometheus:server and not I@influxdb:server',
+         log: 'I@kibana:server',
+         mtr: 'I@influxdb:server',
+         kvm: 'I@salt:control',
+         cid: 'I@docker:host and not I@prometheus:server and not I@kubernetes:*',
+         ntw: 'I@opencontrail:database',
+         ceph_mon: 'I@ceph:mon',
+         ceph_osd: 'I@ceph:osd',
+         k8-ctl: 'I@etcd:server',
+         k8-cmp: 'I@kubernetes:* and not I@etcd:*',
+         cfg: 'I@salt:master',
+         gtw: 'I@neutron:gateway'
+}
+
+# mtu test setting
+# this test may skip groups (see example)
+test_mtu:
+  { #"skipped_groups": ["dbs"]
+    "skipped_ifaces": ["bonding_masters", "lo", "veth", "tap", "cali", "qv", "qb", "br-int", "vxlan"]}
+# mask for interfaces to skip
+
+# test duplicate ips
+# do not comment this section
+test_duplicate_ips:
+  {
+    "skipped_ifaces": ["lo", "virbr0", "docker_gwbridge", "docker0"]}
+
+# packages test 'test_packages_are_latest' setting
+# this can skip special packages
+# True value for 'skip_test' will skip this test. Set False to run the test.
+# TODO: remove default False value when prod env is fixed
+test_packages:
+  { # "skipped_packages": ["update-notifier-common", "wget"]
+    "skipped_packages": [""],
+    "skip_test": True
+  }
+
+# specify what mcp version (tag) is deployed
+drivetrain_version: ''
+
+# jenkins job to run during the test
+jenkins_test_job: 'DT-test-job'
+jenkins_cvp_job: 'cvp-sanity'
+
+# ntp test setting
+# this test may skip specific node (use fqdn)
+ntp_skipped_nodes: [""]
+
+# packages need to skip in
+# test_check_package_versions
+skipped_packages: [""]
+# test_check_module_versions
+skipped_modules: [""]
+# test_check_services
+skipped_services: [""]
+
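Several of these settings can also be supplied as environment variables (e.g. `export skipped_nodes=mtr01.local,log02.local`). A sketch of that override logic, assuming env values are comma-separated strings; the helper name is illustrative, not the suite's actual `utils.get_configuration`:

```python
import os

def apply_env_overrides(config):
    # For each known key, a comma-separated env var wins over the YAML value.
    result = dict(config)
    for key in ("skipped_nodes", "skipped_groups"):
        env_value = os.environ.get(key)
        if env_value:
            result[key] = env_value.split(',')
    return result

defaults = {"skipped_nodes": [""], "skipped_groups": [""]}
os.environ["skipped_nodes"] = "mtr01.local,log02.local"
print(apply_env_overrides(defaults))
```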
diff --git a/test_set/cvp-sanity/pytest.ini b/test_set/cvp-sanity/pytest.ini
new file mode 100644
index 0000000..7d6dde9
--- /dev/null
+++ b/test_set/cvp-sanity/pytest.ini
@@ -0,0 +1,3 @@
+[pytest]
+norecursedirs = venv
+addopts = -vv --tb=short
\ No newline at end of file
diff --git a/test_set/cvp-sanity/requirements.txt b/test_set/cvp-sanity/requirements.txt
new file mode 100644
index 0000000..eea162a
--- /dev/null
+++ b/test_set/cvp-sanity/requirements.txt
@@ -0,0 +1,8 @@
+pytest==3.0.6
+requests==2.10.0
+flake8
+PyYAML
+python-jenkins==0.4.11
+pygerrit2==2.0.6
+gitpython
+python-ldap
diff --git a/test_set/cvp-sanity/tests/__init__.py b/test_set/cvp-sanity/tests/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/test_set/cvp-sanity/tests/__init__.py
diff --git a/test_set/cvp-sanity/tests/ceph/test_ceph_haproxy.py b/test_set/cvp-sanity/tests/ceph/test_ceph_haproxy.py
new file mode 100644
index 0000000..4d2566c
--- /dev/null
+++ b/test_set/cvp-sanity/tests/ceph/test_ceph_haproxy.py
@@ -0,0 +1,22 @@
+import pytest
+
+
+def test_ceph_haproxy(local_salt_client):
+    pytest.skip("This test doesn't work. Skipped")
+    fail = {}
+
+    monitor_info = local_salt_client.cmd(
+        tgt='ceph:mon',
+        param="echo 'show stat' | nc -U "
+              "/var/run/haproxy/admin.sock | "
+              "grep ceph_mon_radosgw_cluster",
+        expr_form='pillar')
+    if not monitor_info:
+        pytest.skip("Ceph is not found on this environment")
+
+    for name, info in monitor_info.iteritems():
+        if "OPEN" and "UP" in info:
+            continue
+        else:
+            fail[name] = info
+    assert not fail, "Failed monitors: {}".format(fail)
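The membership check on each haproxy stat line must test both substrings explicitly: `"OPEN" and "UP" in info` parses as `"OPEN" and ("UP" in info)`, which ignores `OPEN` entirely. A standalone sketch of the corrected check, with made-up stat lines:

```python
def failed_monitors(monitor_info):
    # Keep only nodes whose haproxy stat line lacks OPEN or UP.
    fail = {}
    for name, info in monitor_info.items():
        if not ("OPEN" in info and "UP" in info):
            fail[name] = info
    return fail

sample = {
    "cmn01": "ceph_mon_radosgw_cluster,cmn01,...,UP,...,OPEN",   # made up
    "cmn02": "ceph_mon_radosgw_cluster,cmn02,...,DOWN",          # made up
}
print(failed_monitors(sample))
```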
diff --git a/test_set/cvp-sanity/tests/ceph/test_ceph_pg_count.py b/test_set/cvp-sanity/tests/ceph/test_ceph_pg_count.py
new file mode 100644
index 0000000..28783e8
--- /dev/null
+++ b/test_set/cvp-sanity/tests/ceph/test_ceph_pg_count.py
@@ -0,0 +1,94 @@
+import pytest
+import math
+
+def __next_power_of2(total_pg):
+    count = 0
+    if total_pg and not (total_pg & (total_pg - 1)):
+        return total_pg
+    while total_pg != 0:
+        total_pg >>= 1
+        count += 1
+
+    return 1 << count
+
+
+def test_ceph_pg_count(local_salt_client):
+    """
+    Test aimed to calculate placement groups for Ceph cluster
+    according formula below.
+    Formula to calculate PG num:
+    Total PGs = 
+    (Total_number_of_OSD * 100) / max_replication_count / pool count
+    pg_num and pgp_num should be the same and 
+    set according formula to higher value of powered 2
+    """
+    pytest.skip("This test needs redesign. Skipped for now")
+    ceph_monitors = local_salt_client.cmd(
+        'ceph:mon', 
+        'test.ping', 
+        expr_form='pillar')
+    
+    if not ceph_monitors:
+        pytest.skip("Ceph is not found on this environment")
+
+    monitor = ceph_monitors.keys()[0]
+    pools = local_salt_client.cmd(
+        monitor, 'cmd.run', 
+        ["rados lspools"], 
+        expr_form='glob').get(
+            ceph_monitors.keys()[0]).split('\n')
+    
+    total_osds = int(local_salt_client.cmd(
+        monitor, 
+        'cmd.run', 
+        ['ceph osd tree | grep osd | grep "up\|down" | wc -l'], 
+        expr_form='glob').get(ceph_monitors.keys()[0]))
+    
+    raw_pool_replications = local_salt_client.cmd(
+        monitor, 
+        'cmd.run', 
+        ["ceph osd dump | grep size | awk '{print $3, $6}'"], 
+        expr_form='glob').get(ceph_monitors.keys()[0]).split('\n')
+    
+    pool_replications = {}
+    for replication in raw_pool_replications:
+        pool_replications[replication.split()[0]] = int(replication.split()[1])
+    
+    max_replication_value = 0
+    for repl_value in pool_replications.values():
+        if repl_value > max_replication_value:
+            max_replication_value = repl_value
+
+    total_pg = (total_osds * 100) / max_replication_value / len(pools)
+    correct_pg_num = __next_power_of2(total_pg)
+    
+    pools_pg_num = {}
+    pools_pgp_num = {}
+    for pool in pools:
+        pg_num = int(local_salt_client.cmd(
+            monitor, 
+            'cmd.run', 
+            ["ceph osd pool get {} pg_num".format(pool)], 
+            expr_form='glob').get(ceph_monitors.keys()[0]).split()[1])
+        pools_pg_num[pool] = pg_num
+        pgp_num = int(local_salt_client.cmd(
+            monitor, 
+            'cmd.run', 
+            ["ceph osd pool get {} pgp_num".format(pool)], 
+            expr_form='glob').get(ceph_monitors.keys()[0]).split()[1])
+        pools_pgp_num[pool] = pgp_num
+
+    wrong_pg_num_pools = [] 
+    pg_pgp_not_equal_pools = []
+    for pool in pools:
+        if pools_pg_num[pool] != pools_pgp_num[pool]:
+            pg_pgp_not_equal_pools.append(pool)
+        if pools_pg_num[pool] < correct_pg_num:
+            wrong_pg_num_pools.append(pool)
+
+    assert not pg_pgp_not_equal_pools, \
+        "For pools {} PG and PGP are not equal " \
+        "but should be".format(pg_pgp_not_equal_pools)
+    assert not wrong_pg_num_pools, \
+        "For pools {} the PG number is lower than the correct PG number " \
+        "but should be equal or higher".format(wrong_pg_num_pools)
diff --git a/test_set/cvp-sanity/tests/ceph/test_ceph_replicas.py b/test_set/cvp-sanity/tests/ceph/test_ceph_replicas.py
new file mode 100644
index 0000000..4c93fe6
--- /dev/null
+++ b/test_set/cvp-sanity/tests/ceph/test_ceph_replicas.py
@@ -0,0 +1,43 @@
+import pytest
+
+
+def test_ceph_replicas(local_salt_client):
+    """
+    Test aimed to check number of replicas
+    for most of deployments if there is no
+    special requirement for that.
+    """
+
+    ceph_monitors = local_salt_client.test_ping(tgt='ceph:mon')
+
+    if not ceph_monitors:
+        pytest.skip("Ceph is not found on this environment")
+
+    monitor = ceph_monitors.keys()[0]
+
+    raw_pool_replicas = local_salt_client.cmd_any(
+        tgt='ceph:mon',
+        param="ceph osd dump | grep size | " \
+              "awk '{print $3, $5, $6, $7, $8}'").split('\n')
+
+    pools_replicas = {}
+    for pool in raw_pool_replicas:
+        pool_name = pool.split(" ", 1)[0]
+        raw_replicas = pool.split(" ", 1)[1].split()
+        pool_replicas = {raw_replicas[0]: int(raw_replicas[1]),
+                         raw_replicas[2]: int(raw_replicas[3])}
+        pools_replicas[pool_name] = pool_replicas
+    
+    error = []
+    for pool, replicas in pools_replicas.items():
+        for replica, value in replicas.items():
+            if replica == 'min_size' and value < 2:
+                error.append("{} {} is {} but must be at least 2".format(
+                    pool, replica, value))
+            if replica == 'size' and value < 3:
+                error.append("{} {} is {} but must be at least 3".format(
+                    pool, replica, value))
+    
+    assert not error, "Wrong pool replicas found\n{}".format(error)
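The per-pool replica parsing can be sketched independently. The awk filter in the test leaves lines of the form `<pool> size <n> min_size <m>`; the sample lines below are made up:

```python
def parse_pool_replicas(raw_lines):
    # Each line: "<pool> size <n> min_size <m>", as produced by the awk filter.
    pools = {}
    for line in raw_lines:
        name, rest = line.split(" ", 1)
        fields = rest.split()
        pools[name] = {fields[0]: int(fields[1]), fields[2]: int(fields[3])}
    return pools

sample = ["volumes size 3 min_size 2", "images size 2 min_size 1"]  # made up
print(parse_pool_replicas(sample))
```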
diff --git a/test_set/cvp-sanity/tests/ceph/test_ceph_status.py b/test_set/cvp-sanity/tests/ceph/test_ceph_status.py
new file mode 100644
index 0000000..0c0ef0c
--- /dev/null
+++ b/test_set/cvp-sanity/tests/ceph/test_ceph_status.py
@@ -0,0 +1,36 @@
+import json
+import pytest
+
+
+def test_ceph_osd(local_salt_client):
+    osd_fail = local_salt_client.cmd(
+        tgt='ceph:osd',
+        param='ceph osd tree | grep down',
+        expr_form='pillar')
+    if not osd_fail:
+        pytest.skip("Ceph is not found on this environment")
+    assert not osd_fail.values()[0], \
+        "Some OSDs are in down state: {}".format(
+        osd_fail.values()[0])
+
+
+def test_ceph_health(local_salt_client):
+    get_status = local_salt_client.cmd(
+        tgt='ceph:mon',
+        param='ceph -s -f json',
+        expr_form='pillar')
+    if not get_status:
+        pytest.skip("Ceph is not found on this environment")
+    status = json.loads(get_status.values()[0])["health"]
+    health = status["status"] if 'status' in status \
+        else status["overall_status"]
+
+    # Health structure depends on Ceph version, so condition is needed:
+    if 'checks' in status:
+        summary = "Summary: {}".format(
+            [i["summary"]["message"] for i in status["checks"].values()])
+    else:
+        summary = status["summary"]
+
+    assert health == "HEALTH_OK",\
+        "Ceph status is not expected. {}".format(summary)
diff --git a/test_set/cvp-sanity/tests/ceph/test_ceph_tell_bench.py b/test_set/cvp-sanity/tests/ceph/test_ceph_tell_bench.py
new file mode 100644
index 0000000..b275022
--- /dev/null
+++ b/test_set/cvp-sanity/tests/ceph/test_ceph_tell_bench.py
@@ -0,0 +1,56 @@
+import pytest
+import json
+import math
+
+
+def test_ceph_tell_bench(local_salt_client):
+    """
+    Test checks that each OSD MB per second speed 
+    is not lower than 10 MB comparing with AVG. 
+    Bench command by default writes 1Gb on each OSD 
+    with the default values of 4M 
+    and gives the "bytes_per_sec" speed for each OSD.
+
+    """
+    pytest.skip("This test needs redesign. Skipped for now")
+    ceph_monitors = local_salt_client.cmd(
+        'ceph:mon', 
+        'test.ping', 
+        expr_form='pillar')
+
+    if not ceph_monitors:
+        pytest.skip("Ceph is not found on this environment")
+
+    cmd_result = local_salt_client.cmd(
+        ceph_monitors.keys()[0], 
+        'cmd.run', ["ceph tell osd.* bench -f json"], 
+        expr_form='glob').get(
+            ceph_monitors.keys()[0]).split('\n')
+
+    cmd_result = filter(None, cmd_result)
+
+    osd_pool = {}
+    for osd in cmd_result:
+        osd_ = osd.split(" ")
+        osd_pool[osd_[0]] = osd_[1]
+
+    mbps_sum = 0
+    osd_count = 0
+    for osd in osd_pool:
+        osd_count += 1
+        mbps_sum += json.loads(
+            osd_pool[osd])['bytes_per_sec'] / 1000000
+
+    mbps_avg = mbps_sum / osd_count
+    result = {}
+    for osd in osd_pool:
+        mbps = json.loads(
+            osd_pool[osd])['bytes_per_sec'] / 1000000
+        if math.fabs(mbps_avg - mbps) > 10:
+            result[osd] = osd_pool[osd]
+
+    assert len(result) == 0, \
+        "Performance of {0} OSD(s) is lower " \
+        "than the AVG performance ({1} MB/s), " \
+        "please check Ceph for possible problems".format(
+            json.dumps(result, indent=4), mbps_avg)
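The 10 MB/s deviation rule can be checked in isolation. A sketch with made-up per-OSD bench results (speeds already converted to MB/s):

```python
def slow_osds(bench_mbps, allowed_delta=10):
    # Flag OSDs whose speed deviates from the average by more than allowed_delta.
    avg = sum(bench_mbps.values()) / len(bench_mbps)
    return {osd: mbps for osd, mbps in bench_mbps.items()
            if abs(avg - mbps) > allowed_delta}

# Made-up MB/s values: three healthy OSDs and one outlier.
sample = {"osd.0": 100.0, "osd.1": 100.0, "osd.2": 100.0, "osd.3": 60.0}
print(slow_osds(sample))
```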
diff --git a/test_set/cvp-sanity/tests/test_cinder_services.py b/test_set/cvp-sanity/tests/test_cinder_services.py
new file mode 100644
index 0000000..a83a3f9
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_cinder_services.py
@@ -0,0 +1,34 @@
+import pytest
+
+
+def test_cinder_services_are_up(local_salt_client, check_cinder_backends):
+    """
+        # Make sure that cinder backend exists with next command: `salt -C "I@cinder:controller" pillar.get cinder:controller:backend`
+        # Check that all services has 'Up' status in output of `cinder service-list` on keystone:server nodes
+    """
+    service_down = local_salt_client.cmd_any(
+        tgt='keystone:server',
+        param='. /root/keystonercv3; cinder service-list | grep "down\|disabled"')
+    assert service_down == '', \
+        'Some cinder services are in the wrong state:\n{}'.format(service_down)
+
+
+def test_cinder_services_has_all_backends(local_salt_client, check_cinder_backends):
+    """
+        # Make sure that cinder backend exists with next command: `salt -C "I@cinder:controller" pillar.get cinder:controller:backend`
+        # Check that quantity of backend in cinder:controller:backend pillar is similar to list of volumes in cinder service-list
+    """
+    backends_cinder = local_salt_client.pillar_get(
+        tgt='cinder:controller',
+        param='cinder:controller:backend'
+    )
+    cinder_volume = local_salt_client.cmd_any(
+        tgt='keystone:server',
+        param='. /root/keystonercv3; cinder service-list | grep "volume" |grep -c -v -e "lvm"')
+    print(backends_cinder)
+    print(cinder_volume)
+    backends_num = len(backends_cinder.keys())
+    assert cinder_volume == str(backends_num), \
+        'Number of cinder-volume services ({0}) does not match ' \
+        'number of volume backends ({1})'.format(
+        cinder_volume, str(backends_num))
\ No newline at end of file
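The backend comparison boils down to counting pillar backends against non-LVM `cinder-volume` rows. A sketch mirroring the shell pipeline, with made-up pillar content and CLI output:

```python
def count_volume_services(service_list_output):
    # Mirror: cinder service-list | grep "volume" | grep -c -v -e "lvm"
    return sum(1 for line in service_list_output.splitlines()
               if "volume" in line and "lvm" not in line)

backends = {"ceph": {}, "netapp": {}}            # made-up pillar content
service_list = ("cinder-volume ctl01@ceph up\n"
                "cinder-volume ctl01@netapp up\n"
                "cinder-volume cmp01@lvm up\n")  # made-up CLI output
print(count_volume_services(service_list) == len(backends))
```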
diff --git a/test_set/cvp-sanity/tests/test_contrail.py b/test_set/cvp-sanity/tests/test_contrail.py
new file mode 100644
index 0000000..fcb96f9
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_contrail.py
@@ -0,0 +1,101 @@
+import pytest
+import json
+import utils
+
+pytestmark = pytest.mark.usefixtures("contrail")
+
+STATUS_FILTER = r'grep -Pv "(==|^$|Disk|unix|support|boot|\*\*|FOR NODE)"'
+STATUS_COMMAND = "contrail-status -t 10"
+
+def get_contrail_status(salt_client, pillar, command, processor):
+    return salt_client.cmd(
+        tgt=pillar,
+        param='{} | {}'.format(command, processor),
+        expr_form='pillar'
+    )
+
+def test_contrail_compute_status(local_salt_client, check_openstack):
+    cs = get_contrail_status(local_salt_client, 'nova:compute',
+                             STATUS_COMMAND, STATUS_FILTER)
+    broken_services = []
+
+    for node in cs:
+        for line in cs[node].split('\n'):
+            line = line.strip()
+            if len(line.split(None, 1)) == 1:
+                err_msg = "{0}: {1}".format(
+                    node, line)
+                broken_services.append(err_msg)
+                continue
+            name, status = line.split(None, 1)
+            if status not in {'active'}:
+                err_msg = "{node}:{service} - {status}".format(
+                    node=node, service=name, status=status)
+                broken_services.append(err_msg)
+
+    assert not broken_services, 'Broken services: {}'.format(json.dumps(
+                                                             broken_services,
+                                                             indent=4))
+
+
+def test_contrail_node_status(local_salt_client, check_openstack):
+    command = STATUS_COMMAND
+
+    # TODO: what will be in OpenContrail 5?
+    if pytest.contrail == '4':
+        command = "doctrail all " + command
+    cs = get_contrail_status(local_salt_client,
+                             'opencontrail:client:analytics_node',
+                             command, STATUS_FILTER)
+    cs.update(get_contrail_status(local_salt_client, 'opencontrail:control',
+                                  command, STATUS_FILTER)
+    )
+    broken_services = []
+    for node in cs:
+        for line in cs[node].split('\n'):
+            line = line.strip()
+            if 'crashes/core.java.' not in line:
+                name, status = line.split(None, 1)
+            else:
+                name, status = line, 'FATAL'
+            if status not in {'active', 'backup'}:
+                err_msg = "{node}:{service} - {status}".format(
+                    node=node, service=name, status=status)
+                broken_services.append(err_msg)
+
+    assert not broken_services, 'Broken services: {}'.format(json.dumps(
+                                                             broken_services,
+                                                             indent=4))
+
+
+def test_contrail_vrouter_count(local_salt_client, check_openstack):
+    cs = get_contrail_status(local_salt_client, 'nova:compute',
+                             STATUS_COMMAND, STATUS_FILTER)
+
+    # TODO: what if compute lacks these service unintentionally?
+    if not cs:
+        pytest.skip("Contrail services were not found on compute nodes")
+
+    actual_vrouter_count = 0
+    for node in cs:
+        for line in cs[node].split('\n'):
+            if 'contrail-vrouter-nodemgr' in line:
+                actual_vrouter_count += 1
+
+    assert actual_vrouter_count == len(cs.keys()),\
+        'The number of vRouters {} differs' \
+        ' from the number of compute nodes {}'.format(actual_vrouter_count,
+                                                      len(cs.keys()))
+
+
+def test_public_ui_contrail(local_salt_client, ctl_nodes_pillar, check_openstack):
+    IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
+    protocol = 'https'
+    port = '8143'
+    url = "{}://{}:{}".format(protocol, IP, port)
+    result = local_salt_client.cmd_any(
+        tgt=ctl_nodes_pillar,
+        param='curl -k {}/ 2>&1 | \
+               grep Contrail'.format(url))
+    assert len(result) != 0, \
+        'Public Contrail UI is not reachable on {} from ctl nodes'.format(url)
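The `contrail-status` output parsing used by the compute test can be sketched standalone: each useful line is a service name followed by a status, and a line without a status column counts as broken. Sample output below is made up:

```python
def broken_contrail_services(node_output):
    # Each useful line is "<service>  <status>"; anything not 'active' is broken.
    broken = []
    for line in node_output.splitlines():
        line = line.strip()
        if not line:
            continue
        parts = line.split(None, 1)
        if len(parts) == 1:
            broken.append(line)          # status column missing entirely
        elif parts[1] != 'active':
            broken.append("{} - {}".format(parts[0], parts[1]))
    return broken

sample = ("contrail-vrouter-agent      active\n"
          "contrail-vrouter-nodemgr    initializing\n")  # made-up output
print(broken_contrail_services(sample))
```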
diff --git a/test_set/cvp-sanity/tests/test_default_gateway.py b/test_set/cvp-sanity/tests/test_default_gateway.py
new file mode 100644
index 0000000..8cea880
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_default_gateway.py
@@ -0,0 +1,24 @@
+import json
+
+
+def test_check_default_gateways(local_salt_client, nodes_in_group):
+    netstat_info = local_salt_client.cmd(
+        tgt="L@"+','.join(nodes_in_group),
+        param='ip r | sed -n 1p',
+        expr_form='compound')
+
+    gateways = {}
+
+    for node in netstat_info.keys():
+        gateway = netstat_info[node]
+        if isinstance(gateway, bool):
+            gateway = 'Cannot access node(-s)'
+        if gateway not in gateways:
+            gateways[gateway] = [node]
+        else:
+            gateways[gateway].append(node)
+
+    assert len(gateways.keys()) == 1, \
+        "More than one default gateway was found: {gw}".format(
+        gw=json.dumps(gateways, indent=4)
+    )
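The grouping logic above can be run standalone: map each distinct default route to the nodes reporting it, treating a boolean salt return as an unreachable minion. Sample `ip r` first lines are made up:

```python
def group_by_gateway(netstat_info):
    # Map each distinct default route to the nodes reporting it.
    gateways = {}
    for node, route in netstat_info.items():
        if isinstance(route, bool):           # minion did not respond
            route = 'Cannot access node(-s)'
        gateways.setdefault(route, []).append(node)
    return gateways

sample = {
    "ctl01": "default via 10.0.0.1 dev ens3",
    "ctl02": "default via 10.0.0.1 dev ens3",
    "cmp01": "default via 10.0.0.254 dev ens3",   # made-up mismatch
}
print(group_by_gateway(sample))
```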
diff --git a/test_set/cvp-sanity/tests/test_drivetrain.py b/test_set/cvp-sanity/tests/test_drivetrain.py
new file mode 100644
index 0000000..3a9f1b6
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_drivetrain.py
@@ -0,0 +1,435 @@
+import jenkins
+from xml.dom import minidom
+import utils
+import json
+import pytest
+import time
+import os
+from pygerrit2 import GerritRestAPI, HTTPBasicAuth
+from requests import HTTPError
+import git
+import ldap
+import ldap.modlist as modlist
+
+
+def join_to_gerrit(local_salt_client, gerrit_user, gerrit_password):
+    gerrit_port = local_salt_client.pillar_get(
+        tgt='I@gerrit:client and not I@salt:master',
+        param='_param:haproxy_gerrit_bind_port',
+        expr_form='compound')
+    gerrit_address = local_salt_client.pillar_get(
+        tgt='I@gerrit:client and not I@salt:master',
+        param='_param:haproxy_gerrit_bind_host',
+        expr_form='compound')
+    url = 'http://{0}:{1}'.format(gerrit_address,gerrit_port)
+    auth = HTTPBasicAuth(gerrit_user, gerrit_password)
+    rest = GerritRestAPI(url=url, auth=auth)
+    return rest
+
+
+def join_to_jenkins(local_salt_client, jenkins_user, jenkins_password):
+    jenkins_port = local_salt_client.pillar_get(
+        tgt='I@jenkins:client and not I@salt:master',
+        param='_param:haproxy_jenkins_bind_port',
+        expr_form='compound')
+    jenkins_address = local_salt_client.pillar_get(
+        tgt='I@jenkins:client and not I@salt:master',
+        param='_param:haproxy_jenkins_bind_host',
+        expr_form='compound')
+    jenkins_url = 'http://{0}:{1}'.format(jenkins_address,jenkins_port)
+    server = jenkins.Jenkins(jenkins_url, username=jenkins_user, password=jenkins_password)
+    return server
+
+
+def get_password(local_salt_client,service):
+    password = local_salt_client.pillar_get(
+        tgt=service,
+        param='_param:openldap_admin_password')
+    return password
+
+
+def test_drivetrain_gerrit(local_salt_client, check_cicd):
+    gerrit_password = get_password(local_salt_client,'gerrit:client')
+    gerrit_error = ''
+    current_date = time.strftime("%Y%m%d-%H.%M.%S", time.localtime())
+    test_proj_name = "test-dt-{0}".format(current_date)
+    gerrit_port = local_salt_client.pillar_get(
+        tgt='I@gerrit:client and not I@salt:master',
+        param='_param:haproxy_gerrit_bind_port',
+        expr_form='compound')
+    gerrit_address = local_salt_client.pillar_get(
+        tgt='I@gerrit:client and not I@salt:master',
+        param='_param:haproxy_gerrit_bind_host',
+        expr_form='compound')
+    try:
+        #Connecting to gerrit and check connection
+        server = join_to_gerrit(local_salt_client,'admin',gerrit_password)
+        gerrit_check = server.get("/changes/?q=owner:self%20status:open")
+        #Check deleteproject plugin and skip test if the plugin is not installed
+        gerrit_plugins = server.get("/plugins/?all")
+        if 'deleteproject' not in gerrit_plugins:
+            pytest.skip("Delete-project plugin is not installed")
+        #Create test project and add description
+        server.put("/projects/"+test_proj_name)
+        server.put("/projects/"+test_proj_name+"/description",json={"description":"Test DriveTrain project","commit_message": "Update the project description"})
+    except HTTPError, e:
+        gerrit_error = e
+    try:
+        #Create test folder and init git
+        repo_dir = os.path.join(os.getcwd(),test_proj_name)
+        file_name = os.path.join(repo_dir, current_date)
+        repo = git.Repo.init(repo_dir)
+        #Add remote url for this git repo
+        origin = repo.create_remote('origin', 'http://admin:{1}@{2}:{3}/{0}.git'.format(test_proj_name,gerrit_password,gerrit_address,gerrit_port))
+        #Add commit-msg hook to automatically add Change-Id to our commit
+        os.system("curl -Lo {0}/.git/hooks/commit-msg 'http://admin:{1}@{2}:{3}/tools/hooks/commit-msg' > /dev/null 2>&1".format(repo_dir,gerrit_password,gerrit_address,gerrit_port))
+        os.system("chmod u+x {0}/.git/hooks/commit-msg".format(repo_dir))
+        #Create a test file
+        f = open(file_name, 'w+')
+        f.write("This is a test file for DriveTrain test")
+        f.close()
+        #Add file to git and commit it to Gerrit for review
+        repo.index.add([file_name])
+        repo.index.commit("This is a test commit for DriveTrain test")
+        repo.git.push("origin", "HEAD:refs/for/master")
+        #Get change id from Gerrit. Set Code-Review +2 and submit this change
+        changes = server.get("/changes/?q=project:{0}".format(test_proj_name))
+        last_change = changes[0].get('change_id')
+        server.post("/changes/{0}/revisions/1/review".format(last_change),json={"message": "All is good","labels":{"Code-Review":"+2"}})
+        server.post("/changes/{0}/submit".format(last_change))
+    except HTTPError, e:
+        gerrit_error = e
+    finally:
+        #Delete test project
+        server.post("/projects/"+test_proj_name+"/deleteproject~delete")
+    assert gerrit_error == '',\
+        'Something is wrong with Gerrit: {}'.format(gerrit_error)
+
+
+def test_drivetrain_openldap(local_salt_client, check_cicd):
+    """
+         1. Create a test user 'DT_test_user' in openldap
+         2. Add the user to admin group
+         3. Login using the user to Jenkins
+         4. Check that no error occurred
+         5. Add the user to the devops group in Gerrit and then login to Gerrit
+        using the test_user credentials.
+         6. Start a job in Jenkins as this user
+         7. Get info from Gerrit as this user
+         8. Finally, delete the user from the admin
+        group and openldap
+    """
+
+    # TODO split to several test cases. One check - per one test method. Make the login process in fixture
+    ldap_password = get_password(local_salt_client,'openldap:client')
+    #Check that ldap_password exists, otherwise skip the test
+    if not ldap_password:
+        pytest.skip("Openldap service or openldap:client pillar \
+        are not found on this environment.")
+    ldap_port = local_salt_client.pillar_get(
+        tgt='I@openldap:client and not I@salt:master',
+        param='_param:haproxy_openldap_bind_port',
+        expr_form='compound')
+    ldap_address = local_salt_client.pillar_get(
+        tgt='I@openldap:client and not I@salt:master',
+        param='_param:haproxy_openldap_bind_host',
+        expr_form='compound')
+    ldap_dc = local_salt_client.pillar_get(
+        tgt='openldap:client',
+        param='_param:openldap_dn')
+    ldap_con_admin = local_salt_client.pillar_get(
+        tgt='openldap:client',
+        param='openldap:client:server:auth:user')
+    ldap_url = 'ldap://{0}:{1}'.format(ldap_address,ldap_port)
+    ldap_error = ''
+    ldap_result = ''
+    gerrit_result = ''
+    gerrit_error = ''
+    jenkins_error = ''
+    #Test user's CN
+    test_user_name = 'DT_test_user'
+    test_user = 'cn={0},ou=people,{1}'.format(test_user_name,ldap_dc)
+    #Admins group CN
+    admin_gr_dn = 'cn=admins,ou=groups,{0}'.format(ldap_dc)
+    #List of attributes for test user
+    attrs = {}
+    attrs['objectclass'] = ['organizationalRole', 'simpleSecurityObject', 'shadowAccount']
+    attrs['cn'] = test_user_name
+    attrs['uid'] = test_user_name
+    attrs['userPassword'] = 'aSecretPassw'
+    attrs['description'] = 'Test user for CVP DT test'
+    searchFilter = 'cn={0}'.format(test_user_name)
+    #Get a test job name from config
+    config = utils.get_configuration()
+    jenkins_cvp_job = config['jenkins_cvp_job']
+    #Open connection to ldap and creating test user in admins group
+    try:
+        ldap_server = ldap.initialize(ldap_url)
+        ldap_server.simple_bind_s(ldap_con_admin,ldap_password)
+        ldif = modlist.addModlist(attrs)
+        ldap_server.add_s(test_user,ldif)
+        ldap_server.modify_s(admin_gr_dn,[(ldap.MOD_ADD, 'memberUid', [test_user_name],)],)
+        #Check search test user in LDAP
+        searchScope = ldap.SCOPE_SUBTREE
+        ldap_result = ldap_server.search_s(ldap_dc, searchScope, searchFilter)
+    except ldap.LDAPError, e:
+        ldap_error = e
+    try:
+        #Check connection between Jenkins and LDAP
+        jenkins_server = join_to_jenkins(local_salt_client,test_user_name,'aSecretPassw')
+        jenkins_version = jenkins_server.get_job_name(jenkins_cvp_job)
+        #Check connection between Gerrit and LDAP
+        gerrit_server = join_to_gerrit(local_salt_client,'admin',ldap_password)
+        gerrit_check = gerrit_server.get("/changes/?q=owner:self%20status:open")
+        #Add test user to devops-contrib group in Gerrit and check login
+        _link = "/groups/devops-contrib/members/{0}".format(test_user_name)
+        gerrit_add_user = gerrit_server.put(_link)
+        gerrit_server = join_to_gerrit(local_salt_client,test_user_name,'aSecretPassw')
+        gerrit_result = gerrit_server.get("/changes/?q=owner:self%20status:open")
+    except HTTPError, e:
+        gerrit_error = e
+    except jenkins.JenkinsException, e:
+        jenkins_error = e
+    finally:
+        ldap_server.modify_s(admin_gr_dn,[(ldap.MOD_DELETE, 'memberUid', [test_user_name],)],)
+        ldap_server.delete_s(test_user)
+        ldap_server.unbind_s()
+    assert ldap_error == '', \
+        '''Something is wrong with connection to LDAP:
+            {0}'''.format(ldap_error)
+    assert jenkins_error == '', \
+        '''Connection to Jenkins was not established:
+            {0}'''.format(jenkins_error)
+    assert gerrit_error == '', \
+        '''Connection to Gerrit was not established:
+            {0}'''.format(gerrit_error)
+    assert ldap_result !=[], \
+        '''Test user was not found'''
+
+
+def test_drivetrain_services_replicas(local_salt_client, check_cicd):
+    """
+        # Execute ` salt -C 'I@gerrit:client' cmd.run 'docker service ls'` command to get info  for each docker service like that:
+        "x5nzktxsdlm6        jenkins_slave02     replicated          0/1                 docker-prod-local.artifactory.mirantis.com/mirantis/cicd/jnlp-slave:2019.2.0         "
+        # Check that each service has all replicas
+    """
+    # TODO: replace with rerunfalures plugin
+    wrong_items = []
+    for _ in range(4):
+        docker_services_by_nodes = local_salt_client.cmd(
+            tgt='I@gerrit:client',
+            param='docker service ls',
+            expr_form='compound')
+        wrong_items = []
+        for line in docker_services_by_nodes[docker_services_by_nodes.keys()[0]].split('\n'):
+            if line[line.find('/') - 1] != line[line.find('/') + 1] \
+               and 'replicated' in line:
+                wrong_items.append(line)
+        if len(wrong_items) == 0:
+            break
+        else:
+            print('''Some DriveTrain services don't have the expected number of replicas:
+                  {}\n'''.format(json.dumps(wrong_items, indent=4)))
+            time.sleep(5)
+    assert len(wrong_items) == 0
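The replica check above compares the single characters on each side of the first `/`, which misreads multi-digit counts: for `10/10` it compares `0` with `1` and wrongly flags a healthy service. A minimal sketch of a more robust parser (a hypothetical helper, not part of this patch):

```python
import re

# REPLICAS is the only whitespace-separated column of the form '<digits>/<digits>'
REPLICAS_RE = re.compile(r'^(\d+)/(\d+)$')


def has_all_replicas(service_line):
    """Return True if the REPLICAS column (e.g. '2/2') shows a full set."""
    for column in service_line.split():
        match = REPLICAS_RE.match(column)
        if match:
            actual, expected = match.groups()
            return int(actual) == int(expected)
    return True  # header or malformed line: nothing to flag
```

Matching the whole column against a regex also avoids tripping over slashes in the image name.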
+
+
+def test_drivetrain_components_and_versions(local_salt_client, check_cicd):
+    """
+        1. Execute `docker service ls --format "{{.Image}}"` on the 'I@gerrit:client' target
+        2. Execute `salt -C 'I@gerrit:client' pillar.get docker:client:images`
+        3. Check that the list of images from step 1 is the same as the list from step 2
+        4. Check that all Docker services have a version tag equal to mcp_version
+
+    """
+    config = utils.get_configuration()
+    if not config['drivetrain_version']:
+        expected_version = \
+            local_salt_client.pillar_get(param='_param:mcp_version') or \
+            local_salt_client.pillar_get(param='_param:apt_mk_version')
+        if not expected_version:
+            pytest.skip("drivetrain_version is not defined. Skipping")
+    else:
+        expected_version = config['drivetrain_version']
+    table_with_docker_services = local_salt_client.cmd(tgt='I@gerrit:client',
+                                                       param='docker service ls --format "{{.Image}}"',
+                                                       expr_form='compound')
+    expected_images = local_salt_client.pillar_get(tgt='gerrit:client',
+                                                   param='docker:client:images')
+    mismatch = {}
+    actual_images = {}
+    for image in set(table_with_docker_services[table_with_docker_services.keys()[0]].split('\n')):
+        actual_images[image.split(":")[0]] = image.split(":")[-1]
+    for image in set(expected_images):
+        im_name = image.split(":")[0]
+        if im_name not in actual_images:
+            mismatch[im_name] = 'not found on env'
+        elif image.split(":")[-1] != actual_images[im_name]:
+            mismatch[im_name] = 'has {actual} version instead of {expected}'.format(
+                actual=actual_images[im_name], expected=image.split(":")[-1])
+    assert len(mismatch) == 0, \
+        '''Some DriveTrain components do not have expected versions:
+              {}'''.format(json.dumps(mismatch, indent=4))
+
+
+def test_jenkins_jobs_branch(local_salt_client, check_cicd):
+    """ This test compares Jenkins jobs versions
+        collected from the cloud vs collected from pillars.
+    """
+    excludes = ['upgrade-mcp-release', 'deploy-update-salt',
+                'git-mirror-downstream-mk-pipelines',
+                'git-mirror-downstream-pipeline-library']
+
+    config = utils.get_configuration()
+    drivetrain_version = config.get('drivetrain_version', '')
+    jenkins_password = get_password(local_salt_client, 'jenkins:client')
+    version_mismatch = []
+    server = join_to_jenkins(local_salt_client, 'admin', jenkins_password)
+    for job_instance in server.get_jobs():
+        job_name = job_instance.get('name')
+        if job_name in excludes:
+            continue
+
+        job_config = server.get_job_config(job_name)
+        xml_data = minidom.parseString(job_config)
+        BranchSpec = xml_data.getElementsByTagName('hudson.plugins.git.BranchSpec')
+
+        # We use the master branch for pipeline-library in case of 'testing', 'stable', 'nightly' versions
+        # Leave the proposed version as is
+        # In other cases we get release/{drivetrain_version} (e.g. release/2019.2.0)
+        if drivetrain_version in ['testing', 'nightly', 'stable']:
+            expected_version = 'master'
+        else:
+            expected_version = local_salt_client.pillar_get(
+                tgt='gerrit:client',
+                param='jenkins:client:job:{}:scm:branch'.format(job_name))
+
+        if not BranchSpec:
+            print("No BranchSpec was found for the {} job".format(job_name))
+            continue
+
+        actual_version = BranchSpec[0].getElementsByTagName('name')[0].childNodes[0].data
+        if actual_version not in expected_version and expected_version != '':
+            version_mismatch.append("Job {0} has {1} branch. "
+                                    "Expected {2}".format(job_name,
+                                                          actual_version,
+                                                          expected_version))
+    assert len(version_mismatch) == 0, \
+        '''Some DriveTrain jobs have version/branch mismatch:
+              {}'''.format(json.dumps(version_mismatch, indent=4))
+
+
+def test_drivetrain_jenkins_job(local_salt_client, check_cicd):
+    """
+        # Login to Jenkins on jenkins:client
+        # Read the job name from the 'jenkins_test_job' configuration option
+        # Start the job
+        # Wait until the job completes
+        # Check that the job has completed with the "SUCCESS" result
+    """
+    job_result = None
+
+    jenkins_password = get_password(local_salt_client, 'jenkins:client')
+    server = join_to_jenkins(local_salt_client, 'admin', jenkins_password)
+    # Getting Jenkins test job name from configuration
+    config = utils.get_configuration()
+    jenkins_test_job = config['jenkins_test_job']
+    if not server.get_job_name(jenkins_test_job):
+        server.create_job(jenkins_test_job, jenkins.EMPTY_CONFIG_XML)
+    if server.get_job_name(jenkins_test_job):
+        next_build_num = server.get_job_info(jenkins_test_job)['nextBuildNumber']
+        # If this is the first build, skip the running-build check
+        if next_build_num != 1:
+            # Check that the test job is not running at this moment,
+            # otherwise skip the test
+            last_build_num = server.get_job_info(jenkins_test_job)['lastBuild'].get('number')
+            last_build_status = server.get_build_info(jenkins_test_job, last_build_num)['building']
+            if last_build_status:
+                pytest.skip("Test job {0} is already running".format(jenkins_test_job))
+        server.build_job(jenkins_test_job)
+        timeout = 0
+        # Assume the job is running by default to cover the delay between scheduling the build and its start.
+        job_status = True
+        while job_status and (timeout < 180):
+            time.sleep(10)
+            timeout += 10
+            job_status = server.get_build_info(jenkins_test_job, next_build_num)['building']
+        job_result = server.get_build_info(jenkins_test_job, next_build_num)['result']
+    else:
+        pytest.skip("The job {0} was not found".format(jenkins_test_job))
+    assert job_result == 'SUCCESS', \
+        '''Test job '{0}' build was not successful or timeout is too small
+         '''.format(jenkins_test_job)
+
+
+def test_kdt_all_pods_are_available(local_salt_client, check_kdt):
+    """
+     # Run `kubectl get pods -n drivetrain` on kdt nodes to get the status of each pod
+     # Check that each pod is fully ready, i.e. the READY column shows N/N
+
+    """
+    pods_statuses_output = local_salt_client.cmd_any(
+        tgt='L@'+','.join(check_kdt),
+        param='kubectl get pods -n drivetrain |  awk {\'print $1"; "$2\'} | column -t',
+        expr_form='compound')
+
+    assert pods_statuses_output != "/bin/sh: 1: kubectl: not found", \
+        "Nodes {} don't have kubectl".format(check_kdt)
+    # Convert string to list and remove first row with column names
+    pods_statuses = pods_statuses_output.split('\n')
+    pods_statuses = pods_statuses[1:]
+
+    report_with_errors = ""
+    for pod_status in pods_statuses:
+        pod, status = pod_status.split('; ')
+        actual_replica, expected_replica = status.split('/')
+
+        if actual_replica.strip() != expected_replica.strip():
+            report_with_errors += "Pod [{pod}] doesn't have all containers. Expected {expected} containers, actual {actual}\n".format(
+                pod=pod,
+                expected=expected_replica,
+                actual=actual_replica
+            )
+
+    print report_with_errors
+    assert report_with_errors == "", \
+        "\n{sep}{kubectl_output}{sep} \n\n {report} ".format(
+            sep="\n" + "-"*20 + "\n",
+            kubectl_output=pods_statuses_output,
+            report=report_with_errors
+        )
+
+
+def test_kfg_all_pods_are_available(local_salt_client, check_kfg):
+    """
+     # Run `kubectl get pods -n drivetrain` on the cfg node to get the status of each pod
+     # Check that each pod is fully ready, i.e. the READY column shows N/N
+
+    """
+    # TODO collapse similar tests into one to check pods and add new fixture
+    pods_statuses_output = local_salt_client.cmd_any(
+        tgt='L@' + ','.join(check_kfg),
+        param='kubectl get pods -n drivetrain |  awk {\'print $1"; "$2\'} | column -t',
+        expr_form='compound')
+    # Convert string to list and remove first row with column names
+    pods_statuses = pods_statuses_output.split('\n')
+    pods_statuses = pods_statuses[1:]
+
+    report_with_errors = ""
+    for pod_status in pods_statuses:
+        pod, status = pod_status.split('; ')
+        actual_replica, expected_replica = status.split('/')
+
+        if actual_replica.strip() != expected_replica.strip():
+            report_with_errors += "Pod [{pod}] doesn't have all containers. Expected {expected} containers, actual {actual}\n".format(
+                pod=pod,
+                expected=expected_replica,
+                actual=actual_replica
+            )
+
+    print report_with_errors
+    assert report_with_errors == "", \
+        "\n{sep}{kubectl_output}{sep} \n\n {report} ".format(
+            sep="\n" + "-" * 20 + "\n",
+            kubectl_output=pods_statuses_output,
+            report=report_with_errors
+        )
\ No newline at end of file
diff --git a/test_set/cvp-sanity/tests/test_duplicate_ips.py b/test_set/cvp-sanity/tests/test_duplicate_ips.py
new file mode 100644
index 0000000..3b55a26
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_duplicate_ips.py
@@ -0,0 +1,52 @@
+from collections import Counter
+from pprint import pformat
+import os
+
+import utils
+
+
+def get_duplicate_ifaces(nodes, ips):
+    dup_ifaces = {}
+    for node in nodes:
+        for iface in nodes[node]['ip4_interfaces']:
+            if set(nodes[node]['ip4_interfaces'][iface]) & set(ips):
+                dup_ifaces[node] = {iface: nodes[node]['ip4_interfaces'][iface]}
+    return dup_ifaces
+
+
+def test_duplicate_ips(local_salt_client):
+    testname = os.path.basename(__file__).split('.')[0]
+    config = utils.get_configuration()
+    skipped_ifaces = config.get(testname)["skipped_ifaces"]
+
+    local_salt_client.cmd(tgt='*',
+                          fun='saltutil.refresh_grains',
+                          expr_form='compound')
+    nodes = local_salt_client.cmd(tgt='*',
+                                  fun='grains.item',
+                                  param='ip4_interfaces',
+                                  expr_form='compound')
+
+    ipv4_list = []
+    for node in nodes:
+        if isinstance(nodes[node], bool):
+            # TODO: do not skip node
+            print ("{} node is skipped".format(node))
+            continue
+        for iface in nodes[node]['ip4_interfaces']:
+            # Omit 'ip-less' ifaces
+            if not nodes[node]['ip4_interfaces'][iface]:
+                continue
+            if iface in skipped_ifaces:
+                continue
+            ipv4_list.extend(nodes[node]['ip4_interfaces'][iface])
+    no_dups = (len(ipv4_list) == len(set(ipv4_list)))
+    if not no_dups:
+        ips_count = Counter(ipv4_list).most_common()
+        dup_ips = filter(lambda x: x[1] > 1, ips_count)
+        dup_ifaces = get_duplicate_ifaces(nodes, [v[0] for v in dup_ips])
+
+        msg = ("\nDuplicate IP addresses found:\n{}"
+               "\n\nThe following interfaces are affected:\n{}"
+               "".format(pformat(dup_ips), pformat(dup_ifaces)))
+        assert no_dups, msg
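The Counter-based detection above can be captured in a small standalone helper that takes the same data shape as the `ip4_interfaces` grain (the helper name is illustrative, not part of this patch):

```python
from collections import Counter


def find_duplicate_ips(ip4_interfaces_by_node, skipped_ifaces=()):
    """Return IPv4 addresses that appear more than once across all nodes.

    `ip4_interfaces_by_node` mirrors the `ip4_interfaces` grain:
    {node: {iface: [ip, ...]}}.
    """
    all_ips = []
    for ifaces in ip4_interfaces_by_node.values():
        for iface, ips in ifaces.items():
            # Omit 'ip-less' and explicitly skipped interfaces
            if iface in skipped_ifaces or not ips:
                continue
            all_ips.extend(ips)
    return sorted(ip for ip, count in Counter(all_ips).items() if count > 1)
```

The test itself additionally maps each duplicate back to the owning interfaces via `get_duplicate_ifaces`.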
diff --git a/test_set/cvp-sanity/tests/test_etc_hosts.py b/test_set/cvp-sanity/tests/test_etc_hosts.py
new file mode 100644
index 0000000..8850ab7
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_etc_hosts.py
@@ -0,0 +1,22 @@
+import json
+
+
+def test_etc_hosts(local_salt_client):
+    nodes_info = local_salt_client.cmd(
+        tgt='*',
+        param='cat /etc/hosts',
+        expr_form='compound')
+    result = {}
+    for node in nodes_info.keys():
+        if isinstance(nodes_info[node], bool):
+            result[node] = 'Cannot access this node'
+            continue
+        for nd in nodes_info.keys():
+            if nd not in nodes_info[node]:
+                if node in result:
+                    result[node] += ',' + nd
+                else:
+                    result[node] = nd
+    assert len(result) <= 1, \
+        "Some hosts are not present in /etc/hosts: {0}".format(
+            json.dumps(result, indent=4))
\ No newline at end of file
diff --git a/test_set/cvp-sanity/tests/test_galera_cluster.py b/test_set/cvp-sanity/tests/test_galera_cluster.py
new file mode 100644
index 0000000..73f4932
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_galera_cluster.py
@@ -0,0 +1,23 @@
+import pytest
+
+
+def test_galera_cluster_status(local_salt_client):
+    gs = local_salt_client.cmd(
+        tgt='galera:*',
+        param='salt-call mysql.status | grep -A1 wsrep_cluster_size | tail -n1',
+        expr_form='pillar')
+
+    if not gs:
+        pytest.skip("Galera service or the galera:* pillar "
+                    "is not found on this environment.")
+
+    size_cluster = []
+    amount = len(gs)
+
+    for item in gs.values():
+        size_cluster.append(item.split('\n')[-1].strip())
+
+    assert all(item == str(amount) for item in size_cluster), \
+        '''Inconsistency found within the cloud. The MySQL Galera cluster
+              is probably broken; the cluster size gathered from the nodes:
+              {}'''.format(gs)
diff --git a/test_set/cvp-sanity/tests/test_k8s.py b/test_set/cvp-sanity/tests/test_k8s.py
new file mode 100644
index 0000000..97c3490
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_k8s.py
@@ -0,0 +1,184 @@
+import pytest
+import json
+import os
+
+
+def test_k8s_get_cs_status(local_salt_client):
+    result = local_salt_client.cmd(
+        tgt='etcd:server',
+        param='kubectl get cs',
+        expr_form='pillar'
+    )
+    errors = []
+    if not result:
+        pytest.skip("k8s is not found on this environment")
+    for node in result:
+        for line in result[node].split('\n'):
+            line = line.strip()
+            if 'MESSAGE' in line or 'proto' in line:
+                continue
+            else:
+                if 'Healthy' not in line:
+                    errors.append(line)
+        break
+    assert not errors, 'k8s is not healthy: {}'.format(json.dumps(
+                                                       errors,
+                                                       indent=4))
+
+
+@pytest.mark.xfail
+def test_k8s_get_nodes_status(local_salt_client):
+    result = local_salt_client.cmd(
+        tgt='etcd:server',
+        param='kubectl get nodes',
+        expr_form='pillar'
+    )
+    errors = []
+    if not result:
+        pytest.skip("k8s is not found on this environment")
+    for node in result:
+        for line in result[node].split('\n'):
+            line = line.strip()
+            if 'STATUS' in line or 'proto' in line:
+                continue
+            else:
+                if 'Ready' != line.split()[1]:
+                    errors.append(line)
+        break
+    assert not errors, 'k8s is not healthy: {}'.format(json.dumps(
+                                                       errors,
+                                                       indent=4))
+
+
+def test_k8s_get_calico_status(local_salt_client):
+    result = local_salt_client.cmd(
+        tgt='kubernetes:pool',
+        param='calicoctl node status',
+        expr_form='pillar'
+    )
+    errors = []
+    if not result:
+        pytest.skip("k8s is not found on this environment")
+    for node in result:
+        for line in result[node].split('\n'):
+            line = line.strip('|')
+            if 'STATE' in line or '| ' not in line:
+                continue
+            else:
+                if 'up' not in line or 'Established' not in line:
+                    errors.append(line)
+    assert not errors, 'Calico node status is not good: {}'.format(json.dumps(
+                                                                   errors,
+                                                                   indent=4))
+
+
+def test_k8s_cluster_status(local_salt_client):
+    result = local_salt_client.cmd(
+        tgt='kubernetes:master',
+        param='kubectl cluster-info',
+        expr_form='pillar'
+    )
+    errors = []
+    if not result:
+        pytest.skip("k8s is not found on this environment")
+    for node in result:
+        for line in result[node].split('\n'):
+            if 'proto' in line or 'further' in line or line == '':
+                continue
+            else:
+                if 'is running' not in line:
+                    errors.append(line)
+        break
+    assert not errors, 'k8s cluster info is not good: {}'.format(json.dumps(
+                                                                 errors,
+                                                                 indent=4))
+
+
+def test_k8s_kubelet_status(local_salt_client):
+    result = local_salt_client.cmd(
+        tgt='kubernetes:pool',
+        fun='service.status',
+        param='kubelet',
+        expr_form='pillar'
+    )
+    errors = []
+    if not result:
+        pytest.skip("k8s is not found on this environment")
+    for node in result:
+        if not result[node]:
+            errors.append(node)
+    assert not errors, 'Kubelet is not running on these nodes: {}'.format(
+                       errors)
+
+
+def test_k8s_check_system_pods_status(local_salt_client):
+    result = local_salt_client.cmd(
+        tgt='etcd:server',
+        param='kubectl --namespace="kube-system" get pods',
+        expr_form='pillar'
+    )
+    errors = []
+    if not result:
+        pytest.skip("k8s is not found on this environment")
+    for node in result:
+        for line in result[node].split('\n'):
+            line = line.strip('|')
+            if 'STATUS' in line or 'proto' in line:
+                continue
+            else:
+                if 'Running' not in line:
+                    errors.append(line)
+        break
+    assert not errors, 'Some system pods are not running: {}'.format(json.dumps(
+                                                                   errors,
+                                                                   indent=4))
+
+
+def test_check_k8s_image_availability(local_salt_client):
+    # Not a real test: it only reports whether the image registry is reachable
+    hostname = 'https://docker-dev-virtual.docker.mirantis.net/artifactory/webapp/'
+    response = os.system('curl -s --insecure {} > /dev/null'.format(hostname))
+    if response == 0:
+        print '{} is AVAILABLE'.format(hostname)
+    else:
+        print '{} IS NOT AVAILABLE'.format(hostname)
+
+
+def test_k8s_dashboard_available(local_salt_client):
+    """
+        # Check whether Kubernetes is enabled on the cluster with the command `salt -C 'etcd:server' cmd.run 'kubectl get svc -n kube-system'`
+        # If yes, check the Dashboard addon with the command: `salt -C 'etcd:server' pillar.get kubernetes:common:addons:dashboard:enabled`
+        # If the dashboard is enabled, get its IP from the pillar `salt -C 'etcd:server' pillar.get kubernetes:common:addons:dashboard:public_ip`
+        # Check that public_ip exists
+        # Check that public_ip:8443 is accessible with curl
+    """
+    result = local_salt_client.cmd(
+        tgt='etcd:server',
+        param='kubectl get svc -n kube-system',
+        expr_form='pillar'
+    )
+    if not result:
+        pytest.skip("k8s is not found on this environment")
+
+    # service name 'kubernetes-dashboard' is hardcoded in kubernetes formula
+    dashboard_enabled = local_salt_client.pillar_get(
+        tgt='etcd:server',
+        param='kubernetes:common:addons:dashboard:enabled',)
+    if not dashboard_enabled:
+        pytest.skip("Kubernetes dashboard is not enabled in the cluster.")
+
+    external_ip = local_salt_client.pillar_get(
+        tgt='etcd:server',
+        param='kubernetes:common:addons:dashboard:public_ip')
+
+    assert len(external_ip) > 0, "Kubernetes dashboard is enabled but not defined in pillars"
+    # dashboard port 8443 is hardcoded in kubernetes formula
+    url = "https://{}:8443".format(external_ip)
+    check = local_salt_client.cmd(
+        tgt='etcd:server',
+        param='curl {} 2>&1 | grep kubernetesDashboard'.format(url),
+        expr_form='pillar'
+    )
+    assert len(check.values()[0]) != 0, \
+        'Kubernetes dashboard is not reachable on {} ' \
+        'from ctl nodes'.format(url)
diff --git a/test_set/cvp-sanity/tests/test_mounts.py b/test_set/cvp-sanity/tests/test_mounts.py
new file mode 100644
index 0000000..c9ba9ce
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_mounts.py
@@ -0,0 +1,43 @@
+import json
+import pytest
+
+
+def test_mounted_file_systems(local_salt_client, nodes_in_group):
+    """
+        # Get all mount points from each node in the group with the command `df -h | awk '{print $1}'`
+        # Check that all mount points are the same for each node in the group
+    """
+    mounts_by_nodes = local_salt_client.cmd(tgt="L@"+','.join(nodes_in_group),
+                                            param="df -h | awk '{print $1}'",
+                                            expr_form='compound')
+
+    # Let's exclude cmp, kvm, ceph OSD nodes, mon, cid, k8s-ctl, k8s-cmp nodes
+    # These nodes will have different mounts and this is expected
+    exclude_nodes = local_salt_client.test_ping(
+         tgt="I@nova:compute or "
+             "I@ceph:osd or "
+             "I@salt:control or "
+             "I@prometheus:server and not I@influxdb:server or "
+             "I@kubernetes:* and not I@etcd:* or "
+             "I@docker:host and not I@prometheus:server and not I@kubernetes:* or "
+             "I@gerrit:client and I@kubernetes:pool and not I@salt:master",
+         expr_form='compound').keys()
+
+    if len(mounts_by_nodes.keys()) < 2:
+        pytest.skip("Nothing to compare - only 1 node")
+
+    result = {}
+    pretty_result = {}
+
+    for node in mounts_by_nodes:
+        if node in exclude_nodes:
+            continue
+        result[node] = "\n".join(sorted(mounts_by_nodes[node].split()))
+        pretty_result[node] = sorted(mounts_by_nodes[node].split())
+
+    if not result:
+        pytest.skip("All nodes in the group are excluded from the check")
+
+    assert len(set(result.values())) == 1,\
+        "The nodes in the same group have different mounts:\n{}".format(
+            json.dumps(pretty_result, indent=4))
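The comparison in `test_mounted_file_systems` reduces each node to a sorted set of mount points and asserts that only one distinct set remains. That reduction can be sketched as a standalone helper (the name is illustrative, not part of this patch):

```python
def group_nodes_by_mounts(mounts_by_node):
    """Group node names by their sorted, de-duplicated set of mount points.

    More than one key in the result means the nodes in the group
    have diverging mounts.
    """
    grouped = {}
    for node, df_output in mounts_by_node.items():
        # `df -h | awk '{print $1}'` output: one device per line
        key = tuple(sorted(set(df_output.split())))
        grouped.setdefault(key, []).append(node)
    return grouped
```

Grouping (rather than pairwise diffing) makes the failure report show which nodes agree with each other.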
diff --git a/test_set/cvp-sanity/tests/test_mtu.py b/test_set/cvp-sanity/tests/test_mtu.py
new file mode 100644
index 0000000..0a3d2d0
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_mtu.py
@@ -0,0 +1,72 @@
+import pytest
+import json
+import utils
+import os
+
+
+def test_mtu(local_salt_client, nodes_in_group):
+    testname = os.path.basename(__file__).split('.')[0]
+    config = utils.get_configuration()
+    skipped_ifaces = config.get(testname)["skipped_ifaces"] or \
+        ["bonding_masters", "lo", "veth", "tap", "cali", "qv", "qb", "br-int", "vxlan"]
+    total = {}
+    network_info = local_salt_client.cmd(
+        tgt="L@"+','.join(nodes_in_group),
+        param='ls /sys/class/net/',
+        expr_form='compound')
+
+    kvm_nodes = local_salt_client.test_ping(tgt='salt:control').keys()
+
+    if len(network_info.keys()) < 2:
+        pytest.skip("Nothing to compare - only 1 node")
+
+    for node, ifaces_info in network_info.iteritems():
+        if isinstance(ifaces_info, bool):
+            print ("{} node is skipped".format(node))
+            continue
+        if node in kvm_nodes:
+            kvm_info = local_salt_client.cmd(tgt=node,
+                                             param="virsh list | "
+                                                   "awk '{print $2}' | "
+                                                   "xargs -n1 virsh domiflist | "
+                                                   "grep -v br-pxe | grep br- | "
+                                                   "awk '{print $1}'")
+            ifaces_info = kvm_info.get(node)
+        node_ifaces = ifaces_info.split('\n')
+        ifaces = {}
+        for iface in node_ifaces:
+            for skipped_iface in skipped_ifaces:
+                if skipped_iface in iface:
+                    break
+            else:
+                iface_mtu = local_salt_client.cmd(tgt=node,
+                                                  param='cat /sys/class/'
+                                                        'net/{}/mtu'.format(iface))
+                ifaces[iface] = iface_mtu.get(node)
+        total[node] = ifaces
+
+    nodes = []
+    mtu_data = []
+    my_set = set()
+
+    for node in total:
+        nodes.append(node)
+        my_set.update(total[node].keys())
+    for interf in my_set:
+        diff = []
+        row = []
+        for node in nodes:
+            if interf in total[node].keys():
+                diff.append(total[node][interf])
+                row.append("{}: {}".format(node, total[node][interf]))
+            else:
+                # skip node with no virbr0 or virbr0-nic interfaces
+                if interf not in ['virbr0', 'virbr0-nic']:
+                    row.append("{}: No interface".format(node))
+        if diff.count(diff[0]) < len(nodes):
+            row.sort()
+            row.insert(0, interf)
+            mtu_data.append(row)
+    assert len(mtu_data) == 0, \
+        "Several problems found: {0}".format(
+        json.dumps(mtu_data, indent=4))
diff --git a/test_set/cvp-sanity/tests/test_nodes.py b/test_set/cvp-sanity/tests/test_nodes.py
new file mode 100644
index 0000000..687f3ae
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_nodes.py
@@ -0,0 +1,18 @@
+import json
+import pytest
+
+
+def test_minions_status(local_salt_client):
+    result = local_salt_client.cmd(
+        tgt='salt:master',
+        param='salt-run manage.status timeout=10 --out=json',
+        expr_form='pillar', check_status=True)
+    statuses = {}
+    try:
+        statuses = json.loads(result.values()[0])
+    except Exception as e:
+        pytest.fail(
+            "Could not check the result: {}\n"
+            "Nodes status result: {}".format(e, result))
+    assert not statuses["down"], "Some minions are down:\n {}".format(
+        statuses["down"])
diff --git a/test_set/cvp-sanity/tests/test_nodes_in_maas.py b/test_set/cvp-sanity/tests/test_nodes_in_maas.py
new file mode 100644
index 0000000..fafd150
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_nodes_in_maas.py
@@ -0,0 +1,69 @@
+import json
+import pytest
+import utils
+
+
+def get_maas_logged_in_profiles(local_salt_client):
+    get_apis = local_salt_client.cmd_any(
+        tgt='maas:cluster',
+        param='maas list')
+    return get_apis
+
+
+def login_to_maas(local_salt_client, user):
+    login = local_salt_client.cmd_any(
+        tgt='maas:cluster',
+        param="source /var/lib/maas/.maas_login.sh  ; echo {}=${{PROFILE}}"
+              "".format(user))
+    return login
+
+
+def test_nodes_deployed_in_maas(local_salt_client):
+    config = utils.get_configuration()
+
+    # 1. Check MAAS is present on some node
+    check_maas = local_salt_client.test_ping(tgt='maas:cluster')
+    if not check_maas:
+        pytest.skip("Could not find MAAS on the environment")
+
+    # 2. Get MAAS admin user from model
+    maas_admin_user = local_salt_client.pillar_get(
+        tgt='maas:cluster',
+        param='_param:maas_admin_username')
+    if not maas_admin_user:
+        pytest.skip("Could not find MAAS admin user in the model by parameter "
+                    "'maas_admin_username'")
+
+    # 3. Check maas has logged in profiles and try to log in if not
+    logged_profiles = get_maas_logged_in_profiles(local_salt_client)
+    if maas_admin_user not in logged_profiles:
+        login = login_to_maas(local_salt_client, maas_admin_user)
+        newly_logged = get_maas_logged_in_profiles(local_salt_client)
+        if maas_admin_user not in newly_logged:
+            pytest.skip(
+                "Could not find '{}' profile in MAAS and could not log in.\n"
+                "Current MAAS logged in profiles: {}.\nLogin output: {}"
+                "".format(maas_admin_user, newly_logged, login))
+
+    # 4. Get nodes in MAAS
+    get_nodes = local_salt_client.cmd(
+        tgt='maas:cluster',
+        param='maas {} nodes read'.format(maas_admin_user),
+        expr_form='pillar')
+    result = ""
+    try:
+        result = json.loads(get_nodes.values()[0])
+    except ValueError as e:
+        assert result, "Could not get nodes: {}\n{}". \
+            format(get_nodes, e)
+
+    # 5. Check all nodes are in Deployed status
+    failed_nodes = []
+    for node in result:
+        if node["fqdn"] in config.get("skipped_nodes"):
+            continue
+        if "status_name" in node.keys():
+            if node["status_name"] != 'Deployed':
+                failed_nodes.append({node["fqdn"]: node["status_name"]})
+    assert not failed_nodes, "Some nodes have unexpected status in MAAS:" \
+                             "\n{}".format(json.dumps(failed_nodes, indent=4))
diff --git a/test_set/cvp-sanity/tests/test_nova_services.py b/test_set/cvp-sanity/tests/test_nova_services.py
new file mode 100644
index 0000000..6505d30
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_nova_services.py
@@ -0,0 +1,39 @@
+import pytest
+
+
+@pytest.mark.usefixtures('check_openstack')
+def test_nova_services_status(local_salt_client):
+    result = local_salt_client.cmd_any(
+        tgt='keystone:server',
+        param='. /root/keystonercv3;'
+              'nova service-list | grep "down\|disabled" | grep -v "Forced down"')
+
+    assert result == '', \
+        '''Some Nova services are in the wrong state:
+        {}'''.format(result)
+
+
+@pytest.mark.usefixtures('check_openstack')
+def test_nova_hosts_consistent(local_salt_client):
+    all_cmp_services = local_salt_client.cmd_any(
+        tgt='keystone:server',
+        param='. /root/keystonercv3;'
+              'nova service-list | grep "nova-compute" | wc -l')
+    enabled_cmp_services = local_salt_client.cmd_any(
+        tgt='keystone:server',
+        param='. /root/keystonercv3;'
+              'nova service-list | grep "nova-compute" | grep "enabled" | wc -l')
+    hosts = local_salt_client.cmd_any(
+        tgt='keystone:server',
+        param='. /root/keystonercv3;'
+              'openstack host list | grep "compute" | wc -l')
+    hypervisors = local_salt_client.cmd_any(
+        tgt='keystone:server',
+        param='. /root/keystonercv3;'
+              'openstack hypervisor list | egrep -v "\-----|ID" | wc -l')
+
+    assert all_cmp_services == hypervisors, \
+        "Number of nova-compute services ({}) does not match number of " \
+        "hypervisors ({}).".format(all_cmp_services, hypervisors)
+    assert enabled_cmp_services == hosts, \
+        "Number of enabled nova-compute services ({}) does not match number \
+        of hosts ({}).".format(enabled_cmp_services, hosts)
diff --git a/test_set/cvp-sanity/tests/test_ntp_sync.py b/test_set/cvp-sanity/tests/test_ntp_sync.py
new file mode 100644
index 0000000..abf0d8a
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_ntp_sync.py
@@ -0,0 +1,62 @@
+import json
+import utils
+import pytest
+
+@pytest.mark.xfail
+def test_ntp_sync(local_salt_client):
+    """Test checks that system time is the same across all nodes"""
+
+    config = utils.get_configuration()
+    nodes_time = local_salt_client.cmd(
+        tgt='*',
+        param='date +%s',
+        expr_form='compound')
+    result = {}
+    for node, time in nodes_time.iteritems():
+        if isinstance(nodes_time[node], bool):
+            time = 'Cannot access node(-s)'
+        if node in (config.get("ntp_skipped_nodes") or []):
+            continue
+        if time in result:
+            result[time].append(node)
+            result[time].sort()
+        else:
+            result[time] = [node]
+    assert len(result) <= 1, 'Not all nodes have the same time:\n {}'.format(
+                             json.dumps(result, indent=4))
+
+
+def test_ntp_peers_state(local_salt_client):
+    """Test gets ntpq peers state and checks the system peer is declared"""
+    state = local_salt_client.cmd(
+        tgt='*',
+        param='ntpq -pn',
+        expr_form='compound')
+    final_result = {}
+    for node in state:
+        sys_peer_declared = False
+        if not state[node]:
+            # TODO: do not skip
+            print("Node {} is skipped".format(node))
+            continue
+        ntpq_output = state[node].split('\n')
+        # if the header line lacks 'remote', the 'ntpq -pn' command
+        # failed and peers cannot be checked
+        if 'remote' not in ntpq_output[0]:
+            final_result[node] = ntpq_output
+            continue
+
+        # take the 3rd line onwards (the actual peers); slicing cannot
+        # raise IndexError, an empty slice simply means no peers
+        peers = ntpq_output[2:]
+        for p in peers:
+            if p.split()[0].startswith("*"):
+                sys_peer_declared = True
+        if not sys_peer_declared:
+            final_result[node] = ntpq_output
+    assert not final_result,\
+        "NTP peers state is not expected on some nodes, could not find " \
+        "declared system peer:\n{}".format(json.dumps(final_result, indent=4))
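The system-peer detection above hinges on the tally code in `ntpq -pn` output. A standalone Python 3 sketch of that parsing (the suite itself is Python 2, and the sample output below is hypothetical):

```python
# Sketch of the peer parsing in test_ntp_peers_state: the header must
# contain 'remote', and a peer line whose tally code (first character)
# is '*' marks the declared system peer.
SAMPLE_NTPQ_OUTPUT = """\
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*10.10.0.1       192.168.0.10     3 u   52   64  377    0.391   -0.014   0.029
+10.10.0.2       192.168.0.11     3 u   33   64  377    0.529    0.121   0.041
"""


def has_declared_system_peer(ntpq_output):
    lines = ntpq_output.split('\n')
    # without a 'remote' header the command failed and cannot be parsed
    if 'remote' not in lines[0]:
        return False
    # peers start on the 3rd line; skip blank lines
    return any(line.split()[0].startswith('*')
               for line in lines[2:] if line.split())


print(has_declared_system_peer(SAMPLE_NTPQ_OUTPUT))  # True
```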
diff --git a/test_set/cvp-sanity/tests/test_oss.py b/test_set/cvp-sanity/tests/test_oss.py
new file mode 100644
index 0000000..9e919c5
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_oss.py
@@ -0,0 +1,41 @@
+import requests
+import csv
+import json
+
+
+def test_oss_status(local_salt_client, check_cicd):
+    """
+    # Get the HAProxy stats address from pillar:
+    #   salt -C "I@docker:swarm:role:master" pillar.get haproxy:proxy:listen:stats:binds:address
+    # Fetch "http://{address}:9600/haproxy?stats;csv"
+    # For each service in 'aptly', 'openldap', 'gerrit', 'jenkins',
+    # 'postgresql', 'pushkin', 'rundeck', 'elasticsearch' check that:
+    #   * FRONTEND rows have status OPEN
+    #   * all other rows have status UP
+    """
+    HAPROXY_STATS_IP = local_salt_client.pillar_get(
+        tgt='docker:swarm:role:master',
+        param='haproxy:proxy:listen:stats:binds:address')
+    proxies = {"http": None, "https": None}
+    csv_result = requests.get('http://{}:9600/haproxy?stats;csv'.format(
+                              HAPROXY_STATS_IP),
+                              proxies=proxies).content
+    data = csv_result.lstrip('# ')
+    wrong_data = []
+    list_of_services = ['aptly', 'openldap', 'gerrit', 'jenkins', 'postgresql',
+                        'pushkin', 'rundeck', 'elasticsearch']
+    for service in list_of_services:
+        check = local_salt_client.test_ping(tgt='{}:client'.format(service))
+        if check:
+            lines = [row for row in csv.DictReader(data.splitlines())
+                     if service in row['pxname']]
+            for row in lines:
+                info = "Service {0} with svname {1} and status {2}".format(
+                    row['pxname'], row['svname'], row['status'])
+                if row['svname'] == 'FRONTEND' and row['status'] != 'OPEN':
+                    wrong_data.append(info)
+                if row['svname'] != 'FRONTEND' and row['status'] != 'UP':
+                    wrong_data.append(info)
+
+    assert len(wrong_data) == 0, \
+        '''Some haproxy services are in wrong state
+              {}'''.format(json.dumps(wrong_data, indent=4))
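The FRONTEND/backend status rules in `test_oss_status` can be exercised standalone. A Python 3 sketch with a hypothetical excerpt of the haproxy stats CSV (the real page starts with `# pxname,...`, which the test strips with `lstrip('# ')`):

```python
import csv

# Hypothetical excerpt of haproxy's ?stats;csv page, header already stripped.
SAMPLE = """\
pxname,svname,status
jenkins,FRONTEND,OPEN
jenkins,jenkins_server1,UP
gerrit,FRONTEND,OPEN
gerrit,gerrit_server1,DOWN
"""


def failed_rows(data, service):
    # FRONTEND rows must be OPEN, every other row must be UP
    wrong = []
    for row in csv.DictReader(data.splitlines()):
        if service not in row['pxname']:
            continue
        expected = 'OPEN' if row['svname'] == 'FRONTEND' else 'UP'
        if row['status'] != expected:
            wrong.append('{pxname}/{svname}: {status}'.format(**row))
    return wrong


print(failed_rows(SAMPLE, 'jenkins'))  # []
print(failed_rows(SAMPLE, 'gerrit'))   # ['gerrit/gerrit_server1: DOWN']
```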
diff --git a/test_set/cvp-sanity/tests/test_packet_checker.py b/test_set/cvp-sanity/tests/test_packet_checker.py
new file mode 100644
index 0000000..6c1ccc9
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_packet_checker.py
@@ -0,0 +1,118 @@
+import pytest
+import json
+import utils
+
+
+def test_check_package_versions(local_salt_client, nodes_in_group):
+    exclude_packages = utils.get_configuration().get("skipped_packages", [])
+    packages_versions = local_salt_client.cmd(tgt="L@"+','.join(nodes_in_group),
+                                              fun='lowpkg.list_pkgs',
+                                              expr_form='compound')
+    # Let's exclude cid01 and dbs01 nodes from this check
+    exclude_nodes = local_salt_client.test_ping(tgt="I@galera:master or I@gerrit:client",
+                                                expr_form='compound').keys()
+    total_nodes = [i for i in packages_versions.keys() if i not in exclude_nodes]
+    if len(total_nodes) < 2:
+        pytest.skip("Nothing to compare - only 1 node")
+
+    nodes = []
+    pkts_data = []
+    packages_names = set()
+
+    for node in total_nodes:
+        if not packages_versions[node]:
+            # TODO: do not skip node
+            print("Node {} is skipped".format(node))
+            continue
+        nodes.append(node)
+        packages_names.update(packages_versions[node].keys())
+
+    for deb in packages_names:
+        if deb in exclude_packages:
+            continue
+        diff = []
+        row = []
+        for node in nodes:
+            if not packages_versions[node]:
+                continue
+            if deb in packages_versions[node].keys():
+                diff.append(packages_versions[node][deb])
+                row.append("{}: {}".format(node, packages_versions[node][deb]))
+            else:
+                row.append("{}: No package".format(node))
+        if diff.count(diff[0]) < len(nodes):
+            row.sort()
+            row.insert(0, deb)
+            pkts_data.append(row)
+    assert len(pkts_data) == 0, \
+        "Several problems found: {0}".format(
+            json.dumps(pkts_data, indent=4))
+
+
+def test_packages_are_latest(local_salt_client, nodes_in_group):
+    config = utils.get_configuration()
+    if config.get("test_packages", {}).get("skip_test"):
+        pytest.skip("Test for the latest packages is disabled")
+    skipped_pkg = config.get("test_packages", {}).get("skipped_packages", [])
+    info_salt = local_salt_client.cmd(
+        tgt='L@' + ','.join(nodes_in_group),
+        param='apt list --upgradable 2>/dev/null | grep -v Listing',
+        expr_form='compound')
+    for node in nodes_in_group:
+        result = []
+        if info_salt.get(node):
+            upg_list = info_salt[node].split('\n')
+            for i in upg_list:
+                if i.split('/')[0] not in skipped_pkg:
+                    result.append(i)
+        assert not result, "Please check not latest packages at {}:\n{}".format(
+            node, "\n".join(result))
+
+
+def test_check_module_versions(local_salt_client, nodes_in_group):
+    exclude_modules = utils.get_configuration().get("skipped_modules", [])
+    pre_check = local_salt_client.cmd(
+        tgt="L@"+','.join(nodes_in_group),
+        param='dpkg -l | grep "python-pip "',
+        expr_form='compound')
+    if pre_check.values().count('') > 0:
+        pytest.skip("pip is not installed on one or more nodes")
+
+    exclude_nodes = local_salt_client.test_ping(tgt="I@galera:master or I@gerrit:client",
+                                                expr_form='compound').keys()
+    total_nodes = [i for i in pre_check.keys() if i not in exclude_nodes]
+
+    if len(total_nodes) < 2:
+        pytest.skip("Nothing to compare - only 1 node")
+    list_of_pip_packages = local_salt_client.cmd(tgt="L@"+','.join(nodes_in_group),
+                                   param='pip.freeze', expr_form='compound')
+
+    nodes = []
+
+    pkts_data = []
+    packages_names = set()
+
+    for node in total_nodes:
+        nodes.append(node)
+        packages_names.update([x.split("=")[0] for x in list_of_pip_packages[node]])
+        list_of_pip_packages[node] = dict([x.split("==") for x in list_of_pip_packages[node]])
+
+    for deb in packages_names:
+        if deb in exclude_modules:
+            continue
+        diff = []
+        row = []
+        for node in nodes:
+            if deb in list_of_pip_packages[node].keys():
+                diff.append(list_of_pip_packages[node][deb])
+                row.append("{}: {}".format(node, list_of_pip_packages[node][deb]))
+            else:
+                row.append("{}: No module".format(node))
+        if diff.count(diff[0]) < len(nodes):
+            row.sort()
+            row.insert(0, deb)
+            pkts_data.append(row)
+    assert len(pkts_data) == 0, \
+        "Several problems found: {0}".format(
+            json.dumps(pkts_data, indent=4))
diff --git a/test_set/cvp-sanity/tests/test_rabbit_cluster.py b/test_set/cvp-sanity/tests/test_rabbit_cluster.py
new file mode 100644
index 0000000..73efb57
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_rabbit_cluster.py
@@ -0,0 +1,47 @@
+import utils
+
+
+def test_checking_rabbitmq_cluster(local_salt_client):
+    # read the test configuration (used below to skip nodes)
+    config = utils.get_configuration()
+    # request pillar data from rmq nodes
+    # TODO: pillar.get
+    rabbitmq_pillar_data = local_salt_client.cmd(
+        tgt='rabbitmq:server',
+        fun='pillar.get',
+        param='rabbitmq:cluster',
+        expr_form='pillar')
+    # creating dictionary {node:cluster_size_for_the_node}
+    # with required cluster size for each node
+    control_dict = {}
+    required_cluster_size_dict = {}
+    # request actual data from rmq nodes
+    rabbit_actual_data = local_salt_client.cmd(
+        tgt='rabbitmq:server',
+        param='rabbitmqctl cluster_status', expr_form='pillar')
+    for node in rabbitmq_pillar_data:
+        if node in (config.get('skipped_nodes') or []):
+            del rabbit_actual_data[node]
+            continue
+        cluster_size_from_the_node = len(
+            rabbitmq_pillar_data[node]['members'])
+        required_cluster_size_dict.update({node: cluster_size_from_the_node})
+
+    # find actual cluster size for each node
+    for node in rabbit_actual_data:
+        running_nodes_count = 0
+        # rabbitmqctl cluster_status output contains
+        # 3 * # of nodes 'rabbit@' entries + 1
+        running_nodes_count = (rabbit_actual_data[node].count('rabbit@') - 1)/3
+        # update control dictionary with values
+        # {node:actual_cluster_size_for_node}
+        if required_cluster_size_dict[node] != running_nodes_count:
+            control_dict.update({node: running_nodes_count})
+
+    assert not len(control_dict), \
+        "Inconsistency found within cloud. RabbitMQ cluster is probably " \
+        "broken. Expected cluster sizes per node: {}, but the following " \
+        "nodes report other values: {}".format(
+            required_cluster_size_dict, control_dict)
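The counting heuristic in the test (`3 * N` occurrences of `rabbit@` plus one, so `N = (count - 1) / 3`) can be demonstrated on sample output. A Python 3 sketch; the `rabbitmqctl cluster_status` text below is a hypothetical healthy 3-node cluster:

```python
# Hypothetical `rabbitmqctl cluster_status` output: each node name appears
# once in nodes, once in running_nodes, once in alarms, plus one occurrence
# in the "Cluster status of node ..." line, i.e. 3 * N + 1 in total.
SAMPLE_STATUS = """\
Cluster status of node rabbit@msg01
[{nodes,[{disc,[rabbit@msg01,rabbit@msg02,rabbit@msg03]}]},
 {running_nodes,[rabbit@msg03,rabbit@msg02,rabbit@msg01]},
 {cluster_name,<<"openstack">>},
 {partitions,[]},
 {alarms,[{rabbit@msg03,[]},{rabbit@msg02,[]},{rabbit@msg01,[]}]}]
"""


def running_nodes_count(cluster_status):
    # floor division keeps the estimate stable if an extra
    # 'rabbit@' sneaks into the cluster name
    return (cluster_status.count('rabbit@') - 1) // 3


print(running_nodes_count(SAMPLE_STATUS))  # 3
```

Note the heuristic is fragile by design: a cluster name containing `rabbit@` would skew the count, which is why the assertion message reports both dictionaries.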
diff --git a/test_set/cvp-sanity/tests/test_repo_list.py b/test_set/cvp-sanity/tests/test_repo_list.py
new file mode 100644
index 0000000..5e70eeb
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_repo_list.py
@@ -0,0 +1,59 @@
+def test_list_of_repo_on_nodes(local_salt_client, nodes_in_group):
+    # TODO: pillar.get
+    info_salt = local_salt_client.cmd(tgt='L@' + ','.join(
+                                              nodes_in_group),
+                                      fun='pillar.get',
+                                      param='linux:system:repo',
+                                      expr_form='compound')
+
+    # check if some repos are disabled
+    for node in info_salt.keys():
+        repos = info_salt[node]
+        if not info_salt[node]:
+            # TODO: do not skip node
+            print("Node {} is skipped".format(node))
+            continue
+        for repo in repos.keys():
+            repository = repos[repo]
+            if "enabled" in repository:
+                if not repository["enabled"]:
+                    repos.pop(repo)
+
+    raw_actual_info = local_salt_client.cmd(
+        tgt='L@' + ','.join(
+            nodes_in_group),
+        param='cat /etc/apt/sources.list.d/*;'
+              'cat /etc/apt/sources.list|grep deb|grep -v "#"',
+        expr_form='compound', check_status=True)
+    actual_repo_list = [item.replace('/ ', ' ').replace('[arch=amd64] ', '')
+                        for item in raw_actual_info.values()[0].split('\n')]
+    if info_salt.values()[0] == '':
+        expected_salt_data = ''
+    else:
+        expected_salt_data = [repo['source'].replace('/ ', ' ')
+                                            .replace('[arch=amd64] ', '')
+                              for repo in info_salt.values()[0].values()
+                              if 'source' in repo.keys()]
+
+    import json
+
+    diff = {}
+    my_set = set()
+    fail_counter = 0
+    my_set.update(actual_repo_list)
+    my_set.update(expected_salt_data)
+    for repo in my_set:
+        rows = []
+        if repo not in actual_repo_list:
+            rows.append("{}: {}".format("pillars", "+"))
+            rows.append("{}: No repo".format('config'))
+            diff[repo] = rows
+            fail_counter += 1
+        elif repo not in expected_salt_data:
+            rows.append("{}: {}".format("config", "+"))
+            rows.append("{}: No repo".format('pillars'))
+            diff[repo] = rows
+    assert fail_counter == 0, \
+        "Several problems found: {0}".format(
+            json.dumps(diff, indent=4))
+    if fail_counter == 0 and len(diff) > 0:
+        print("\nWarning: nodes contain more repos than defined in reclass")
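The comparison above only works because both sides are normalized first (trailing slash and the `[arch=amd64]` option are stripped). A Python 3 sketch of that normalization on a hypothetical pillar `source` line and its apt counterpart:

```python
# Normalization used before comparing pillar repos with apt sources:
# drop a trailing slash on the URL and the "[arch=amd64]" option.
def normalize(repo_line):
    return repo_line.replace('/ ', ' ').replace('[arch=amd64] ', '')


# Hypothetical repo definitions that differ only in formatting.
pillar_source = 'deb [arch=amd64] http://mirror.mirantis.com/update/xenial/ xenial main'
apt_line = 'deb http://mirror.mirantis.com/update/xenial xenial main'

print(normalize(pillar_source) == normalize(apt_line))  # True
```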
diff --git a/test_set/cvp-sanity/tests/test_salt_master.py b/test_set/cvp-sanity/tests/test_salt_master.py
new file mode 100644
index 0000000..7ae5754
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_salt_master.py
@@ -0,0 +1,18 @@
+def test_uncommited_changes(local_salt_client):
+    git_status = local_salt_client.cmd(
+        tgt='salt:master',
+        param='cd /srv/salt/reclass/classes/cluster/; git status',
+        expr_form='pillar')
+    assert 'nothing to commit' in git_status.values()[0], \
+        'Git status shows some uncommitted changes: {}'.format(
+            git_status.values()[0])
+
+
+def test_reclass_smoke(local_salt_client):
+    reclass = local_salt_client.cmd(
+        tgt='salt:master',
+        param='reclass-salt --top; echo $?',
+        expr_form='pillar')
+    result = reclass[reclass.keys()[0]][-1]
+
+    assert result == '0', 'Reclass is broken' \
+                          '\n {}'.format(reclass)
diff --git a/test_set/cvp-sanity/tests/test_services.py b/test_set/cvp-sanity/tests/test_services.py
new file mode 100644
index 0000000..c704437
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_services.py
@@ -0,0 +1,122 @@
+import pytest
+import json
+import os
+import utils
+
+# Some nodes can have services that are not applicable to other nodes in the
+# same group. For example, there are 3 nodes in the kvm group, but only the
+# kvm03 node has the srv-volumes-backup.mount service in service.get_all.
+#                      NODE NAME          SERVICE_NAME
+inconsistency_rule = {"kvm03": ["srv-volumes-backup.mount", "rsync"]}
+
+
+def test_check_services(local_salt_client, nodes_in_group):
+    """
+    Check that services are consistent across all nodes in the group.
+    Services listed in inconsistency_rule are skipped here and
+    covered by a separate test case.
+    """
+    exclude_services = utils.get_configuration().get("skipped_services", [])
+    services_by_nodes = local_salt_client.cmd(tgt="L@"+','.join(nodes_in_group),
+                                              fun='service.get_all',
+                                              expr_form='compound')
+
+    if len(services_by_nodes.keys()) < 2:
+        pytest.skip("Nothing to compare - only 1 node")
+
+    nodes = []
+    pkts_data = []
+    all_services = set()
+
+    for node in services_by_nodes:
+        if not services_by_nodes[node]:
+            # TODO: do not skip node
+            print("Node {} is skipped".format(node))
+            continue
+        nodes.append(node)
+        all_services.update(services_by_nodes[node])
+
+    for srv in all_services:
+        if srv in exclude_services:
+            continue
+        service_existence = dict()
+        for node in nodes:
+            short_name_of_node = node.split('.')[0]
+            if inconsistency_rule.get(short_name_of_node) is not None and srv in inconsistency_rule[short_name_of_node]:
+                # Skip the checking of some service on the specific node
+                break
+            elif srv in services_by_nodes[node]:
+                # Found service on node
+                service_existence[node] = "+"
+            else:
+                # Not found expected service on node
+                service_existence[node] = "No service"
+        if len(set(service_existence.values())) > 1:
+            report = ["{node}: {status}".format(node=node, status=status) for node, status in service_existence.items()]
+            report.sort()
+            report.insert(0, srv)
+            pkts_data.append(report)
+    assert len(pkts_data) == 0, \
+        "Several problems found: {0}".format(
+        json.dumps(pkts_data, indent=4))
+
+
+# TODO: rework this test to make it workable: https://mirantis.jira.com/browse/PROD-25958
+
+# def _check_services_on_special_node(local_salt_client, nodes_in_group):
+#     """
+#     Check that specific node has service.
+#     Nodes and proper services should be defined in inconsistency_rule dictionary
+#
+#     :print: Table with nodes which don't have required services and not existed services
+#     """
+#
+#     output = local_salt_client.cmd("L@" + ','.join(nodes_in_group), 'service.get_all', expr_form='compound')
+#     if len(output.keys()) < 2:
+#         pytest.skip("Nothing to compare - just 1 node")
+#
+#     def is_proper_service_for_node(_service, _node):
+#         """
+#         Return True if the service exists both on the node and in inconsistency_rule
+#         Return True if the service exists neither on the node nor in inconsistency_rule
+#         Return False otherwise
+#         :param _service: string
+#         :param _node: string full name of node
+#         :return: bool, read description for further details
+#         """
+#         short_name_of_node = _node.split('.')[0]
+#         if short_name_of_node not in inconsistency_rule.keys():
+#             return False
+#
+#         if _service in inconsistency_rule[short_name_of_node] and \
+#                 _service in output[_node]:
+#             # Return True if service exists on node and exists in inconsistency_rule
+#             return True
+#
+#         if _service not in inconsistency_rule[short_name_of_node] and \
+#                 _service not in output[_node]:
+#             # Return True if the service exists neither on the node nor in inconsistency_rule
+#             return True
+#         print("return False for {} in {}".format(_service, _node))
+#         # error_text = ""
+#         return False
+#
+#     errors = list()
+#     for node, expected_services in inconsistency_rule.items():
+#         print("Check {} , {} ".format(node, expected_services))
+#         # Skip if there is no proper node. Find nodes that contains node_title (like 'kvm03') in their titles
+#         if not any([node in node_name for node_name in output.keys()]):
+#             continue
+#         for expected_service in expected_services:
+#             service_on_nodes = {_node: expected_service if expected_service in _service else None
+#                                 for _node, _service
+#                                 in output.items()}
+#             print([is_proper_service_for_node(expected_service, _node)
+#                   for _node
+#                   in output.keys()])
+#             if not all([is_proper_service_for_node(expected_service, _node)
+#                         for _node
+#                         in output.keys()]):
+#                 errors.append(service_on_nodes)
+#
+#     assert errors.__len__() == 0, json.dumps(errors, indent=4)
+#     assert False
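The interplay between the group-wide consistency check and the per-node exceptions in `inconsistency_rule` is easy to misread; a standalone Python 3 sketch with hypothetical node and service names:

```python
# Rule format as in the test: {short_node_name: [service, ...]}.
inconsistency_rule = {'kvm03': ['srv-volumes-backup.mount', 'rsync']}

# Hypothetical service.get_all results per node.
services_by_nodes = {
    'kvm01.domain.local': {'sshd', 'cron'},
    'kvm02.domain.local': {'sshd', 'cron'},
    'kvm03.domain.local': {'sshd', 'cron', 'rsync'},
}


def inconsistent_services(services_by_nodes, rule):
    all_services = set().union(*services_by_nodes.values())
    problems = []
    for srv in sorted(all_services):
        statuses = {}
        skip = False
        for node, services in services_by_nodes.items():
            short = node.split('.')[0]
            if srv in rule.get(short, []):
                # service is allowed to exist on this node only
                skip = True
                break
            statuses[node] = '+' if srv in services else 'No service'
        if not skip and len(set(statuses.values())) > 1:
            problems.append(srv)
    return problems


print(inconsistent_services(services_by_nodes, inconsistency_rule))  # []
```

Without the rule, `rsync` would be flagged because it exists on kvm03 only; with it, the service is skipped for the whole group, exactly as the `break` in the test does.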
diff --git a/test_set/cvp-sanity/tests/test_single_vip.py b/test_set/cvp-sanity/tests/test_single_vip.py
new file mode 100644
index 0000000..7a1c2f8
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_single_vip.py
@@ -0,0 +1,26 @@
+import utils
+import json
+
+
+def test_single_vip_exists(local_salt_client):
+    """Test checks that there is only one VIP address
+       within one group of nodes (where applicable).
+       Steps:
+       1. Get IP addresses for nodes via salt cmd.run 'ip a | grep /32'
+       2. Check that exactly one node in the group reports a /32 address.
+    """
+    groups = utils.calculate_groups()
+    no_vip = {}
+    for group in groups:
+        if group in ['cmp', 'cfg', 'kvm', 'cmn', 'osd', 'gtw']:
+            continue
+        nodes_list = local_salt_client.cmd(
+            tgt="L@" + ','.join(groups[group]),
+            fun='cmd.run',
+            param='ip a | grep /32',
+            expr_form='compound')
+        result = [x for x in nodes_list.values() if x]
+        if len(result) != 1:
+            if len(result) == 0:
+                no_vip[group] = 'No vip found'
+            else:
+                no_vip[group] = nodes_list
+    assert len(no_vip) < 1, "Some groups of nodes have a problem with " \
+           "the VIP:\n{}".format(json.dumps(no_vip, indent=4))
diff --git a/test_set/cvp-sanity/tests/test_stacklight.py b/test_set/cvp-sanity/tests/test_stacklight.py
new file mode 100644
index 0000000..703deea
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_stacklight.py
@@ -0,0 +1,188 @@
+import json
+import requests
+import datetime
+import pytest
+
+
+@pytest.mark.usefixtures('check_kibana')
+def test_elasticsearch_cluster(local_salt_client):
+    salt_output = local_salt_client.pillar_get(
+        tgt='kibana:server',
+        param='_param:haproxy_elasticsearch_bind_host')
+
+    proxies = {"http": None, "https": None}
+    IP = salt_output
+    assert requests.get('http://{}:9200/'.format(IP),
+                        proxies=proxies).status_code == 200, \
+        'Cannot check elasticsearch url on {}.'.format(IP)
+    resp = requests.get('http://{}:9200/_cat/health'.format(IP),
+                        proxies=proxies).content
+    health = resp.split()
+    assert health[3] == 'green', \
+        'elasticsearch cluster status is not green: {}'.format(resp)
+    assert health[4] == '3', \
+        'elasticsearch has unexpected total number of nodes: {}'.format(resp)
+    assert health[5] == '3', \
+        'elasticsearch has unexpected number of data nodes: {}'.format(resp)
+    assert health[10] == '0', \
+        'elasticsearch has unassigned shards: {}'.format(resp)
+    assert health[13] == '100.0%', \
+        'elasticsearch has less than 100% active shards: {}'.format(resp)
+
+
+@pytest.mark.usefixtures('check_kibana')
+def test_kibana_status(local_salt_client):
+    proxies = {"http": None, "https": None}
+    IP = local_salt_client.pillar_get(param='_param:stacklight_log_address')
+    resp = requests.get('http://{}:5601/api/status'.format(IP),
+                        proxies=proxies).content
+    body = json.loads(resp)
+    assert body['status']['overall']['state'] == "green", \
+        "Kibana status is not expected: {}".format(
+        body['status']['overall'])
+    for i in body['status']['statuses']:
+        assert i['state'] == "green", \
+            "Kibana statuses are unexpected: {}".format(i)
+
+
+@pytest.mark.usefixtures('check_kibana')
+def test_elasticsearch_node_count(local_salt_client):
+    now = datetime.datetime.now()
+    today = now.strftime("%Y.%m.%d")
+    salt_output = local_salt_client.pillar_get(
+        tgt='kibana:server',
+        param='_param:haproxy_elasticsearch_bind_host')
+
+    IP = salt_output
+    headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}
+    proxies = {"http": None, "https": None}
+    data = ('{"size": 0, "aggs": '
+            '{"uniq_hostname": '
+            '{"terms": {"size": 1000, '
+            '"field": "Hostname.keyword"}}}}')
+    response = requests.post(
+        'http://{0}:9200/log-{1}/_search?pretty'.format(IP, today),
+        proxies=proxies,
+        headers=headers,
+        data=data)
+    assert 200 == response.status_code, 'Unexpected code {}'.format(
+        response.text)
+    resp = json.loads(response.text)
+    cluster_domain = local_salt_client.pillar_get(param='_param:cluster_domain')
+    monitored_nodes = []
+    for item_ in resp['aggregations']['uniq_hostname']['buckets']:
+        node_name = item_['key']
+        monitored_nodes.append(node_name + '.' + cluster_domain)
+    missing_nodes = []
+    all_nodes = local_salt_client.test_ping(tgt='*').keys()
+    for node in all_nodes:
+        if node not in monitored_nodes:
+            missing_nodes.append(node)
+    assert len(missing_nodes) == 0, \
+        'Not all nodes are in Elasticsearch. Found {0} keys, ' \
+        'expected {1}. Missing nodes: \n{2}'. \
+            format(len(monitored_nodes), len(all_nodes), missing_nodes)
+
+
+def test_stacklight_services_replicas(local_salt_client):
+    # TODO
+    # change to docker:swarm:role:master ?
+    salt_output = local_salt_client.cmd(
+        tgt='I@docker:client:stack:monitoring and I@prometheus:server',
+        param='docker service ls',
+        expr_form='compound')
+
+    if not salt_output:
+        pytest.skip("docker:client:stack:monitoring or \
+        prometheus:server pillars are not found on this environment.")
+
+    wrong_items = []
+    for line in salt_output[salt_output.keys()[0]].split('\n'):
+        if 'replicated' not in line:
+            continue
+        # the REPLICAS column looks like "3/3": actual/expected
+        actual, expected = [w for w in line.split()
+                            if '/' in w][0].split('/')[:2]
+        if actual != expected:
+            wrong_items.append(line)
+    assert len(wrong_items) == 0, \
+        '''Some monitoring services don't have the expected number of replicas:
+              {}'''.format(json.dumps(wrong_items, indent=4))
+
+
+@pytest.mark.usefixtures('check_prometheus')
+def test_prometheus_alert_count(local_salt_client, ctl_nodes_pillar):
+    IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
+    # keystone:server can return 3 nodes instead of 1
+    # this will be fixed later
+    # TODO
+    nodes_info = local_salt_client.cmd(
+        tgt=ctl_nodes_pillar,
+        param='curl -s http://{}:15010/alerts | grep icon-chevron-down | '
+              'grep -v "0 active"'.format(IP),
+        expr_form='pillar')
+
+    result = nodes_info[nodes_info.keys()[0]].replace('</td>', '').replace(
+        '<td><i class="icon-chevron-down"></i> <b>', '').replace('</b>', '')
+    assert result == '', 'AlertManager page has some alerts! {}'.format(
+        json.dumps(result, indent=4))
+
+
+def test_stacklight_containers_status(local_salt_client):
+    salt_output = local_salt_client.cmd(
+        tgt='I@docker:swarm:role:master and I@prometheus:server',
+        param='docker service ps $(docker stack services -q monitoring)',
+        expr_form='compound')
+
+    if not salt_output:
+        pytest.skip("docker:swarm:role:master or prometheus:server \
+        pillars are not found on this environment.")
+
+    result = {}
+    # for old reclass models, docker:swarm:role:master can return
+    # 2 nodes instead of one. Here is temporary fix.
+    # TODO
+    if len(salt_output.keys()) > 1:
+        if 'CURRENT STATE' not in salt_output[salt_output.keys()[0]]:
+            del salt_output[salt_output.keys()[0]]
+    for line in salt_output[salt_output.keys()[0]].split('\n')[1:]:
+        if not line.split():
+            continue
+        shift = 0
+        if line.split()[1] == '\\_':
+            shift = 1
+        if line.split()[1 + shift] not in result.keys():
+            result[line.split()[1 + shift]] = 'NOT OK'
+        if line.split()[4 + shift] == 'Running' \
+           or line.split()[4 + shift] == 'Ready':
+            result[line.split()[1 + shift]] = 'OK'
+    assert 'NOT OK' not in result.values(), \
+        '''Some containers are in incorrect state:
+              {}'''.format(json.dumps(result, indent=4))
+
+
+def test_running_telegraf_services(local_salt_client):
+    salt_output = local_salt_client.cmd(tgt='telegraf:agent',
+                                        fun='service.status',
+                                        param='telegraf',
+                                        expr_form='pillar',)
+
+    if not salt_output:
+        pytest.skip("Telegraf or telegraf:agent \
+        pillar are not found on this environment.")
+
+    result = [{node: status} for node, status
+              in salt_output.items()
+              if status is False]
+    assert result == [], 'Telegraf service is not running ' \
+                         'on following nodes: {}'.format(result)
+
+
+def test_running_fluentd_services(local_salt_client):
+    salt_output = local_salt_client.cmd(tgt='fluentd:agent',
+                                        fun='service.status',
+                                        param='td-agent',
+                                        expr_form='pillar')
+    result = [{node: status} for node, status
+              in salt_output.items()
+              if status is False]
+    assert result == [], 'Fluentd check failed: td-agent service is not ' \
+                         'running on following nodes: {}'.format(result)
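The elasticsearch checks in this file index into `GET /_cat/health` output by column position; a Python 3 sketch naming those positions, using a hypothetical healthy response:

```python
# Hypothetical one-line `_cat/health` response (headerless format):
# epoch timestamp cluster status node.total node.data shards pri relo init
# unassign pending_tasks max_task_wait_time active_shards_percent
SAMPLE_HEALTH = ('1556000000 10:13:20 elasticsearch green '
                 '3 3 30 15 0 0 0 0 - 100.0%')

fields = SAMPLE_HEALTH.split()
# column 3: status, 4: node.total, 5: node.data,
# column 10: unassigned shards, 13: active_shards_percent
status, node_total, node_data = fields[3], fields[4], fields[5]
unassigned, active_pct = fields[10], fields[13]
print(status, node_total, node_data, unassigned, active_pct)
# green 3 3 0 100.0%
```

Positional parsing keeps the test simple but ties it to the `_cat/health` column order; `?h=status,node.total,...` would pin the columns explicitly.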
diff --git a/test_set/cvp-sanity/tests/test_ui_addresses.py b/test_set/cvp-sanity/tests/test_ui_addresses.py
new file mode 100644
index 0000000..0c65451
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_ui_addresses.py
@@ -0,0 +1,218 @@
+import pytest
+
+
+@pytest.mark.usefixtures('check_openstack')
+def test_ui_horizon(local_salt_client, ctl_nodes_pillar):
+    IP = local_salt_client.pillar_get(
+        tgt='horizon:server',
+        param='_param:cluster_public_host')
+    if not IP:
+        pytest.skip("Horizon is not enabled on this environment")
+    result = local_salt_client.cmd_any(
+        tgt=ctl_nodes_pillar,
+        param='curl --insecure https://{}/auth/login/ 2>&1 | \
+               grep Login'.format(IP),
+        expr_form='pillar')
+    assert len(result) != 0, \
+        'Horizon login page is not reachable on {} from ctl nodes'.format(IP)
+
+
+@pytest.mark.usefixtures('check_openstack')
+def test_public_openstack(local_salt_client, ctl_nodes_pillar):
+    IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
+    protocol = 'https'
+    port = '5000'
+    url = "{}://{}:{}/v3".format(protocol, IP, port)
+    result = local_salt_client.cmd(
+        tgt=ctl_nodes_pillar,
+        param='curl -k {}/ 2>&1 | \
+               grep stable'.format(url),
+        expr_form='pillar')
+    assert len(result[result.keys()[0]]) != 0, \
+        'Public OpenStack url is not reachable on {} from ctl nodes'.format(url)
+
+
+@pytest.mark.usefixtures('check_kibana')
+def test_internal_ui_kibana(local_salt_client, ctl_nodes_pillar):
+    IP = local_salt_client.pillar_get(param='_param:stacklight_log_address')
+    protocol = 'http'
+    port = '5601'
+    url = "{}://{}:{}".format(protocol, IP, port)
+    result = local_salt_client.cmd(
+        tgt=ctl_nodes_pillar,
+        param='curl {}/app/kibana 2>&1 | \
+               grep loading'.format(url),
+        expr_form='pillar')
+    assert len(result[result.keys()[0]]) != 0, \
+        'Internal Kibana login page is not reachable on {} ' \
+        'from ctl nodes'.format(url)
+
+
+@pytest.mark.usefixtures('check_kibana')
+def test_public_ui_kibana(local_salt_client, ctl_nodes_pillar):
+    IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
+    protocol = 'https'
+    port = '5601'
+    url = "{}://{}:{}".format(protocol, IP, port)
+    result = local_salt_client.cmd(
+        tgt=ctl_nodes_pillar,
+        param='curl {}/app/kibana 2>&1 | \
+               grep loading'.format(url),
+        expr_form='pillar')
+    assert len(result[result.keys()[0]]) != 0, \
+        'Public Kibana login page is not reachable on {} ' \
+        'from ctl nodes'.format(url)
+
+
+@pytest.mark.usefixtures('check_prometheus')
+def test_internal_ui_prometheus(local_salt_client, ctl_nodes_pillar):
+    IP = local_salt_client.pillar_get(param='_param:stacklight_monitor_address')
+    protocol = 'http'
+    port = '15010'
+    url = "{}://{}:{}".format(protocol, IP, port)
+    result = local_salt_client.cmd(
+        tgt=ctl_nodes_pillar,
+        param='curl {}/graph 2>&1 | \
+               grep Prometheus'.format(url),
+        expr_form='pillar')
+    assert len(result[result.keys()[0]]) != 0, \
+        'Internal Prometheus page is not reachable on {} ' \
+        'from ctl nodes'.format(url)
+
+
+@pytest.mark.usefixtures('check_prometheus')
+def test_public_ui_prometheus(local_salt_client, ctl_nodes_pillar):
+    IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
+    protocol = 'https'
+    port = '15010'
+    url = "{}://{}:{}".format(protocol, IP, port)
+    result = local_salt_client.cmd(
+        tgt=ctl_nodes_pillar,
+        param='curl {}/graph 2>&1 | \
+               grep Prometheus'.format(url),
+        expr_form='pillar')
+    assert len(result[result.keys()[0]]) != 0, \
+        'Public Prometheus page is not reachable on {} ' \
+        'from ctl nodes'.format(url)
+
+
+@pytest.mark.usefixtures('check_prometheus')
+def test_internal_ui_alert_manager(local_salt_client, ctl_nodes_pillar):
+    IP = local_salt_client.pillar_get(param='_param:stacklight_monitor_address')
+    protocol = 'http'
+    port = '15011'
+    url = "{}://{}:{}".format(protocol, IP, port)
+    result = local_salt_client.cmd(
+        tgt=ctl_nodes_pillar,
+        param='curl -s {}/ | grep Alertmanager'.format(url),
+        expr_form='pillar')
+    assert len(result[result.keys()[0]]) != 0, \
+        'Internal AlertManager page is not reachable on {} ' \
+        'from ctl nodes'.format(url)
+
+
+@pytest.mark.usefixtures('check_prometheus')
+def test_public_ui_alert_manager(local_salt_client, ctl_nodes_pillar):
+    IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
+    protocol = 'https'
+    port = '15011'
+    url = "{}://{}:{}".format(protocol, IP, port)
+    result = local_salt_client.cmd(
+        tgt=ctl_nodes_pillar,
+        param='curl -s {}/ | grep Alertmanager'.format(url),
+        expr_form='pillar')
+    assert len(result[result.keys()[0]]) != 0, \
+        'Public AlertManager page is not reachable on {} ' \
+        'from ctl nodes'.format(url)
+
+
+@pytest.mark.usefixtures('check_grafana')
+def test_internal_ui_grafana(local_salt_client, ctl_nodes_pillar):
+    IP = local_salt_client.pillar_get(param='_param:stacklight_monitor_address')
+    protocol = 'http'
+    port = '15013'
+    url = "{}://{}:{}".format(protocol, IP, port)
+    result = local_salt_client.cmd(
+        tgt=ctl_nodes_pillar,
+        param='curl {}/login 2>&1 | grep Grafana'.format(url),
+        expr_form='pillar')
+    assert len(result[result.keys()[0]]) != 0, \
+        'Internal Grafana page is not reachable on {} ' \
+        'from ctl nodes'.format(url)
+
+
+@pytest.mark.usefixtures('check_grafana')
+def test_public_ui_grafana(local_salt_client, ctl_nodes_pillar):
+    IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
+    protocol = 'https'
+    port = '8084'
+    url = "{}://{}:{}".format(protocol, IP, port)
+    result = local_salt_client.cmd(
+        tgt=ctl_nodes_pillar,
+        param='curl {}/login 2>&1 | grep Grafana'.format(url),
+        expr_form='pillar')
+    assert len(result[result.keys()[0]]) != 0, \
+        'Public Grafana page is not reachable on {} from ctl nodes'.format(url)
+
+
+@pytest.mark.usefixtures('check_alerta')
+def test_internal_ui_alerta(local_salt_client, ctl_nodes_pillar):
+    IP = local_salt_client.pillar_get(param='_param:stacklight_monitor_address')
+    protocol = 'http'
+    port = '15017'
+    url = "{}://{}:{}".format(protocol, IP, port)
+    result = local_salt_client.cmd(
+        tgt=ctl_nodes_pillar,
+        param='curl {}/ 2>&1 | \
+             grep Alerta'.format(url),
+        expr_form='pillar')
+    assert len(result[result.keys()[0]]) != 0, \
+        'Internal Alerta page is not reachable on {} from ctl nodes'.format(url)
+
+
+@pytest.mark.usefixtures('check_alerta')
+def test_public_ui_alerta(local_salt_client, ctl_nodes_pillar):
+    IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
+    protocol = 'https'
+    port = '15017'
+    url = "{}://{}:{}".format(protocol, IP, port)
+    result = local_salt_client.cmd(
+        tgt=ctl_nodes_pillar,
+        param='curl {}/ 2>&1 | \
+               grep Alerta'.format(url),
+        expr_form='pillar')
+    assert len(result[result.keys()[0]]) != 0, \
+        'Public Alerta page is not reachable on {} from ctl nodes'.format(url)
+
+
+@pytest.mark.usefixtures('check_openstack')
+@pytest.mark.usefixtures('check_drivetrain')
+def test_public_ui_jenkins(local_salt_client, ctl_nodes_pillar):
+    IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
+    protocol = 'https'
+    port = '8081'
+    url = "{}://{}:{}".format(protocol, IP, port)
+    result = local_salt_client.cmd(
+        tgt=ctl_nodes_pillar,
+        param='curl -k {}/ 2>&1 | \
+               grep Authentication'.format(url),
+        expr_form='pillar')
+    assert len(result[result.keys()[0]]) != 0, \
+        'Public Jenkins page is not reachable on {} from ctl nodes'.format(url)
+
+
+@pytest.mark.usefixtures('check_openstack')
+@pytest.mark.usefixtures('check_drivetrain')
+def test_public_ui_gerrit(local_salt_client, ctl_nodes_pillar):
+    IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
+    protocol = 'https'
+    port = '8070'
+    url = "{}://{}:{}".format(protocol, IP, port)
+    result = local_salt_client.cmd(
+        tgt=ctl_nodes_pillar,
+        param='curl -k {}/ 2>&1 | \
+               grep "Gerrit Code Review"'.format(url),
+        expr_form='pillar')
+    assert len(result[result.keys()[0]]) != 0, \
+        'Public Gerrit page is not reachable on {} from ctl nodes'.format(url)
diff --git a/test_set/cvp-sanity/utils/__init__.py b/test_set/cvp-sanity/utils/__init__.py
new file mode 100644
index 0000000..62ccae7
--- /dev/null
+++ b/test_set/cvp-sanity/utils/__init__.py
@@ -0,0 +1,195 @@
+import os
+import json
+import yaml
+import requests
+import re
+import sys, traceback
+import time
+
+
+class AuthenticationError(Exception):
+    pass
+
+
+class salt_remote:
+    def __init__(self):
+        self.config = get_configuration()
+        self.skipped_nodes = self.config.get('skipped_nodes') or []
+        self.url = self.config['SALT_URL'].strip()
+        if not re.match("^(http|https)://", self.url):
+            raise AuthenticationError(
+                "Salt URL should start with http or https, "
+                "given - {}".format(self.url))
+        self.login_payload = {'username': self.config['SALT_USERNAME'],
+                              'password': self.config['SALT_PASSWORD'],
+                              'eauth': 'pam'}
+        # TODO: proxies
+        self.proxies = {"http": None, "https": None}
+        self.expire = 0
+        self.cookies = []
+        self.headers = {'Accept': 'application/json'}
+        self._login()
+
+    def _login(self):
+        try:
+            login_request = requests.post(os.path.join(self.url, 'login'),
+                                          headers={'Accept': 'application/json'},
+                                          data=self.login_payload,
+                                          proxies=self.proxies)
+            if not login_request.ok:
+                raise AuthenticationError("Authentication to SaltMaster failed")
+        except Exception as e:
+            print("\033[91m\nConnection to SaltMaster "
+                  "was not established.\n"
+                  "Please make sure that you "
+                  "provided correct credentials.\n"
+                  "Error message: {}\033[0m\n".format(e))
+            traceback.print_exc(file=sys.stdout)
+            sys.exit()
+        self.expire = login_request.json()['return'][0]['expire']
+        self.cookies = login_request.cookies
+        self.headers['X-Auth-Token'] = login_request.json()['return'][0]['token']
+
+    def cmd(self, tgt, fun='cmd.run', param=None, expr_form=None,
+            tgt_type=None, check_status=False, retries=3):
+        if self.expire < time.time() + 300:
+            self._login()  # refreshes self.headers['X-Auth-Token']
+        accept_key_payload = {'fun': fun, 'tgt': tgt, 'client': 'local',
+                              'expr_form': expr_form, 'tgt_type': tgt_type,
+                              'timeout': self.config['salt_timeout']}
+        if param:
+            accept_key_payload['arg'] = param
+
+        for i in range(retries):
+            request = requests.post(self.url, headers=self.headers,
+                                    data=accept_key_payload,
+                                    cookies=self.cookies,
+                                    proxies=self.proxies)
+            if not request.ok or not isinstance(request.json()['return'][0], dict):
+                print("Salt master is not responding or response is "
+                      "incorrect. Output: {}".format(request.text))
+                continue
+            response = request.json()['return'][0]
+            result = {key: response[key] for key in response if key not in self.skipped_nodes}
+            if check_status:
+                if False in result.values():
+                    print("One or several nodes are not responding. "
+                          "Output: {}".format(json.dumps(result, indent=4)))
+                    continue
+            break
+        else:
+            raise Exception("Error with Salt Master response")
+        return result
+
+    def test_ping(self, tgt, expr_form='pillar'):
+        return self.cmd(tgt=tgt, fun='test.ping', param=None, expr_form=expr_form)
+
+    def cmd_any(self, tgt, param=None, expr_form='pillar'):
+        """
+        Return the result from the first node that responded;
+        an empty string counts as a valid response.
+        Raise an exception if all minions are down.
+        """
+        response = self.cmd(tgt=tgt, param=param, expr_form=expr_form)
+        for node in response.keys():
+            if response[node] or response[node] == '':
+                return response[node]
+        else:
+            raise Exception("All minions are down")
+
+    def pillar_get(self, tgt='salt:master', param=None, expr_form='pillar', fail_if_empty=False):
+        """
+        Fetch a pillar value only.
+        Return the value, or False if the pillar is missing or empty;
+        with fail_if_empty=True, raise an exception instead of returning False.
+        """
+        response = self.cmd(tgt=tgt, fun='pillar.get', param=param, expr_form=expr_form)
+        for node in response.keys():
+            if response[node] or response[node] != '':
+                return response[node]
+        else:
+            if fail_if_empty:
+                raise Exception("No pillar found or it is empty.")
+            else:
+                return False
+
+
+def init_salt_client():
+    local = salt_remote()
+    return local
+
+
+def list_to_target_string(node_list, separator, add_spaces=True):
+    if add_spaces:
+        separator = ' ' + separator.strip() + ' '
+    return separator.join(node_list)
+
+
+def calculate_groups():
+    config = get_configuration()
+    local_salt_client = init_salt_client()
+    node_groups = {}
+    nodes_names = set()
+    expr_form = ''
+    all_nodes = set(local_salt_client.test_ping(tgt='*', expr_form=None))
+    print(all_nodes)
+    if 'groups' in config.keys() and 'PB_GROUPS' in os.environ.keys() and \
+       os.environ['PB_GROUPS'].lower() != 'false':
+        nodes_names.update(config['groups'].keys())
+        expr_form = 'compound'
+    else:
+        for node in all_nodes:
+            index = re.search('[0-9]{1,3}$', node.split('.')[0])
+            if index:
+                nodes_names.add(node.split('.')[0][:-len(index.group(0))])
+            else:
+                nodes_names.add(node)
+        expr_form = 'pcre'
+
+    gluster_nodes = local_salt_client.test_ping(tgt='I@salt:control and '
+                                                    'I@glusterfs:server',
+                                                expr_form='compound')
+    kvm_nodes = local_salt_client.test_ping(tgt='I@salt:control and not '
+                                                'I@glusterfs:server',
+                                            expr_form='compound')
+
+    for node_name in nodes_names:
+        skipped_groups = config.get('skipped_groups') or []
+        if node_name in skipped_groups:
+            continue
+        if expr_form == 'pcre':
+            nodes = local_salt_client.test_ping(tgt='{}[0-9]{{1,3}}'.format(node_name),
+                                                 expr_form=expr_form)
+        else:
+            nodes = local_salt_client.test_ping(tgt=config['groups'][node_name],
+                                                 expr_form=expr_form)
+            if nodes == {}:
+                continue
+
+        node_groups[node_name] = [x for x in nodes
+                                  if x not in (config.get('skipped_nodes') or [])
+                                  if x not in gluster_nodes.keys()
+                                  if x not in kvm_nodes.keys()]
+        all_nodes = set(all_nodes - set(node_groups[node_name]))
+        if node_groups[node_name] == []:
+            del node_groups[node_name]
+    if kvm_nodes:
+        node_groups['kvm'] = kvm_nodes.keys()
+    node_groups['kvm_gluster'] = gluster_nodes.keys()
+    all_nodes = set(all_nodes - set(kvm_nodes.keys()))
+    all_nodes = set(all_nodes - set(gluster_nodes.keys()))
+    if all_nodes:
+        print ("These nodes were not collected {0}. Check config (groups section)".format(all_nodes))
+    return node_groups
+
+
+def get_configuration():
+    """Return the configuration for the environment,
+    plus test-specific settings if they are specified."""
+    global_config_file = os.path.join(
+        os.path.dirname(os.path.abspath(__file__)), "../global_config.yaml")
+    with open(global_config_file, 'r') as f:
+        global_config = yaml.safe_load(f)
+    for param in global_config.keys():
+        if param in os.environ.keys():
+            if ',' in os.environ[param]:
+                global_config[param] = os.environ[param].split(',')
+            else:
+                global_config[param] = os.environ[param]
+
+    return global_config
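
The node-grouping step in `calculate_groups` derives a group name by stripping a trailing 1-3 digit index from each short hostname (the `pcre` branch above). A minimal standalone sketch of that step, using hypothetical sample hostnames:

```python
import re


def group_name(node):
    """Strip a trailing 1-3 digit index from the short hostname.

    Mirrors the index-stripping step in calculate_groups();
    the hostnames used below are hypothetical examples.
    """
    short = node.split('.')[0]
    index = re.search('[0-9]{1,3}$', short)
    if index:
        return short[:-len(index.group(0))]
    return short


nodes = ['ctl01.local', 'ctl02.local', 'cmp001.local', 'mon.local']
groups = sorted({group_name(n) for n in nodes})
print(groups)  # ['cmp', 'ctl', 'mon']
```

The resulting group names are then fed back to `test_ping` as `pcre` targets of the form `'{}[0-9]{{1,3}}'`, so every numbered node lands in the group that shares its prefix.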
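
The environment-override loop at the end of `get_configuration` treats any comma-containing environment value as a list. A self-contained sketch of that behavior (the config keys and values here are made up for illustration):

```python
def apply_env_overrides(config, environ):
    """Override config keys from the environment.

    Mirrors the override loop in get_configuration(): a value
    containing commas is split into a list, anything else is
    kept as a plain string.
    """
    for param in config:
        if param in environ:
            value = environ[param]
            config[param] = value.split(',') if ',' in value else value
    return config


cfg = apply_env_overrides({'skipped_nodes': [], 'salt_timeout': 30},
                          {'skipped_nodes': 'ctl01,ctl02'})
print(cfg['skipped_nodes'])  # ['ctl01', 'ctl02']
```

Note one consequence of the comma rule: a single-element override (e.g. `skipped_nodes=ctl01`) stays a string rather than becoming a one-item list, so consumers must tolerate both shapes.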