Reworked SPT tests to run against MOS inside a rally pod

These SPT tests are taken from CVP-SPT and reworked
to test MOS from inside a rally pod.

Here is the list of changes since CVP-SPT:
* Switched to Python3
* Removed all Salt related code
* Removed HW2HW test
* Default global_config.yaml is suitable for MOS
* Switched to iperf3
* Added smart waiters for VMs to become ACTIVE and to be reachable via floating IPs
* Extended pytest.ini file with logging settings
* Added lots of INFO-level log messages to show what happens during the test run
* Extended and fixed the README with up-to-date instructions
* Added the ability to use iperf3 even when the VMs have no Internet access
* Fixed the coding style according to PEP8
* Various small fixes, enhancements
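The "smart waiters" mentioned above can be sketched as a generic polling helper. This is a minimal illustration only; the name `wait_for` and its defaults are hypothetical, not the exact code in this change:

```python
import time


def wait_for(predicate, timeout=300, interval=5,
             timeout_msg="Condition was not met"):
    """Poll ``predicate`` until it returns a truthy value.

    Re-evaluates ``predicate`` every ``interval`` seconds and raises
    ``TimeoutError`` if ``timeout`` seconds pass without success.
    """
    start = time.time()
    while time.time() - start < timeout:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("{} after {} seconds".format(timeout_msg, timeout))
```

A VM-readiness check would then look roughly like `wait_for(lambda: client.servers.get(vm.id).status == 'ACTIVE', timeout=nova_timeout)`.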

Change-Id: I31a1b8c8c827133d144377031c6f546d8c82a47d
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..7fb7fcd
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,146 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+fixtures/__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+#  Usually these files are written by a python script from a template
+#  before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+cover/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+.pybuilder/
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+#   For a library or package, you might want to ignore these files since the code is
+#   intended to run in multiple environments; otherwise, check them in:
+# .python-version
+
+# pipenv
+#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+#   However, in case of collaboration, if having platform-specific dependencies or dependencies
+#   having no cross-platform support, pipenv may install dependencies that don't work, or not
+#   install all needed dependencies.
+#Pipfile.lock
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# pytype static type analyzer
+.pytype/
+
+# Cython debug symbols
+cython_debug/
+
+# PyCharm
+#  JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+#  and can be added to the global gitignore or merged into this file.  For a more nuclear
+#  option (not recommended) you can uncomment the following to ignore the entire idea folder.
+.idea/
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..55d0f2b
--- /dev/null
+++ b/README.md
@@ -0,0 +1,74 @@
+# mos-spt
+
+Requirements
+--
+ At least Python 3.6 is required for the tests.
+
+Installation
+--
+```
+cd mos-spt/
+virtualenv .venv
+. .venv/bin/activate
+pip install -r requirements.txt
+```
+
+Configuration
+--
+ Open _global_config.yaml_ file to override the settings, or export the 
+ environment variables.
+
+Settings
+--
+ The following options can be set in _global_config.yaml_ file, or by exporting
+ the environment variables.
+
+* **test_glance** supports the following overrides:
+
+| Environment Variable | Default | Description |
+| --- | --- | --- |
+| IMAGE_SIZE_MB | 9000 | Specific image size (in MB) to upload/download at Glance |
+
+* **test_vm2vm** supports the following overrides:
+
+| Environment Variable | Default | Description |
+| --- | --- | --- |
+| flavor_name | spt-test | Flavor name |
+| flavor_ram | 1536 | RAM size of the flavor, in MB |
+| flavor_vcpus | 1 | Number of vCPUs of the flavor |
+| flavor_disk | 5 | Disk size of the flavor, in GB |
+| image_name | Ubuntu-18.04 | Cloud Ubuntu image used to create VMs |
+| CMP_HOSTS | "" | Pair of compute hosts used to create VMs on different hosts. By default, a random pair from the nova compute list is selected. To set a specific pair, set _CMP_HOSTS: ["cmp001", "cmp002"]_ in the _global_config.yaml_ file, or export CMP_HOSTS="cmp001,cmp002". |
+| skipped_nodes | "" | Compute hosts to exclude when selecting the CMP_HOSTS pair. To skip some nodes, set _skipped_nodes: ["cmp003"]_ in the _global_config.yaml_ file, or export skipped_nodes="cmp003". |
+| nova_timeout | 300 | Timeout for a VM to become ACTIVE, in seconds. |
+| external_network | public | External network name used to allocate the Floating IPs |
+| ssh_timeout | 500 | Timeout for a VM to become reachable via SSH, in seconds. |
+| iperf_prep_string | "sudo /bin/bash -c 'echo \"91.189.88.161        archive.ubuntu.com\" >> /etc/hosts'" | Preparation string that sets the Ubuntu repository host in /etc/hosts of the VMs |
+| internet_at_vms | 'true' | When 'true', the VMs have Internet access and the tests install iperf3 via _apt update; apt install iperf3_. When the VMs have no Internet access, set 'false' and iperf3 will be installed from offline *.deb packages. |
+| iperf_deb_package_dir_path | /artifacts/mos-spt/ | Path to the local directory with the iperf3 *.deb packages. Download/copy them there manually beforehand. |
+| iperf_time | 60 | Time in seconds to transmit for (iperf3 -t option). |
+
+ If _internet_at_vms=false_, download the iperf3 packages:
+```
+wget https://iperf.fr/download/ubuntu/libiperf0_3.1.3-1_amd64.deb 
+wget https://iperf.fr/download/ubuntu/iperf3_3.1.3-1_amd64.deb 
+```
+ and place both of them into the directory set in _iperf_deb_package_dir_path_.
+
+Executing tests
+--
+ Run tests:
+```
+pytest -sv --tb=short tests/
+```
+ If a test is skipped and you want to know the reason, use the pytest -rs option:
+```
+pytest -rs --tb=short tests/
+```
+
+Enable logging
+--
+ If something goes wrong, enable test logging: set _log_cli=true_ in
+ pytest.ini and rerun the tests. By default, the log level is INFO
+ (_log_cli_level=info_). To inspect the API requests in depth (URIs,
+ payloads, etc.), set _log_cli_level=debug_.
diff --git a/__init__.py b/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/__init__.py
diff --git a/conftest.py b/conftest.py
new file mode 100644
index 0000000..693d514
--- /dev/null
+++ b/conftest.py
@@ -0,0 +1 @@
+from fixtures.base import *
diff --git a/fixtures/__init__.py b/fixtures/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/fixtures/__init__.py
diff --git a/fixtures/base.py b/fixtures/base.py
new file mode 100644
index 0000000..2ecfc21
--- /dev/null
+++ b/fixtures/base.py
@@ -0,0 +1,111 @@
+import os
+import pytest
+import utils
+import random
+import time
+import logging
+
+from utils import os_client
+
+
+logger = logging.getLogger(__name__)
+
+
+@pytest.fixture(scope='session')
+def openstack_clients():
+    return os_client.OfficialClientManager(
+        username=os.environ['OS_USERNAME'],
+        password=os.environ['OS_PASSWORD'],
+        tenant_name=os.environ['OS_PROJECT_NAME'],
+        auth_url=os.environ['OS_AUTH_URL'],
+        cert=False,
+        domain=os.environ['OS_PROJECT_DOMAIN_NAME'],
+        )
+
+
+nodes = utils.get_pairs()
+
+
+@pytest.fixture(scope='session', params=list(nodes.values()),
+                ids=list(nodes.keys()))
+def pair(request):
+    return request.param
+
+
+@pytest.fixture(scope='session')
+def os_resources(openstack_clients):
+    os_actions = os_client.OSCliActions(openstack_clients)
+    os_resource = {}
+    config = utils.get_configuration()
+    image_name = config.get('image_name', 'Ubuntu-18.04')
+    flavor_name = config.get('flavor_name', 'spt-test')
+    flavor_ram = config.get('flavor_ram', 1536)
+    flavor_vcpus = config.get('flavor_vcpus', 1)
+    flavor_disk = config.get('flavor_disk', 5)
+
+    os_images_list = [image.id for image in
+                      openstack_clients.image.images.list(
+                          filters={'name': image_name})]
+
+    if not os_images_list:
+        pytest.skip("No images with name {}. This name can be redefined "
+                    "with 'image_name' env var ".format(image_name))
+
+    os_resource['image_id'] = str(os_images_list[0])
+
+    os_resource['flavor_id'] = [flavor.id for flavor in
+                                openstack_clients.compute.flavors.list()
+                                if flavor.name == flavor_name]
+    flavor_is_created = False
+    if not os_resource['flavor_id']:
+        os_resource['flavor_id'] = os_actions.create_flavor(
+            flavor_name, flavor_ram, flavor_vcpus, flavor_disk).id
+        flavor_is_created = True
+    else:
+        os_resource['flavor_id'] = str(os_resource['flavor_id'][0])
+
+    os_resource['sec_group'] = os_actions.create_sec_group()
+    os_resource['keypair'] = openstack_clients.compute.keypairs.create(
+        '{}-{}'.format('spt-key', random.randrange(100, 999))
+    )
+    os_resource['net1'] = os_actions.create_network_resources()
+    os_resource['ext_net'] = os_actions.get_external_network()
+    adm_tenant = os_actions.get_admin_tenant()
+    os_resource['router'] = os_actions.create_router(
+        os_resource['ext_net'], adm_tenant.id)
+    os_resource['net2'] = os_actions.create_network(adm_tenant.id)
+    os_resource['subnet2'] = os_actions.create_subnet(
+        os_resource['net2'], adm_tenant.id, '10.2.7.0/24')
+    for subnet in openstack_clients.network.list_subnets()['subnets']:
+        if subnet['network_id'] == os_resource['net1']['id']:
+            os_resource['subnet1'] = subnet['id']
+
+    openstack_clients.network.add_interface_router(
+        os_resource['router']['id'], {'subnet_id': os_resource['subnet1']})
+    openstack_clients.network.add_interface_router(
+        os_resource['router']['id'],
+        {'subnet_id': os_resource['subnet2']['id']})
+    yield os_resource
+
+    # cleanup created resources
+    logger.info("Deleting routers, networks, SG, key pair, flavor...")
+    openstack_clients.network.remove_interface_router(
+        os_resource['router']['id'], {'subnet_id': os_resource['subnet1']})
+    openstack_clients.network.remove_interface_router(
+        os_resource['router']['id'],
+        {'subnet_id': os_resource['subnet2']['id']})
+    openstack_clients.network.remove_gateway_router(
+        os_resource['router']['id'])
+    time.sleep(5)
+    openstack_clients.network.delete_router(os_resource['router']['id'])
+    time.sleep(5)
+    openstack_clients.network.delete_network(os_resource['net1']['id'])
+    openstack_clients.network.delete_network(os_resource['net2']['id'])
+
+    openstack_clients.compute.security_groups.delete(
+        os_resource['sec_group'].id)
+    openstack_clients.compute.keypairs.delete(os_resource['keypair'].name)
+    if flavor_is_created:
+        openstack_clients.compute.flavors.delete(os_resource['flavor_id'])
+    if os_actions.create_fake_ext_net:
+        openstack_clients.network.delete_network(os_resource['ext_net']['id'])
diff --git a/global_config.yaml b/global_config.yaml
new file mode 100644
index 0000000..b297350
--- /dev/null
+++ b/global_config.yaml
@@ -0,0 +1,19 @@
+---
+# parameters for glance image test
+IMAGE_SIZE_MB: 9000
+
+# parameters for vm2vm test
+CMP_HOSTS: []
+image_name: "Ubuntu-18.04"
+flavor_name: 'spt-test'
+flavor_ram: 1536
+flavor_vcpus: 1
+flavor_disk: 5
+nova_timeout: 300
+external_network: 'public'
+iperf_prep_string: "sudo /bin/bash -c 'echo \"91.189.88.161        archive.ubuntu.com\" >> /etc/hosts'"
+internet_at_vms: 'true' # whether Internet is present at OpenStack VMs and iperf can be installed with apt
+iperf_deb_package_dir_path: '/artifacts/mos-spt/'
+iperf_time: 60 # time in seconds to transmit for (iperf -t option)
+ssh_timeout: 500
+skipped_nodes: []
diff --git a/pytest.ini b/pytest.ini
new file mode 100644
index 0000000..31d80f5
--- /dev/null
+++ b/pytest.ini
@@ -0,0 +1,8 @@
+[pytest]
+norecursedirs = venv
+addopts = -vv --tb=native --capture=no
+
+log_cli = false
+log_format = %(asctime)s %(levelname)s %(message)s
+log_date_format = %Y-%m-%d %H:%M:%S
+log_cli_level = info
diff --git a/requirements.txt b/requirements.txt
new file mode 100644
index 0000000..6898f80
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1,10 @@
+paramiko==2.7.2 # LGPLv2.1+
+pytest==4.6.11 # MIT
+python-cinderclient==6.0.0 # Apache-2.0
+python-glanceclient==3.0.0  # Apache-2.0
+python-keystoneclient==3.22.0  # Apache-2.0
+python-neutronclient==7.1.0 # Apache-2.0
+python-novaclient==7.1.0
+PyYAML>=5.4  # MIT
+requests==2.24.0 # Apache-2.0
+texttable==1.2.0
diff --git a/tests/__init__.py b/tests/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tests/__init__.py
diff --git a/tests/test_glance.py b/tests/test_glance.py
new file mode 100644
index 0000000..460d900
--- /dev/null
+++ b/tests/test_glance.py
@@ -0,0 +1,100 @@
+import pytest
+import time
+import subprocess
+import random
+import logging
+
+import utils
+
+logger = logging.getLogger(__name__)
+
+
+def is_parsable(value, to):
+    """
+    Check if value can be converted into some type
+    :param value:  input value that should be converted
+    :param to: type of output value like int, float. It's not a string!
+    :return: bool
+    """
+    try:
+        to(value)
+    except (TypeError, ValueError):
+        return False
+    return True
+
+
+@pytest.fixture
+def create_image():
+    image_size_megabytes = utils.get_configuration().get("IMAGE_SIZE_MB", 9000)
+    create_file_cmdline = 'dd if=/dev/zero of=/tmp/image_mk_framework.dd ' \
+                          'bs=1M count={} 2>/dev/null' \
+                          ''.format(image_size_megabytes)
+    is_cmd_successful = subprocess.call(create_file_cmdline, shell=True) == 0
+    if is_cmd_successful:
+        logger.info("Created local image file /tmp/image_mk_framework.dd")
+    yield is_cmd_successful
+
+    # teardown
+    logger.info("Deleting /tmp/image_mk_framework.dd file")
+    subprocess.call('rm -f /tmp/image_mk_framework.dd', shell=True)
+    subprocess.call('rm -f /tmp/image_mk_framework.download', shell=True)
+
+
+def test_speed_glance(create_image, openstack_clients, record_property):
+    """
+    Simplified Performance Tests Download / upload Glance
+    1. Create file with random data (dd)
+    2. Upload data as image to glance.
+    3. Download image.
+    4. Measure download/upload speed and print them into stdout
+    """
+    image_size_megabytes = utils.get_configuration().get("IMAGE_SIZE_MB")
+    if not is_parsable(image_size_megabytes, int):
+        pytest.fail("Can't convert IMAGE_SIZE_MB={} to 'int'".format(
+            image_size_megabytes))
+    image_size_megabytes = int(image_size_megabytes)
+    if not create_image:
+        pytest.skip("Can't create image, maybe there is lack of disk "
+                    "space to create file {}MB".
+                    format(image_size_megabytes))
+    image_name = "spt-test-image-{}".format(random.randrange(100, 999))
+    try:
+        image = openstack_clients.image.images.create(
+            name=image_name,
+            disk_format='iso',
+            container_format='bare')
+        logger.info("Created an image {} in Glance.".format(image_name))
+    except Exception as e:
+        logger.info("Could not create image in Glance. See details: {}"
+                    "".format(e))
+        pytest.fail("Can't create image in Glance. Occurred error: {}"
+                    "".format(e))
+
+    logger.info("Testing upload file speed...")
+    start_time = time.time()
+    try:
+        openstack_clients.image.images.upload(
+            image.id, image_data=open("/tmp/image_mk_framework.dd", 'rb'))
+    except Exception as e:
+        pytest.fail("Can't upload image in Glance. "
+                    "Occurred error: {}".format(e))
+    end_time = time.time()
+
+    speed_upload = image_size_megabytes / (end_time - start_time)
+
+    logger.info("Testing download file speed...")
+    start_time = time.time()
+    with open("/tmp/image_mk_framework.download", 'wb') as image_file:
+        for item in openstack_clients.image.images.data(image.id):
+            image_file.write(item)
+    end_time = time.time()
+
+    speed_download = image_size_megabytes / (end_time - start_time)
+    openstack_clients.image.images.delete(image.id)
+    logger.info("Deleted image {}.".format(image.id))
+    record_property("Upload", speed_upload)
+    record_property("Download", speed_download)
+
+    print("++++++++++++++++++++++++++++++++++++++++")
+    print(('upload - {} MB/s'.format(speed_upload)))
+    print(('download - {} MB/s'.format(speed_download)))
+    print("++++++++++++++++++++++++++++++++++++++++")
diff --git a/tests/test_vm2vm.py b/tests/test_vm2vm.py
new file mode 100644
index 0000000..19b6abf
--- /dev/null
+++ b/tests/test_vm2vm.py
@@ -0,0 +1,198 @@
+import logging
+import time
+
+import pytest
+from texttable import Texttable
+
+import utils
+from utils import os_client
+from utils import ssh
+
+
+logger = logging.getLogger(__name__)
+
+
+def test_vm2vm(openstack_clients, pair, os_resources, record_property):
+    """
+    Simplified Performance Tests VM to VM test in different topologies
+    1. Create 4 VMs admin project
+    2. Associate floating IPs to the VMs
+    3. Connect to each VM via SSH and install iperf3
+    4. Measure VM to VM on same node via Private IP, 1 thread
+    5. Measure VM to VM on different HW nodes via Private IP, 1 thread
+    6. Measure VM to VM on different HW nodes via Private IP, 10 threads
+    7. Measure VM to VM on different HW nodes via Floating IP, 1 thread
+    8. Measure VM to VM on different HW nodes, each VM is in separate network,
+       the networks are connected using Router via Private IP, 1 thread
+    9. Draw the table with all pairs and results
+    """
+    os_actions = os_client.OSCliActions(openstack_clients)
+    config = utils.get_configuration()
+    timeout = int(config.get('nova_timeout', 300))
+    iperf_time = int(config.get('iperf_time', 60))
+    private_key = os_resources['keypair'].private_key
+    ssh_timeout = int(config.get('ssh_timeout', 500))
+    result_table = Texttable()
+
+    try:
+        zone1 = [service.zone for service in
+                 openstack_clients.compute.services.list() if
+                 service.host == pair[0]]
+        zone2 = [service.zone for service in
+                 openstack_clients.compute.services.list()
+                 if service.host == pair[1]]
+
+        # create 4 VMs
+        logger.info("Creating 4 VMs...")
+        vm1 = os_actions.create_basic_server(
+            os_resources['image_id'], os_resources['flavor_id'],
+            os_resources['net1'], '{0}:{1}'.format(zone1[0], pair[0]),
+            [os_resources['sec_group'].name], os_resources['keypair'].name)
+        logger.info("Created VM {}.".format(vm1.id))
+
+        vm2 = os_actions.create_basic_server(
+            os_resources['image_id'], os_resources['flavor_id'],
+            os_resources['net1'], '{0}:{1}'.format(zone1[0], pair[0]),
+            [os_resources['sec_group'].name], os_resources['keypair'].name)
+        logger.info("Created VM {}.".format(vm2.id))
+
+        vm3 = os_actions.create_basic_server(
+            os_resources['image_id'], os_resources['flavor_id'],
+            os_resources['net1'], '{0}:{1}'.format(zone2[0], pair[1]),
+            [os_resources['sec_group'].name], os_resources['keypair'].name)
+        logger.info("Created VM {}.".format(vm3.id))
+
+        vm4 = os_actions.create_basic_server(
+            os_resources['image_id'], os_resources['flavor_id'],
+            os_resources['net2'], '{0}:{1}'.format(zone2[0], pair[1]),
+            [os_resources['sec_group'].name], os_resources['keypair'].name)
+        logger.info("Created VM {}.".format(vm4.id))
+
+        vm_info = []
+        vms = []
+        vms.extend([vm1, vm2, vm3, vm4])
+        fips = []
+        time.sleep(5)
+
+        # Associate FIPs and check VMs are Active
+        logger.info("Creating Floating IPs and associating them...")
+        for i in range(4):
+            fip = openstack_clients.compute.floating_ips.create(
+                os_resources['ext_net']['name'])
+            fips.append(fip.id)
+            os_actions.check_vm_is_active(vms[i].id, timeout=timeout)
+            vms[i].add_floating_ip(fip)
+            private_address = vms[i].addresses[
+                list(vms[i].addresses.keys())[0]][0]['addr']
+            vm_info.append({'vm': vms[i], 'fip': fip.ip,
+                            'private_address': private_address})
+        # Check VMs are reachable and prepare iperf3
+        transport1 = ssh.SSHTransport(vm_info[0]['fip'], 'ubuntu',
+                                      password='dd', private_key=private_key)
+        logger.info("Checking VMs are reachable via SSH...")
+        for i in range(4):
+            if transport1.check_vm_is_reachable_ssh(
+                    floating_ip=vm_info[i]['fip'], timeout=ssh_timeout):
+                ssh.prepare_iperf(vm_info[i]['fip'], private_key=private_key)
+
+        # Prepare the result table and run iperf3
+        table_rows = []
+        table_rows.append(['Test Case', 'Host 1', 'Host 2', 'Result'])
+        # Do iperf3 measurement #1
+        logger.info("Doing 'VM to VM in same tenant on same node via Private "
+                    "IP, 1 thread' measurement...")
+        result1 = transport1.exec_command(
+            'iperf3 -c {} -t {} | grep sender | tail -n 1'.format(
+                vm_info[1]['private_address'], iperf_time))
+        res1 = (b" ".join(result1.split()[-4:-2:])).decode('utf-8')
+        logger.info("Result #1 is {}".format(res1))
+        table_rows.append(['VM to VM in same tenant on same node via '
+                           'Private IP, 1 thread',
+                           "{}".format(pair[0]),
+                           "{}".format(pair[0]),
+                           "{}".format(res1)])
+
+        # Do iperf3 measurement #2
+        logger.info("Doing 'VM to VM in same tenant on different HW nodes "
+                    "via Private IP, 1 thread' measurement...")
+        result2 = transport1.exec_command(
+            'iperf3 -c {} -t {} | grep sender | tail -n 1'.format(
+                vm_info[2]['private_address'], iperf_time))
+        res2 = (b" ".join(result2.split()[-4:-2:])).decode('utf-8')
+        logger.info("Result #2 is {}".format(res2))
+        table_rows.append(['VM to VM in same tenant on different HW nodes '
+                           'via Private IP, 1 thread',
+                           "{}".format(pair[0]),
+                           "{}".format(pair[1]),
+                           "{}".format(res2)])
+
+        # Do iperf3 measurement #3
+        logger.info("Doing 'VM to VM in same tenant on different HW nodes "
+                    "via Private IP, 10 threads' measurement...")
+        result3 = transport1.exec_command(
+            'iperf3 -c {} -P 10 -t {} | grep sender | tail -n 1'.format(
+                vm_info[2]['private_address'], iperf_time))
+        res3 = (b" ".join(result3.split()[-4:-2:])).decode('utf-8')
+        logger.info("Result #3 is {}".format(res3))
+        table_rows.append(['VM to VM in same tenant on different HW nodes '
+                           'via Private IP, 10 threads',
+                           "{}".format(pair[0]),
+                           "{}".format(pair[1]),
+                           "{}".format(res3)])
+
+        # Do iperf3 measurement #4
+        logger.info("Doing 'VM to VM in same tenant via Floating IP and VMs "
+                    "are on different nodes, 1 thread' measurement...")
+        result4 = transport1.exec_command(
+            'iperf3 -c {} -t {} | grep sender | tail -n 1'.format(
+                vm_info[2]['fip'], iperf_time))
+        res4 = (b" ".join(result4.split()[-4:-2:])).decode('utf-8')
+        logger.info("Result #4 is {}".format(res4))
+        table_rows.append(['VM to VM in same tenant via Floating IP and VMs '
+                           'are on different nodes, 1 thread',
+                           "{}".format(pair[0]),
+                           "{}".format(pair[1]),
+                           "{}".format(res4)])
+
+        # Do iperf3 measurement #5
+        logger.info("Doing 'VM to VM in same tenant, different HW nodes and "
+                    "each VM is connected to separate network which are "
+                    " connected using Router via Private IP, 1 thread' "
+                    "measurement...")
+        result5 = transport1.exec_command(
+            'iperf3 -c {} -t {} | grep sender | tail -n 1'.format(
+                vm_info[3]['private_address'], iperf_time))
+        res5 = (b" ".join(result5.split()[-4:-2:])).decode('utf-8')
+        logger.info("Result #5 is {}".format(res5))
+        table_rows.append(['VM to VM in same tenant, different HW nodes and '
+                           'each VM is connected to separate network which are'
+                           ' connected using Router via Private IP, 1 thread',
+                           "{}".format(pair[0]),
+                           "{}".format(pair[1]),
+                           "{}".format(res5)])
+
+        logger.info("Drawing the table with iperf results...")
+        result_table.add_rows(table_rows)
+        print((result_table.draw()))
+
+        logger.info("Removing VMs and FIPs...")
+        for vm in vms:
+            openstack_clients.compute.servers.delete(vm)
+        for fip in fips:
+            openstack_clients.compute.floating_ips.delete(fip)
+    except Exception as e:
+        logger.error("Something went wrong: {}".format(e))
+        if 'vms' in locals():
+            logger.info("Removing VMs...")
+            for vm in vms:
+                openstack_clients.compute.servers.delete(vm)
+            if 'fips' in locals():
+                logger.info("Removing FIPs...")
+                for fip in fips:
+                    openstack_clients.compute.floating_ips.delete(fip)
+        else:
+            logger.info("Skipping cleanup, VMs were not created")
+        pytest.fail("Something went wrong: {}".format(e))
diff --git a/utils/__init__.py b/utils/__init__.py
new file mode 100644
index 0000000..8e858b3
--- /dev/null
+++ b/utils/__init__.py
@@ -0,0 +1,73 @@
+import os
+import yaml
+import logging
+
+from utils import os_client
+
+logger = logging.getLogger(__name__)
+
+
+def compile_pairs(nodes):
+    result = {}
+    if len(nodes) % 2 != 0:
+        nodes.pop(1)
+    pairs = list(zip(*[iter(nodes)] * 2))
+    for pair in pairs:
+        result[pair[0]+'<>'+pair[1]] = pair
+    return result
+
+
+def get_pairs():
+    config = get_configuration()
+    cmp_hosts = config.get('CMP_HOSTS') or []
+    skipped_nodes = config.get('skipped_nodes') or []
+    if skipped_nodes:
+        print("\nNotice: {} nodes will be skipped for the vm2vm test".format(
+            ",".join(skipped_nodes)))
+        logger.info("Skipping nodes {}".format(",".join(skipped_nodes)))
+    if not cmp_hosts:
+        openstack_clients = os_client.OfficialClientManager(
+            username=os.environ['OS_USERNAME'],
+            password=os.environ['OS_PASSWORD'],
+            tenant_name=os.environ['OS_PROJECT_NAME'],
+            auth_url=os.environ['OS_AUTH_URL'],
+            cert=False,
+            domain=os.environ['OS_PROJECT_DOMAIN_NAME']
+        )
+        os_actions = os_client.OSCliActions(openstack_clients)
+        nova_computes = os_actions.list_nova_computes()
+        if len(nova_computes) < 2:
+            raise Exception(
+                "At least 2 compute hosts are needed for the VM2VM test, "
+                "now: {}.".format(len(nova_computes)))
+        cmp_hosts = [n.host_name for n in nova_computes
+                     if n.host_name not in skipped_nodes]
+        if len(cmp_hosts) < 2:
+            raise Exception(
+                "At least 2 compute hosts are needed for the VM2VM test. "
+                "Cannot create a pair from {}. Please check the skip list; "
+                "at least 2 computes should be tested.".format(cmp_hosts))
+        logger.info("CMP_HOSTS option is not set, using host pair from "
+                    "Nova compute list. Pair generated: {}".format(cmp_hosts))
+
+    return compile_pairs(cmp_hosts)
+
+
+def get_configuration():
+    """function returns configuration for environment
+    and for test if it's specified"""
+
+    global_config_file = os.path.join(
+        os.path.dirname(os.path.abspath(__file__)), "../global_config.yaml")
+    with open(global_config_file, 'r') as file:
+        global_config = yaml.load(file, Loader=yaml.SafeLoader)
+    for param in list(global_config.keys()):
+        if param in list(os.environ.keys()):
+            if ',' in os.environ[param]:
+                global_config[param] = []
+                for item in os.environ[param].split(','):
+                    global_config[param].append(item)
+            else:
+                global_config[param] = os.environ[param]
+
+    return global_config
diff --git a/utils/helpers.py b/utils/helpers.py
new file mode 100644
index 0000000..800a9fe
--- /dev/null
+++ b/utils/helpers.py
@@ -0,0 +1,24 @@
+import texttable as tt
+
+
+class helpers(object):
+
+    def __init__(self):
+        pass
+
+    def draw_table_with_results(self, global_results):
+        tab = tt.Texttable()
+        header = [
+            'node name 1',
+            'node name 2',
+            'network',
+            'bandwidth >',
+            'bandwidth <',
+        ]
+        tab.set_cols_align(['l', 'l', 'l', 'l', 'l'])
+        tab.set_cols_width([27, 27, 15, 20, 20])
+        tab.header(header)
+        for row in global_results:
+            tab.add_row(row)
+        s = tab.draw()
+        print(s)
diff --git a/utils/os_client.py b/utils/os_client.py
new file mode 100644
index 0000000..12aad1b
--- /dev/null
+++ b/utils/os_client.py
@@ -0,0 +1,429 @@
+from cinderclient import client as cinder_client
+from glanceclient import client as glance_client
+from keystoneauth1 import identity as keystone_identity
+from keystoneauth1 import session as keystone_session
+from keystoneclient.v3 import client as keystone_client
+from neutronclient.v2_0 import client as neutron_client
+from novaclient import client as novaclient
+
+import logging
+import os
+import random
+import time
+
+import utils
+
+logger = logging.getLogger(__name__)
+
+
+class OfficialClientManager(object):
+    """Manager that provides access to the official python clients for
+    calling various OpenStack APIs.
+    """
+
+    CINDERCLIENT_VERSION = 3
+    GLANCECLIENT_VERSION = 2
+    KEYSTONECLIENT_VERSION = 3
+    NEUTRONCLIENT_VERSION = 2
+    NOVACLIENT_VERSION = 2
+    INTERFACE = 'admin'
+    if "OS_ENDPOINT_TYPE" in list(os.environ.keys()):
+        INTERFACE = os.environ["OS_ENDPOINT_TYPE"]
+
+    def __init__(self, username=None, password=None,
+                 tenant_name=None, auth_url=None, endpoint_type="internalURL",
+                 cert=False, domain="Default", **kwargs):
+        self.traceback = ""
+
+        self.client_attr_names = [
+            "auth",
+            "compute",
+            "network",
+            "volume",
+            "image",
+        ]
+        self.username = username
+        self.password = password
+        self.tenant_name = tenant_name
+        self.project_name = tenant_name
+        self.auth_url = auth_url
+        self.endpoint_type = endpoint_type
+        self.cert = cert
+        self.domain = domain
+        self.kwargs = kwargs
+
+        # Lazy clients
+        self._auth = None
+        self._compute = None
+        self._network = None
+        self._volume = None
+        self._image = None
+
+    @classmethod
+    def _get_auth_session(cls, username=None, password=None,
+                          tenant_name=None, auth_url=None, cert=None,
+                          domain='Default'):
+        if None in (username, password, tenant_name):
+            msg = ("Missing required credentials for identity client. "
+                   "username: {username}, password: {password}, "
+                   "tenant_name: {tenant_name}").format(
+                username=username,
+                password=password,
+                tenant_name=tenant_name
+            )
+            raise ValueError(msg)
+
+        if cert and "https" not in auth_url:
+            auth_url = auth_url.replace("http", "https")
+
+        if "v2" in auth_url:
+            raise ValueError("Keystone v2 is deprecated since the OpenStack "
+                             "Queens release, so the current OS_AUTH_URL {} "
+                             "is not valid. Please use Keystone v3."
+                             "".format(auth_url))
+        else:
+            auth_url = auth_url if ("v3" in auth_url) else "{}{}".format(
+                auth_url, "/v3")
+            auth = keystone_identity.v3.Password(
+                auth_url=auth_url,
+                user_domain_name=domain,
+                username=username,
+                password=password,
+                project_domain_name=domain,
+                project_name=tenant_name)
+
+        auth_session = keystone_session.Session(auth=auth, verify=cert)
+        return auth_session
+
+    @classmethod
+    def get_auth_client(cls, username=None, password=None,
+                        tenant_name=None, auth_url=None, cert=None,
+                        domain='Default', **kwargs):
+        session = cls._get_auth_session(
+            username=username,
+            password=password,
+            tenant_name=tenant_name,
+            auth_url=auth_url,
+            cert=cert,
+            domain=domain)
+        keystone = keystone_client.Client(version=cls.KEYSTONECLIENT_VERSION,
+                                          session=session, **kwargs)
+        keystone.management_url = auth_url
+        return keystone
+
+    @classmethod
+    def get_compute_client(cls, username=None, password=None,
+                           tenant_name=None, auth_url=None, cert=None,
+                           domain='Default', **kwargs):
+        session = cls._get_auth_session(
+            username=username, password=password, tenant_name=tenant_name,
+            auth_url=auth_url, cert=cert, domain=domain)
+        service_type = 'compute'
+        compute_client = novaclient.Client(
+            version=cls.NOVACLIENT_VERSION, session=session,
+            service_type=service_type, os_cache=False, **kwargs)
+        return compute_client
+
+    @classmethod
+    def get_network_client(cls, username=None, password=None,
+                           tenant_name=None, auth_url=None, cert=None,
+                           domain='Default', **kwargs):
+        session = cls._get_auth_session(
+            username=username, password=password, tenant_name=tenant_name,
+            auth_url=auth_url, cert=cert, domain=domain)
+        service_type = 'network'
+        return neutron_client.Client(
+            service_type=service_type, session=session,
+            interface=cls.INTERFACE, **kwargs)
+
+    @classmethod
+    def get_volume_client(cls, username=None, password=None,
+                          tenant_name=None, auth_url=None, cert=None,
+                          domain='Default', **kwargs):
+        session = cls._get_auth_session(
+            username=username, password=password, tenant_name=tenant_name,
+            auth_url=auth_url, cert=cert, domain=domain)
+        service_type = 'volume'
+        return cinder_client.Client(
+            version=cls.CINDERCLIENT_VERSION,
+            service_type=service_type,
+            interface=cls.INTERFACE,
+            session=session, **kwargs)
+
+    @classmethod
+    def get_image_client(cls, username=None, password=None,
+                         tenant_name=None, auth_url=None, cert=None,
+                         domain='Default', **kwargs):
+        session = cls._get_auth_session(
+            username=username, password=password, tenant_name=tenant_name,
+            auth_url=auth_url, cert=cert, domain=domain)
+        service_type = 'image'
+        return glance_client.Client(
+            version=cls.GLANCECLIENT_VERSION,
+            service_type=service_type,
+            session=session, interface=cls.INTERFACE,
+            **kwargs)
+
+    @property
+    def auth(self):
+        if self._auth is None:
+            self._auth = self.get_auth_client(
+                self.username, self.password, self.tenant_name, self.auth_url,
+                self.cert, self.domain, endpoint_type=self.endpoint_type
+            )
+        return self._auth
+
+    @property
+    def compute(self):
+        if self._compute is None:
+            self._compute = self.get_compute_client(
+                self.username, self.password, self.tenant_name, self.auth_url,
+                self.cert, self.domain, endpoint_type=self.endpoint_type
+            )
+        return self._compute
+
+    @property
+    def network(self):
+        if self._network is None:
+            self._network = self.get_network_client(
+                self.username, self.password, self.tenant_name, self.auth_url,
+                self.cert, self.domain, endpoint_type=self.endpoint_type
+            )
+        return self._network
+
+    @property
+    def volume(self):
+        if self._volume is None:
+            self._volume = self.get_volume_client(
+                self.username, self.password, self.tenant_name, self.auth_url,
+                self.cert, self.domain, endpoint_type=self.endpoint_type
+            )
+        return self._volume
+
+    @property
+    def image(self):
+        if self._image is None:
+            self._image = self.get_image_client(
+                self.username, self.password, self.tenant_name, self.auth_url,
+                self.cert, self.domain
+            )
+        return self._image
+
+
+class OSCliActions(object):
+    def __init__(self, os_clients):
+        self.os_clients = os_clients
+        self.create_fake_ext_net = False
+
+    def get_admin_tenant(self):
+        # TODO: Keystone v3 has no 'tenants' attribute, so look up projects
+        return self.os_clients.auth.projects.find(name="admin")
+
+    def get_internal_network(self):
+        networks = [
+            net for net in self.os_clients.network.list_networks()["networks"]
+            if net["admin_state_up"] and not net["router:external"] and
+            len(net["subnets"])
+            ]
+        if networks:
+            net = networks[0]
+        else:
+            net = self.create_network_resources()
+        return net
+
+    def create_fake_external_network(self):
+        logger.info(
+            "Could not find any external network, creating a fake one...")
+        net_name = "spt-ext-net-{}".format(random.randrange(100, 999))
+        net_body = {"network": {"name": net_name,
+                                "router:external": True,
+                                "provider:network_type": "local"}}
+        try:
+            ext_net = \
+                self.os_clients.network.create_network(net_body)['network']
+            logger.info("Created a fake external net {}".format(net_name))
+        except Exception:
+            # In case the 'local' net type is unavailable, retry with the
+            # default network type
+            net_body["network"].pop('provider:network_type', None)
+            ext_net = \
+                self.os_clients.network.create_network(net_body)['network']
+        subnet_name = "spt-ext-subnet-{}".format(random.randrange(100, 999))
+        subnet_body = {
+            "subnet": {
+                "name": subnet_name,
+                "network_id": ext_net["id"],
+                "ip_version": 4,
+                "cidr": "10.255.255.0/24",
+                "allocation_pools": [{"start": "10.255.255.100",
+                                      "end": "10.255.255.200"}]
+            }
+        }
+        self.os_clients.network.create_subnet(subnet_body)
+        self.create_fake_ext_net = True
+        return ext_net
+
+    def get_external_network(self):
+        config = utils.get_configuration()
+        ext_net = config.get('external_network') or ''
+        if not ext_net:
+            networks = [
+                net for net in
+                self.os_clients.network.list_networks()["networks"]
+                if net["admin_state_up"] and net["router:external"] and
+                len(net["subnets"])
+                ]
+        else:
+            networks = [net for net in
+                        self.os_clients.network.list_networks()["networks"]
+                        if net["name"] == ext_net]
+
+        if networks:
+            ext_net = networks[0]
+            logger.info("Using external net '{}'.".format(ext_net["name"]))
+        else:
+            ext_net = self.create_fake_external_network()
+        return ext_net
+
+    def create_flavor(self, name, ram=256, vcpus=1, disk=2):
+        logger.info("Creating a flavor {}".format(name))
+        return self.os_clients.compute.flavors.create(name, ram, vcpus, disk)
+
+    def create_sec_group(self, rulesets=None):
+        if rulesets is None:
+            rulesets = [
+                {
+                    # ssh
+                    'ip_protocol': 'tcp',
+                    'from_port': 22,
+                    'to_port': 22,
+                    'cidr': '0.0.0.0/0',
+                },
+                {
+                    # iperf3
+                    'ip_protocol': 'tcp',
+                    'from_port': 5201,
+                    'to_port': 5201,
+                    'cidr': '0.0.0.0/0',
+                },
+                {
+                    # ping
+                    'ip_protocol': 'icmp',
+                    'from_port': -1,
+                    'to_port': -1,
+                    'cidr': '0.0.0.0/0',
+                }
+            ]
+        sg_name = "spt-test-secgroup-{}".format(random.randrange(100, 999))
+        sg_desc = sg_name + " SPT"
+        secgroup = self.os_clients.compute.security_groups.create(
+            sg_name, sg_desc)
+        for ruleset in rulesets:
+            self.os_clients.compute.security_group_rules.create(
+                secgroup.id, **ruleset)
+        logger.info("Created a security group {}".format(sg_name))
+        return secgroup
+
+    def create_basic_server(self, image=None, flavor=None, net=None,
+                            availability_zone=None, sec_groups=(),
+                            keypair=None):
+        os_conn = self.os_clients
+        net = net or self.get_internal_network()
+        kwargs = {}
+        if sec_groups:
+            kwargs['security_groups'] = sec_groups
+        server = os_conn.compute.servers.create(
+            "spt-test-server-{}".format(random.randrange(100, 999)),
+            image, flavor, nics=[{"net-id": net["id"]}],
+            availability_zone=availability_zone, key_name=keypair, **kwargs)
+
+        return server
+
+    def get_vm(self, vm_id):
+        os_conn = self.os_clients
+        try:
+            vm = os_conn.compute.servers.find(id=vm_id)
+        except Exception as e:
+            raise Exception(
+                "Could not get the VM \"{}\": {}".format(vm_id, e))
+        return vm
+
+    def check_vm_is_active(self, vm_uuid, retry_delay=5, timeout=500):
+        vm = None
+        timeout_reached = False
+        start_time = time.time()
+        expected_state = 'ACTIVE'
+        while not timeout_reached:
+            vm = self.get_vm(vm_uuid)
+            if vm.status == expected_state:
+                logger.info(
+                    "VM {} is in {} status.".format(vm_uuid, vm.status))
+                break
+            if vm.status == 'ERROR':
+                break
+            time.sleep(retry_delay)
+            timeout_reached = (time.time() - start_time) > timeout
+        if vm.status != expected_state:
+            logger.info("VM {} is in {} status.".format(vm_uuid, vm.status))
+            raise TimeoutError(
+                "VM {vm_uuid} is expected to be in the '{expected_state}' "
+                "state, but is in the '{actual}' state instead.".format(
+                    vm_uuid=vm_uuid, expected_state=expected_state,
+                    actual=vm.status))
+
+    def create_network(self, tenant_id):
+        net_name = "spt-test-net-{}".format(random.randrange(100, 999))
+        net_body = {
+            'network': {
+                'name': net_name,
+                'tenant_id': tenant_id
+            }
+        }
+        net = self.os_clients.network.create_network(net_body)['network']
+        logger.info("Created internal network {}".format(net_name))
+        return net
+
+    def create_subnet(self, net, tenant_id, cidr=None):
+        subnet_name = "spt-test-subnet-{}".format(random.randrange(100, 999))
+        subnet_body = {
+            'subnet': {
+                "name": subnet_name,
+                'network_id': net['id'],
+                'ip_version': 4,
+                'cidr': cidr if cidr else '10.1.7.0/24',
+                'tenant_id': tenant_id
+            }
+        }
+        subnet = self.os_clients.network.create_subnet(subnet_body)['subnet']
+        logger.info("Created subnet {}".format(subnet_name))
+        return subnet
+
+    def create_router(self, ext_net, tenant_id):
+        name = 'spt-test-router-{}'.format(random.randrange(100, 999))
+        router_body = {
+            'router': {
+                'name': name,
+                'external_gateway_info': {
+                    'network_id': ext_net['id']
+                },
+                'tenant_id': tenant_id
+            }
+        }
+        router = self.os_clients.network.create_router(router_body)['router']
+        logger.info("Created a router {}".format(name))
+        return router
+
+    def create_network_resources(self):
+        tenant_id = self.get_admin_tenant().id
+        self.get_external_network()
+        net = self.create_network(tenant_id)
+        self.create_subnet(net, tenant_id)
+        return net
+
+    def list_nova_computes(self):
+        nova_services = self.os_clients.compute.hosts.list()
+        computes_list = [h for h in nova_services if h.service == "compute"]
+        return computes_list
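`OfficialClientManager` builds each OpenStack client on first access of the corresponding property and caches it in a `_`-prefixed attribute, so no session is created until a test actually needs that API. A minimal sketch of that lazy-property pattern, using a toy class (the names `LazyClients`, `created`, and the stand-in `object()` are illustrative, not part of the suite):

```python
class LazyClients(object):
    """Toy version of the lazy, cached client properties."""

    def __init__(self):
        self._compute = None
        self.created = 0  # instrumentation for this sketch only

    def _build_compute(self):
        # Stands in for novaclient.Client(...) construction
        self.created += 1
        return object()

    @property
    def compute(self):
        # Build on first access, then reuse the cached client
        if self._compute is None:
            self._compute = self._build_compute()
        return self._compute


clients = LazyClients()
first = clients.compute
second = clients.compute
assert first is second       # same cached client object
assert clients.created == 1  # built exactly once
```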
diff --git a/utils/ssh.py b/utils/ssh.py
new file mode 100644
index 0000000..29a56f0
--- /dev/null
+++ b/utils/ssh.py
@@ -0,0 +1,210 @@
+from io import StringIO
+import logging
+import select
+import utils
+import paramiko
+import time
+import os
+
+logger = logging.getLogger(__name__)
+
+# Suppress paramiko logging
+logging.getLogger("paramiko").setLevel(logging.WARNING)
+
+
+class SSHTransport(object):
+    def __init__(self, address, username, password=None,
+                 private_key=None, look_for_keys=False, *args, **kwargs):
+
+        self.address = address
+        self.username = username
+        self.password = password
+        if private_key is not None:
+            self.private_key = paramiko.RSAKey.from_private_key(
+                StringIO(private_key))
+        else:
+            self.private_key = None
+
+        self.look_for_keys = look_for_keys
+        self.buf_size = 1024
+        self.channel_timeout = 10.0
+
+    def _get_ssh_connection(self):
+        ssh = paramiko.SSHClient()
+        ssh.set_missing_host_key_policy(
+            paramiko.AutoAddPolicy())
+        ssh.connect(self.address, username=self.username,
+                    password=self.password, pkey=self.private_key,
+                    timeout=self.channel_timeout)
+        logger.debug("Successfully connected to: {0}".format(self.address))
+        return ssh
+
+    def _get_sftp_connection(self):
+        transport = paramiko.Transport((self.address, 22))
+        transport.connect(username=self.username,
+                          password=self.password,
+                          pkey=self.private_key)
+
+        return paramiko.SFTPClient.from_transport(transport)
+
+    def exec_sync(self, cmd):
+        logger.debug("Executing {0} on host {1}".format(cmd, self.address))
+        ssh = self._get_ssh_connection()
+        transport = ssh.get_transport()
+        channel = transport.open_session()
+        channel.fileno()
+        channel.exec_command(cmd)
+        channel.shutdown_write()
+        out_data = []
+        err_data = []
+        poll = select.poll()
+        poll.register(channel, select.POLLIN)
+
+        while True:
+            ready = poll.poll(self.channel_timeout)
+            if not ready:
+                continue
+            out_chunk = err_chunk = None
+            if channel.recv_ready():
+                out_chunk = channel.recv(self.buf_size)
+                out_data += out_chunk,
+            if channel.recv_stderr_ready():
+                err_chunk = channel.recv_stderr(self.buf_size)
+                err_data += err_chunk,
+            if channel.closed and not err_chunk and not out_chunk:
+                break
+        exit_status = channel.recv_exit_status()
+        logger.debug("Command {0} executed with status: {1}"
+                     .format(cmd, exit_status))
+        return (exit_status, b" ".join(out_data).strip(),
+                b" ".join(err_data).strip())
+
+    def exec_command(self, cmd):
+        exit_status, stdout, stderr = self.exec_sync(cmd)
+        return stdout
+
+    def check_call(self, command, error_info=None, expected=None,
+                   raise_on_err=True):
+        """Execute a command and check its exit code
+        :type command: str
+        :type error_info: str
+        :type expected: list
+        :type raise_on_err: bool
+        :rtype: tuple of (exit_code, stdout, stderr)
+        :raises: SystemExit if the exit code is unexpected and raise_on_err
+        """
+        if expected is None:
+            expected = [0]
+        ret = self.exec_sync(command)
+        exit_code, stdout_str, stderr_str = ret
+        if exit_code not in expected:
+            message = (
+                "{append}Command '{cmd}' returned exit code {code} while "
+                "expected {expected}\n"
+                "\tSTDOUT:\n"
+                "{stdout}"
+                "\n\tSTDERR:\n"
+                "{stderr}".format(
+                    append=error_info + '\n' if error_info else '',
+                    cmd=command,
+                    code=exit_code,
+                    expected=expected,
+                    stdout=stdout_str,
+                    stderr=stderr_str
+                ))
+            logger.error(message)
+            if raise_on_err:
+                raise SystemExit(message)
+        return ret
+
+    def put_file(self, source_path, destination_path):
+        sftp = self._get_sftp_connection()
+        sftp.put(source_path, destination_path)
+        sftp.close()
+
+    def put_iperf3_deb_packages_at_vms(self, source_directory,
+                                       destination_directory):
+        iperf_deb_files = [f for f in os.listdir(source_directory)
+                           if f.endswith(".deb")]
+        if not iperf_deb_files:
+            raise FileNotFoundError(
+                "iperf3 *.deb packages are not found locally at path {}. "
+                "Please recheck the 'iperf_deb_package_dir_path' variable in "
+                "global_config.yaml and check that the *.deb packages were "
+                "manually copied there.".format(source_directory))
+        for f in iperf_deb_files:
+            source_abs_path = "{}/{}".format(source_directory, f)
+            dest_abs_path = "{}/{}".format(destination_directory, f)
+            self.put_file(source_abs_path, dest_abs_path)
+
+    def get_file(self, source_path, destination_path):
+        sftp = self._get_sftp_connection()
+        sftp.get(source_path, destination_path)
+        sftp.close()
+
+    def _is_timed_out(self, start_time, timeout):
+        return (time.time() - start_time) > timeout
+
+    def check_vm_is_reachable_ssh(self, floating_ip, timeout=500, sleep=5):
+        bsleep = sleep
+        ssh = paramiko.SSHClient()
+        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
+        _start_time = time.time()
+        attempts = 0
+        while True:
+            try:
+                ssh.connect(floating_ip, username=self.username,
+                            password=self.password, pkey=self.private_key,
+                            timeout=self.channel_timeout)
+                logger.info("VM with FIP {} is reachable via SSH. Success!"
+                            "".format(floating_ip))
+                return True
+            except Exception as e:
+                ssh.close()
+                if self._is_timed_out(_start_time, timeout):
+                    logger.info("VM with FIP {} is not reachable via SSH. "
+                                "See details: {}".format(floating_ip, e))
+                    raise TimeoutError(
+                        "\nFailed to establish authenticated ssh connection "
+                        "to {} after {} attempts during {} seconds.\n{}"
+                        "".format(floating_ip, attempts, timeout, e))
+                attempts += 1
+                logger.info("Failed to establish authenticated ssh connection "
+                            "to {}. Number attempts: {}. Retry after {} "
+                            "seconds.".format(floating_ip, attempts, bsleep))
+                time.sleep(bsleep)
+
+
+class prepare_iperf(object):
+
+    def __init__(self, fip, user='ubuntu', password='password',
+                 private_key=None):
+
+        transport = SSHTransport(fip, user, password, private_key)
+        config = utils.get_configuration()
+
+        # Install iperf3 using apt or the pre-downloaded deb packages
+        internet_at_vms = str(config.get("internet_at_vms", "true"))
+        if internet_at_vms.lower() == 'false':
+            logger.info("Copying offline iperf3 deb packages, installing...")
+            path_to_iperf_deb = (config.get('iperf_deb_package_dir_path') or
+                                 "/artifacts/mos-spt/")
+            home_ubuntu = "/home/ubuntu/"
+            transport.put_iperf3_deb_packages_at_vms(path_to_iperf_deb,
+                                                     home_ubuntu)
+            transport.exec_command('sudo dpkg -i {}*.deb'.format(home_ubuntu))
+        else:
+            logger.info("Installing iperf3 using apt")
+            preparation_cmd = config.get('iperf_prep_string') or ''
+            transport.exec_command(preparation_cmd)
+            transport.exec_command('sudo apt-get update;'
+                                   'sudo apt-get install -y iperf3')
+
+        # Log the installed iperf3 package version
+        check = transport.exec_command('dpkg -l | grep iperf3')
+        logger.debug(check.decode('utf-8'))
+
+        # Start the iperf3 server in the background
+        transport.exec_command('nohup iperf3 -s > file 2>&1 &')
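Both `check_vm_is_active()` and `check_vm_is_reachable_ssh()` follow the same "smart waiter" shape added in this change: poll a condition, sleep between attempts, and raise `TimeoutError` with attempt/duration details once the deadline passes. A generic sketch of that pattern (the `wait_for` name and its attempt-count return value are illustrative, not part of the suite's API):

```python
import time


def wait_for(predicate, timeout=500, sleep=5):
    """Poll predicate() until it is truthy or timeout seconds elapse.

    Returns the number of attempts made (for the sketch's convenience);
    raises TimeoutError if the condition is never met in time.
    """
    start = time.time()
    attempts = 0
    while True:
        attempts += 1
        if predicate():
            return attempts
        if (time.time() - start) > timeout:
            raise TimeoutError(
                "Condition not met after {} attempts within {} seconds"
                "".format(attempts, timeout))
        time.sleep(sleep)
```

Usage would look like `wait_for(lambda: get_vm(uuid).status == 'ACTIVE', timeout=500, sleep=5)`, matching the defaults used by the waiters above.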