At least Python 3.6 is required for the tests.
Installation
cd mos-spt/
virtualenv .venv
. .venv/bin/activate
pip install -r requirements.txt
Configuration
Open the global_config.yaml file to override the settings.
Settings
The following options can be set in the global_config.yaml file.
| Environment Variable | Default | Description |
|---|---|---|
| IMAGE_SIZE_MB | 9000 | Size (in MB) of the image to upload/download in the Glance test |
| Environment Variable | Default | Description |
|---|---|---|
| flavor_name | spt-test | Flavor name |
| flavor_ram | 1536 | RAM size (in MB) of the test flavor |
| flavor_vcpus | 1 | Number of vCPUs of the test flavor |
| flavor_disk | 5 | Disk size (in GB) of the test flavor |
| image_name | cvp.ubuntu.2004 | Ubuntu cloud image used to create the VMs. Use a 20.04 image if internet_at_vms=false, since the offline packages are prepared for Ubuntu 20.04. Any other Ubuntu image can be used if internet_at_vms=true. |
| CMP_HOSTS | [] | Pair of compute hosts to create the VMs on (one VM per host). By default, a random pair from the Nova compute list is selected. To set a specific pair, set CMP_HOSTS: ["cmp001", "cmp002"] in the global_config.yaml file. |
| skipped_nodes | [] | Compute hosts to skip so that they are not selected for the CMP_HOSTS pair. To skip some nodes, set skipped_nodes: ["cmp003"] in the global_config.yaml file. Applies to the hw2hw test as well. |
| nova_timeout | 300 | Timeout (in seconds) for a VM to become ACTIVE. |
| external_network | public | External network name used to allocate the Floating IPs |
| custom_mtu | default | The MTU to set on the VMs. If "default" is set, the MTU is taken automatically from the newly created SPT internal networks; their default value comes from the Neutron configuration. To test the bandwidth with a specific custom MTU, set a value such as 8950. |
| ssh_timeout | 500 | Timeout (in seconds) for a VM to become reachable via SSH. |
| iperf_prep_string | "sudo /bin/bash -c 'echo "91.189.88.161 archive.ubuntu.com" >> /etc/hosts'" | Preparation command that adds the Ubuntu repository host to /etc/hosts on the VMs. |
| internet_at_vms | 'true' | If 'true', the VMs have Internet access and the tests can install iperf3 via apt update; apt install iperf3. If the VMs have no Internet access, set 'false' and iperf3 will be installed from offline *.deb packages. |
| iperf_deb_package_dir_path | /opt/packages/ | Path to the local directory containing the iperf3 *.deb packages. In the toolset offline images they are located at /opt/packages. Alternatively, download the iperf3 deb package and its dependencies and put them in a custom folder. |
| iperf_time | 60 | Time in seconds to transmit for (the iperf/iperf3 -t option value). Applies to the hw2hw test as well. |
| multiple_threads_number | 10 | Number of iperf/iperf3 parallel client threads to run (the iperf/iperf3 -P option value). Applies to the hw2hw test as well. |
| multiple_threads_iperf_utility | 'iperf' | The tool used for bandwidth measurements. Set 'iperf' to use version 2 or 'iperf3' to use version 3. This is also the name of the utility that is installed on the Ubuntu VMs from apt. Applies to the hw2hw test as well. |
If internet_at_vms=false, make sure that iperf_deb_package_dir_path is set correctly and contains the iperf3 deb package and its dependencies.
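For example, a minimal override in global_config.yaml for the VM2VM tests might look like the following sketch (the host names are the illustrative ones from the table above, not real nodes):

CMP_HOSTS: ["cmp001", "cmp002"]
internet_at_vms: 'false'
iperf_deb_package_dir_path: /opt/packages/
multiple_threads_iperf_utility: 'iperf3'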
| Environment Variable | Default | Description |
|---|---|---|
| skipped_nodes | [] | Compute hosts to skip so that they are not selected for the hw2hw pair. To skip some nodes, set their names as reported by "kubectl get nodes", for example skipped_nodes: ["cmp003", "some_node_name"] in the global_config.yaml file. Applies to the VM2VM tests as well. |
| iperf_time | 60 | Time in seconds to transmit for (the iperf/iperf3 -t option value). Applies to the VM2VM tests as well. |
| multiple_threads_number | 10 | Number of iperf/iperf3 parallel client threads to run (the iperf/iperf3 -P option value). Applies to the VM2VM tests as well. |
| multiple_threads_iperf_utility | 'iperf' | The tool used for bandwidth measurements. Set 'iperf' to use version 2 or 'iperf3' to use version 3. This is also the name of the utility that is installed on the Ubuntu VMs from apt. Applies to the VM2VM tests as well. |
| hw_nodes_list | [] | List of the compute hosts to measure the performance between. By default, random pairs are selected from the "kubectl get nodes -l openvswitch=enabled,openstack-compute-node=enabled" list. To set a specific pair, use the node names from the k8s nodes list, e.g. hw_nodes_list: ["cmp001", "cmp002"] in the global_config.yaml file. |
| mos_kubeconfig_path | "" | Path to the MOSK K8s config file. When executing inside the toolset pod, copy the config into the pod and set its path. |
| node_ssh_key_path | "" | Path to the private SSH key used to log in to the K8s nodes. When executing inside the toolset pod, copy the key file into the pod and set its path. |
| node_ssh_username | "mcc-user" | Username used to connect to the K8s nodes via SSH with the SSH key. |
| network_cidr | "" | Network CIDR to be used for running iperf between the nodes, e.g. the CIDR of the br-tenant or the storage network. Example: "10.23.195.0/25". |
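As an illustration only, the hw2hw-related part of global_config.yaml could be filled in as follows (the node names, kubeconfig path, and SSH key path are placeholders that depend on your environment):

hw_nodes_list: ["cmp001", "cmp002"]
mos_kubeconfig_path: "/path/to/kubeconfig"
node_ssh_key_path: "/path/to/ssh_key"
node_ssh_username: "mcc-user"
network_cidr: "10.23.195.0/25"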
Executing tests
Run tests:
pytest -sv --tb=short tests/
If a test is skipped and you want to know the reason, use the pytest -rs option:
pytest -rs --tb=short tests/
Enable logging
If something goes wrong, use the -o log_cli=true option to see detailed logs:
pytest -sv --tb=short -o log_cli=true tests/
By default, the log level is INFO (log_cli_level=info). To go deeper into the API requests (with URIs, payloads, etc.), set log_cli_level=debug in the pytest.ini file.
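For instance, assuming pytest.ini already contains a [pytest] section, the debug log level could be enabled like this (a sketch, not the shipped file contents):

[pytest]
log_cli = true
log_cli_level = debug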