Backport cvp-sanity from master to 2019.2.0
Related-Prod: #PROD-29210(PROD:29210)
Squashed commit of the following:
commit 8c05e2703aa328d9e22bc09360ea30723dc0dd74
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Wed Apr 24 11:47:31 2019 +0300
Add test steps to stdout if a test fails.
Related-Prod:#PROD-29995(PROD:29995)
Change-Id: Ie0a03d4d8896c7d7836cfd57736778f3896bcb87
commit a14488d565790992e8453d643a6fbea14bb25311
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Tue Apr 30 15:08:33 2019 +0300
Fix incorrect counting of backends
Split tests for cinder services into two tests
Change-Id: I74137b4cc31a82718fc2a17f5abfd117aacf9963
Fix-Issue:#PROD-29913(PROD:29913)
commit 10e2db4420d74db51259f55cc5b98482b53b116b
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Thu May 2 13:17:00 2019 +0300
Run opencontrail tests for OpenStack-type deployments only
Change-Id: I2b36bf33c4d3fde3fac37d669a4a2e8e449d4caf
Fix-Prod: #PROD-27782(PROD:27782)
commit 1db3888a0df328e8c41f3f465c9ed28bb1f95763
Merge: 80514de 50a2167
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Wed May 1 20:43:52 2019 +0000
Merge "test_oss launched if cicd node available"
commit 80514de4b630141ba42e6f4bb85bf5f6e0a15f72
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Thu Apr 25 12:33:28 2019 +0300
Exclude kdt-nodes from test_mounts
Change-Id: I1cb9c2521fff6e9cffe8d4d86c0abf149233c296
Related-Prod: #PROD-29774(PROD:29774)
commit 864f2326856b128aacad5ccba13227938541ce78
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Mon Apr 29 15:48:12 2019 -0500
[CVP] Add ext_net parameter
Change-Id: Ie0d80d86b6d527f5593b9525cf22bc8343b84839
Related-PROD: PROD-26972
commit dd17609d8f4e3a6a080b6cc1858139a0d3cf5057
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Fri Apr 26 15:29:10 2019 -0500
[CVP] Fix parameter name in test_check_services
Change-Id: I338bea5bb180ef9999d22b5acefc5af74f877ba3
Related-PROD: PROD-29928
commit 10b360319fafb711391884af9f2b484a15412c0d
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Wed Apr 24 18:16:43 2019 -0500
[CVP] Add sanity test to check vip address presence for groups
Change-Id: I8b26a8e30de7eadf76254f35afb0e2621b73ea52
Related-PROD: PROD-29845
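An illustrative sketch of what such a VIP presence check can look like, assuming the suite's local_salt_client/nodes_in_group fixtures; the pillar key used for the group VIP is a hypothetical placeholder, not the committed implementation:

    import pytest

    def test_vip_presence_sketch(local_salt_client, nodes_in_group):
        target = "L@" + ','.join(nodes_in_group)
        # '_param:cluster_vip_address' is an assumed pillar key for the group VIP
        vips = [v for v in local_salt_client.cmd(
            target, 'pillar.get', ['_param:cluster_vip_address'],
            expr_form='compound').values() if v]
        if not vips:
            pytest.skip("No VIP is defined for this group")
        holders = local_salt_client.cmd(
            target, 'cmd.run', ['ip a | grep -c "{}/" || true'.format(vips[0])],
            expr_form='compound')
        assert any(count != '0' for count in holders.values()), \
            "VIP {} is not assigned on any node of the group".format(vips[0])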
commit 577453f143d140353d8d62f6bd2f51a4b7011888
Merge: bcb27cd 4a79efd
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Tue Apr 30 22:52:31 2019 +0000
Merge "Added tests to check Drivetrain on K8s"
commit bcb27cd48482ba8daee5a2466482d2d9a30d0091
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Tue Apr 23 17:04:20 2019 -0500
[CVP] Disable public url check for gerrit and jenkins on k8s envs
Change-Id: Iab1637d234e8d597635758c886f7a40165928597
Related-PROD: PROD-28324
commit 50a2167b35f743c27432e6ac6a4dc3634c3b6acb
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Thu Apr 25 12:20:52 2019 +0300
test_oss launched if cicd node available
Change-Id: Ief119e18851b5ec39103195ca183db1d82fc5eb8
Related-Prod: #PROD-29775(PROD:29775)
commit 67aaec97464e5750388d760cb5d35672fd194419
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Mon Apr 15 18:05:13 2019 -0500
[CVP] Do not skip test_jenkins_jobs_branch by default
Change-Id: I2b636e089d77d17833f4839f55808369e1f1ebce
Related-PROD: PROD-29505
commit 4a79efda8e8151760cd54f2cc4b0561aaf536bc0
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Wed Apr 24 11:12:55 2019 +0300
Added tests to check Drivetrain on K8s
Change-Id: I86b9bbccf771cee6d6d294bb76f0c3979e269e86
Related-Prod: #PROD-29625(PROD:29625)
commit 4bfd2ee3f0e1b83ebb6928ea5a490be19b4c9166
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Wed Apr 10 21:56:58 2019 -0500
[CVP] Refactor salt client class
Change-Id: I91cfffe1c8d5df0224657ce9e36be9063b56f0b3
Related-PROD: PROD-28981
Related-PROD: PROD-28729
Related-PROD: PROD-28624
Related-PROD: PROD-29286
commit b7e866cfa45c2887c7b3671463774c3dc78cab26
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Wed Apr 10 13:49:56 2019 +0300
Set requirement for existing cicd-nodes in drivetrain tests
Related-Task: #PROD-28514(PROD:28514)
Change-Id: I95268fae93cb1fe0eed5276468d0e8e1512c92d2
commit 45ae6b65ca436867fcf5b6ac7144e9f837299ad3
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date: Tue Mar 5 18:52:44 2019 +0300
Added test to check mounted file systems on control plane VMs
Problem: sometimes after a KVM host is rebooted, its VMs come up with
missing mounts (e.g. a ctl node can have keystone volumes missing).
We hit such an issue. It can happen because of hardware / host system
performance issues or network misconfiguration, but it can also appear
after e.g. HA testing, or when a KVM node is rebooted. Such a test
will detect the inconsistent mounts.
Created the test to check that mounted file systems are consistent
on the virtual control plane nodes (e.g. on ctl, prx, etc. nodes).
Nodes like kvm, cmp, ceph OSD and nodes with docker (like k8s nodes,
cid, mon) are skipped.
To skip other nodes if needed, add the node or group to the config
(skipped_nodes, skipped_groups).
Change-Id: Iab5311060790bd2fdfc8587e4cb8fc63cc3a0a13
Related-PROD: PROD-28247
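An illustrative sketch of the comparison idea (not the committed test), assuming the suite's local_salt_client/nodes_in_group fixtures and a plain findmnt listing:

    import json
    import pytest

    def test_mounts_sketch(local_salt_client, nodes_in_group):
        # list mount points on every node of the group; kvm, cmp, ceph OSD and
        # docker nodes are assumed to be excluded by the group/skip configuration
        mounts = local_salt_client.cmd(
            "L@" + ','.join(nodes_in_group), 'cmd.run',
            ['findmnt -rno TARGET | sort'], expr_form='compound')
        if len(mounts) < 2:
            pytest.skip("Nothing to compare - only 1 node")
        assert len(set(mounts.values())) == 1, \
            "Mounted file systems differ within the group:\n{}".format(
                json.dumps(mounts, indent=4))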
commit 835b0cb957748e49e21bafd43c0ca9da60707e92
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Wed Apr 10 17:10:20 2019 +0300
[test_cinder_services] Verify backend existence before testing
Added docstring to the method
Change-Id: I511b9876e5a65f21a4cc823e616a29166b5b9cb4
Fixes-bug:#PROD-28523(PROD:28523)
commit 16a8f414ac8cc8d43404995b2002d3a943f893ca
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Mon Apr 8 17:10:38 2019 +0300
Move test_drivetrain_jenkins_job to the end of the drivetrain tests queue
to avoid test_drivetrain_jenkins_job failures.
Additional changes:
* added docstrings to test methods
* fixed pep8
Related-Bug: #PROD-27372(PROD:27372)
Change-Id: I34c679d66e483c107e6dda583b3c2e1ceeca5ced
commit b91c3147e20eb00e5429beefbb8e9a2e157bd3c0
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Tue Mar 26 16:49:44 2019 -0500
[CVP] Fix test_drivetrain_components_and_versions for new MCP updates logic
Related-PROD: PROD-28954
Change-Id: I9ea99b36115834da7d7110de8811730d11df4da4
commit cbf1f3ae648129b26fdd5183878ce7abab9cc794
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Tue Apr 9 20:02:10 2019 +0300
[cvp-spt] change adm_tenant in the create_subnet function
Change-Id: I8ebf04b658d5f17846c23f13670b7b63c1e9c771
Fixes-Issue: #PROD-29311(PROD:29311)
commit d52b5fe2722ea50eac65c5f8f2a55bab9f1db583
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Thu Mar 28 11:11:35 2019 -0500
[CVP] Fix test_jenkins_jobs_branch according to new MCP updates logic
Related-PROD: PROD-28954
Change-Id: I023fd6f57ac5f52642aa963cef5cbc9fc1a74264
commit ab919649a64e1a379e11d84d3c21604d027e9645
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Wed Mar 27 20:05:38 2019 +0200
[test_check_services] Change logic to check services
Change-Id: I1eb0ff077d497f95a0004bfd8ff4f25538acbfd6
Fix-bug: #PROD-26431(PROD:26431)
commit 8fd295c1a4b037b9aad5c1fe485351d4f9ed457c
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Thu Mar 7 13:46:43 2019 +0200
Add possibility to define list of services/modules/packages to skip
Change-Id: Ice289221e6e99181682ddf9155f390c388e590ad
Related-Prod: #PROD-27215(PROD:27215)
commit f139db45e7fe0cc9b62178dc8cd1f799344723a1
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date: Tue Mar 5 11:18:48 2019 +0300
Added test to check nova services, hosts are consistent
Added a test to check that nova hosts are consistent across nova
services, openstack hosts and hypervisors. While deploying clouds we
faced several cases where nova hosts were inconsistent after deployment
(due to incorrect deployment steps): the hypervisor list was missing
some computes even though they were present in nova-services.
This can lead to issues like "host is not mapped to any cell"
or VM boot errors, so it is better to check that these nova lists are
consistent.
Related-PROD: PROD-28210
Change-Id: I9705417817e6075455dc4ccf5e25f2ab3439108c
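A rough sketch of the consistency check; the targeting and awk parsing below are assumptions, not the exact committed test:

    def test_nova_hosts_sketch(local_salt_client):
        # hosts reported by nova service-list (nova-compute binaries only)
        services = local_salt_client.cmd(
            'keystone:server', 'cmd.run',
            [". /root/keystonercv3; nova service-list | awk '/nova-compute/ {print $6}'"],
            expr_form='pillar')
        # hosts reported by the hypervisor list
        hypervisors = local_salt_client.cmd(
            'keystone:server', 'cmd.run',
            [". /root/keystonercv3; nova hypervisor-list | awk 'NR>3 {print $4}'"],
            expr_form='pillar')
        svc_hosts = set(list(services.values())[0].split())
        hyp_hosts = set(list(hypervisors.values())[0].split())
        assert svc_hosts == hyp_hosts, \
            "nova-compute services and hypervisors are inconsistent: {}".format(
                sorted(svc_hosts.symmetric_difference(hyp_hosts)))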
commit 04ac2000016338fa283b9c34931ec3e96c595302
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Fri Mar 1 13:12:41 2019 +0200
cvp-spt: the size of the image used to check Glance upload/download
speed can be changed using an env var.
It is set to 2000 MB by default (because of free space limits on cid* nodes).
The vm2vm test gracefully skips if no image is found.
Change-Id: I3aa5f50bf75b48df528de8c4196ae51c23de4b9e
Fixes-bug: #PROD-27763(PROD:27763)
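Illustratively, the change boils down to reading the size from the environment with a 2000 MB default; the variable name below is a hypothetical placeholder, not the actual cvp-spt setting:

    import os

    # image size (MB) used for the Glance upload/download speed check;
    # defaults to 2000 MB because of limited free space on cid* nodes
    IMAGE_SIZE_MB = int(os.environ.get('IMAGE_SIZE_MB', 2000))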
commit 1ee3a651d10d6b32e1b34adef8c703e2036ffae1
Merge: 90ed2ea c4f520c
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Fri Mar 1 23:33:22 2019 +0000
Merge "Remove accidentally added file"
commit c4f520c98136b8aa35d3ec02f93244bb090da5c3
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Tue Feb 26 17:58:48 2019 -0600
Remove accidentally added file
Related-PROD: PROD-28153
Change-Id: Iee4c7b8fc59fd88fb148ed24776cae1af54998f1
commit 90ed2eadd9c1ce0f2f83d70b34a37144dc0da791
Merge: d006dbf 9b74486
Author: harhipova <harhipova@mirantis.com>
Date: Fri Mar 1 15:39:05 2019 +0000
Merge "Add ntp_skipped_nodes parameter, do not compare time on nodes with salt master"
commit d006dbf457567ab7e43d165e751cae5bf9fe64ff
Merge: 6661b23 24b71aa
Author: harhipova <harhipova@mirantis.com>
Date: Fri Mar 1 15:38:22 2019 +0000
Merge "Do not add node without virbr0* interfaces for comparison"
commit 6661b2332faad465a3e50bd6bf38f05731a95c9d
Merge: 5a0d02b 25215d9
Author: harhipova <harhipova@mirantis.com>
Date: Fri Mar 1 15:37:37 2019 +0000
Merge "Add more public url tests for UIs"
commit 24b71aa285748e8912fd780673f321f32e09a8c8
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Wed Feb 27 17:02:05 2019 -0600
Do not add node without virbr0* interfaces for comparison
Related-PROD: PROD-27217
Change-Id: I704290f5b0708b96e03cbbb96674fc4355639723
commit 9b74486023b04708c9db2ee45ba4d0f0f6410c6b
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Tue Feb 26 17:33:43 2019 -0600
Add ntp_skipped_nodes parameter, do not compare time on nodes with salt master
Related-PROD: PROD-21993
Related-PROD: PROD-27182
Change-Id: Id8247d0b28301d098569f2ae3bd08ff7cfcad154
commit 5a0d02b3f0dfc8e525e2bd49736a352a1e101d06
Merge: e792be5 90fdfb5
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Fri Feb 22 18:10:30 2019 +0000
Merge "Added test to check nodes status in MAAS"
commit 25215d9ededf612f3e9354e9a6232eea6b958bc6
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Thu Jan 31 16:35:57 2019 -0600
Add more public url tests for UIs
Related-PROD: PROD-23746
Change-Id: Ie680d0d934cf36f4147b9d9a079f53469d26eccc
commit 90fdfb5e3cccbba22f8fe60a2fe119cab7308b37
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date: Sun Jan 27 23:01:07 2019 +0300
Added test to check nodes status in MAAS
MAAS should have the nodes in 'Deployed' status. At the same time,
a QA engineer can add some nodes to the skipped_nodes list so they
are not checked.
Change-Id: I5407523f700fd76bb88cd5383c73cfce55cdd907
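A hedged sketch of the idea; the maas CLI profile name, targeting and grep parsing are assumptions, not the committed implementation:

    def test_maas_deployed_sketch(local_salt_client):
        out = local_salt_client.cmd(
            'maas:region', 'cmd.run',
            ["maas admin machines read | grep -oP '\"status_name\": \"\\K[^\"]+'"],
            expr_form='pillar')
        statuses = list(out.values())[0].split('\n')
        not_deployed = [s for s in statuses if s and s != 'Deployed']
        assert not not_deployed, \
            "Some MAAS machines are not in Deployed status: {}".format(not_deployed)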
commit e792be50fa47222389e2e55f7d46e01b59a88e52
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Wed Feb 13 13:28:11 2019 +0200
Fix version parsing in test_drivetrain_components_and_versions
Change-Id: I3f036a7e3c324be8c50d6c5d7071ee12a5b3127e
Fixes-Bug: #PROD-27454(PROD:27454)
Closes-Task: #PROD-27253(PROD:27253)
commit 6baf78783bad9dbdf1fb1928077507f5f9a70a1a
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date: Fri Jan 25 19:09:30 2019 +0300
Added test to check all packages are latest
Added a test to check that all packages on the nodes are the latest
available and are not upgradable. Added the possibility to skip some
packages in global_config if they should not be upgraded.
Problem description:
The 'test_check_package_versions' test checks that versions are
consistent across the nodes of the same group, but there is no test
that the actual versions are really correct and come from the correct
MCP release. I had a cloud with some packages installed from the wrong
repositories, not from the required MCP release, and the fix was to
upgrade those packages. So there is a need for a test that checks the
packages are the latest.
At the same time, if several packages should not be upgraded and have
the correct version even though Installed != Candidate, it is possible
to skip them.
Currently the test is skipped by default ("skip_test": True in the
global_config.yaml file). Set it to False to run the test.
Change-Id: Iddfab8b3d7fb4e72870aa0791e9da95a66f0ccfd
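A minimal sketch of the idea using 'apt list --upgradable'; the exact command, config key handling and utils import path are assumptions:

    import json
    import utils  # the suite's configuration helper (path per the renamed layout)

    def test_packages_latest_sketch(local_salt_client, nodes_in_group):
        skipped_pkgs = utils.get_configuration().get('skipped_packages', [])
        result = local_salt_client.cmd(
            "L@" + ','.join(nodes_in_group), 'cmd.run',
            ["apt list --upgradable 2>/dev/null | grep -v '^Listing'"],
            expr_form='compound')
        upgradable = {}
        for node, out in result.items():
            pkgs = [line.split('/')[0] for line in out.split('\n') if line]
            pkgs = [p for p in pkgs if p not in skipped_pkgs]
            if pkgs:
                upgradable[node] = pkgs
        assert not upgradable, \
            "Some packages are not the latest available:\n{}".format(
                json.dumps(upgradable, indent=4))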
commit c48585ffded98bf907b98a69b61635829c48f2c4
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date: Mon Feb 4 19:38:54 2019 +0300
Added test to check ntpq peers state
The existing test 'test_ntp_sync' checks that the time is equal across
the nodes. Sometimes there can be an NTP issue while the time is still
correct. For example, Contrail can report "NTP state unsynchronized"
when none of the remote peers is chosen. So there is a need to run
"ntpq -pn" on the nodes and check the peers state.
The new test gets the ntpq peers state and checks that a system peer
is declared.
Change-Id: Icb8799b2323a446a3ec3dc6db54fd1d9de0356e5
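A minimal sketch of the peer-state check; in 'ntpq -pn' output a line starting with '*' marks the selected system peer:

    def test_ntpq_system_peer_sketch(local_salt_client):
        out = local_salt_client.cmd('*', 'cmd.run', ['ntpq -pn'],
                                    expr_form='compound')
        no_system_peer = [node for node, table in out.items()
                          if not any(line.startswith('*')
                                     for line in table.split('\n'))]
        assert not no_system_peer, \
            "No system peer is declared on nodes: {}".format(no_system_peer)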
commit ae0e72af8b63a65fb9e1fcfb7a626532da4c14b1
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Tue Feb 12 13:57:26 2019 +0200
Disabled test result for test_duplicate_ips. It reacts to ens3 networks
Related-Bug: #PROD-27449(PROD:27449)
Change-Id: Ia28dcf09a89b4a6bee8a746a7ce1a069b74ce8cf
commit 47e42daa5287c858daefbab8eeefe2d8f406feb5
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Tue Feb 12 11:49:26 2019 +0200
Disabled test results for test_cinder_services. It affects test_drivetrain job voting
Fixes-Bug:#PROD-27436(PROD:27436)
Change-Id: I0da3365d7f51a8863b10d9450321c7f5119b842e
commit f9a95caa34f0eb1043e2c9655d096d0d69a6d4c2
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Wed Jan 30 15:47:00 2019 +0200
Add stacklight tests from stacklight-pytest repo
Change-Id: I2d2ea6201b6495c35bed57d71450b30b0e0ff49f
Related-Task: #PROD-21318(PROD:21318)
commit f2660bdee650fa0240a3e9b34ca2b92f7d1d1e00
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Fri Feb 8 17:25:39 2019 +0200
Retry test for docker services replicas
Change-Id: Id4b983fe575516f33be4b401a005b23097c0fe96
Fixes-Bug: #PROD-27372(PROD:27372)
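A generic sketch of the retry idea; names and timings are illustrative, not the committed code:

    import time

    def wait_for_replicas(get_broken_services, retries=5, interval=30):
        # re-check replica counts a few times before failing, since docker
        # services may still be converging right after a deployment action
        broken = get_broken_services()
        for _ in range(retries):
            if not broken:
                break
            time.sleep(interval)
            broken = get_broken_services()
        return broken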
commit 6f34fbbfcb424f99e2a6c81ac4eb73ac4e40ce6b
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Fri Feb 8 11:19:41 2019 +0200
Change jenkins_jobs_branch test to check release branches
Change-Id: I2d0551f6291f79dc20b2d031e4e669c4009d0aa3
commit 42ed43a37b96846cddb1d69985f1e15780c8a697
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date: Sun Jan 27 23:58:35 2019 +0300
Added timeout in iperf command for Vm2Vm test
Added a timeout to the iperf command for the Vm2Vm test to gather more
statistics: sometimes a 10s timeout is not enough when the network
speed is unstable.
Change-Id: I4912ccf8ba346a8b427cf6bd6181ce6e6c180fb2
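Illustratively, the change amounts to passing an explicit run time to iperf; the value below is an assumption:

    # before: iperf ran with its 10s default, which is too short on unstable links
    # after:  an explicit -t <seconds> yields more samples for the statistics
    iperf_cmd = 'iperf -c {ip} -t {seconds}'.format(ip='192.168.1.2', seconds=60)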
commit 7c5f3fdef6477ac08dec4ace6630662b8adfe458
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date: Tue Feb 5 18:01:33 2019 +0300
Added test for K8S dashboard availability
In MCP the K8S dashboard is enabled by default. Need to check that
the dashboard is available.
Change-Id: I5b94ecce46d5f43491c9cf65a15a50461214e9c4
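A hedged sketch of such an availability check; the pillar path and port below are assumptions, not the committed test:

    import requests

    def test_k8s_dashboard_sketch(local_salt_client):
        addr = local_salt_client.cmd(
            'kubernetes:master', 'pillar.get',
            ['_param:kubernetes_control_address'], expr_form='pillar')
        url = 'https://{}:8443'.format(list(addr.values())[0])
        response = requests.get(url, verify=False, timeout=30)
        assert response.status_code == 200, \
            "K8S dashboard at {} is unavailable (HTTP {})".format(
                url, response.status_code)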
commit b8ec40e14917ec3b69dfcfe6ddcf36500dbc4754
Merge: 6dc2b00 ac4a14e
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Thu Jan 31 18:37:51 2019 +0000
Merge "Add a new test to check for duplicate IPs in an env"
commit 6dc2b00bc4b059968daa3d49775ec77e00b903ed
Merge: 09b1ae8 03af292
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Tue Jan 29 22:47:00 2019 +0000
Merge "Small fix: test nodes in ElasticSearch will get > than 500 nodes"
commit ac4a14e24f5bc1f53096627ae7d4f4cb60183ea0
Author: Dmitriy Kruglov <dkruglov@mirantis.com>
Date: Wed Jan 23 09:37:13 2019 +0100
Add a new test to check for duplicate IPs in an env
Change-Id: I08ad6b22f252a0f8ea5bc4a4edd2fe566826868b
Closes-PROD: #PROD-24347
commit 09b1ae88229cb8055a6c291097b3f6b0e0eb63c8
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date: Mon Jan 28 13:06:01 2019 +0300
Added StackLight UI tests for public endpoints
Change-Id: Ib60f278b77cc6673394c70b6b0ab16f74bc74366
commit df243ef14cbb7ab2707d2e7a2c292863f5010760
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date: Thu Nov 8 18:17:17 2018 +0300
Added UI tests for Alerta: internal and public addresses
The related fix https://gerrit.mcp.mirantis.com/#/c/34661
adds tests for public endpoints for the rest of
StackLight UI.
Change-Id: Ie94ea242b19e30b7ed7143e01444125182fb6305
commit ac850455f686f1092077e2c95c9ab0d466f099c6
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date: Sun Jan 27 22:31:38 2019 +0300
Added test to check minions status
The CVP Sanity tests automatically skip a node if its minion does not
respond within 1 sec (salt_timeout in config).
Sometimes all the tests can pass while some KVM nodes, along with their
control plane VMs, are down, and the CVP tests will neither test nor
report this.
The new test checks that all minions are up.
Change-Id: Ib8495aeb043448b36aea85bb31ee2650d655075e
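A rough sketch of the minion check; the salt-key invocation on the master and its JSON output shape are assumptions:

    import json

    def test_minions_status_sketch(local_salt_client):
        registered = local_salt_client.cmd(
            'salt:master', 'cmd.run', ['salt-key -l acc --out=json'],
            expr_form='pillar')
        responding = local_salt_client.cmd('*', 'test.ping', expr_form='compound')
        accepted = json.loads(list(registered.values())[0]).get('minions', [])
        down = sorted(set(accepted) - set(responding.keys()))
        assert not down, "Some minions did not respond: {}".format(down)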
commit 03af292569edc29db72bbdf97a331eceab3dc05c
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date: Mon Jan 28 15:55:02 2019 +0300
Small fix: test nodes in ElasticSearch will get > than 500 nodes
Some big production clouds have more than 500 nodes in total, so the
test is not valid for such clouds: it fetches only 500 nodes instead
of all nodes of the cloud. Changed the request to fetch 1000 nodes.
Change-Id: I58493fc55e1deb2c988d61e7c8a4f8ed971a60d4
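Illustratively; the endpoint and query body are assumptions, only the 500-to-1000 size bump mirrors the described change:

    import requests

    es_ip, es_port = '127.0.0.1', 9200   # placeholders for the real ElasticSearch VIP
    query = {"size": 0,
             "aggs": {"nodes": {"terms": {"field": "Hostname", "size": 1000}}}}
    resp = requests.post('http://{0}:{1}/log-*/_search'.format(es_ip, es_port),
                         json=query)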
commit 16e93fb7375fdfb87901b4a074f17ef09e722e56
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Wed Jan 23 19:03:01 2019 +0200
Renamed the tests folder to make it consistent with cvp-runner.groovy
and the CVP jobs in the cluster Jenkins.
Returned the rsync service to inconsistency_rule.
Related-Task: #PROD-23604(PROD:23604)
Change-Id: I94afe350bd1d9c184bafe8e9e270aeb4c6c24c50
commit 27a41d814cc9d4f5bbc7f780a3d9e6042a6aaa4c
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Thu Jan 17 17:40:40 2019 +0200
Check kubectl on kubernetes:master only
Change-Id: I8ae308eb903694feffe65b14f7f857dfaf6b689c
Fixes-Bug: #PROD-26555(PROD:26555)
commit 55cc129f3e93a3801a4abf620b40c1e5d7c53fe7
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Tue Jan 8 14:22:18 2019 +0200
Common Dockerfile for CVP-Sanity and CVP-SPT
Related-Task: #PROD-26312(PROD:26312)
Change-Id: I457a8d5c6ff73d944518f6b0c2c568f8286728a9
commit 753a03e19780b090776ce5e2c27d74c44c5750a3
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Tue Jan 15 17:35:25 2019 -0600
[CVP] Add checks for docker_registry, docker_visualizer and cvp jobs version
Related-PROD: PROD-21801
Change-Id: I79c8c5eb0833aca6d077129e3ec81ff3afb06143
commit 7b70537c2b7bfe29d1dc84915a21da5238f120f0
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Tue Jan 15 18:40:29 2019 -0600
[CVP] Get drivetrain_version parameter from reclass
Related-PROD: PROD-21801
Change-Id: I628480b053e7b03c09c55d5b997e9dc74aa98c90
commit aaa8e6e95861e4e3f51c4d28dc7fcb0ed8ab8578
Merge: c0a7f0c 30bd90c
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Fri Jan 11 16:45:10 2019 +0000
Merge "Add assert for return code"
commit 30bd90c986234343aabf03ec5f174026d02d4988
Author: Tatyana Leontovich <tleontovich@mirantis.com>
Date: Fri Jan 11 16:26:32 2019 +0200
Add assert for return code
* Add assertion that the response was successful
before processing response.text()
* Add a header to the request to avoid a 406 error
Change-Id: If41598e8c1ef5d9bf36847a750008d1203b4ed84
Closes-Prod: PROD-26423
commit c0a7f0c01adc6a391f662dc306902cde138658ce
Author: Tatyana Leontovich <tleontovich@mirantis.com>
Date: Fri Jan 11 16:08:50 2019 +0200
Remove rsync service from inconsistency_rule
The rsync service exists on all kvm nodes, so remove it
from inconsistency_rule to avoid false-negative results
Change-Id: I25ce5db2990645992c8fa7fb6cc33f082903b295
Closes-PROD: PROD-26431
commit 5d965b230b4b5348d425510dc4667ced0c7e8ec3
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Wed Jan 9 16:29:31 2019 -0600
Fix ceph tests filename
Change-Id: I67ffc9f4da27d8b64c0334f3a6ae3f8f05dcd3b2
commit 0763a040044b20f8229292b547798d4ed99ca7e3
Merge: b8d04d5 f77b50b
Author: Oleksii Zhurba <ozhurba@mirantis.com>
Date: Wed Jan 9 21:47:45 2019 +0000
Merge "Added Ceph health test"
commit b8d04d575b1cb8b05aace823de64d74e0d0e4c48
Author: Hanna Arhipova <harhipova@mirantis.com>
Date: Fri Dec 28 13:19:17 2018 +0200
Remove ldap_server component from test_drivetrain_components_and_versions
Change-Id: Idc8e8581f828334db511b0ca2149ad812e71a6c3
Fixes-Bug: #PROD-26151(PROD:26151)
commit f77b50bdb2b1fb2b747ac8e1b1262ee88fdfd2ed
Author: Ievgeniia Zadorozhna <izadorozhna@mirantis.com>
Date: Wed Dec 12 19:41:15 2018 +0300
Added Ceph health test
Change-Id: If2f318169e841cdd278c76c237f040b09d7b87ea
Change-Id: I9687de3dbdae9d5dc3deb94dbd2afcd5e7f0ec7d
diff --git a/.gitignore b/.gitignore
index 61e0f6b..3757d15 100644
--- a/.gitignore
+++ b/.gitignore
@@ -29,6 +29,7 @@
issue/
env/
.env/
+venv/
3rdparty/
.tox
.cache
@@ -37,4 +38,3 @@
.ropeproject
.idea
.hypothesis
-
diff --git a/Dockerfile b/Dockerfile
index c6a1fe8..960ec96 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -9,12 +9,14 @@
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
USER root
-RUN mkdir -p /var/lib/cvp-sanity/
-COPY cvp-sanity/ /var/lib/cvp-sanity
ARG UBUNTU_MIRROR_URL="http://archive.ubuntu.com/ubuntu"
+ARG SL_TEST_REPO='http://gerrit.mcp.mirantis.com/mcp/stacklight-pytest'
+ARG SL_TEST_BRANCH='master'
WORKDIR /var/lib/
-
+COPY bin/ /usr/local/bin/
+COPY test_set/ ./
+#
RUN set -ex; pushd /etc/apt/ && echo > sources.list && \
echo 'Acquire::Languages "none";' > apt.conf.d/docker-no-languages && \
echo 'Acquire::GzipIndexes "true"; Acquire::CompressionTypes::Order:: "gz";' > apt.conf.d/docker-gzip-indexes && \
@@ -23,15 +25,39 @@
echo "deb [arch=amd64] $UBUNTU_MIRROR_URL xenial-updates main restricted universe multiverse" >> sources.list && \
echo "deb [arch=amd64] $UBUNTU_MIRROR_URL xenial-backports main restricted universe multiverse" >> sources.list && \
popd ; apt-get update && apt-get upgrade -y && \
- apt-get install -y curl git-core iputils-ping libffi-dev libldap2-dev libsasl2-dev libssl-dev patch python-dev python-pip python3-dev vim-tiny wget \
- python-virtualenv python3-virtualenv && \
-#Due to upstream bug we should use fixed version of pip
- python -m pip install --upgrade 'pip==9.0.3' && \
- pip install -r cvp-sanity/requirements.txt && \
+ apt-get install -y build-essential curl git-core iputils-ping libffi-dev libldap2-dev libsasl2-dev libssl-dev patch python-dev python-pip vim-tiny wget \
+ python-virtualenv \
+# Enable these packages while porting to Python3 => python3-virtualenv python3-dev \
+# Due to upstream bug we should use fixed version of pip
+ && python -m pip install --upgrade 'pip==9.0.3' \
+ # initialize cvp sanity test suite
+ && pushd cvp-sanity \
+ && virtualenv --python=python2 venv \
+ && . venv/bin/activate \
+ && pip install -r requirements.txt \
+ && deactivate \
+ && popd \
+ # initialize cvp spt test suite
+ && pushd cvp-spt \
+ && virtualenv --python=python2 venv \
+ && . venv/bin/activate \
+ && pip install -r requirements.txt \
+ && deactivate \
+ && popd \
+ # initialize cvp stacklight test suite
+ && mkdir cvp-stacklight \
+ && pushd cvp-stacklight \
+ && virtualenv --system-site-packages venv \
+ && . venv/bin/activate \
+ && git clone -b $SL_TEST_BRANCH $SL_TEST_REPO \
+ && pip install ./stacklight-pytest \
+ && pip install -r stacklight-pytest/requirements.txt \
+ && deactivate \
+ && popd \
# Cleanup
- apt-get -y purge libx11-data xauth libxmuu1 libxcb1 libx11-6 libxext6 ppp pppconfig pppoeconf popularity-contest cpp gcc g++ libssl-doc && \
+ && apt-get -y purge libx11-data xauth libxmuu1 libxcb1 libx11-6 libxext6 ppp pppconfig pppoeconf popularity-contest cpp gcc g++ libssl-doc && \
apt-get -y autoremove; apt-get -y clean ; rm -rf /root/.cache; rm -rf /var/lib/apt/lists/* && \
rm -rf /tmp/* ; rm -rf /var/tmp/* ; rm -rfv /etc/apt/sources.list.d/* ; echo > /etc/apt/sources.list
-COPY bin /usr/local/bin/
-ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
-# docker build --no-cache -t cvp-sanity-checks:$(date "+%Y_%m_%d_%H_%M_%S") .
+
+ENTRYPOINT ["entrypoint.sh"]
+# docker build --no-cache -t cvp-sanity-checks:test_latest .
diff --git a/bin/entrypoint.sh b/bin/entrypoint.sh
index 99cd175..daba3d4 100755
--- a/bin/entrypoint.sh
+++ b/bin/entrypoint.sh
@@ -1,14 +1,4 @@
#!/bin/bash
set -xe
-
-function _info(){
- set +x
- echo -e "=== INFO: pip freeze:"
- pip freeze | sort
- echo -e "============================"
- set -x
-}
-
-_info
exec "$@"
diff --git a/bin/with_venv.sh b/bin/with_venv.sh
new file mode 100755
index 0000000..b120ebe
--- /dev/null
+++ b/bin/with_venv.sh
@@ -0,0 +1,32 @@
+#!/bin/bash
+
+# This file used as an interface for automatic activating of virtualenv.
+# Should be placed into PATH
+# Example: with_venv.sh python --version
+
+set -xe
+
+function _info(){
+ set +x
+ echo -e "===== virtualenv info: ====="
+ python --version
+ pip freeze | sort
+ echo -e "============================"
+ set -x
+}
+
+function activate_venv(){
+ set +x
+ if [ -f venv/bin/activate ]; then
+ echo "Activating venv in $(pwd)"
+ source venv/bin/activate && echo "Activated successfully"
+ else
+ echo "WARNING: No venv found in $(pwd)"
+ return 1
+ fi
+ set -x
+}
+
+activate_venv &&
+_info &&
+exec "$@"
diff --git a/cvp-sanity/Makefile b/cvp-sanity/Makefile
deleted file mode 100644
index 908381e..0000000
--- a/cvp-sanity/Makefile
+++ /dev/null
@@ -1,5 +0,0 @@
-init:
- pip install -r requirements.txt
-
-# test:
-# nosetests tests
diff --git a/cvp-sanity/cvp_checks/tests/ceph/test_ceph_osd.py b/cvp-sanity/cvp_checks/tests/ceph/test_ceph_osd.py
deleted file mode 100644
index 6969d8a..0000000
--- a/cvp-sanity/cvp_checks/tests/ceph/test_ceph_osd.py
+++ /dev/null
@@ -1,14 +0,0 @@
-import pytest
-
-
-def test_check_ceph_osd(local_salt_client):
- osd_fail = local_salt_client.cmd(
- 'ceph:osd',
- 'cmd.run',
- ['ceph osd tree | grep down'],
- expr_form='pillar')
- if not osd_fail:
- pytest.skip("Ceph is not found on this environment")
- assert not osd_fail.values()[0], \
- "Some osds are in down state or ceph is not found".format(
- osd_fail.values()[0])
diff --git a/cvp-sanity/cvp_checks/tests/conftest.py b/cvp-sanity/cvp_checks/tests/conftest.py
deleted file mode 100644
index 21af490..0000000
--- a/cvp-sanity/cvp_checks/tests/conftest.py
+++ /dev/null
@@ -1 +0,0 @@
-from cvp_checks.fixtures.base import *
diff --git a/cvp-sanity/cvp_checks/tests/test_cinder_services.py b/cvp-sanity/cvp_checks/tests/test_cinder_services.py
deleted file mode 100644
index e6b8c8e..0000000
--- a/cvp-sanity/cvp_checks/tests/test_cinder_services.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import pytest
-
-
-def test_cinder_services(local_salt_client):
- cinder_backends_info = local_salt_client.cmd(
- 'cinder:controller',
- 'pillar.get',
- ['cinder:controller:backend'],
- expr_form='pillar')
- if not cinder_backends_info:
- pytest.skip("Cinder service or cinder:controller pillar \
- are not found on this environment.")
- service_down = local_salt_client.cmd(
- 'keystone:server',
- 'cmd.run',
- ['. /root/keystonercv3; cinder service-list | grep "down\|disabled"'],
- expr_form='pillar')
- cinder_volume = local_salt_client.cmd(
- 'keystone:server',
- 'cmd.run',
- ['. /root/keystonercv3; cinder service-list | grep -c "volume"'],
- expr_form='pillar')
- backends_cinder = cinder_backends_info[cinder_backends_info.keys()[0]]
- backends_num = len(backends_cinder.keys())
- assert service_down[service_down.keys()[0]] == '', \
- '''Some cinder services are in wrong state'''
- assert cinder_volume[cinder_volume.keys()[0]] == str(backends_num), \
- 'Number of cinder-volume services ({0}) does not match ' \
- 'number of volume backends ({1})'.format(
- cinder_volume[cinder_volume.keys()[0]], str(backends_num))
diff --git a/cvp-sanity/cvp_checks/tests/test_default_gateway.py b/cvp-sanity/cvp_checks/tests/test_default_gateway.py
deleted file mode 100644
index 69fd116..0000000
--- a/cvp-sanity/cvp_checks/tests/test_default_gateway.py
+++ /dev/null
@@ -1,23 +0,0 @@
-import json
-import pytest
-import os
-from cvp_checks import utils
-
-
-def test_check_default_gateways(local_salt_client, nodes_in_group):
- netstat_info = local_salt_client.cmd(
- "L@"+','.join(nodes_in_group), 'cmd.run', ['ip r | sed -n 1p'], expr_form='compound')
-
- gateways = {}
- nodes = netstat_info.keys()
-
- for node in nodes:
- if netstat_info[node] not in gateways:
- gateways[netstat_info[node]] = [node]
- else:
- gateways[netstat_info[node]].append(node)
-
- assert len(gateways.keys()) == 1, \
- "There were found few gateways: {gw}".format(
- gw=json.dumps(gateways, indent=4)
- )
diff --git a/cvp-sanity/cvp_checks/tests/test_drivetrain.py b/cvp-sanity/cvp_checks/tests/test_drivetrain.py
deleted file mode 100644
index f232909..0000000
--- a/cvp-sanity/cvp_checks/tests/test_drivetrain.py
+++ /dev/null
@@ -1,336 +0,0 @@
-import jenkins
-from xml.dom import minidom
-from cvp_checks import utils
-import json
-import pytest
-import time
-import os
-from pygerrit2 import GerritRestAPI, HTTPBasicAuth
-from requests import HTTPError
-import git
-import ldap
-import ldap.modlist as modlist
-
-def join_to_gerrit(local_salt_client, gerrit_user, gerrit_password):
- gerrit_port = local_salt_client.cmd(
- 'I@gerrit:client and not I@salt:master',
- 'pillar.get',
- ['_param:haproxy_gerrit_bind_port'],
- expr_form='compound').values()[0]
- gerrit_address = local_salt_client.cmd(
- 'I@gerrit:client and not I@salt:master',
- 'pillar.get',
- ['_param:haproxy_gerrit_bind_host'],
- expr_form='compound').values()[0]
- url = 'http://{0}:{1}'.format(gerrit_address,gerrit_port)
- auth = HTTPBasicAuth(gerrit_user, gerrit_password)
- rest = GerritRestAPI(url=url, auth=auth)
- return rest
-
-def join_to_jenkins(local_salt_client, jenkins_user, jenkins_password):
- jenkins_port = local_salt_client.cmd(
- 'I@jenkins:client and not I@salt:master',
- 'pillar.get',
- ['_param:haproxy_jenkins_bind_port'],
- expr_form='compound').values()[0]
- jenkins_address = local_salt_client.cmd(
- 'I@jenkins:client and not I@salt:master',
- 'pillar.get',
- ['_param:haproxy_jenkins_bind_host'],
- expr_form='compound').values()[0]
- jenkins_url = 'http://{0}:{1}'.format(jenkins_address,jenkins_port)
- server = jenkins.Jenkins(jenkins_url, username=jenkins_user, password=jenkins_password)
- return server
-
-def get_password(local_salt_client,service):
- password = local_salt_client.cmd(
- service,
- 'pillar.get',
- ['_param:openldap_admin_password'],
- expr_form='pillar').values()[0]
- return password
-
-def test_drivetrain_gerrit(local_salt_client):
- gerrit_password = get_password(local_salt_client,'gerrit:client')
- gerrit_error = ''
- current_date = time.strftime("%Y%m%d-%H.%M.%S", time.localtime())
- test_proj_name = "test-dt-{0}".format(current_date)
- gerrit_port = local_salt_client.cmd(
- 'I@gerrit:client and not I@salt:master',
- 'pillar.get',
- ['_param:haproxy_gerrit_bind_port'],
- expr_form='compound').values()[0]
- gerrit_address = local_salt_client.cmd(
- 'I@gerrit:client and not I@salt:master',
- 'pillar.get',
- ['_param:haproxy_gerrit_bind_host'],
- expr_form='compound').values()[0]
- try:
- #Connecting to gerrit and check connection
- server = join_to_gerrit(local_salt_client,'admin',gerrit_password)
- gerrit_check = server.get("/changes/?q=owner:self%20status:open")
- #Check deleteproject plugin and skip test if the plugin is not installed
- gerrit_plugins = server.get("/plugins/?all")
- if 'deleteproject' not in gerrit_plugins:
- pytest.skip("Delete-project plugin is not installed")
- #Create test project and add description
- server.put("/projects/"+test_proj_name)
- server.put("/projects/"+test_proj_name+"/description",json={"description":"Test DriveTrain project","commit_message": "Update the project description"})
- except HTTPError, e:
- gerrit_error = e
- try:
- #Create test folder and init git
- repo_dir = os.path.join(os.getcwd(),test_proj_name)
- file_name = os.path.join(repo_dir, current_date)
- repo = git.Repo.init(repo_dir)
- #Add remote url for this git repo
- origin = repo.create_remote('origin', 'http://admin:{1}@{2}:{3}/{0}.git'.format(test_proj_name,gerrit_password,gerrit_address,gerrit_port))
- #Add commit-msg hook to automatically add Change-Id to our commit
- os.system("curl -Lo {0}/.git/hooks/commit-msg 'http://admin:{1}@{2}:{3}/tools/hooks/commit-msg' > /dev/null 2>&1".format(repo_dir,gerrit_password,gerrit_address,gerrit_port))
- os.system("chmod u+x {0}/.git/hooks/commit-msg".format(repo_dir))
- #Create a test file
- f = open(file_name, 'w+')
- f.write("This is a test file for DriveTrain test")
- f.close()
- #Add file to git and commit it to Gerrit for review
- repo.index.add([file_name])
- repo.index.commit("This is a test commit for DriveTrain test")
- repo.git.push("origin", "HEAD:refs/for/master")
- #Get change id from Gerrit. Set Code-Review +2 and submit this change
- changes = server.get("/changes/?q=project:{0}".format(test_proj_name))
- last_change = changes[0].get('change_id')
- server.post("/changes/{0}/revisions/1/review".format(last_change),json={"message":"All is good","labels":{"Code-Review":"+2"}})
- server.post("/changes/{0}/submit".format(last_change))
- except HTTPError, e:
- gerrit_error = e
- finally:
- #Delete test project
- server.post("/projects/"+test_proj_name+"/deleteproject~delete")
- assert gerrit_error == '',\
- 'Something is wrong with Gerrit'.format(gerrit_error)
-
-def test_drivetrain_openldap(local_salt_client):
- '''Create a test user 'DT_test_user' in openldap,
- add the user to admin group, login using the user to Jenkins.
- Add the user to devops group in Gerrit and then login to Gerrit,
- using test_user credentials. Finally, delete the user from admin
- group and openldap
- '''
- ldap_password = get_password(local_salt_client,'openldap:client')
- #Check that ldap_password is exists, otherwise skip test
- if not ldap_password:
- pytest.skip("Openldap service or openldap:client pillar \
- are not found on this environment.")
- ldap_port = local_salt_client.cmd(
- 'I@openldap:client and not I@salt:master',
- 'pillar.get',
- ['_param:haproxy_openldap_bind_port'],
- expr_form='compound').values()[0]
- ldap_address = local_salt_client.cmd(
- 'I@openldap:client and not I@salt:master',
- 'pillar.get',
- ['_param:haproxy_openldap_bind_host'],
- expr_form='compound').values()[0]
- ldap_dc = local_salt_client.cmd(
- 'openldap:client',
- 'pillar.get',
- ['_param:openldap_dn'],
- expr_form='pillar').values()[0]
- ldap_con_admin = local_salt_client.cmd(
- 'openldap:client',
- 'pillar.get',
- ['openldap:client:server:auth:user'],
- expr_form='pillar').values()[0]
- ldap_url = 'ldap://{0}:{1}'.format(ldap_address,ldap_port)
- ldap_error = ''
- ldap_result = ''
- gerrit_result = ''
- gerrit_error = ''
- jenkins_error = ''
- #Test user's CN
- test_user_name = 'DT_test_user'
- test_user = 'cn={0},ou=people,{1}'.format(test_user_name,ldap_dc)
- #Admins group CN
- admin_gr_dn = 'cn=admins,ou=groups,{0}'.format(ldap_dc)
- #List of attributes for test user
- attrs = {}
- attrs['objectclass'] = ['organizationalRole','simpleSecurityObject','shadowAccount']
- attrs['cn'] = test_user_name
- attrs['uid'] = test_user_name
- attrs['userPassword'] = 'aSecretPassw'
- attrs['description'] = 'Test user for CVP DT test'
- searchFilter = 'cn={0}'.format(test_user_name)
- #Get a test job name from config
- config = utils.get_configuration()
- jenkins_cvp_job = config['jenkins_cvp_job']
- #Open connection to ldap and creating test user in admins group
- try:
- ldap_server = ldap.initialize(ldap_url)
- ldap_server.simple_bind_s(ldap_con_admin,ldap_password)
- ldif = modlist.addModlist(attrs)
- ldap_server.add_s(test_user,ldif)
- ldap_server.modify_s(admin_gr_dn,[(ldap.MOD_ADD, 'memberUid', [test_user_name],)],)
- #Check search test user in LDAP
- searchScope = ldap.SCOPE_SUBTREE
- ldap_result = ldap_server.search_s(ldap_dc, searchScope, searchFilter)
- except ldap.LDAPError, e:
- ldap_error = e
- try:
- #Check connection between Jenkins and LDAP
- jenkins_server = join_to_jenkins(local_salt_client,test_user_name,'aSecretPassw')
- jenkins_version = jenkins_server.get_job_name(jenkins_cvp_job)
- #Check connection between Gerrit and LDAP
- gerrit_server = join_to_gerrit(local_salt_client,'admin',ldap_password)
- gerrit_check = gerrit_server.get("/changes/?q=owner:self%20status:open")
- #Add test user to devops-contrib group in Gerrit and check login
- _link = "/groups/devops-contrib/members/{0}".format(test_user_name)
- gerrit_add_user = gerrit_server.put(_link)
- gerrit_server = join_to_gerrit(local_salt_client,test_user_name,'aSecretPassw')
- gerrit_result = gerrit_server.get("/changes/?q=owner:self%20status:open")
- except HTTPError, e:
- gerrit_error = e
- except jenkins.JenkinsException, e:
- jenkins_error = e
- finally:
- ldap_server.modify_s(admin_gr_dn,[(ldap.MOD_DELETE, 'memberUid', [test_user_name],)],)
- ldap_server.delete_s(test_user)
- ldap_server.unbind_s()
- assert ldap_error == '', \
- '''Something is wrong with connection to LDAP:
- {0}'''.format(e)
- assert jenkins_error == '', \
- '''Connection to Jenkins was not established:
- {0}'''.format(e)
- assert gerrit_error == '', \
- '''Connection to Gerrit was not established:
- {0}'''.format(e)
- assert ldap_result !=[], \
- '''Test user was not found'''
-
-def test_drivetrain_jenkins_job(local_salt_client):
- jenkins_password = get_password(local_salt_client,'jenkins:client')
- server = join_to_jenkins(local_salt_client,'admin',jenkins_password)
- #Getting Jenkins test job name from configuration
- config = utils.get_configuration()
- jenkins_test_job = config['jenkins_test_job']
- if not server.get_job_name(jenkins_test_job):
- server.create_job(jenkins_test_job, jenkins.EMPTY_CONFIG_XML)
- if server.get_job_name(jenkins_test_job):
- next_build_num = server.get_job_info(jenkins_test_job)['nextBuildNumber']
- #If this is first build number skip building check
- if next_build_num != 1:
- #Check that test job is not running at this moment,
- #Otherwise skip the test
- last_build_num = server.get_job_info(jenkins_test_job)['lastBuild'].get('number')
- last_build_status = server.get_build_info(jenkins_test_job,last_build_num)['building']
- if last_build_status:
- pytest.skip("Test job {0} is already running").format(jenkins_test_job)
- server.build_job(jenkins_test_job)
- timeout = 0
- #Use job status True by default to exclude timeout between build job and start job.
- job_status = True
- while job_status and ( timeout < 180 ):
- time.sleep(10)
- timeout += 10
- job_status = server.get_build_info(jenkins_test_job,next_build_num)['building']
- job_result = server.get_build_info(jenkins_test_job,next_build_num)['result']
- else:
- pytest.skip("The job {0} was not found").format(test_job_name)
- assert job_result == 'SUCCESS', \
- '''Test job '{0}' build was not successfull or timeout is too small
- '''.format(jenkins_test_job)
-
-def test_drivetrain_services_replicas(local_salt_client):
- salt_output = local_salt_client.cmd(
- 'I@gerrit:client',
- 'cmd.run',
- ['docker service ls'],
- expr_form='compound')
- wrong_items = []
- for line in salt_output[salt_output.keys()[0]].split('\n'):
- if line[line.find('/') - 1] != line[line.find('/') + 1] \
- and 'replicated' in line:
- wrong_items.append(line)
- assert len(wrong_items) == 0, \
- '''Some DriveTrain services doesn't have expected number of replicas:
- {}'''.format(json.dumps(wrong_items, indent=4))
-
-
-def test_drivetrain_components_and_versions(local_salt_client):
- """ This test compares drivetrain components and their versions
- collected from the cloud vs collected from pillars.
- """
- table_with_docker_services = local_salt_client.cmd('I@gerrit:client',
- 'cmd.run',
- ['docker service ls --format "{{.Image}}"'],
- expr_form='compound')
- table_from_pillar = local_salt_client.cmd('I@gerrit:client',
- 'pillar.get',
- ['docker:client:images'],
- expr_form='compound')
-
- mismatch = {}
- actual_images = {}
- for image in set(table_with_docker_services[table_with_docker_services.keys()[0]].split('\n')):
- actual_images[image.split(":")[0]] = image.split(":")[-1]
- for image in set(table_from_pillar[table_from_pillar.keys()[0]]):
- im_name = image.split(":")[0]
- if im_name not in actual_images:
- mismatch[im_name] = 'not found on env'
- elif image.split(":")[-1] != actual_images[im_name]:
- mismatch[im_name] = 'has {actual} version instead of {expected}'.format(
- actual=actual_images[im_name], expected=image.split(":")[-1])
- assert len(mismatch) == 0, \
- '''Some DriveTrain components doesn't have expected versions:
- {}'''.format(json.dumps(mismatch, indent=4))
-
-
-def test_jenkins_jobs_branch(local_salt_client):
- """ This test compares Jenkins jobs versions
- collected from the cloud vs collected from pillars.
- """
- excludes = ['upgrade-mcp-release', 'deploy-update-salt']
-
- config = utils.get_configuration()
- drivetrain_version = config.get('drivetrain_version', '')
- if not drivetrain_version:
- pytest.skip("drivetrain_version is not defined. Skipping")
-
- jenkins_password = get_password(local_salt_client, 'jenkins:client')
- version_mismatch = []
- server = join_to_jenkins(local_salt_client, 'admin', jenkins_password)
- for job_instance in server.get_jobs():
- job_name = job_instance.get('name')
- if job_name in excludes:
- continue
-
- job_config = server.get_job_config(job_name)
- xml_data = minidom.parseString(job_config)
- BranchSpec = xml_data.getElementsByTagName('hudson.plugins.git.BranchSpec')
-
- # We use master branch for pipeline-library in case of 'testing,stable,nighlty' versions
- # Leave proposed version as is
- # in other cases we get release/{drivetrain_version} (e.g release/2019.2.0)
- if drivetrain_version in ['testing','nightly','stable']:
- expected_version = 'master'
- else:
- expected_version = local_salt_client.cmd(
- 'I@gerrit:client',
- 'pillar.get',
- ['jenkins:client:job:{}:scm:branch'.format(job_name)],
- expr_form='compound').values()[0]
-
- if not BranchSpec:
- print("No BranchSpec has found for {} job".format(job_name))
- continue
-
- actual_version = BranchSpec[0].getElementsByTagName('name')[0].childNodes[0].data
-
- if (actual_version not in [expected_version, "release/{}".format(drivetrain_version)]):
- version_mismatch.append("Job {0} has {1} branch."
- "Expected {2}".format(job_name,
- actual_version,
- expected_version))
- assert len(version_mismatch) == 0, \
- '''Some DriveTrain jobs have version/branch mismatch:
- {}'''.format(json.dumps(version_mismatch, indent=4))
diff --git a/cvp-sanity/cvp_checks/tests/test_etc_hosts.py b/cvp-sanity/cvp_checks/tests/test_etc_hosts.py
deleted file mode 100644
index 1db29c8..0000000
--- a/cvp-sanity/cvp_checks/tests/test_etc_hosts.py
+++ /dev/null
@@ -1,23 +0,0 @@
-import pytest
-import json
-import os
-from cvp_checks import utils
-
-
-def test_etc_hosts(local_salt_client):
- active_nodes = utils.get_active_nodes()
- nodes_info = local_salt_client.cmd(
- utils.list_to_target_string(active_nodes, 'or'), 'cmd.run',
- ['cat /etc/hosts'],
- expr_form='compound')
- result = {}
- for node in nodes_info.keys():
- for nd in nodes_info.keys():
- if node not in nodes_info[nd]:
- if node in result:
- result[node]+=','+nd
- else:
- result[node]=nd
- assert len(result) <= 1, \
- "Some hosts are not presented in /etc/hosts: {0}".format(
- json.dumps(result, indent=4))
diff --git a/cvp-sanity/cvp_checks/tests/test_mtu.py b/cvp-sanity/cvp_checks/tests/test_mtu.py
deleted file mode 100644
index 9054ba3..0000000
--- a/cvp-sanity/cvp_checks/tests/test_mtu.py
+++ /dev/null
@@ -1,66 +0,0 @@
-import pytest
-import json
-from cvp_checks import utils
-import os
-
-
-def test_mtu(local_salt_client, nodes_in_group):
- testname = os.path.basename(__file__).split('.')[0]
- config = utils.get_configuration()
- skipped_ifaces = config.get(testname)["skipped_ifaces"] or \
- ["bonding_masters", "lo", "veth", "tap", "cali", "qv", "qb", "br-int", "vxlan"]
- total = {}
- network_info = local_salt_client.cmd(
- "L@"+','.join(nodes_in_group), 'cmd.run', ['ls /sys/class/net/'], expr_form='compound')
-
- kvm_nodes = local_salt_client.cmd(
- 'salt:control', 'test.ping', expr_form='pillar').keys()
-
- if len(network_info.keys()) < 2:
- pytest.skip("Nothing to compare - only 1 node")
-
- for node, ifaces_info in network_info.iteritems():
- if node in kvm_nodes:
- kvm_info = local_salt_client.cmd(node, 'cmd.run',
- ["virsh list | "
- "awk '{print $2}' | "
- "xargs -n1 virsh domiflist | "
- "grep -v br-pxe | grep br- | "
- "awk '{print $1}'"])
- ifaces_info = kvm_info.get(node)
- node_ifaces = ifaces_info.split('\n')
- ifaces = {}
- for iface in node_ifaces:
- for skipped_iface in skipped_ifaces:
- if skipped_iface in iface:
- break
- else:
- iface_mtu = local_salt_client.cmd(node, 'cmd.run',
- ['cat /sys/class/'
- 'net/{}/mtu'.format(iface)])
- ifaces[iface] = iface_mtu.get(node)
- total[node] = ifaces
-
- nodes = []
- mtu_data = []
- my_set = set()
-
- for node in total:
- nodes.append(node)
- my_set.update(total[node].keys())
- for interf in my_set:
- diff = []
- row = []
- for node in nodes:
- if interf in total[node].keys():
- diff.append(total[node][interf])
- row.append("{}: {}".format(node, total[node][interf]))
- else:
- row.append("{}: No interface".format(node))
- if diff.count(diff[0]) < len(nodes):
- row.sort()
- row.insert(0, interf)
- mtu_data.append(row)
- assert len(mtu_data) == 0, \
- "Several problems found: {0}".format(
- json.dumps(mtu_data, indent=4))
diff --git a/cvp-sanity/cvp_checks/tests/test_nova_services.py b/cvp-sanity/cvp_checks/tests/test_nova_services.py
deleted file mode 100644
index 8fdadd6..0000000
--- a/cvp-sanity/cvp_checks/tests/test_nova_services.py
+++ /dev/null
@@ -1,16 +0,0 @@
-import pytest
-
-
-def test_nova_services_status(local_salt_client):
- result = local_salt_client.cmd(
- 'keystone:server',
- 'cmd.run',
- ['. /root/keystonercv3; nova service-list | grep "down\|disabled" | grep -v "Forced down"'],
- expr_form='pillar')
-
- if not result:
- pytest.skip("Nova service or keystone:server pillar \
- are not found on this environment.")
-
- assert result[result.keys()[0]] == '', \
- '''Some nova services are in wrong state'''
diff --git a/cvp-sanity/cvp_checks/tests/test_oss.py b/cvp-sanity/cvp_checks/tests/test_oss.py
deleted file mode 100644
index 58a4151..0000000
--- a/cvp-sanity/cvp_checks/tests/test_oss.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import requests
-import csv
-import json
-
-
-def test_oss_status(local_salt_client):
- result = local_salt_client.cmd(
- 'docker:swarm:role:master',
- 'pillar.fetch',
- ['haproxy:proxy:listen:stats:binds:address'],
- expr_form='pillar')
- HAPROXY_STATS_IP = [node for node in result if result[node]]
- proxies = {"http": None, "https": None}
- csv_result = requests.get('http://{}:9600/haproxy?stats;csv"'.format(
- result[HAPROXY_STATS_IP[0]]),
- proxies=proxies).content
- data = csv_result.lstrip('# ')
- wrong_data = []
- list_of_services = ['aptly', 'openldap', 'gerrit', 'jenkins', 'postgresql',
- 'pushkin', 'rundeck', 'elasticsearch']
- for service in list_of_services:
- check = local_salt_client.cmd(
- '{}:client'.format(service),
- 'test.ping',
- expr_form='pillar')
- if check:
- lines = [row for row in csv.DictReader(data.splitlines())
- if service in row['pxname']]
- for row in lines:
- info = "Service {0} with svname {1} and status {2}".format(
- row['pxname'], row['svname'], row['status'])
- if row['svname'] == 'FRONTEND' and row['status'] != 'OPEN':
- wrong_data.append(info)
- if row['svname'] != 'FRONTEND' and row['status'] != 'UP':
- wrong_data.append(info)
-
- assert len(wrong_data) == 0, \
- '''Some haproxy services are in wrong state
- {}'''.format(json.dumps(wrong_data, indent=4))
diff --git a/cvp-sanity/cvp_checks/tests/test_packet_checker.py b/cvp-sanity/cvp_checks/tests/test_packet_checker.py
deleted file mode 100644
index 21a6a6b..0000000
--- a/cvp-sanity/cvp_checks/tests/test_packet_checker.py
+++ /dev/null
@@ -1,93 +0,0 @@
-import pytest
-import json
-from cvp_checks import utils
-
-def test_check_package_versions(local_salt_client, nodes_in_group):
- exclude_packages = utils.get_configuration().get("skipped_packages", [])
- packages_versions = local_salt_client.cmd("L@"+','.join(nodes_in_group),
- 'lowpkg.list_pkgs',
- expr_form='compound')
- # Let's exclude cid01 and dbs01 nodes from this check
- exclude_nodes = local_salt_client.cmd("I@galera:master or I@gerrit:client",
- 'test.ping',
- expr_form='compound').keys()
- total_nodes = [i for i in packages_versions.keys() if i not in exclude_nodes]
- if len(total_nodes) < 2:
- pytest.skip("Nothing to compare - only 1 node")
-
- nodes = []
- pkts_data = []
- packages_names = set()
-
- for node in total_nodes:
- nodes.append(node)
- packages_names.update(packages_versions[node].keys())
-
- for deb in packages_names:
- if deb in exclude_packages:
- continue
- diff = []
- row = []
- for node in nodes:
- if deb in packages_versions[node].keys():
- diff.append(packages_versions[node][deb])
- row.append("{}: {}".format(node, packages_versions[node][deb]))
- else:
- row.append("{}: No package".format(node))
- if diff.count(diff[0]) < len(nodes):
- row.sort()
- row.insert(0, deb)
- pkts_data.append(row)
- assert len(pkts_data) <= 1, \
- "Several problems found: {0}".format(
- json.dumps(pkts_data, indent=4))
-
-
-def test_check_module_versions(local_salt_client, nodes_in_group):
- exclude_modules = utils.get_configuration().get("skipped_modules", [])
- pre_check = local_salt_client.cmd(
- "L@"+','.join(nodes_in_group),
- 'cmd.run',
- ['dpkg -l | grep "python-pip "'],
- expr_form='compound')
- if pre_check.values().count('') > 0:
- pytest.skip("pip is not installed on one or more nodes")
-
- exclude_nodes = local_salt_client.cmd("I@galera:master or I@gerrit:client",
- 'test.ping',
- expr_form='compound').keys()
- total_nodes = [i for i in pre_check.keys() if i not in exclude_nodes]
-
- if len(total_nodes) < 2:
- pytest.skip("Nothing to compare - only 1 node")
- list_of_pip_packages = local_salt_client.cmd("L@"+','.join(nodes_in_group),
- 'pip.freeze', expr_form='compound')
-
- nodes = []
-
- pkts_data = []
- packages_names = set()
-
- for node in total_nodes:
- nodes.append(node)
- packages_names.update([x.split("=")[0] for x in list_of_pip_packages[node]])
- list_of_pip_packages[node] = dict([x.split("==") for x in list_of_pip_packages[node]])
-
- for deb in packages_names:
- if deb in exclude_modules:
- continue
- diff = []
- row = []
- for node in nodes:
- if deb in list_of_pip_packages[node].keys():
- diff.append(list_of_pip_packages[node][deb])
- row.append("{}: {}".format(node, list_of_pip_packages[node][deb]))
- else:
- row.append("{}: No module".format(node))
- if diff.count(diff[0]) < len(nodes):
- row.sort()
- row.insert(0, deb)
- pkts_data.append(row)
- assert len(pkts_data) <= 1, \
- "Several problems found: {0}".format(
- json.dumps(pkts_data, indent=4))
diff --git a/cvp-sanity/cvp_checks/tests/test_single_vip.py b/cvp-sanity/cvp_checks/tests/test_single_vip.py
deleted file mode 100644
index fe6cb5f..0000000
--- a/cvp-sanity/cvp_checks/tests/test_single_vip.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import pytest
-from cvp_checks import utils
-import os
-from collections import Counter
-
-
-def test_single_vip(local_salt_client, nodes_in_group):
- local_salt_client.cmd("L@"+','.join(nodes_in_group), 'saltutil.sync_all', expr_form='compound')
- nodes_list = local_salt_client.cmd(
- "L@"+','.join(nodes_in_group), 'grains.item', ['ipv4'], expr_form='compound')
-
- ipv4_list = []
-
- for node in nodes_list:
- ipv4_list.extend(nodes_list.get(node).get('ipv4'))
-
- cnt = Counter(ipv4_list)
-
- for ip in cnt:
- if ip == '127.0.0.1':
- continue
- elif cnt[ip] > 1:
- assert "VIP IP duplicate found " \
- "\n{}".format(ipv4_list)
diff --git a/cvp-sanity/cvp_checks/utils/__init__.py b/cvp-sanity/cvp_checks/utils/__init__.py
deleted file mode 100644
index aeb4cd8..0000000
--- a/cvp-sanity/cvp_checks/utils/__init__.py
+++ /dev/null
@@ -1,171 +0,0 @@
-import os
-import yaml
-import requests
-import re
-import sys, traceback
-
-
-class AuthenticationError(Exception):
- pass
-
-
-class salt_remote:
- def cmd(self, tgt, fun, param=None, expr_form=None, tgt_type=None):
- config = get_configuration()
- url = config['SALT_URL'].strip()
- if not re.match("^(http|https)://", url):
- raise AuthenticationError("Salt URL should start \
- with http or https, given - {}".format(url))
- proxies = {"http": None, "https": None}
- headers = {'Accept': 'application/json'}
- login_payload = {'username': config['SALT_USERNAME'],
- 'password': config['SALT_PASSWORD'], 'eauth': 'pam'}
- accept_key_payload = {'fun': fun, 'tgt': tgt, 'client': 'local',
- 'expr_form': expr_form, 'tgt_type': tgt_type,
- 'timeout': config['salt_timeout']}
- if param:
- accept_key_payload['arg'] = param
-
- try:
- login_request = requests.post(os.path.join(url, 'login'),
- headers=headers, data=login_payload,
- proxies=proxies)
- if not login_request.ok:
- raise AuthenticationError("Authentication to SaltMaster failed")
-
- request = requests.post(url, headers=headers,
- data=accept_key_payload,
- cookies=login_request.cookies,
- proxies=proxies)
-
- response = request.json()['return'][0]
- return response
-
- except Exception as e:
- print ("\033[91m\nConnection to SaltMaster "
- "was not established.\n"
- "Please make sure that you "
- "provided correct credentials.\n"
- "Error message: {}\033[0m\n".format(e.message or e)
- )
- traceback.print_exc(file=sys.stdout)
- sys.exit()
-
-
-def init_salt_client():
- local = salt_remote()
- return local
-
-
-def list_to_target_string(node_list, separator, add_spaces=True):
- if add_spaces:
- separator = ' ' + separator.strip() + ' '
- return separator.join(node_list)
-
-
-def get_monitoring_ip(param_name):
- local_salt_client = init_salt_client()
- salt_output = local_salt_client.cmd(
- 'salt:master',
- 'pillar.get',
- ['_param:{}'.format(param_name)],
- expr_form='pillar')
- return salt_output[salt_output.keys()[0]]
-
-
-def get_active_nodes(test=None):
- config = get_configuration()
- local_salt_client = init_salt_client()
-
- skipped_nodes = config.get('skipped_nodes') or []
- if test:
- testname = test.split('.')[0]
- if 'skipped_nodes' in config.get(testname).keys():
- skipped_nodes += config.get(testname)['skipped_nodes'] or []
- if skipped_nodes != ['']:
- print "\nNotice: {0} nodes will be skipped".format(skipped_nodes)
- nodes = local_salt_client.cmd(
- '* and not ' + list_to_target_string(skipped_nodes, 'and not'),
- 'test.ping',
- expr_form='compound')
- else:
- nodes = local_salt_client.cmd('*', 'test.ping')
- return nodes
-
-
-def calculate_groups():
- config = get_configuration()
- local_salt_client = init_salt_client()
- node_groups = {}
- nodes_names = set ()
- expr_form = ''
- all_nodes = set(local_salt_client.cmd('*', 'test.ping'))
- if 'groups' in config.keys() and 'PB_GROUPS' in os.environ.keys() and \
- os.environ['PB_GROUPS'].lower() != 'false':
- nodes_names.update(config['groups'].keys())
- expr_form = 'compound'
- else:
- for node in all_nodes:
- index = re.search('[0-9]{1,3}$', node.split('.')[0])
- if index:
- nodes_names.add(node.split('.')[0][:-len(index.group(0))])
- else:
- nodes_names.add(node)
- expr_form = 'pcre'
-
- gluster_nodes = local_salt_client.cmd('I@salt:control and '
- 'I@glusterfs:server',
- 'test.ping', expr_form='compound')
- kvm_nodes = local_salt_client.cmd('I@salt:control and not '
- 'I@glusterfs:server',
- 'test.ping', expr_form='compound')
-
- for node_name in nodes_names:
- skipped_groups = config.get('skipped_groups') or []
- if node_name in skipped_groups:
- continue
- if expr_form == 'pcre':
- nodes = local_salt_client.cmd('{}[0-9]{{1,3}}'.format(node_name),
- 'test.ping',
- expr_form=expr_form)
- else:
- nodes = local_salt_client.cmd(config['groups'][node_name],
- 'test.ping',
- expr_form=expr_form)
- if nodes == {}:
- continue
-
- node_groups[node_name]=[x for x in nodes
- if x not in config['skipped_nodes']
- if x not in gluster_nodes.keys()
- if x not in kvm_nodes.keys()]
- all_nodes = set(all_nodes - set(node_groups[node_name]))
- if node_groups[node_name] == []:
- del node_groups[node_name]
- if kvm_nodes:
- node_groups['kvm'] = kvm_nodes.keys()
- node_groups['kvm_gluster'] = gluster_nodes.keys()
- all_nodes = set(all_nodes - set(kvm_nodes.keys()))
- all_nodes = set(all_nodes - set(gluster_nodes.keys()))
- if all_nodes:
- print ("These nodes were not collected {0}. Check config (groups section)".format(all_nodes))
- return node_groups
-
-
-def get_configuration():
- """function returns configuration for environment
- and for test if it's specified"""
- global_config_file = os.path.join(
- os.path.dirname(os.path.abspath(__file__)), "../global_config.yaml")
- with open(global_config_file, 'r') as file:
- global_config = yaml.load(file)
- for param in global_config.keys():
- if param in os.environ.keys():
- if ',' in os.environ[param]:
- global_config[param] = []
- for item in os.environ[param].split(','):
- global_config[param].append(item)
- else:
- global_config[param] = os.environ[param]
-
- return global_config
diff --git a/cvp-sanity/setup.py b/cvp-sanity/setup.py
deleted file mode 100755
index f13131c..0000000
--- a/cvp-sanity/setup.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# -*- coding: utf-8 -*-
-
-from setuptools import setup, find_packages
-import os
-
-def read(fname):
- return open(os.path.join(os.path.dirname(__file__), fname)).read()
-
-def get_requirements_list(requirements):
- all_requirements = read(requirements)
- return all_requirements
-
-with open('README.md') as f:
- readme = f.read()
-
-with open('LICENSE') as f:
- license = f.read()
-
-setup(
- name='cvp-sanity',
- version='0.1',
- description='set of tests for MCP verification',
- long_description=readme,
- author='Mirantis',
- license=license,
- install_requires=get_requirements_list('./requirements.txt'),
- packages=find_packages(exclude=('tests', 'docs'))
-)
diff --git a/README.md b/test_set/cvp-sanity/README.md
similarity index 90%
rename from README.md
rename to test_set/cvp-sanity/README.md
index 13acd7e..564f236 100644
--- a/README.md
+++ b/test_set/cvp-sanity/README.md
@@ -38,7 +38,7 @@
4) Configure:
```bash
- # vim cvp_checks/global_config.yaml
+ # vim cvp-sanity/global_config.yaml
```
SALT credentials are mandatory for tests.
@@ -57,9 +57,9 @@
5) Start tests:
```bash
- # pytest --tb=short -sv cvp_checks/tests/
+ # pytest --tb=short -sv cvp-sanity/tests/
```
or
```bash
- # pytest -sv cvp_checks/tests/ --ignore cvp_checks/tests/test_mtu.py
+ # pytest -sv cvp-sanity/tests/ --ignore cvp-sanity/tests/test_mtu.py
```
diff --git a/cvp-sanity/cvp_checks/__init__.py b/test_set/cvp-sanity/__init__.py
similarity index 100%
rename from cvp-sanity/cvp_checks/__init__.py
rename to test_set/cvp-sanity/__init__.py
diff --git a/test_set/cvp-sanity/conftest.py b/test_set/cvp-sanity/conftest.py
new file mode 100644
index 0000000..7c85d62
--- /dev/null
+++ b/test_set/cvp-sanity/conftest.py
@@ -0,0 +1,29 @@
+from fixtures.base import *
+
+
+@pytest.hookimpl(tryfirst=True, hookwrapper=True)
+def pytest_runtest_makereport(item, call):
+ outcome = yield
+
+ rep = outcome.get_result()
+ setattr(item, "rep_" + rep.when, rep)
+ rep.description = "{}".format(str(item.function.__doc__))
+ setattr(item, 'description', item.function.__doc__)
+
+
+@pytest.fixture(autouse=True)
+def show_test_steps(request):
+ yield
+ # request.node is an "item" because we use the default
+ # "function" scope
+ if request.node.description is None or request.node.description == "None":
+ return
+ try:
+ if request.node.rep_setup.failed:
+ print("setup failed. The following steps were attempted: \n {steps}".format(steps=request.node.description))
+ elif request.node.rep_setup.passed:
+ if request.node.rep_call.failed:
+ print("test execution failed! The following steps were attempted: \n {steps}".format(steps=request.node.description))
+ except BaseException as e:
+ print("Error in show_test_steps fixture: {}".format(e))
+ pass
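For reference, a minimal standalone sketch (not part of the diff; the file and test names are made up) of the docstring convention these hooks rely on: show_test_steps prints the test's own docstring when the setup or call phase fails, so tests in this suite keep their manual steps there.
```python
# hypothetical_test_example.py -- illustrative only; the file and test names are made up
def test_something_with_documented_steps():
    """
    # Step 1: collect the service state from the target nodes
    # Step 2: compare the collected state with the expected state
    """
    # If this assertion failed, show_test_steps would print the docstring above.
    assert 2 + 2 == 4
```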
diff --git a/cvp-sanity/cvp_checks/fixtures/__init__.py b/test_set/cvp-sanity/fixtures/__init__.py
similarity index 100%
rename from cvp-sanity/cvp_checks/fixtures/__init__.py
rename to test_set/cvp-sanity/fixtures/__init__.py
diff --git a/cvp-sanity/cvp_checks/fixtures/base.py b/test_set/cvp-sanity/fixtures/base.py
similarity index 64%
rename from cvp-sanity/cvp_checks/fixtures/base.py
rename to test_set/cvp-sanity/fixtures/base.py
index ff7e860..8e3b130 100644
--- a/cvp-sanity/cvp_checks/fixtures/base.py
+++ b/test_set/cvp-sanity/fixtures/base.py
@@ -1,6 +1,6 @@
import pytest
import atexit
-import cvp_checks.utils as utils
+import utils
@pytest.fixture(scope='session')
@@ -14,15 +14,6 @@
def nodes_in_group(request):
return request.param
-@pytest.fixture(scope='session')
-def check_alerta(local_salt_client):
- salt_output = local_salt_client.cmd(
- 'prometheus:alerta',
- 'test.ping',
- expr_form='pillar')
- if not salt_output:
- pytest.skip("Alerta service or prometheus:alerta pillar \
- are not found on this environment.")
@pytest.fixture(scope='session')
def ctl_nodes_pillar(local_salt_client):
@@ -31,27 +22,18 @@
If no platform is installed (no OS or k8s) we need to skip
the test (product team use case).
'''
- salt_output = local_salt_client.cmd(
- 'keystone:server',
- 'test.ping',
- expr_form='pillar')
+ salt_output = local_salt_client.test_ping(tgt='keystone:server')
if salt_output:
return "keystone:server"
else:
- salt_output = local_salt_client.cmd(
- 'etcd:server',
- 'test.ping',
- expr_form='pillar')
+ salt_output = local_salt_client.test_ping(tgt='etcd:server')
return "etcd:server" if salt_output else pytest.skip("Neither \
Openstack nor k8s is found. Skipping test")
@pytest.fixture(scope='session')
def check_openstack(local_salt_client):
- salt_output = local_salt_client.cmd(
- 'keystone:server',
- 'test.ping',
- expr_form='pillar')
+ salt_output = local_salt_client.test_ping(tgt='keystone:server')
if not salt_output:
pytest.skip("Openstack not found or keystone:server pillar \
are not found on this environment.")
@@ -59,10 +41,8 @@
@pytest.fixture(scope='session')
def check_drivetrain(local_salt_client):
- salt_output = local_salt_client.cmd(
- 'I@jenkins:client and not I@salt:master',
- 'test.ping',
- expr_form='compound')
+ salt_output = local_salt_client.test_ping(tgt='I@jenkins:client and not I@salt:master',
+ expr_form='compound')
if not salt_output:
pytest.skip("Drivetrain service or jenkins:client pillar \
are not found on this environment.")
@@ -70,21 +50,23 @@
@pytest.fixture(scope='session')
def check_prometheus(local_salt_client):
- salt_output = local_salt_client.cmd(
- 'prometheus:server',
- 'test.ping',
- expr_form='pillar')
+ salt_output = local_salt_client.test_ping(tgt='prometheus:server')
if not salt_output:
pytest.skip("Prometheus service or prometheus:server pillar \
are not found on this environment.")
@pytest.fixture(scope='session')
+def check_alerta(local_salt_client):
+ salt_output = local_salt_client.test_ping(tgt='prometheus:alerta')
+ if not salt_output:
+ pytest.skip("Alerta service or prometheus:alerta pillar \
+ are not found on this environment.")
+
+
+@pytest.fixture(scope='session')
def check_kibana(local_salt_client):
- salt_output = local_salt_client.cmd(
- 'kibana:server',
- 'test.ping',
- expr_form='pillar')
+ salt_output = local_salt_client.test_ping(tgt='kibana:server')
if not salt_output:
pytest.skip("Kibana service or kibana:server pillar \
are not found on this environment.")
@@ -92,15 +74,20 @@
@pytest.fixture(scope='session')
def check_grafana(local_salt_client):
- salt_output = local_salt_client.cmd(
- 'grafana:client',
- 'test.ping',
- expr_form='pillar')
+ salt_output = local_salt_client.test_ping(tgt='grafana:client')
if not salt_output:
pytest.skip("Grafana service or grafana:client pillar \
are not found on this environment.")
+@pytest.fixture(scope='session')
+def check_cinder_backends(local_salt_client):
+ backends_cinder_available = local_salt_client.test_ping(tgt='cinder:controller')
+ if not backends_cinder_available or not any(backends_cinder_available.values()):
+ pytest.skip("Cinder service or cinder:controller:backend pillar \
+ are not found on this environment.")
+
+
def pytest_namespace():
return {'contrail': None}
@@ -108,9 +95,9 @@
@pytest.fixture(scope='module')
def contrail(local_salt_client):
probe = local_salt_client.cmd(
- 'opencontrail:control',
- 'pillar.get',
- 'opencontrail:control:version',
+ tgt='opencontrail:control',
+ fun='pillar.get',
+ param='opencontrail:control:version',
expr_form='pillar')
if not probe:
pytest.skip("Contrail is not found on this environment")
@@ -120,6 +107,38 @@
pytest.contrail = str(versions.pop())[:1]
+@pytest.fixture(scope='session')
+def check_kdt(local_salt_client):
+ kdt_nodes_available = local_salt_client.test_ping(
+ tgt="I@gerrit:client and I@kubernetes:pool and not I@salt:master",
+ expr_form='compound'
+ )
+ if not kdt_nodes_available:
+ pytest.skip("No 'kdt' nodes found. Skipping this test...")
+ return kdt_nodes_available.keys()
+
+
+@pytest.fixture(scope='session')
+def check_kfg(local_salt_client):
+ kfg_nodes_available = local_salt_client.test_ping(
+ tgt="I@kubernetes:pool and I@salt:master",
+ expr_form='compound'
+ )
+ if not kfg_nodes_available:
+ pytest.skip("No cfg-under-Kubernetes nodes found. Skipping this test...")
+ return kfg_nodes_available.keys()
+
+
+@pytest.fixture(scope='session')
+def check_cicd(local_salt_client):
+ cicd_nodes_available = local_salt_client.test_ping(
+ tgt="I@gerrit:client and I@docker:swarm",
+ expr_form='compound'
+ )
+ if not cicd_nodes_available:
+ pytest.skip("No 'cid' nodes found. Skipping this test...")
+
+
@pytest.fixture(autouse=True, scope='session')
def print_node_version(local_salt_client):
"""
@@ -140,9 +159,8 @@
fi ".format(name=filename_with_versions)
list_version = local_salt_client.cmd(
- '*',
- 'cmd.run',
- 'echo "NODE_INFO=$(uname -sr)" && ' + cat_image_version_file,
+ tgt='*',
+ param='echo "NODE_INFO=$(uname -sr)" && ' + cat_image_version_file,
expr_form='compound')
if list_version.__len__() == 0:
yield
diff --git a/cvp-sanity/cvp_checks/global_config.yaml b/test_set/cvp-sanity/global_config.yaml
similarity index 88%
rename from cvp-sanity/cvp_checks/global_config.yaml
rename to test_set/cvp-sanity/global_config.yaml
index 1521383..813b82d 100644
--- a/cvp-sanity/cvp_checks/global_config.yaml
+++ b/test_set/cvp-sanity/global_config.yaml
@@ -71,6 +71,16 @@
{
"skipped_ifaces": ["lo", "virbr0", "docker_gwbridge", "docker0"]}
+# packages test 'test_packages_are_latest' setting
+# it can be used to skip specific packages
+# True value for 'skip_test' will skip this test. Set False to run the test.
+# TODO: set skip_test to False by default when the prod env is fixed
+test_packages:
+ { # "skipped_packages": ["update-notifier-common", "wget"]
+ "skipped_packages": [""],
+ "skip_test": True
+ }
+
# specify what mcp version (tag) is deployed
drivetrain_version: ''
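A minimal standalone sketch (the values are made up) of how comma-separated environment variables override list-type settings from global_config.yaml, mirroring the env-override loop in utils.get_configuration:
```python
import os

# Pretend global_config.yaml produced this dict (values are made up).
global_config = {"skipped_nodes": [""], "drivetrain_version": ""}

# e.g. `export skipped_nodes="cmp01.local,cmp02.local"` before running pytest
os.environ.setdefault("skipped_nodes", "cmp01.local,cmp02.local")

for param in global_config.keys():
    if param in os.environ:
        value = os.environ[param]
        # comma-separated values become lists, plain values stay strings
        global_config[param] = value.split(',') if ',' in value else value

print(global_config)
```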
diff --git a/test_set/cvp-sanity/pytest.ini b/test_set/cvp-sanity/pytest.ini
new file mode 100644
index 0000000..7d6dde9
--- /dev/null
+++ b/test_set/cvp-sanity/pytest.ini
@@ -0,0 +1,3 @@
+[pytest]
+norecursedirs = venv
+addopts = -vv --tb=short
\ No newline at end of file
diff --git a/cvp-sanity/requirements.txt b/test_set/cvp-sanity/requirements.txt
similarity index 100%
rename from cvp-sanity/requirements.txt
rename to test_set/cvp-sanity/requirements.txt
diff --git a/cvp-sanity/cvp_checks/tests/__init__.py b/test_set/cvp-sanity/tests/__init__.py
similarity index 100%
rename from cvp-sanity/cvp_checks/tests/__init__.py
rename to test_set/cvp-sanity/tests/__init__.py
diff --git a/cvp-sanity/cvp_checks/tests/ceph/test_ceph_haproxy.py b/test_set/cvp-sanity/tests/ceph/test_ceph_haproxy.py
similarity index 74%
rename from cvp-sanity/cvp_checks/tests/ceph/test_ceph_haproxy.py
rename to test_set/cvp-sanity/tests/ceph/test_ceph_haproxy.py
index d6c8e49..4d2566c 100644
--- a/cvp-sanity/cvp_checks/tests/ceph/test_ceph_haproxy.py
+++ b/test_set/cvp-sanity/tests/ceph/test_ceph_haproxy.py
@@ -6,11 +6,10 @@
fail = {}
monitor_info = local_salt_client.cmd(
- 'ceph:mon',
- 'cmd.run',
- ["echo 'show stat' | nc -U "
- "/var/run/haproxy/admin.sock | "
- "grep ceph_mon_radosgw_cluster"],
+ tgt='ceph:mon',
+ param="echo 'show stat' | nc -U "
+ "/var/run/haproxy/admin.sock | "
+ "grep ceph_mon_radosgw_cluster",
expr_form='pillar')
if not monitor_info:
pytest.skip("Ceph is not found on this environment")
diff --git a/cvp-sanity/cvp_checks/tests/ceph/test_ceph_pg_count.py b/test_set/cvp-sanity/tests/ceph/test_ceph_pg_count.py
similarity index 100%
rename from cvp-sanity/cvp_checks/tests/ceph/test_ceph_pg_count.py
rename to test_set/cvp-sanity/tests/ceph/test_ceph_pg_count.py
diff --git a/cvp-sanity/cvp_checks/tests/ceph/test_ceph_replicas.py b/test_set/cvp-sanity/tests/ceph/test_ceph_replicas.py
similarity index 76%
rename from cvp-sanity/cvp_checks/tests/ceph/test_ceph_replicas.py
rename to test_set/cvp-sanity/tests/ceph/test_ceph_replicas.py
index 62af49d..4c93fe6 100644
--- a/cvp-sanity/cvp_checks/tests/ceph/test_ceph_replicas.py
+++ b/test_set/cvp-sanity/tests/ceph/test_ceph_replicas.py
@@ -8,23 +8,17 @@
special requirement for that.
"""
- ceph_monitors = local_salt_client.cmd(
- 'ceph:mon',
- 'test.ping',
- expr_form='pillar')
+ ceph_monitors = local_salt_client.test_ping(tgt='ceph:mon')
if not ceph_monitors:
pytest.skip("Ceph is not found on this environment")
monitor = ceph_monitors.keys()[0]
- raw_pool_replicas = local_salt_client.cmd(
- monitor,
- 'cmd.run',
- ["ceph osd dump | grep size | " \
- "awk '{print $3, $5, $6, $7, $8}'"],
- expr_form='glob').get(
- ceph_monitors.keys()[0]).split('\n')
+ raw_pool_replicas = local_salt_client.cmd_any(
+ tgt='ceph:mon',
+ param="ceph osd dump | grep size | " \
+ "awk '{print $3, $5, $6, $7, $8}'").split('\n')
pools_replicas = {}
for pool in raw_pool_replicas:
diff --git a/test_set/cvp-sanity/tests/ceph/test_ceph_status.py b/test_set/cvp-sanity/tests/ceph/test_ceph_status.py
new file mode 100644
index 0000000..0c0ef0c
--- /dev/null
+++ b/test_set/cvp-sanity/tests/ceph/test_ceph_status.py
@@ -0,0 +1,36 @@
+import json
+import pytest
+
+
+def test_ceph_osd(local_salt_client):
+ osd_fail = local_salt_client.cmd(
+ tgt='ceph:osd',
+ param='ceph osd tree | grep down',
+ expr_form='pillar')
+ if not osd_fail:
+ pytest.skip("Ceph is not found on this environment")
+ assert not osd_fail.values()[0], \
+ "Some osds are in down state or ceph is not found".format(
+ osd_fail.values()[0])
+
+
+def test_ceph_health(local_salt_client):
+ get_status = local_salt_client.cmd(
+ tgt='ceph:mon',
+ param='ceph -s -f json',
+ expr_form='pillar')
+ if not get_status:
+ pytest.skip("Ceph is not found on this environment")
+ status = json.loads(get_status.values()[0])["health"]
+ health = status["status"] if 'status' in status \
+ else status["overall_status"]
+
+ # Health structure depends on Ceph version, so condition is needed:
+ if 'checks' in status:
+ summary = "Summary: {}".format(
+ [i["summary"]["message"] for i in status["checks"].values()])
+ else:
+ summary = status["summary"]
+
+ assert health == "HEALTH_OK",\
+ "Ceph status is not expected. {}".format(summary)
diff --git a/cvp-sanity/cvp_checks/tests/ceph/test_ceph_tell_bench.py b/test_set/cvp-sanity/tests/ceph/test_ceph_tell_bench.py
similarity index 100%
rename from cvp-sanity/cvp_checks/tests/ceph/test_ceph_tell_bench.py
rename to test_set/cvp-sanity/tests/ceph/test_ceph_tell_bench.py
diff --git a/test_set/cvp-sanity/tests/test_cinder_services.py b/test_set/cvp-sanity/tests/test_cinder_services.py
new file mode 100644
index 0000000..a83a3f9
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_cinder_services.py
@@ -0,0 +1,34 @@
+import pytest
+
+
+def test_cinder_services_are_up(local_salt_client, check_cinder_backends):
+ """
+ # Make sure that a cinder backend exists with the following command: `salt -C "I@cinder:controller" pillar.get cinder:controller:backend`
+ # Check that all services have the 'Up' status in the output of `cinder service-list` on keystone:server nodes
+ """
+ service_down = local_salt_client.cmd_any(
+ tgt='keystone:server',
+ param='. /root/keystonercv3; cinder service-list | grep "down\|disabled"')
+ assert service_down == '', \
+ '''Some cinder services are down or disabled'''
+
+
+def test_cinder_services_has_all_backends(local_salt_client, check_cinder_backends):
+ """
+ # Make sure that a cinder backend exists with the following command: `salt -C "I@cinder:controller" pillar.get cinder:controller:backend`
+ # Check that the number of backends in the cinder:controller:backend pillar matches the number of cinder-volume services in `cinder service-list`
+ """
+ backends_cinder = local_salt_client.pillar_get(
+ tgt='cinder:controller',
+ param='cinder:controller:backend'
+ )
+ cinder_volume = local_salt_client.cmd_any(
+ tgt='keystone:server',
+ param='. /root/keystonercv3; cinder service-list | grep "volume" |grep -c -v -e "lvm"')
+ print(backends_cinder)
+ print(cinder_volume)
+ backends_num = len(backends_cinder.keys())
+ assert cinder_volume == str(backends_num), \
+ 'Number of cinder-volume services ({0}) does not match ' \
+ 'number of volume backends ({1})'.format(
+ cinder_volume, str(backends_num))
\ No newline at end of file
diff --git a/cvp-sanity/cvp_checks/tests/test_contrail.py b/test_set/cvp-sanity/tests/test_contrail.py
similarity index 83%
rename from cvp-sanity/cvp_checks/tests/test_contrail.py
rename to test_set/cvp-sanity/tests/test_contrail.py
index 1c1ad13..fcb96f9 100644
--- a/cvp-sanity/cvp_checks/tests/test_contrail.py
+++ b/test_set/cvp-sanity/tests/test_contrail.py
@@ -1,6 +1,6 @@
import pytest
import json
-from cvp_checks import utils
+import utils
pytestmark = pytest.mark.usefixtures("contrail")
@@ -9,12 +9,12 @@
def get_contrail_status(salt_client, pillar, command, processor):
return salt_client.cmd(
- pillar, 'cmd.run',
- ['{} | {}'.format(command, processor)],
+ tgt=pillar,
+ param='{} | {}'.format(command, processor),
expr_form='pillar'
)
-def test_contrail_compute_status(local_salt_client):
+def test_contrail_compute_status(local_salt_client, check_openstack):
cs = get_contrail_status(local_salt_client, 'nova:compute',
STATUS_COMMAND, STATUS_FILTER)
broken_services = []
@@ -38,7 +38,7 @@
indent=4))
-def test_contrail_node_status(local_salt_client):
+def test_contrail_node_status(local_salt_client, check_openstack):
command = STATUS_COMMAND
# TODO: what will be in OpenContrail 5?
@@ -68,7 +68,7 @@
indent=4))
-def test_contrail_vrouter_count(local_salt_client):
+def test_contrail_vrouter_count(local_salt_client, check_openstack):
cs = get_contrail_status(local_salt_client, 'nova:compute',
STATUS_COMMAND, STATUS_FILTER)
@@ -88,16 +88,14 @@
len(cs.keys()))
-def test_public_ui_contrail(local_salt_client, ctl_nodes_pillar):
- IP = utils.get_monitoring_ip('cluster_public_host')
+def test_public_ui_contrail(local_salt_client, ctl_nodes_pillar, check_openstack):
+ IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
protocol = 'https'
port = '8143'
url = "{}://{}:{}".format(protocol, IP, port)
- result = local_salt_client.cmd(
- ctl_nodes_pillar,
- 'cmd.run',
- ['curl -k {}/ 2>&1 | \
- grep Contrail'.format(url)],
- expr_form='pillar')
- assert len(result[result.keys()[0]]) != 0, \
+ result = local_salt_client.cmd_any(
+ tgt=ctl_nodes_pillar,
+ param='curl -k {}/ 2>&1 | \
+ grep Contrail'.format(url))
+ assert len(result) != 0, \
'Public Contrail UI is not reachable on {} from ctl nodes'.format(url)
diff --git a/test_set/cvp-sanity/tests/test_default_gateway.py b/test_set/cvp-sanity/tests/test_default_gateway.py
new file mode 100644
index 0000000..8cea880
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_default_gateway.py
@@ -0,0 +1,24 @@
+import json
+
+
+def test_check_default_gateways(local_salt_client, nodes_in_group):
+ netstat_info = local_salt_client.cmd(
+ tgt="L@"+','.join(nodes_in_group),
+ param='ip r | sed -n 1p',
+ expr_form='compound')
+
+ gateways = {}
+
+ for node in netstat_info.keys():
+ gateway = netstat_info[node]
+ if isinstance(gateway, bool):
+ gateway = 'Cannot access node(-s)'
+ if gateway not in gateways:
+ gateways[gateway] = [node]
+ else:
+ gateways[gateway].append(node)
+
+ assert len(gateways.keys()) == 1, \
+ "There were found few gateways: {gw}".format(
+ gw=json.dumps(gateways, indent=4)
+ )
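A minimal standalone sketch (the sample data is made up) of the grouping logic used in test_check_default_gateways: nodes are bucketed by the first line of `ip r`, and the check passes only when every node falls into the same bucket.
```python
import json

# Made-up output of `ip r | sed -n 1p` per node
netstat_info = {
    "ctl01.local": "default via 10.0.0.1 dev ens3",
    "ctl02.local": "default via 10.0.0.1 dev ens3",
    "cmp01.local": "default via 10.0.0.254 dev ens3",  # mismatching gateway
}

gateways = {}
for node, gateway in netstat_info.items():
    gateways.setdefault(gateway, []).append(node)

if len(gateways) != 1:
    print("More than one default gateway was found:\n{}".format(
        json.dumps(gateways, indent=4)))
```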
diff --git a/test_set/cvp-sanity/tests/test_drivetrain.py b/test_set/cvp-sanity/tests/test_drivetrain.py
new file mode 100644
index 0000000..3a9f1b6
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_drivetrain.py
@@ -0,0 +1,435 @@
+import jenkins
+from xml.dom import minidom
+import utils
+import json
+import pytest
+import time
+import os
+from pygerrit2 import GerritRestAPI, HTTPBasicAuth
+from requests import HTTPError
+import git
+import ldap
+import ldap.modlist as modlist
+
+
+def join_to_gerrit(local_salt_client, gerrit_user, gerrit_password):
+ gerrit_port = local_salt_client.pillar_get(
+ tgt='I@gerrit:client and not I@salt:master',
+ param='_param:haproxy_gerrit_bind_port',
+ expr_form='compound')
+ gerrit_address = local_salt_client.pillar_get(
+ tgt='I@gerrit:client and not I@salt:master',
+ param='_param:haproxy_gerrit_bind_host',
+ expr_form='compound')
+ url = 'http://{0}:{1}'.format(gerrit_address,gerrit_port)
+ auth = HTTPBasicAuth(gerrit_user, gerrit_password)
+ rest = GerritRestAPI(url=url, auth=auth)
+ return rest
+
+
+def join_to_jenkins(local_salt_client, jenkins_user, jenkins_password):
+ jenkins_port = local_salt_client.pillar_get(
+ tgt='I@jenkins:client and not I@salt:master',
+ param='_param:haproxy_jenkins_bind_port',
+ expr_form='compound')
+ jenkins_address = local_salt_client.pillar_get(
+ tgt='I@jenkins:client and not I@salt:master',
+ param='_param:haproxy_jenkins_bind_host',
+ expr_form='compound')
+ jenkins_url = 'http://{0}:{1}'.format(jenkins_address,jenkins_port)
+ server = jenkins.Jenkins(jenkins_url, username=jenkins_user, password=jenkins_password)
+ return server
+
+
+def get_password(local_salt_client,service):
+ password = local_salt_client.pillar_get(
+ tgt=service,
+ param='_param:openldap_admin_password')
+ return password
+
+
+def test_drivetrain_gerrit(local_salt_client, check_cicd):
+ gerrit_password = get_password(local_salt_client,'gerrit:client')
+ gerrit_error = ''
+ current_date = time.strftime("%Y%m%d-%H.%M.%S", time.localtime())
+ test_proj_name = "test-dt-{0}".format(current_date)
+ gerrit_port = local_salt_client.pillar_get(
+ tgt='I@gerrit:client and not I@salt:master',
+ param='_param:haproxy_gerrit_bind_port',
+ expr_form='compound')
+ gerrit_address = local_salt_client.pillar_get(
+ tgt='I@gerrit:client and not I@salt:master',
+ param='_param:haproxy_gerrit_bind_host',
+ expr_form='compound')
+ try:
+ #Connecting to gerrit and check connection
+ server = join_to_gerrit(local_salt_client,'admin',gerrit_password)
+ gerrit_check = server.get("/changes/?q=owner:self%20status:open")
+ #Check deleteproject plugin and skip test if the plugin is not installed
+ gerrit_plugins = server.get("/plugins/?all")
+ if 'deleteproject' not in gerrit_plugins:
+ pytest.skip("Delete-project plugin is not installed")
+ #Create test project and add description
+ server.put("/projects/"+test_proj_name)
+ server.put("/projects/"+test_proj_name+"/description",json={"description":"Test DriveTrain project","commit_message": "Update the project description"})
+ except HTTPError, e:
+ gerrit_error = e
+ try:
+ #Create test folder and init git
+ repo_dir = os.path.join(os.getcwd(),test_proj_name)
+ file_name = os.path.join(repo_dir, current_date)
+ repo = git.Repo.init(repo_dir)
+ #Add remote url for this git repo
+ origin = repo.create_remote('origin', 'http://admin:{1}@{2}:{3}/{0}.git'.format(test_proj_name,gerrit_password,gerrit_address,gerrit_port))
+ #Add commit-msg hook to automatically add Change-Id to our commit
+ os.system("curl -Lo {0}/.git/hooks/commit-msg 'http://admin:{1}@{2}:{3}/tools/hooks/commit-msg' > /dev/null 2>&1".format(repo_dir,gerrit_password,gerrit_address,gerrit_port))
+ os.system("chmod u+x {0}/.git/hooks/commit-msg".format(repo_dir))
+ #Create a test file
+ f = open(file_name, 'w+')
+ f.write("This is a test file for DriveTrain test")
+ f.close()
+ #Add file to git and commit it to Gerrit for review
+ repo.index.add([file_name])
+ repo.index.commit("This is a test commit for DriveTrain test")
+ repo.git.push("origin", "HEAD:refs/for/master")
+ #Get change id from Gerrit. Set Code-Review +2 and submit this change
+ changes = server.get("/changes/?q=project:{0}".format(test_proj_name))
+ last_change = changes[0].get('change_id')
+ server.post("/changes/{0}/revisions/1/review".format(last_change),json={"message": "All is good","labels":{"Code-Review":"+2"}})
+ server.post("/changes/{0}/submit".format(last_change))
+ except HTTPError, e:
+ gerrit_error = e
+ finally:
+ #Delete test project
+ server.post("/projects/"+test_proj_name+"/deleteproject~delete")
+ assert gerrit_error == '',\
+ 'Something is wrong with Gerrit: {}'.format(gerrit_error)
+
+
+def test_drivetrain_openldap(local_salt_client, check_cicd):
+ """
+ 1. Create a test user 'DT_test_user' in openldap
+ 2. Add the user to admin group
+ 3. Login using the user to Jenkins
+ 4. Check that no error occurred
+ 5. Add the user to devops group in Gerrit and then login to Gerrit
+ using test_user credentials.
+ 6. Start a job in Jenkins as this user
+ 7. Get info from Gerrit as this user
+ 8. Finally, delete the user from the admin
+ group and openldap
+ """
+
+ # TODO split to several test cases. One check - per one test method. Make the login process in fixture
+ ldap_password = get_password(local_salt_client,'openldap:client')
+ #Check that ldap_password exists, otherwise skip the test
+ if not ldap_password:
+ pytest.skip("Openldap service or openldap:client pillar \
+ are not found on this environment.")
+ ldap_port = local_salt_client.pillar_get(
+ tgt='I@openldap:client and not I@salt:master',
+ param='_param:haproxy_openldap_bind_port',
+ expr_form='compound')
+ ldap_address = local_salt_client.pillar_get(
+ tgt='I@openldap:client and not I@salt:master',
+ param='_param:haproxy_openldap_bind_host',
+ expr_form='compound')
+ ldap_dc = local_salt_client.pillar_get(
+ tgt='openldap:client',
+ param='_param:openldap_dn')
+ ldap_con_admin = local_salt_client.pillar_get(
+ tgt='openldap:client',
+ param='openldap:client:server:auth:user')
+ ldap_url = 'ldap://{0}:{1}'.format(ldap_address,ldap_port)
+ ldap_error = ''
+ ldap_result = ''
+ gerrit_result = ''
+ gerrit_error = ''
+ jenkins_error = ''
+ #Test user's CN
+ test_user_name = 'DT_test_user'
+ test_user = 'cn={0},ou=people,{1}'.format(test_user_name,ldap_dc)
+ #Admins group CN
+ admin_gr_dn = 'cn=admins,ou=groups,{0}'.format(ldap_dc)
+ #List of attributes for test user
+ attrs = {}
+ attrs['objectclass'] = ['organizationalRole', 'simpleSecurityObject', 'shadowAccount']
+ attrs['cn'] = test_user_name
+ attrs['uid'] = test_user_name
+ attrs['userPassword'] = 'aSecretPassw'
+ attrs['description'] = 'Test user for CVP DT test'
+ searchFilter = 'cn={0}'.format(test_user_name)
+ #Get a test job name from config
+ config = utils.get_configuration()
+ jenkins_cvp_job = config['jenkins_cvp_job']
+ #Open connection to ldap and creating test user in admins group
+ try:
+ ldap_server = ldap.initialize(ldap_url)
+ ldap_server.simple_bind_s(ldap_con_admin,ldap_password)
+ ldif = modlist.addModlist(attrs)
+ ldap_server.add_s(test_user,ldif)
+ ldap_server.modify_s(admin_gr_dn,[(ldap.MOD_ADD, 'memberUid', [test_user_name],)],)
+ #Check search test user in LDAP
+ searchScope = ldap.SCOPE_SUBTREE
+ ldap_result = ldap_server.search_s(ldap_dc, searchScope, searchFilter)
+ except ldap.LDAPError, e:
+ ldap_error = e
+ try:
+ #Check connection between Jenkins and LDAP
+ jenkins_server = join_to_jenkins(local_salt_client,test_user_name,'aSecretPassw')
+ jenkins_version = jenkins_server.get_job_name(jenkins_cvp_job)
+ #Check connection between Gerrit and LDAP
+ gerrit_server = join_to_gerrit(local_salt_client,'admin',ldap_password)
+ gerrit_check = gerrit_server.get("/changes/?q=owner:self%20status:open")
+ #Add test user to devops-contrib group in Gerrit and check login
+ _link = "/groups/devops-contrib/members/{0}".format(test_user_name)
+ gerrit_add_user = gerrit_server.put(_link)
+ gerrit_server = join_to_gerrit(local_salt_client,test_user_name,'aSecretPassw')
+ gerrit_result = gerrit_server.get("/changes/?q=owner:self%20status:open")
+ except HTTPError, e:
+ gerrit_error = e
+ except jenkins.JenkinsException, e:
+ jenkins_error = e
+ finally:
+ ldap_server.modify_s(admin_gr_dn,[(ldap.MOD_DELETE, 'memberUid', [test_user_name],)],)
+ ldap_server.delete_s(test_user)
+ ldap_server.unbind_s()
+ assert ldap_error == '', \
+ '''Something is wrong with the connection to LDAP:
+ {0}'''.format(ldap_error)
+ assert jenkins_error == '', \
+ '''Connection to Jenkins was not established:
+ {0}'''.format(jenkins_error)
+ assert gerrit_error == '', \
+ '''Connection to Gerrit was not established:
+ {0}'''.format(gerrit_error)
+ assert ldap_result != [], \
+ '''Test user was not found'''
+
+
+def test_drivetrain_services_replicas(local_salt_client, check_cicd):
+ """
+ # Execute ` salt -C 'I@gerrit:client' cmd.run 'docker service ls'` command to get info for each docker service like that:
+ "x5nzktxsdlm6 jenkins_slave02 replicated 0/1 docker-prod-local.artifactory.mirantis.com/mirantis/cicd/jnlp-slave:2019.2.0 "
+ # Check that each service has all replicas
+ """
+ # TODO: replace with rerunfalures plugin
+ wrong_items = []
+ for _ in range(4):
+ docker_services_by_nodes = local_salt_client.cmd(
+ tgt='I@gerrit:client',
+ param='docker service ls',
+ expr_form='compound')
+ wrong_items = []
+ for line in docker_services_by_nodes[docker_services_by_nodes.keys()[0]].split('\n'):
+ if line[line.find('/') - 1] != line[line.find('/') + 1] \
+ and 'replicated' in line:
+ wrong_items.append(line)
+ if len(wrong_items) == 0:
+ break
+ else:
+ print('''Some DriveTrain services don't have the expected number of replicas:
+ {}\n'''.format(json.dumps(wrong_items, indent=4)))
+ time.sleep(5)
+ assert len(wrong_items) == 0
+
+
+def test_drivetrain_components_and_versions(local_salt_client, check_cicd):
+ """
+ 1. Execute command `docker service ls --format "{{.Image}}"` on the 'I@gerrit:client' target
+ 2. Execute ` salt -C 'I@gerrit:client' pillar.get docker:client:images`
+ 3. Check that the list of images from step 1 is the same as the list from step 2
+ 4. Check that all docker images have a version tag equal to mcp_version
+
+ """
+ config = utils.get_configuration()
+ if not config['drivetrain_version']:
+ expected_version = \
+ local_salt_client.pillar_get(param='_param:mcp_version') or \
+ local_salt_client.pillar_get(param='_param:apt_mk_version')
+ if not expected_version:
+ pytest.skip("drivetrain_version is not defined. Skipping")
+ else:
+ expected_version = config['drivetrain_version']
+ table_with_docker_services = local_salt_client.cmd(tgt='I@gerrit:client',
+ param='docker service ls --format "{{.Image}}"',
+ expr_form='compound')
+ expected_images = local_salt_client.pillar_get(tgt='gerrit:client',
+ param='docker:client:images')
+ mismatch = {}
+ actual_images = {}
+ for image in set(table_with_docker_services[table_with_docker_services.keys()[0]].split('\n')):
+ actual_images[image.split(":")[0]] = image.split(":")[-1]
+ for image in set(expected_images):
+ im_name = image.split(":")[0]
+ if im_name not in actual_images:
+ mismatch[im_name] = 'not found on env'
+ elif image.split(":")[-1] != actual_images[im_name]:
+ mismatch[im_name] = 'has {actual} version instead of {expected}'.format(
+ actual=actual_images[im_name], expected=image.split(":")[-1])
+ assert len(mismatch) == 0, \
+ '''Some DriveTrain components do not have expected versions:
+ {}'''.format(json.dumps(mismatch, indent=4))
+
+
+def test_jenkins_jobs_branch(local_salt_client, check_cicd):
+ """ This test compares Jenkins jobs versions
+ collected from the cloud vs collected from pillars.
+ """
+ excludes = ['upgrade-mcp-release', 'deploy-update-salt',
+ 'git-mirror-downstream-mk-pipelines',
+ 'git-mirror-downstream-pipeline-library']
+
+ config = utils.get_configuration()
+ drivetrain_version = config.get('drivetrain_version', '')
+ jenkins_password = get_password(local_salt_client, 'jenkins:client')
+ version_mismatch = []
+ server = join_to_jenkins(local_salt_client, 'admin', jenkins_password)
+ for job_instance in server.get_jobs():
+ job_name = job_instance.get('name')
+ if job_name in excludes:
+ continue
+
+ job_config = server.get_job_config(job_name)
+ xml_data = minidom.parseString(job_config)
+ BranchSpec = xml_data.getElementsByTagName('hudson.plugins.git.BranchSpec')
+
+ # We use the master branch for pipeline-library in case of 'testing,stable,nightly' versions
+ # Leave proposed version as is
+ # in other cases we get release/{drivetrain_version} (e.g release/2019.2.0)
+ if drivetrain_version in ['testing', 'nightly', 'stable']:
+ expected_version = 'master'
+ else:
+ expected_version = local_salt_client.pillar_get(
+ tgt='gerrit:client',
+ param='jenkins:client:job:{}:scm:branch'.format(job_name))
+
+ if not BranchSpec:
+ print("No BranchSpec has found for {} job".format(job_name))
+ continue
+
+ actual_version = BranchSpec[0].getElementsByTagName('name')[0].childNodes[0].data
+ if actual_version not in expected_version and expected_version != '':
+ version_mismatch.append("Job {0} has {1} branch."
+ "Expected {2}".format(job_name,
+ actual_version,
+ expected_version))
+ assert len(version_mismatch) == 0, \
+ '''Some DriveTrain jobs have version/branch mismatch:
+ {}'''.format(json.dumps(version_mismatch, indent=4))
+
+
+def test_drivetrain_jenkins_job(local_salt_client, check_cicd):
+ """
+ # Login to Jenkins on jenkins:client
+ # Read the name of jobs from configuration 'jenkins_test_job'
+ # Start job
+ # Wait till the job completed
+ # Check that job has completed with "SUCCESS" result
+ """
+ job_result = None
+
+ jenkins_password = get_password(local_salt_client, 'jenkins:client')
+ server = join_to_jenkins(local_salt_client, 'admin', jenkins_password)
+ # Getting Jenkins test job name from configuration
+ config = utils.get_configuration()
+ jenkins_test_job = config['jenkins_test_job']
+ if not server.get_job_name(jenkins_test_job):
+ server.create_job(jenkins_test_job, jenkins.EMPTY_CONFIG_XML)
+ if server.get_job_name(jenkins_test_job):
+ next_build_num = server.get_job_info(jenkins_test_job)['nextBuildNumber']
+ # If this is first build number skip building check
+ if next_build_num != 1:
+ # Check that test job is not running at this moment,
+ # Otherwise skip the test
+ last_build_num = server.get_job_info(jenkins_test_job)['lastBuild'].get('number')
+ last_build_status = server.get_build_info(jenkins_test_job, last_build_num)['building']
+ if last_build_status:
+ pytest.skip("Test job {0} is already running").format(jenkins_test_job)
+ server.build_job(jenkins_test_job)
+ timeout = 0
+ # Assume the job is building by default to cover the delay between triggering the build and its actual start.
+ job_status = True
+ while job_status and (timeout < 180):
+ time.sleep(10)
+ timeout += 10
+ job_status = server.get_build_info(jenkins_test_job, next_build_num)['building']
+ job_result = server.get_build_info(jenkins_test_job, next_build_num)['result']
+ else:
+ pytest.skip("The job {0} was not found").format(jenkins_test_job)
+ assert job_result == 'SUCCESS', \
+ '''Test job '{0}' build was not successful or timeout is too small
+ '''.format(jenkins_test_job)
+
+
+def test_kdt_all_pods_are_available(local_salt_client, check_kdt):
+ """
+ # Run kubectl get pods -n drivetrain on kdt-nodes to get status for each pod
+ # Check that each pod reports all containers ready in the READY column
+
+ """
+ pods_statuses_output = local_salt_client.cmd_any(
+ tgt='L@'+','.join(check_kdt),
+ param='kubectl get pods -n drivetrain | awk {\'print $1"; "$2\'} | column -t',
+ expr_form='compound')
+
+ assert pods_statuses_output != "/bin/sh: 1: kubectl: not found", \
+ "Nodes {} don't have kubectl".format(check_kdt)
+ # Convert string to list and remove first row with column names
+ pods_statuses = pods_statuses_output.split('\n')
+ pods_statuses = pods_statuses[1:]
+
+ report_with_errors = ""
+ for pod_status in pods_statuses:
+ pod, status = pod_status.split('; ')
+ actual_replica, expected_replica = status.split('/')
+
+ if actual_replica.strip() != expected_replica.strip():
+ report_with_errors += "Pod [{pod}] doesn't have all containers. Expected {expected} containers, actual {actual}\n".format(
+ pod=pod,
+ expected=expected_replica,
+ actual=actual_replica
+ )
+
+ print report_with_errors
+ assert report_with_errors == "", \
+ "\n{sep}{kubectl_output}{sep} \n\n {report} ".format(
+ sep="\n" + "-"*20 + "\n",
+ kubectl_output=pods_statuses_output,
+ report=report_with_errors
+ )
+
+def test_kfg_all_pods_are_available(local_salt_client, check_kfg):
+ """
+ # Run kubectl get pods -n drivetrain on cfg node to get status for each pod
+ # Check that each pod reports all containers ready in the READY column
+
+ """
+ # TODO collapse similar tests into one to check pods and add new fixture
+ pods_statuses_output = local_salt_client.cmd_any(
+ tgt='L@' + ','.join(check_kfg),
+ param='kubectl get pods -n drivetrain | awk {\'print $1"; "$2\'} | column -t',
+ expr_form='compound')
+ # Convert string to list and remove first row with column names
+ pods_statuses = pods_statuses_output.split('\n')
+ pods_statuses = pods_statuses[1:]
+
+ report_with_errors = ""
+ for pod_status in pods_statuses:
+ pod, status = pod_status.split('; ')
+ actual_replica, expected_replica = status.split('/')
+
+ if actual_replica.strip() != expected_replica.strip():
+ report_with_errors += "Pod [{pod}] doesn't have all containers. Expected {expected} containers, actual {actual}\n".format(
+ pod=pod,
+ expected=expected_replica,
+ actual=actual_replica
+ )
+
+ print report_with_errors
+ assert report_with_errors != "", \
+ "\n{sep}{kubectl_output}{sep} \n\n {report} ".format(
+ sep="\n" + "-" * 20 + "\n",
+ kubectl_output=pods_statuses_output,
+ report=report_with_errors
+ )
\ No newline at end of file
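A standalone sketch (the kubectl output is made up) of the READY-column check used by test_kdt_all_pods_are_available and test_kfg_all_pods_are_available: the `awk` pipeline reduces `kubectl get pods` to "NAME; READY" pairs, a pod such as "1/1" is healthy, and "0/1" is reported.
```python
# Made-up output of: kubectl get pods -n drivetrain | awk {'print $1"; "$2'} | column -t
pods_statuses_output = (
    "NAME;      READY\n"
    "gerrit-0;  1/1\n"
    "jenkins-0; 0/1\n"
)

report = ""
# Drop the header row, then compare the two sides of the READY column
for line in pods_statuses_output.strip().split('\n')[1:]:
    pod, status = [field.strip() for field in line.split(';')]
    actual, expected = status.split('/')
    if actual != expected:
        report += "Pod [{}] has {} of {} containers ready\n".format(pod, actual, expected)

print(report or "all pods are ready")
```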
diff --git a/cvp-sanity/cvp_checks/tests/test_duplicate_ips.py b/test_set/cvp-sanity/tests/test_duplicate_ips.py
similarity index 78%
rename from cvp-sanity/cvp_checks/tests/test_duplicate_ips.py
rename to test_set/cvp-sanity/tests/test_duplicate_ips.py
index 9e4b978..3b55a26 100644
--- a/cvp-sanity/cvp_checks/tests/test_duplicate_ips.py
+++ b/test_set/cvp-sanity/tests/test_duplicate_ips.py
@@ -1,8 +1,8 @@
-import pytest
from collections import Counter
from pprint import pformat
import os
-from cvp_checks import utils
+
+import utils
def get_duplicate_ifaces(nodes, ips):
@@ -13,23 +13,26 @@
dup_ifaces[node] = {iface: nodes[node]['ip4_interfaces'][iface]}
return dup_ifaces
-def test_duplicate_ips(local_salt_client):
- active_nodes = utils.get_active_nodes()
+def test_duplicate_ips(local_salt_client):
testname = os.path.basename(__file__).split('.')[0]
config = utils.get_configuration()
skipped_ifaces = config.get(testname)["skipped_ifaces"]
- local_salt_client.cmd('L@'+','.join(active_nodes),
- 'saltutil.refresh_grains',
+ local_salt_client.cmd(tgt='*',
+ fun='saltutil.refresh_grains',
expr_form='compound')
- nodes = local_salt_client.cmd('L@'+','.join(active_nodes),
- 'grains.item',
- ['ip4_interfaces'],
+ nodes = local_salt_client.cmd(tgt='*',
+ fun='grains.item',
+ param='ip4_interfaces',
expr_form='compound')
ipv4_list = []
for node in nodes:
+ if isinstance(nodes[node], bool):
+ # TODO: do not skip node
+ print ("{} node is skipped".format(node))
+ continue
for iface in nodes[node]['ip4_interfaces']:
# Omit 'ip-less' ifaces
if not nodes[node]['ip4_interfaces'][iface]:
diff --git a/test_set/cvp-sanity/tests/test_etc_hosts.py b/test_set/cvp-sanity/tests/test_etc_hosts.py
new file mode 100644
index 0000000..8850ab7
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_etc_hosts.py
@@ -0,0 +1,22 @@
+import json
+
+
+def test_etc_hosts(local_salt_client):
+ nodes_info = local_salt_client.cmd(
+ tgt='*',
+ param='cat /etc/hosts',
+ expr_form='compound')
+ result = {}
+ for node in nodes_info.keys():
+ if isinstance(nodes_info[node], bool):
+ result[node] = 'Cannot access this node'
+ continue
+ for nd in nodes_info.keys():
+ if nd not in nodes_info[node]:
+ if node in result:
+ result[node] += ',' + nd
+ else:
+ result[node] = nd
+ assert len(result) <= 1, \
+ "Some hosts are not presented in /etc/hosts: {0}".format(
+ json.dumps(result, indent=4))
\ No newline at end of file
diff --git a/cvp-sanity/cvp_checks/tests/test_galera_cluster.py b/test_set/cvp-sanity/tests/test_galera_cluster.py
similarity index 83%
rename from cvp-sanity/cvp_checks/tests/test_galera_cluster.py
rename to test_set/cvp-sanity/tests/test_galera_cluster.py
index 676f09b..73f4932 100644
--- a/cvp-sanity/cvp_checks/tests/test_galera_cluster.py
+++ b/test_set/cvp-sanity/tests/test_galera_cluster.py
@@ -3,9 +3,8 @@
def test_galera_cluster_status(local_salt_client):
gs = local_salt_client.cmd(
- 'galera:*',
- 'cmd.run',
- ['salt-call mysql.status | grep -A1 wsrep_cluster_size | tail -n1'],
+ tgt='galera:*',
+ param='salt-call mysql.status | grep -A1 wsrep_cluster_size | tail -n1',
expr_form='pillar')
if not gs:
diff --git a/cvp-sanity/cvp_checks/tests/test_k8s.py b/test_set/cvp-sanity/tests/test_k8s.py
similarity index 66%
rename from cvp-sanity/cvp_checks/tests/test_k8s.py
rename to test_set/cvp-sanity/tests/test_k8s.py
index ebfbfe3..97c3490 100644
--- a/cvp-sanity/cvp_checks/tests/test_k8s.py
+++ b/test_set/cvp-sanity/tests/test_k8s.py
@@ -5,8 +5,8 @@
def test_k8s_get_cs_status(local_salt_client):
result = local_salt_client.cmd(
- 'etcd:server', 'cmd.run',
- ['kubectl get cs'],
+ tgt='etcd:server',
+ param='kubectl get cs',
expr_form='pillar'
)
errors = []
@@ -29,8 +29,8 @@
@pytest.mark.xfail
def test_k8s_get_nodes_status(local_salt_client):
result = local_salt_client.cmd(
- 'etcd:server', 'cmd.run',
- ['kubectl get nodes'],
+ tgt='etcd:server',
+ param='kubectl get nodes',
expr_form='pillar'
)
errors = []
@@ -52,8 +52,8 @@
def test_k8s_get_calico_status(local_salt_client):
result = local_salt_client.cmd(
- 'kubernetes:pool', 'cmd.run',
- ['calicoctl node status'],
+ tgt='kubernetes:pool',
+ param='calicoctl node status',
expr_form='pillar'
)
errors = []
@@ -74,8 +74,8 @@
def test_k8s_cluster_status(local_salt_client):
result = local_salt_client.cmd(
- 'kubernetes:master', 'cmd.run',
- ['kubectl cluster-info'],
+ tgt='kubernetes:master',
+ param='kubectl cluster-info',
expr_form='pillar'
)
errors = []
@@ -96,8 +96,9 @@
def test_k8s_kubelet_status(local_salt_client):
result = local_salt_client.cmd(
- 'kubernetes:pool', 'service.status',
- ['kubelet'],
+ tgt='kubernetes:pool',
+ fun='service.status',
+ param='kubelet',
expr_form='pillar'
)
errors = []
@@ -112,8 +113,8 @@
def test_k8s_check_system_pods_status(local_salt_client):
result = local_salt_client.cmd(
- 'etcd:server', 'cmd.run',
- ['kubectl --namespace="kube-system" get pods'],
+ tgt='etcd:server',
+ param='kubectl --namespace="kube-system" get pods',
expr_form='pillar'
)
errors = []
@@ -141,3 +142,43 @@
print '{} is AVAILABLE'.format(hostname)
else:
print '{} IS NOT AVAILABLE'.format(hostname)
+
+
+def test_k8s_dashboard_available(local_salt_client):
+ """
+ # Check whether kubernetes is enabled on the cluster with the command `salt -C 'etcd:server' cmd.run 'kubectl get svc -n kube-system'`
+ # If yes, check the Dashboard addon with the following command: `salt -C 'etcd:server' pillar.get kubernetes:common:addons:dashboard:enabled`
+ # If the dashboard is enabled, get its IP from the pillar `salt -C 'etcd:server' pillar.get kubernetes:common:addons:dashboard:public_ip`
+ # Check that public_ip exists
+ # Check that public_ip:8443 is accessible with curl
+ """
+ result = local_salt_client.cmd(
+ tgt='etcd:server',
+ param='kubectl get svc -n kube-system',
+ expr_form='pillar'
+ )
+ if not result:
+ pytest.skip("k8s is not found on this environment")
+
+ # service name 'kubernetes-dashboard' is hardcoded in kubernetes formula
+ dashboard_enabled = local_salt_client.pillar_get(
+ tgt='etcd:server',
+ param='kubernetes:common:addons:dashboard:enabled',)
+ if not dashboard_enabled:
+ pytest.skip("Kubernetes dashboard is not enabled in the cluster.")
+
+ external_ip = local_salt_client.pillar_get(
+ tgt='etcd:server',
+ param='kubernetes:common:addons:dashboard:public_ip')
+
+ assert len(external_ip) > 0, "Kubernetes dashboard is enabled but not defined in pillars"
+ # dashboard port 8443 is hardcoded in kubernetes formula
+ url = "https://{}:8443".format(external_ip)
+ check = local_salt_client.cmd(
+ tgt='etcd:server',
+ param='curl {} 2>&1 | grep kubernetesDashboard'.format(url),
+ expr_form='pillar'
+ )
+ assert len(check.values()[0]) != 0, \
+ 'Kubernetes dashboard is not reachable on {} ' \
+ 'from ctl nodes'.format(url)
diff --git a/test_set/cvp-sanity/tests/test_mounts.py b/test_set/cvp-sanity/tests/test_mounts.py
new file mode 100644
index 0000000..c9ba9ce
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_mounts.py
@@ -0,0 +1,43 @@
+import json
+import pytest
+
+
+def test_mounted_file_systems(local_salt_client, nodes_in_group):
+ """
+ # Get all mount points from each node in the group with the next command: `df -h | awk '{print $1}'`
+ # Check that all mount points are similar for each node in the group
+ """
+ mounts_by_nodes = local_salt_client.cmd(tgt="L@"+','.join(nodes_in_group),
+ param="df -h | awk '{print $1}'",
+ expr_form='compound')
+
+ # Let's exclude cmp, kvm, ceph OSD nodes, mon, cid, k8s-ctl, k8s-cmp nodes
+ # These nodes will have different mounts and this is expected
+ exclude_nodes = local_salt_client.test_ping(
+ tgt="I@nova:compute or "
+ "I@ceph:osd or "
+ "I@salt:control or "
+ "I@prometheus:server and not I@influxdb:server or "
+ "I@kubernetes:* and not I@etcd:* or "
+ "I@docker:host and not I@prometheus:server and not I@kubernetes:* or "
+ "I@gerrit:client and I@kubernetes:pool and not I@salt:master",
+ expr_form='compound').keys()
+
+ if len(mounts_by_nodes.keys()) < 2:
+ pytest.skip("Nothing to compare - only 1 node")
+
+ result = {}
+ pretty_result = {}
+
+ for node in mounts_by_nodes:
+ if node in exclude_nodes:
+ continue
+ result[node] = "\n".join(sorted(mounts_by_nodes[node].split()))
+ pretty_result[node] = sorted(mounts_by_nodes[node].split())
+
+ if not result:
+ pytest.skip("These nodes are skipped")
+
+ assert len(set(result.values())) == 1,\
+ "The nodes in the same group have different mounts:\n{}".format(
+ json.dumps(pretty_result, indent=4))
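A standalone sketch (the mount lists are made up) of the comparison used in test_mounted_file_systems: each node's `df -h | awk '{print $1}'` output is sorted and joined, and the group passes only when every node ends up with an identical string.
```python
import json

# Made-up `df -h | awk '{print $1}'` output per node
mounts_by_nodes = {
    "ctl01.local": "Filesystem /dev/vda1 tmpfs",
    "ctl02.local": "Filesystem /dev/vda1 tmpfs",
    "ctl03.local": "Filesystem /dev/vda1 /dev/vdb1 tmpfs",  # extra mount
}

result = {node: "\n".join(sorted(out.split()))
          for node, out in mounts_by_nodes.items()}
if len(set(result.values())) != 1:
    pretty = {node: sorted(out.split()) for node, out in mounts_by_nodes.items()}
    print("The nodes in the same group have different mounts:\n{}".format(
        json.dumps(pretty, indent=4)))
```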
diff --git a/test_set/cvp-sanity/tests/test_mtu.py b/test_set/cvp-sanity/tests/test_mtu.py
new file mode 100644
index 0000000..0a3d2d0
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_mtu.py
@@ -0,0 +1,72 @@
+import pytest
+import json
+import utils
+import os
+
+
+def test_mtu(local_salt_client, nodes_in_group):
+ testname = os.path.basename(__file__).split('.')[0]
+ config = utils.get_configuration()
+ skipped_ifaces = config.get(testname)["skipped_ifaces"] or \
+ ["bonding_masters", "lo", "veth", "tap", "cali", "qv", "qb", "br-int", "vxlan"]
+ total = {}
+ network_info = local_salt_client.cmd(
+ tgt="L@"+','.join(nodes_in_group),
+ param='ls /sys/class/net/',
+ expr_form='compound')
+
+ kvm_nodes = local_salt_client.test_ping(tgt='salt:control').keys()
+
+ if len(network_info.keys()) < 2:
+ pytest.skip("Nothing to compare - only 1 node")
+
+ for node, ifaces_info in network_info.iteritems():
+ if isinstance(ifaces_info, bool):
+ print ("{} node is skipped".format(node))
+ continue
+ if node in kvm_nodes:
+ kvm_info = local_salt_client.cmd(tgt=node,
+ param="virsh list | "
+ "awk '{print $2}' | "
+ "xargs -n1 virsh domiflist | "
+ "grep -v br-pxe | grep br- | "
+ "awk '{print $1}'")
+ ifaces_info = kvm_info.get(node)
+ node_ifaces = ifaces_info.split('\n')
+ ifaces = {}
+ for iface in node_ifaces:
+ for skipped_iface in skipped_ifaces:
+ if skipped_iface in iface:
+ break
+ else:
+ iface_mtu = local_salt_client.cmd(tgt=node,
+ param='cat /sys/class/'
+ 'net/{}/mtu'.format(iface))
+ ifaces[iface] = iface_mtu.get(node)
+ total[node] = ifaces
+
+ nodes = []
+ mtu_data = []
+ my_set = set()
+
+ for node in total:
+ nodes.append(node)
+ my_set.update(total[node].keys())
+ for interf in my_set:
+ diff = []
+ row = []
+ for node in nodes:
+ if interf in total[node].keys():
+ diff.append(total[node][interf])
+ row.append("{}: {}".format(node, total[node][interf]))
+ else:
+ # skip node with no virbr0 or virbr0-nic interfaces
+ if interf not in ['virbr0', 'virbr0-nic']:
+ row.append("{}: No interface".format(node))
+ if diff.count(diff[0]) < len(nodes):
+ row.sort()
+ row.insert(0, interf)
+ mtu_data.append(row)
+ assert len(mtu_data) == 0, \
+ "Several problems found: {0}".format(
+ json.dumps(mtu_data, indent=4))
diff --git a/test_set/cvp-sanity/tests/test_nodes.py b/test_set/cvp-sanity/tests/test_nodes.py
new file mode 100644
index 0000000..687f3ae
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_nodes.py
@@ -0,0 +1,18 @@
+import json
+import pytest
+
+
+def test_minions_status(local_salt_client):
+ result = local_salt_client.cmd(
+ tgt='salt:master',
+ param='salt-run manage.status timeout=10 --out=json',
+ expr_form='pillar', check_status=True)
+ statuses = {}
+ try:
+ statuses = json.loads(result.values()[0])
+ except Exception as e:
+ pytest.fail(
+ "Could not check the result: {}\n"
+ "Nodes status result: {}".format(e, result))
+ assert not statuses["down"], "Some minions are down:\n {}".format(
+ statuses["down"])
diff --git a/test_set/cvp-sanity/tests/test_nodes_in_maas.py b/test_set/cvp-sanity/tests/test_nodes_in_maas.py
new file mode 100644
index 0000000..fafd150
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_nodes_in_maas.py
@@ -0,0 +1,69 @@
+import json
+import pytest
+import utils
+
+
+def get_maas_logged_in_profiles(local_salt_client):
+ get_apis = local_salt_client.cmd_any(
+ tgt='maas:cluster',
+ param='maas list')
+ return get_apis
+
+
+def login_to_maas(local_salt_client, user):
+ login = local_salt_client.cmd_any(
+ tgt='maas:cluster',
+ param="source /var/lib/maas/.maas_login.sh ; echo {}=${{PROFILE}}"
+ "".format(user))
+ return login
+
+
+def test_nodes_deployed_in_maas(local_salt_client):
+ config = utils.get_configuration()
+
+ # 1. Check MAAS is present on some node
+ check_maas = local_salt_client.test_ping(tgt='maas:cluster')
+ if not check_maas:
+ pytest.skip("Could not find MAAS on the environment")
+
+ # 2. Get MAAS admin user from model
+ maas_admin_user = local_salt_client.pillar_get(
+ tgt='maas:cluster',
+ param='_param:maas_admin_username')
+ if not maas_admin_user:
+ pytest.skip("Could not find MAAS admin user in the model by parameter "
+ "'maas_admin_username'")
+
+ # 3. Check maas has logged in profiles and try to log in if not
+ logged_profiles = get_maas_logged_in_profiles(local_salt_client)
+ if maas_admin_user not in logged_profiles:
+ login = login_to_maas(local_salt_client, maas_admin_user)
+ newly_logged = get_maas_logged_in_profiles(local_salt_client)
+ if maas_admin_user not in newly_logged:
+ pytest.skip(
+ "Could not find '{}' profile in MAAS and could not log in.\n"
+ "Current MAAS logged in profiles: {}.\nLogin output: {}"
+ "".format(maas_admin_user, newly_logged, login))
+
+ # 4. Get nodes in MAAS
+ get_nodes = local_salt_client.cmd(
+ tgt='maas:cluster',
+ param='maas {} nodes read'.format(maas_admin_user),
+ expr_form='pillar')
+ result = ""
+ try:
+ result = json.loads(get_nodes.values()[0])
+ except ValueError as e:
+ assert result, "Could not get nodes: {}\n{}". \
+ format(get_nodes, e)
+
+ # 5. Check all nodes are in Deployed status
+ failed_nodes = []
+ for node in result:
+ if node["fqdn"] in config.get("skipped_nodes"):
+ continue
+ if "status_name" in node.keys():
+ if node["status_name"] != 'Deployed':
+ failed_nodes.append({node["fqdn"]: node["status_name"]})
+ assert not failed_nodes, "Some nodes have unexpected status in MAAS:" \
+ "\n{}".format(json.dumps(failed_nodes, indent=4))
diff --git a/test_set/cvp-sanity/tests/test_nova_services.py b/test_set/cvp-sanity/tests/test_nova_services.py
new file mode 100644
index 0000000..6505d30
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_nova_services.py
@@ -0,0 +1,39 @@
+import pytest
+
+
+@pytest.mark.usefixtures('check_openstack')
+def test_nova_services_status(local_salt_client):
+ result = local_salt_client.cmd_any(
+ tgt='keystone:server',
+ param='. /root/keystonercv3;'
+ 'nova service-list | grep "down\|disabled" | grep -v "Forced down"')
+
+ assert result == '', \
+ '''Some nova services are down or disabled'''
+
+
+@pytest.mark.usefixtures('check_openstack')
+def test_nova_hosts_consistent(local_salt_client):
+ all_cmp_services = local_salt_client.cmd_any(
+ tgt='keystone:server',
+ param='. /root/keystonercv3;'
+ 'nova service-list | grep "nova-compute" | wc -l')
+ enabled_cmp_services = local_salt_client.cmd_any(
+ tgt='keystone:server',
+ param='. /root/keystonercv3;'
+ 'nova service-list | grep "nova-compute" | grep "enabled" | wc -l')
+ hosts = local_salt_client.cmd_any(
+ tgt='keystone:server',
+ param='. /root/keystonercv3;'
+ 'openstack host list | grep "compute" | wc -l')
+ hypervisors = local_salt_client.cmd_any(
+ tgt='keystone:server',
+ param='. /root/keystonercv3;'
+ 'openstack hypervisor list | egrep -v "\-----|ID" | wc -l')
+
+ assert all_cmp_services == hypervisors, \
+ "Number of nova-compute services ({}) does not match number of " \
+ "hypervisors ({}).".format(all_cmp_services, hypervisors)
+ assert enabled_cmp_services == hosts, \
+ "Number of enabled nova-compute services ({}) does not match number \
+ of hosts ({}).".format(enabled_cmp_services, hosts)
diff --git a/test_set/cvp-sanity/tests/test_ntp_sync.py b/test_set/cvp-sanity/tests/test_ntp_sync.py
new file mode 100644
index 0000000..abf0d8a
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_ntp_sync.py
@@ -0,0 +1,62 @@
+import json
+import utils
+import pytest
+
+@pytest.mark.xfail
+def test_ntp_sync(local_salt_client):
+ """Test checks that system time is the same across all nodes"""
+
+ config = utils.get_configuration()
+ nodes_time = local_salt_client.cmd(
+ tgt='*',
+ param='date +%s',
+ expr_form='compound')
+ result = {}
+ for node, time in nodes_time.iteritems():
+ if isinstance(nodes_time[node], bool):
+ time = 'Cannot access node(-s)'
+ if node in config.get("ntp_skipped_nodes"):
+ continue
+ if time in result:
+ result[time].append(node)
+ result[time].sort()
+ else:
+ result[time] = [node]
+ assert len(result) <= 1, 'Not all nodes have the same time:\n {}'.format(
+ json.dumps(result, indent=4))
+
+
+def test_ntp_peers_state(local_salt_client):
+ """Test gets ntpq peers state and checks the system peer is declared"""
+ state = local_salt_client.cmd(
+ tgt='*',
+ param='ntpq -pn',
+ expr_form='compound')
+ final_result = {}
+ for node in state:
+ sys_peer_declared = False
+ if not state[node]:
+ # TODO: do not skip
+ print ("Node {} is skipped".format(node))
+ continue
+ ntpq_output = state[node].split('\n')
+ # if there is no 'remote' in the header of the ntpq output,
+ # the 'ntpq -pn' command failed and peers cannot be checked
+ if 'remote' not in ntpq_output[0]:
+ final_result[node] = ntpq_output
+ continue
+
+ # take 3rd+ line of output (the actual peers)
+ try:
+ peers = ntpq_output[2:]
+ except IndexError:
+ final_result[node] = ntpq_output
+ continue
+ for p in peers:
+ if p.split()[0].startswith("*"):
+ sys_peer_declared = True
+ if not sys_peer_declared:
+ final_result[node] = ntpq_output
+ assert not final_result,\
+ "NTP peers state is not expected on some nodes, could not find " \
+ "declared system peer:\n{}".format(json.dumps(final_result, indent=4))
diff --git a/test_set/cvp-sanity/tests/test_oss.py b/test_set/cvp-sanity/tests/test_oss.py
new file mode 100644
index 0000000..9e919c5
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_oss.py
@@ -0,0 +1,41 @@
+import requests
+import csv
+import json
+
+
+def test_oss_status(local_salt_client, check_cicd):
+ """
+ # Get IP of HAPROXY interface from pillar using 'salt -C "I@docker:swarm:role:master" pillar.get haproxy:proxy:listen:stats:binds:address'
+ # Read info from web-page "http://{haproxy:proxy:listen:stats:binds:address}:9600/haproxy?stats;csv"
+ # Check that each service from list 'aptly', 'openldap', 'gerrit', 'jenkins', 'postgresql',
+ 'pushkin', 'rundeck', 'elasticsearch' :
+ * has UP status
+ * has OPEN status
+ """
+ HAPROXY_STATS_IP = local_salt_client.pillar_get(
+ tgt='docker:swarm:role:master',
+ param='haproxy:proxy:listen:stats:binds:address')
+ proxies = {"http": None, "https": None}
+ csv_result = requests.get('http://{}:9600/haproxy?stats;csv'.format(
+ HAPROXY_STATS_IP),
+ proxies=proxies).content
+ data = csv_result.lstrip('# ')
+ wrong_data = []
+ list_of_services = ['aptly', 'openldap', 'gerrit', 'jenkins', 'postgresql',
+ 'pushkin', 'rundeck', 'elasticsearch']
+ for service in list_of_services:
+ check = local_salt_client.test_ping(tgt='{}:client'.format(service))
+ if check:
+ lines = [row for row in csv.DictReader(data.splitlines())
+ if service in row['pxname']]
+ for row in lines:
+ info = "Service {0} with svname {1} and status {2}".format(
+ row['pxname'], row['svname'], row['status'])
+ if row['svname'] == 'FRONTEND' and row['status'] != 'OPEN':
+ wrong_data.append(info)
+ if row['svname'] != 'FRONTEND' and row['status'] != 'UP':
+ wrong_data.append(info)
+
+ assert len(wrong_data) == 0, \
+ '''Some haproxy services are in wrong state
+ {}'''.format(json.dumps(wrong_data, indent=4))
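A standalone sketch (the CSV sample is trimmed and made up; real haproxy stats have many more columns) of how the stats CSV is evaluated in test_oss_status: FRONTEND rows must be OPEN, all other rows must be UP.
```python
import csv

# Made-up, trimmed haproxy stats CSV
data = (
    "pxname,svname,status\n"
    "jenkins,FRONTEND,OPEN\n"
    "jenkins,jenkins_server,UP\n"
    "gerrit,FRONTEND,DOWN\n"
)

wrong_data = []
for row in csv.DictReader(data.splitlines()):
    info = "Service {0} with svname {1} and status {2}".format(
        row['pxname'], row['svname'], row['status'])
    if row['svname'] == 'FRONTEND' and row['status'] != 'OPEN':
        wrong_data.append(info)
    if row['svname'] != 'FRONTEND' and row['status'] != 'UP':
        wrong_data.append(info)

print(wrong_data)
```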
diff --git a/test_set/cvp-sanity/tests/test_packet_checker.py b/test_set/cvp-sanity/tests/test_packet_checker.py
new file mode 100644
index 0000000..6c1ccc9
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_packet_checker.py
@@ -0,0 +1,118 @@
+import pytest
+import json
+import utils
+
+
+def test_check_package_versions(local_salt_client, nodes_in_group):
+ exclude_packages = utils.get_configuration().get("skipped_packages", [])
+ packages_versions = local_salt_client.cmd(tgt="L@"+','.join(nodes_in_group),
+ fun='lowpkg.list_pkgs',
+ expr_form='compound')
+ # Let's exclude cid01 and dbs01 nodes from this check
+ exclude_nodes = local_salt_client.test_ping(tgt="I@galera:master or I@gerrit:client",
+ expr_form='compound').keys()
+ total_nodes = [i for i in packages_versions.keys() if i not in exclude_nodes]
+ if len(total_nodes) < 2:
+ pytest.skip("Nothing to compare - only 1 node")
+
+ nodes = []
+ pkts_data = []
+ packages_names = set()
+
+ for node in total_nodes:
+ if not packages_versions[node]:
+ # TODO: do not skip node
+            print "Node {} is skipped".format(node)
+ continue
+ nodes.append(node)
+ packages_names.update(packages_versions[node].keys())
+
+ for deb in packages_names:
+ if deb in exclude_packages:
+ continue
+ diff = []
+ row = []
+ for node in nodes:
+ if not packages_versions[node]:
+ continue
+ if deb in packages_versions[node].keys():
+ diff.append(packages_versions[node][deb])
+ row.append("{}: {}".format(node, packages_versions[node][deb]))
+ else:
+ row.append("{}: No package".format(node))
+ if diff.count(diff[0]) < len(nodes):
+ row.sort()
+ row.insert(0, deb)
+ pkts_data.append(row)
+ assert len(pkts_data) <= 1, \
+ "Several problems found: {0}".format(
+ json.dumps(pkts_data, indent=4))
+
+
+def test_packages_are_latest(local_salt_client, nodes_in_group):
+ config = utils.get_configuration()
+ skip = config.get("test_packages")["skip_test"]
+ if skip:
+ pytest.skip("Test for the latest packages is disabled")
+ skipped_pkg = config.get("test_packages")["skipped_packages"]
+ info_salt = local_salt_client.cmd(
+ tgt='L@' + ','.join(nodes_in_group),
+ param='apt list --upgradable 2>/dev/null | grep -v Listing',
+ expr_form='compound')
+ for node in nodes_in_group:
+ result = []
+ if info_salt[node]:
+ upg_list = info_salt[node].split('\n')
+ for i in upg_list:
+ if i.split('/')[0] not in skipped_pkg:
+ result.append(i)
+        assert not result, "Please check outdated packages at {}:\n{}".format(
+            node, "\n".join(result))
+
+
+def test_check_module_versions(local_salt_client, nodes_in_group):
+ exclude_modules = utils.get_configuration().get("skipped_modules", [])
+ pre_check = local_salt_client.cmd(
+ tgt="L@"+','.join(nodes_in_group),
+ param='dpkg -l | grep "python-pip "',
+ expr_form='compound')
+ if pre_check.values().count('') > 0:
+ pytest.skip("pip is not installed on one or more nodes")
+
+ exclude_nodes = local_salt_client.test_ping(tgt="I@galera:master or I@gerrit:client",
+ expr_form='compound').keys()
+ total_nodes = [i for i in pre_check.keys() if i not in exclude_nodes]
+
+ if len(total_nodes) < 2:
+ pytest.skip("Nothing to compare - only 1 node")
+ list_of_pip_packages = local_salt_client.cmd(tgt="L@"+','.join(nodes_in_group),
+ param='pip.freeze', expr_form='compound')
+
+ nodes = []
+
+ pkts_data = []
+ packages_names = set()
+
+ for node in total_nodes:
+ nodes.append(node)
+ packages_names.update([x.split("=")[0] for x in list_of_pip_packages[node]])
+ list_of_pip_packages[node] = dict([x.split("==") for x in list_of_pip_packages[node]])
+
+ for deb in packages_names:
+ if deb in exclude_modules:
+ continue
+ diff = []
+ row = []
+ for node in nodes:
+ if deb in list_of_pip_packages[node].keys():
+ diff.append(list_of_pip_packages[node][deb])
+ row.append("{}: {}".format(node, list_of_pip_packages[node][deb]))
+ else:
+ row.append("{}: No module".format(node))
+ if diff.count(diff[0]) < len(nodes):
+ row.sort()
+ row.insert(0, deb)
+ pkts_data.append(row)
+ assert len(pkts_data) <= 1, \
+ "Several problems found: {0}".format(
+ json.dumps(pkts_data, indent=4))
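
Both version checks above follow the same pattern: collect the per-node version of each package and flag the package when the values are not identical on every node. A toy illustration of that comparison with invented node names and versions:

```python
# Toy version of the comparison logic; node names and versions are made up.
packages_versions = {
    'ctl01': {'salt-common': '2017.7.8', 'python-pip': '9.0.1'},
    'ctl02': {'salt-common': '2017.7.8', 'python-pip': '8.1.1'},
}

nodes = list(packages_versions)
all_pkgs = set()
for node_pkgs in packages_versions.values():
    all_pkgs.update(node_pkgs)

problems = []
for pkg in sorted(all_pkgs):
    vals = [packages_versions[n].get(pkg, 'No package') for n in nodes]
    if vals.count(vals[0]) < len(nodes):   # not identical on every node
        problems.append([pkg] + sorted(
            '{}: {}'.format(n, v) for n, v in zip(nodes, vals)))

print(problems)  # python-pip differs between ctl01 and ctl02
```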
diff --git a/cvp-sanity/cvp_checks/tests/test_rabbit_cluster.py b/test_set/cvp-sanity/tests/test_rabbit_cluster.py
similarity index 84%
rename from cvp-sanity/cvp_checks/tests/test_rabbit_cluster.py
rename to test_set/cvp-sanity/tests/test_rabbit_cluster.py
index daae7ce..73efb57 100644
--- a/cvp-sanity/cvp_checks/tests/test_rabbit_cluster.py
+++ b/test_set/cvp-sanity/tests/test_rabbit_cluster.py
@@ -1,4 +1,4 @@
-from cvp_checks import utils
+import utils
def test_checking_rabbitmq_cluster(local_salt_client):
@@ -6,23 +6,26 @@
# it may be reintroduced in future
config = utils.get_configuration()
# request pillar data from rmq nodes
+ # TODO: pillar.get
rabbitmq_pillar_data = local_salt_client.cmd(
- 'rabbitmq:server', 'pillar.data',
- ['rabbitmq:cluster'], expr_form='pillar')
+ tgt='rabbitmq:server',
+ fun='pillar.get',
+ param='rabbitmq:cluster',
+ expr_form='pillar')
# creating dictionary {node:cluster_size_for_the_node}
# with required cluster size for each node
control_dict = {}
required_cluster_size_dict = {}
# request actual data from rmq nodes
rabbit_actual_data = local_salt_client.cmd(
- 'rabbitmq:server', 'cmd.run',
- ['rabbitmqctl cluster_status'], expr_form='pillar')
+ tgt='rabbitmq:server',
+ param='rabbitmqctl cluster_status', expr_form='pillar')
for node in rabbitmq_pillar_data:
if node in config.get('skipped_nodes'):
del rabbit_actual_data[node]
continue
cluster_size_from_the_node = len(
- rabbitmq_pillar_data[node]['rabbitmq:cluster']['members'])
+ rabbitmq_pillar_data[node]['members'])
required_cluster_size_dict.update({node: cluster_size_from_the_node})
# find actual cluster size for each node
diff --git a/cvp-sanity/cvp_checks/tests/test_repo_list.py b/test_set/cvp-sanity/tests/test_repo_list.py
similarity index 68%
rename from cvp-sanity/cvp_checks/tests/test_repo_list.py
rename to test_set/cvp-sanity/tests/test_repo_list.py
index 0e35f37..5e70eeb 100644
--- a/cvp-sanity/cvp_checks/tests/test_repo_list.py
+++ b/test_set/cvp-sanity/tests/test_repo_list.py
@@ -1,16 +1,18 @@
-import pytest
-from cvp_checks import utils
-
-
def test_list_of_repo_on_nodes(local_salt_client, nodes_in_group):
- info_salt = local_salt_client.cmd('L@' + ','.join(
- nodes_in_group),
- 'pillar.data', ['linux:system:repo'],
+ # TODO: pillar.get
+ info_salt = local_salt_client.cmd(tgt='L@' + ','.join(
+ nodes_in_group),
+ fun='pillar.get',
+ param='linux:system:repo',
expr_form='compound')
# check if some repos are disabled
for node in info_salt.keys():
- repos = info_salt[node]["linux:system:repo"]
+ repos = info_salt[node]
+ if not info_salt[node]:
+ # TODO: do not skip node
+            print "Node {} is skipped".format(node)
+ continue
for repo in repos.keys():
repository = repos[repo]
if "enabled" in repository:
@@ -18,21 +20,19 @@
repos.pop(repo)
raw_actual_info = local_salt_client.cmd(
- 'L@' + ','.join(
- nodes_in_group),
- 'cmd.run',
- ['cat /etc/apt/sources.list.d/*;'
- 'cat /etc/apt/sources.list|grep deb|grep -v "#"'],
- expr_form='compound')
+ tgt='L@' + ','.join(
+ nodes_in_group),
+ param='cat /etc/apt/sources.list.d/*;'
+ 'cat /etc/apt/sources.list|grep deb|grep -v "#"',
+ expr_form='compound', check_status=True)
actual_repo_list = [item.replace('/ ', ' ').replace('[arch=amd64] ', '')
for item in raw_actual_info.values()[0].split('\n')]
- if info_salt.values()[0]['linux:system:repo'] == '':
+ if info_salt.values()[0] == '':
expected_salt_data = ''
else:
expected_salt_data = [repo['source'].replace('/ ', ' ')
.replace('[arch=amd64] ', '')
- for repo in info_salt.values()[0]
- ['linux:system:repo'].values()
+ for repo in info_salt.values()[0].values()
if 'source' in repo.keys()]
diff = {}
diff --git a/cvp-sanity/cvp_checks/tests/test_salt_master.py b/test_set/cvp-sanity/tests/test_salt_master.py
similarity index 73%
rename from cvp-sanity/cvp_checks/tests/test_salt_master.py
rename to test_set/cvp-sanity/tests/test_salt_master.py
index 7649767..7ae5754 100644
--- a/cvp-sanity/cvp_checks/tests/test_salt_master.py
+++ b/test_set/cvp-sanity/tests/test_salt_master.py
@@ -1,8 +1,7 @@
def test_uncommited_changes(local_salt_client):
git_status = local_salt_client.cmd(
- 'salt:master',
- 'cmd.run',
- ['cd /srv/salt/reclass/classes/cluster/; git status'],
+ tgt='salt:master',
+ param='cd /srv/salt/reclass/classes/cluster/; git status',
expr_form='pillar')
assert 'nothing to commit' in git_status.values()[0], 'Git status showed' \
' some unmerged changes {}'''.format(git_status.values()[0])
@@ -10,9 +9,8 @@
def test_reclass_smoke(local_salt_client):
reclass = local_salt_client.cmd(
- 'salt:master',
- 'cmd.run',
- ['reclass-salt --top; echo $?'],
+ tgt='salt:master',
+ param='reclass-salt --top; echo $?',
expr_form='pillar')
result = reclass[reclass.keys()[0]][-1]
diff --git a/cvp-sanity/cvp_checks/tests/test_services.py b/test_set/cvp-sanity/tests/test_services.py
similarity index 92%
rename from cvp-sanity/cvp_checks/tests/test_services.py
rename to test_set/cvp-sanity/tests/test_services.py
index 4afad0b..c704437 100644
--- a/cvp-sanity/cvp_checks/tests/test_services.py
+++ b/test_set/cvp-sanity/tests/test_services.py
@@ -1,7 +1,7 @@
import pytest
import json
import os
-from cvp_checks import utils
+import utils
# Some nodes can have services that are not applicable for other nodes in similar group.
# For example , there are 3 node in kvm group, but just kvm03 node has srv-volumes-backup.mount service
@@ -16,7 +16,9 @@
Inconsistent services will be checked with another test case
"""
exclude_services = utils.get_configuration().get("skipped_services", [])
- services_by_nodes = local_salt_client.cmd("L@"+','.join(nodes_in_group), 'service.get_all', expr_form='compound')
+ services_by_nodes = local_salt_client.cmd(tgt="L@"+','.join(nodes_in_group),
+ fun='service.get_all',
+ expr_form='compound')
if len(services_by_nodes.keys()) < 2:
pytest.skip("Nothing to compare - only 1 node")
@@ -26,6 +28,10 @@
all_services = set()
for node in services_by_nodes:
+ if not services_by_nodes[node]:
+ # TODO: do not skip node
+            print "Node {} is skipped".format(node)
+ continue
nodes.append(node)
all_services.update(services_by_nodes[node])
diff --git a/test_set/cvp-sanity/tests/test_single_vip.py b/test_set/cvp-sanity/tests/test_single_vip.py
new file mode 100644
index 0000000..7a1c2f8
--- /dev/null
+++ b/test_set/cvp-sanity/tests/test_single_vip.py
@@ -0,0 +1,26 @@
+import utils
+import json
+
+
+def test_single_vip_exists(local_salt_client):
+ """Test checks that there is only one VIP address
+ within one group of nodes (where applicable).
+ Steps:
+ 1. Get IP addresses for nodes via salt cmd.run 'ip a | grep /32'
+    2. Check that exactly one node in each group reports a /32 address (the VIP).
+ """
+ groups = utils.calculate_groups()
+ no_vip = {}
+ for group in groups:
+ if group in ['cmp', 'cfg', 'kvm', 'cmn', 'osd', 'gtw']:
+ continue
+ nodes_list = local_salt_client.cmd(
+ "L@" + ','.join(groups[group]), 'cmd.run', 'ip a | grep /32', expr_form='compound')
+ result = [x for x in nodes_list.values() if x]
+ if len(result) != 1:
+ if len(result) == 0:
+ no_vip[group] = 'No vip found'
+ else:
+ no_vip[group] = nodes_list
+    assert len(no_vip) < 1, "Some groups of nodes have a problem with the VIP " \
+                            "address:\n{}".format(json.dumps(no_vip, indent=4))
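
The VIP test treats a non-empty `ip a | grep /32` reply as "this node currently holds the VIP" and expects exactly one holder per group. A small sketch of that decision on hypothetical per-node replies:

```python
# Hypothetical 'ip a | grep /32' replies for one group of nodes;
# an empty string means the node does not hold the VIP.
replies = {
    'ctl01': '',
    'ctl02': '    inet 10.0.0.10/32 scope global ens3',
    'ctl03': '',
}

holders = [node for node, out in replies.items() if out]
assert len(holders) == 1, "Expected exactly one VIP holder, got {}".format(holders)
print("VIP is held by {}".format(holders[0]))  # ctl02
```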
diff --git a/cvp-sanity/cvp_checks/tests/test_stacklight.py b/test_set/cvp-sanity/tests/test_stacklight.py
similarity index 64%
rename from cvp-sanity/cvp_checks/tests/test_stacklight.py
rename to test_set/cvp-sanity/tests/test_stacklight.py
index e748f61..703deea 100644
--- a/cvp-sanity/cvp_checks/tests/test_stacklight.py
+++ b/test_set/cvp-sanity/tests/test_stacklight.py
@@ -2,46 +2,42 @@
import requests
import datetime
import pytest
-from cvp_checks import utils
@pytest.mark.usefixtures('check_kibana')
def test_elasticsearch_cluster(local_salt_client):
- salt_output = local_salt_client.cmd(
- 'kibana:server',
- 'pillar.get',
- ['_param:haproxy_elasticsearch_bind_host'],
- expr_form='pillar')
+ salt_output = local_salt_client.pillar_get(
+ tgt='kibana:server',
+ param='_param:haproxy_elasticsearch_bind_host')
proxies = {"http": None, "https": None}
- for node in salt_output.keys():
- IP = salt_output[node]
- assert requests.get('http://{}:9200/'.format(IP),
- proxies=proxies).status_code == 200, \
- 'Cannot check elasticsearch url on {}.'.format(IP)
- resp = requests.get('http://{}:9200/_cat/health'.format(IP),
- proxies=proxies).content
- assert resp.split()[3] == 'green', \
- 'elasticsearch status is not good {}'.format(
- json.dumps(resp, indent=4))
- assert resp.split()[4] == '3', \
- 'elasticsearch status is not good {}'.format(
- json.dumps(resp, indent=4))
- assert resp.split()[5] == '3', \
- 'elasticsearch status is not good {}'.format(
- json.dumps(resp, indent=4))
- assert resp.split()[10] == '0', \
- 'elasticsearch status is not good {}'.format(
- json.dumps(resp, indent=4))
- assert resp.split()[13] == '100.0%', \
- 'elasticsearch status is not good {}'.format(
- json.dumps(resp, indent=4))
+ IP = salt_output
+ assert requests.get('http://{}:9200/'.format(IP),
+ proxies=proxies).status_code == 200, \
+ 'Cannot check elasticsearch url on {}.'.format(IP)
+ resp = requests.get('http://{}:9200/_cat/health'.format(IP),
+ proxies=proxies).content
+ assert resp.split()[3] == 'green', \
+ 'elasticsearch status is not good {}'.format(
+ json.dumps(resp, indent=4))
+ assert resp.split()[4] == '3', \
+ 'elasticsearch status is not good {}'.format(
+ json.dumps(resp, indent=4))
+ assert resp.split()[5] == '3', \
+ 'elasticsearch status is not good {}'.format(
+ json.dumps(resp, indent=4))
+ assert resp.split()[10] == '0', \
+ 'elasticsearch status is not good {}'.format(
+ json.dumps(resp, indent=4))
+ assert resp.split()[13] == '100.0%', \
+ 'elasticsearch status is not good {}'.format(
+ json.dumps(resp, indent=4))
@pytest.mark.usefixtures('check_kibana')
def test_kibana_status(local_salt_client):
proxies = {"http": None, "https": None}
- IP = utils.get_monitoring_ip('stacklight_log_address')
+ IP = local_salt_client.pillar_get(param='_param:stacklight_log_address')
resp = requests.get('http://{}:5601/api/status'.format(IP),
proxies=proxies).content
body = json.loads(resp)
@@ -57,19 +53,16 @@
def test_elasticsearch_node_count(local_salt_client):
now = datetime.datetime.now()
today = now.strftime("%Y.%m.%d")
- active_nodes = utils.get_active_nodes()
- salt_output = local_salt_client.cmd(
- 'kibana:server',
- 'pillar.get',
- ['_param:haproxy_elasticsearch_bind_host'],
- expr_form='pillar')
+ salt_output = local_salt_client.pillar_get(
+ tgt='kibana:server',
+ param='_param:haproxy_elasticsearch_bind_host')
- IP = salt_output.values()[0]
+ IP = salt_output
headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}
proxies = {"http": None, "https": None}
data = ('{"size": 0, "aggs": '
'{"uniq_hostname": '
- '{"terms": {"size": 500, '
+ '{"terms": {"size": 1000, '
'"field": "Hostname.keyword"}}}}')
response = requests.post(
'http://{0}:9200/log-{1}/_search?pretty'.format(IP, today),
@@ -79,31 +72,28 @@
assert 200 == response.status_code, 'Unexpected code {}'.format(
response.text)
resp = json.loads(response.text)
- cluster_domain = local_salt_client.cmd('salt:control',
- 'pillar.get',
- ['_param:cluster_domain'],
- expr_form='pillar').values()[0]
+ cluster_domain = local_salt_client.pillar_get(param='_param:cluster_domain')
monitored_nodes = []
for item_ in resp['aggregations']['uniq_hostname']['buckets']:
node_name = item_['key']
monitored_nodes.append(node_name + '.' + cluster_domain)
missing_nodes = []
- for node in active_nodes.keys():
+ all_nodes = local_salt_client.test_ping(tgt='*').keys()
+ for node in all_nodes:
if node not in monitored_nodes:
missing_nodes.append(node)
assert len(missing_nodes) == 0, \
'Not all nodes are in Elasticsearch. Found {0} keys, ' \
'expected {1}. Missing nodes: \n{2}'. \
- format(len(monitored_nodes), len(active_nodes), missing_nodes)
+ format(len(monitored_nodes), len(all_nodes), missing_nodes)
def test_stacklight_services_replicas(local_salt_client):
# TODO
# change to docker:swarm:role:master ?
salt_output = local_salt_client.cmd(
- 'I@docker:client:stack:monitoring and I@prometheus:server',
- 'cmd.run',
- ['docker service ls'],
+ tgt='I@docker:client:stack:monitoring and I@prometheus:server',
+ param='docker service ls',
expr_form='compound')
if not salt_output:
@@ -122,15 +112,14 @@
@pytest.mark.usefixtures('check_prometheus')
def test_prometheus_alert_count(local_salt_client, ctl_nodes_pillar):
- IP = utils.get_monitoring_ip('cluster_public_host')
+ IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
# keystone:server can return 3 nodes instead of 1
# this will be fixed later
# TODO
nodes_info = local_salt_client.cmd(
- ctl_nodes_pillar,
- 'cmd.run',
- ['curl -s http://{}:15010/alerts | grep icon-chevron-down | '
- 'grep -v "0 active"'.format(IP)],
+ tgt=ctl_nodes_pillar,
+ param='curl -s http://{}:15010/alerts | grep icon-chevron-down | '
+ 'grep -v "0 active"'.format(IP),
expr_form='pillar')
result = nodes_info[nodes_info.keys()[0]].replace('</td>', '').replace(
@@ -141,9 +130,8 @@
def test_stacklight_containers_status(local_salt_client):
salt_output = local_salt_client.cmd(
- 'I@docker:swarm:role:master and I@prometheus:server',
- 'cmd.run',
- ['docker service ps $(docker stack services -q monitoring)'],
+ tgt='I@docker:swarm:role:master and I@prometheus:server',
+ param='docker service ps $(docker stack services -q monitoring)',
expr_form='compound')
if not salt_output:
@@ -172,10 +160,10 @@
def test_running_telegraf_services(local_salt_client):
- salt_output = local_salt_client.cmd('telegraf:agent',
- 'service.status',
- 'telegraf',
- expr_form='pillar')
+ salt_output = local_salt_client.cmd(tgt='telegraf:agent',
+ fun='service.status',
+ param='telegraf',
+ expr_form='pillar',)
if not salt_output:
pytest.skip("Telegraf or telegraf:agent \
@@ -189,9 +177,9 @@
def test_running_fluentd_services(local_salt_client):
- salt_output = local_salt_client.cmd('fluentd:agent',
- 'service.status',
- 'td-agent',
+ salt_output = local_salt_client.cmd(tgt='fluentd:agent',
+ fun='service.status',
+ param='td-agent',
expr_form='pillar')
result = [{node: status} for node, status
in salt_output.items()
diff --git a/cvp-sanity/cvp_checks/tests/test_ui_addresses.py b/test_set/cvp-sanity/tests/test_ui_addresses.py
similarity index 66%
rename from cvp-sanity/cvp_checks/tests/test_ui_addresses.py
rename to test_set/cvp-sanity/tests/test_ui_addresses.py
index 95565ee..0c65451 100644
--- a/cvp-sanity/cvp_checks/tests/test_ui_addresses.py
+++ b/test_set/cvp-sanity/tests/test_ui_addresses.py
@@ -1,40 +1,33 @@
-from cvp_checks import utils
import pytest
@pytest.mark.usefixtures('check_openstack')
def test_ui_horizon(local_salt_client, ctl_nodes_pillar):
- salt_output = local_salt_client.cmd(
- 'horizon:server',
- 'pillar.get',
- ['_param:cluster_public_host'],
- expr_form='pillar')
- if not salt_output:
+ IP = local_salt_client.pillar_get(
+ tgt='horizon:server',
+ param='_param:cluster_public_host')
+ if not IP:
pytest.skip("Horizon is not enabled on this environment")
- IP = [salt_output[node] for node in salt_output
- if salt_output[node]]
- result = local_salt_client.cmd(
- ctl_nodes_pillar,
- 'cmd.run',
- ['curl --insecure https://{}/auth/login/ 2>&1 | \
- grep Login'.format(IP[0])],
+ result = local_salt_client.cmd_any(
+ tgt=ctl_nodes_pillar,
+ param='curl --insecure https://{}/auth/login/ 2>&1 | \
+ grep Login'.format(IP),
expr_form='pillar')
- assert len(result[result.keys()[0]]) != 0, \
+ assert len(result) != 0, \
'Horizon login page is not reachable on {} from ctl nodes'.format(
IP[0])
@pytest.mark.usefixtures('check_openstack')
def test_public_openstack(local_salt_client, ctl_nodes_pillar):
- IP = utils.get_monitoring_ip('cluster_public_host')
+ IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
protocol = 'https'
port = '5000'
url = "{}://{}:{}/v3".format(protocol, IP, port)
result = local_salt_client.cmd(
- ctl_nodes_pillar,
- 'cmd.run',
- ['curl -k {}/ 2>&1 | \
- grep stable'.format(url)],
+ tgt=ctl_nodes_pillar,
+ param='curl -k {}/ 2>&1 | \
+ grep stable'.format(url),
expr_form='pillar')
assert len(result[result.keys()[0]]) != 0, \
'Public Openstack url is not reachable on {} from ctl nodes'.format(url)
@@ -42,15 +35,14 @@
@pytest.mark.usefixtures('check_kibana')
def test_internal_ui_kibana(local_salt_client, ctl_nodes_pillar):
- IP = utils.get_monitoring_ip('stacklight_log_address')
+ IP = local_salt_client.pillar_get(param='_param:stacklight_log_address')
protocol = 'http'
port = '5601'
url = "{}://{}:{}".format(protocol, IP, port)
result = local_salt_client.cmd(
- ctl_nodes_pillar,
- 'cmd.run',
- ['curl {}/app/kibana 2>&1 | \
- grep loading'.format(url)],
+ tgt=ctl_nodes_pillar,
+ param='curl {}/app/kibana 2>&1 | \
+ grep loading'.format(url),
expr_form='pillar')
assert len(result[result.keys()[0]]) != 0, \
'Internal Kibana login page is not reachable on {} ' \
@@ -59,15 +51,14 @@
@pytest.mark.usefixtures('check_kibana')
def test_public_ui_kibana(local_salt_client, ctl_nodes_pillar):
- IP = utils.get_monitoring_ip('cluster_public_host')
+ IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
protocol = 'https'
port = '5601'
url = "{}://{}:{}".format(protocol, IP, port)
result = local_salt_client.cmd(
- ctl_nodes_pillar,
- 'cmd.run',
- ['curl {}/app/kibana 2>&1 | \
- grep loading'.format(url)],
+ tgt=ctl_nodes_pillar,
+ param='curl {}/app/kibana 2>&1 | \
+ grep loading'.format(url),
expr_form='pillar')
assert len(result[result.keys()[0]]) != 0, \
'Public Kibana login page is not reachable on {} ' \
@@ -76,15 +67,14 @@
@pytest.mark.usefixtures('check_prometheus')
def test_internal_ui_prometheus(local_salt_client, ctl_nodes_pillar):
- IP = utils.get_monitoring_ip('stacklight_monitor_address')
+ IP = local_salt_client.pillar_get(param='_param:stacklight_monitor_address')
protocol = 'http'
port = '15010'
url = "{}://{}:{}".format(protocol, IP, port)
result = local_salt_client.cmd(
- ctl_nodes_pillar,
- 'cmd.run',
- ['curl {}/graph 2>&1 | \
- grep Prometheus'.format(url)],
+ tgt=ctl_nodes_pillar,
+ param='curl {}/graph 2>&1 | \
+ grep Prometheus'.format(url),
expr_form='pillar')
assert len(result[result.keys()[0]]) != 0, \
'Internal Prometheus page is not reachable on {} ' \
@@ -93,15 +83,14 @@
@pytest.mark.usefixtures('check_prometheus')
def test_public_ui_prometheus(local_salt_client, ctl_nodes_pillar):
- IP = utils.get_monitoring_ip('cluster_public_host')
+ IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
protocol = 'https'
port = '15010'
url = "{}://{}:{}".format(protocol, IP, port)
result = local_salt_client.cmd(
- ctl_nodes_pillar,
- 'cmd.run',
- ['curl {}/graph 2>&1 | \
- grep Prometheus'.format(url)],
+ tgt=ctl_nodes_pillar,
+ param='curl {}/graph 2>&1 | \
+ grep Prometheus'.format(url),
expr_form='pillar')
assert len(result[result.keys()[0]]) != 0, \
'Public Prometheus page is not reachable on {} ' \
@@ -110,14 +99,13 @@
@pytest.mark.usefixtures('check_prometheus')
def test_internal_ui_alert_manager(local_salt_client, ctl_nodes_pillar):
- IP = utils.get_monitoring_ip('stacklight_monitor_address')
+ IP = local_salt_client.pillar_get(param='_param:stacklight_monitor_address')
protocol = 'http'
port = '15011'
url = "{}://{}:{}".format(protocol, IP, port)
result = local_salt_client.cmd(
- ctl_nodes_pillar,
- 'cmd.run',
- ['curl -s {}/ | grep Alertmanager'.format(url)],
+ tgt=ctl_nodes_pillar,
+ param='curl -s {}/ | grep Alertmanager'.format(url),
expr_form='pillar')
assert len(result[result.keys()[0]]) != 0, \
'Internal AlertManager page is not reachable on {} ' \
@@ -126,14 +114,13 @@
@pytest.mark.usefixtures('check_prometheus')
def test_public_ui_alert_manager(local_salt_client, ctl_nodes_pillar):
- IP = utils.get_monitoring_ip('cluster_public_host')
+ IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
protocol = 'https'
port = '15011'
url = "{}://{}:{}".format(protocol, IP, port)
result = local_salt_client.cmd(
- ctl_nodes_pillar,
- 'cmd.run',
- ['curl -s {}/ | grep Alertmanager'.format(url)],
+ tgt=ctl_nodes_pillar,
+ param='curl -s {}/ | grep Alertmanager'.format(url),
expr_form='pillar')
assert len(result[result.keys()[0]]) != 0, \
'Public AlertManager page is not reachable on {} ' \
@@ -142,14 +129,13 @@
@pytest.mark.usefixtures('check_grafana')
def test_internal_ui_grafana(local_salt_client, ctl_nodes_pillar):
- IP = utils.get_monitoring_ip('stacklight_monitor_address')
+ IP = local_salt_client.pillar_get(param='_param:stacklight_monitor_address')
protocol = 'http'
port = '15013'
url = "{}://{}:{}".format(protocol, IP, port)
result = local_salt_client.cmd(
- ctl_nodes_pillar,
- 'cmd.run',
- ['curl {}/login 2>&1 | grep Grafana'.format(url)],
+ tgt=ctl_nodes_pillar,
+ param='curl {}/login 2>&1 | grep Grafana'.format(url),
expr_form='pillar')
assert len(result[result.keys()[0]]) != 0, \
'Internal Grafana page is not reachable on {} ' \
@@ -158,14 +144,13 @@
@pytest.mark.usefixtures('check_grafana')
def test_public_ui_grafana(local_salt_client, ctl_nodes_pillar):
- IP = utils.get_monitoring_ip('cluster_public_host')
+ IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
protocol = 'https'
port = '8084'
url = "{}://{}:{}".format(protocol, IP, port)
result = local_salt_client.cmd(
- ctl_nodes_pillar,
- 'cmd.run',
- ['curl {}/login 2>&1 | grep Grafana'.format(url)],
+ tgt=ctl_nodes_pillar,
+ param='curl {}/login 2>&1 | grep Grafana'.format(url),
expr_form='pillar')
assert len(result[result.keys()[0]]) != 0, \
'Public Grafana page is not reachable on {} from ctl nodes'.format(url)
@@ -173,15 +158,14 @@
@pytest.mark.usefixtures('check_alerta')
def test_internal_ui_alerta(local_salt_client, ctl_nodes_pillar):
- IP = utils.get_monitoring_ip('stacklight_monitor_address')
+ IP = local_salt_client.pillar_get(param='_param:stacklight_monitor_address')
protocol = 'http'
port = '15017'
url = "{}://{}:{}".format(protocol, IP, port)
result = local_salt_client.cmd(
- ctl_nodes_pillar,
- 'cmd.run',
- ['curl {}/ 2>&1 | \
- grep Alerta'.format(url)],
+ tgt=ctl_nodes_pillar,
+ param='curl {}/ 2>&1 | \
+ grep Alerta'.format(url),
expr_form='pillar')
assert len(result[result.keys()[0]]) != 0, \
'Internal Alerta page is not reachable on {} from ctl nodes'.format(url)
@@ -189,47 +173,46 @@
@pytest.mark.usefixtures('check_alerta')
def test_public_ui_alerta(local_salt_client, ctl_nodes_pillar):
- IP = utils.get_monitoring_ip('cluster_public_host')
+ IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
protocol = 'https'
port = '15017'
url = "{}://{}:{}".format(protocol, IP, port)
result = local_salt_client.cmd(
- ctl_nodes_pillar,
- 'cmd.run',
- ['curl {}/ 2>&1 | \
- grep Alerta'.format(url)],
+ tgt=ctl_nodes_pillar,
+ param='curl {}/ 2>&1 | \
+ grep Alerta'.format(url),
expr_form='pillar')
assert len(result[result.keys()[0]]) != 0, \
'Public Alerta page is not reachable on {} from ctl nodes'.format(url)
+@pytest.mark.usefixtures('check_openstack')
@pytest.mark.usefixtures('check_drivetrain')
def test_public_ui_jenkins(local_salt_client, ctl_nodes_pillar):
- IP = utils.get_monitoring_ip('cluster_public_host')
+ IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
protocol = 'https'
port = '8081'
url = "{}://{}:{}".format(protocol, IP, port)
result = local_salt_client.cmd(
- ctl_nodes_pillar,
- 'cmd.run',
- ['curl -k {}/ 2>&1 | \
- grep Authentication'.format(url)],
+ tgt=ctl_nodes_pillar,
+ param='curl -k {}/ 2>&1 | \
+ grep Authentication'.format(url),
expr_form='pillar')
assert len(result[result.keys()[0]]) != 0, \
'Public Jenkins page is not reachable on {} from ctl nodes'.format(url)
+@pytest.mark.usefixtures('check_openstack')
@pytest.mark.usefixtures('check_drivetrain')
def test_public_ui_gerrit(local_salt_client, ctl_nodes_pillar):
- IP = utils.get_monitoring_ip('cluster_public_host')
+ IP = local_salt_client.pillar_get(param='_param:cluster_public_host')
protocol = 'https'
port = '8070'
url = "{}://{}:{}".format(protocol, IP, port)
result = local_salt_client.cmd(
- ctl_nodes_pillar,
- 'cmd.run',
- ['curl -k {}/ 2>&1 | \
- grep "Gerrit Code Review"'.format(url)],
+ tgt=ctl_nodes_pillar,
+ param='curl -k {}/ 2>&1 | \
+ grep "Gerrit Code Review"'.format(url),
expr_form='pillar')
assert len(result[result.keys()[0]]) != 0, \
'Public Gerrit page is not reachable on {} from ctl nodes'.format(url)
diff --git a/test_set/cvp-sanity/utils/__init__.py b/test_set/cvp-sanity/utils/__init__.py
new file mode 100644
index 0000000..62ccae7
--- /dev/null
+++ b/test_set/cvp-sanity/utils/__init__.py
@@ -0,0 +1,195 @@
+import os
+import yaml
+import requests
+import re
+import sys, traceback
+import time
+import json
+
+
+class AuthenticationError(Exception):
+ pass
+
+
+class salt_remote:
+ def __init__(self):
+ self.config = get_configuration()
+ self.skipped_nodes = self.config.get('skipped_nodes') or []
+ self.url = self.config['SALT_URL'].strip()
+ if not re.match("^(http|https)://", self.url):
+ raise AuthenticationError("Salt URL should start \
+                with http or https, given - {}".format(self.url))
+ self.login_payload = {'username': self.config['SALT_USERNAME'],
+ 'password': self.config['SALT_PASSWORD'], 'eauth': 'pam'}
+ # TODO: proxies
+ self.proxies = {"http": None, "https": None}
+ self.expires = ''
+ self.cookies = []
+ self.headers = {'Accept': 'application/json'}
+ self._login()
+
+    def _login(self):
+ try:
+ login_request = requests.post(os.path.join(self.url, 'login'),
+ headers={'Accept': 'application/json'},
+ data=self.login_payload,
+ proxies=self.proxies)
+ if not login_request.ok:
+ raise AuthenticationError("Authentication to SaltMaster failed")
+ except Exception as e:
+ print ("\033[91m\nConnection to SaltMaster "
+ "was not established.\n"
+ "Please make sure that you "
+ "provided correct credentials.\n"
+ "Error message: {}\033[0m\n".format(e.message or e))
+ traceback.print_exc(file=sys.stdout)
+ sys.exit()
+ self.expire = login_request.json()['return'][0]['expire']
+ self.cookies = login_request.cookies
+ self.headers['X-Auth-Token'] = login_request.json()['return'][0]['token']
+
+ def cmd(self, tgt, fun='cmd.run', param=None, expr_form=None, tgt_type=None, check_status=False, retries=3):
+ if self.expire < time.time() + 300:
+            self._login()  # re-login refreshes the token header and cookies
+ accept_key_payload = {'fun': fun, 'tgt': tgt, 'client': 'local',
+ 'expr_form': expr_form, 'tgt_type': tgt_type,
+ 'timeout': self.config['salt_timeout']}
+ if param:
+ accept_key_payload['arg'] = param
+
+ for i in range(retries):
+ request = requests.post(self.url, headers=self.headers,
+ data=accept_key_payload,
+ cookies=self.cookies,
+ proxies=self.proxies)
+ if not request.ok or not isinstance(request.json()['return'][0], dict):
+ print("Salt master is not responding or response is incorrect. Output: {}".format(request))
+ continue
+ response = request.json()['return'][0]
+ result = {key: response[key] for key in response if key not in self.skipped_nodes}
+ if check_status:
+ if False in result.values():
+ print(
+ "One or several nodes are not responding. Output {}".format(json.dumps(result, indent=4)))
+ continue
+ break
+ else:
+ raise Exception("Error with Salt Master response")
+ return result
+
+ def test_ping(self, tgt, expr_form='pillar'):
+ return self.cmd(tgt=tgt, fun='test.ping', param=None, expr_form=expr_form)
+
+ def cmd_any(self, tgt, param=None, expr_form='pillar'):
+ """
+        Return the first available result from the targeted node(s).
+        If every node returns nothing, an exception is raised.
+ """
+ response = self.cmd(tgt=tgt, param=param, expr_form=expr_form)
+ for node in response.keys():
+ if response[node] or response[node] == '':
+ return response[node]
+ else:
+ raise Exception("All minions are down")
+
+ def pillar_get(self, tgt='salt:master', param=None, expr_form='pillar', fail_if_empty=False):
+ """
+        Fetch a pillar value only.
+        Return the pillar value, False when the pillar is absent or empty,
+        or raise an exception if fail_if_empty=True.
+ """
+ response = self.cmd(tgt=tgt, fun='pillar.get', param=param, expr_form=expr_form)
+ for node in response.keys():
+ if response[node] or response[node] != '':
+ return response[node]
+ else:
+ if fail_if_empty:
+ raise Exception("No pillar found or it is empty.")
+ else:
+ return False
+
+
+def init_salt_client():
+ local = salt_remote()
+ return local
+
+
+def list_to_target_string(node_list, separator, add_spaces=True):
+ if add_spaces:
+ separator = ' ' + separator.strip() + ' '
+ return separator.join(node_list)
+
+
+def calculate_groups():
+ config = get_configuration()
+ local_salt_client = init_salt_client()
+ node_groups = {}
+    nodes_names = set()
+ expr_form = ''
+ all_nodes = set(local_salt_client.test_ping(tgt='*',expr_form=None))
+ print all_nodes
+ if 'groups' in config.keys() and 'PB_GROUPS' in os.environ.keys() and \
+ os.environ['PB_GROUPS'].lower() != 'false':
+ nodes_names.update(config['groups'].keys())
+ expr_form = 'compound'
+ else:
+ for node in all_nodes:
+ index = re.search('[0-9]{1,3}$', node.split('.')[0])
+ if index:
+ nodes_names.add(node.split('.')[0][:-len(index.group(0))])
+ else:
+ nodes_names.add(node)
+ expr_form = 'pcre'
+
+ gluster_nodes = local_salt_client.test_ping(tgt='I@salt:control and '
+ 'I@glusterfs:server',
+ expr_form='compound')
+ kvm_nodes = local_salt_client.test_ping(tgt='I@salt:control and not '
+ 'I@glusterfs:server',
+ expr_form='compound')
+
+ for node_name in nodes_names:
+ skipped_groups = config.get('skipped_groups') or []
+ if node_name in skipped_groups:
+ continue
+ if expr_form == 'pcre':
+ nodes = local_salt_client.test_ping(tgt='{}[0-9]{{1,3}}'.format(node_name),
+ expr_form=expr_form)
+ else:
+ nodes = local_salt_client.test_ping(tgt=config['groups'][node_name],
+ expr_form=expr_form)
+ if nodes == {}:
+ continue
+
+ node_groups[node_name]=[x for x in nodes
+ if x not in config['skipped_nodes']
+ if x not in gluster_nodes.keys()
+ if x not in kvm_nodes.keys()]
+ all_nodes = set(all_nodes - set(node_groups[node_name]))
+ if node_groups[node_name] == []:
+ del node_groups[node_name]
+ if kvm_nodes:
+ node_groups['kvm'] = kvm_nodes.keys()
+ node_groups['kvm_gluster'] = gluster_nodes.keys()
+ all_nodes = set(all_nodes - set(kvm_nodes.keys()))
+ all_nodes = set(all_nodes - set(gluster_nodes.keys()))
+ if all_nodes:
+ print ("These nodes were not collected {0}. Check config (groups section)".format(all_nodes))
+ return node_groups
+
+
+def get_configuration():
+    """Return the configuration for the environment and,
+    if specified, for the test."""
+ global_config_file = os.path.join(
+ os.path.dirname(os.path.abspath(__file__)), "../global_config.yaml")
+ with open(global_config_file, 'r') as file:
+ global_config = yaml.load(file)
+ for param in global_config.keys():
+ if param in os.environ.keys():
+ if ',' in os.environ[param]:
+ global_config[param] = []
+ for item in os.environ[param].split(','):
+ global_config[param].append(item)
+ else:
+ global_config[param] = os.environ[param]
+
+ return global_config
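
When no explicit `groups` are configured, `calculate_groups()` derives a group name by stripping the trailing digits from each minion's short hostname. A standalone sketch of that derivation, using invented minion names (the real list comes from `test.ping`):

```python
import re

# Invented minion names for illustration only.
minions = ['ctl01.example.local', 'ctl02.example.local',
           'cmp001.example.local', 'mon01.example.local']

groups = set()
for minion in minions:
    short_name = minion.split('.')[0]
    index = re.search('[0-9]{1,3}$', short_name)
    if index:
        groups.add(short_name[:-len(index.group(0))])  # 'ctl01' -> 'ctl'
    else:
        groups.add(short_name)

print(sorted(groups))  # ['cmp', 'ctl', 'mon']
```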
diff --git a/test_set/cvp-spt/README.md b/test_set/cvp-spt/README.md
new file mode 100644
index 0000000..125521f
--- /dev/null
+++ b/test_set/cvp-spt/README.md
@@ -0,0 +1,12 @@
+# cvp-spt
+Environment variables
+--
+
+* Set the IMAGE_SIZE_MB env variable to use a specific image size (in MB) in the cvp-spt/test_glance.py tests
+
+* *cvp-spt.test_glance*: an error may occur while executing images.upload:
+```python
+CommunicationError: Error finding address for http://os-ctl-vip.<>.local:9292/v2/images/8bce33dd-9837-4646-b747-7f7f5ce01092/file:
+Unable to establish connection to http://os-ctl-vip.<>.local:9292/v2/images/8bce33dd-9837-4646-b747-7f7f5ce01092/file: [Errno 32] Broken pipe
+```
+This may happen because of low disk space on the ctl node or an outdated cryptography package (to be fixed after upgrading to Python 3).
\ No newline at end of file
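
A minimal sketch (not part of the test suite) of how the IMAGE_SIZE_MB note above can be applied from Python: `get_configuration()` picks up any `global_config.yaml` key from an environment variable of the same name, and the value `500` here is just an example. It assumes pytest is invoked from the cvp-spt directory.

```python
import os
import pytest

# Hypothetical override: request a 500 MB test image instead of the default.
os.environ['IMAGE_SIZE_MB'] = '500'
pytest.main(['-k', 'test_speed_glance'])
```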
diff --git a/cvp-sanity/cvp_checks/__init__.py b/test_set/cvp-spt/__init__.py
similarity index 100%
copy from cvp-sanity/cvp_checks/__init__.py
copy to test_set/cvp-spt/__init__.py
diff --git a/test_set/cvp-spt/conftest.py b/test_set/cvp-spt/conftest.py
new file mode 100644
index 0000000..693d514
--- /dev/null
+++ b/test_set/cvp-spt/conftest.py
@@ -0,0 +1 @@
+from fixtures.base import *
diff --git a/cvp-sanity/cvp_checks/fixtures/__init__.py b/test_set/cvp-spt/fixtures/__init__.py
similarity index 100%
copy from cvp-sanity/cvp_checks/fixtures/__init__.py
copy to test_set/cvp-spt/fixtures/__init__.py
diff --git a/test_set/cvp-spt/fixtures/base.py b/test_set/cvp-spt/fixtures/base.py
new file mode 100644
index 0000000..41fabb4
--- /dev/null
+++ b/test_set/cvp-spt/fixtures/base.py
@@ -0,0 +1,98 @@
+import pytest
+import utils
+import random
+import time
+from utils import os_client
+
+@pytest.fixture(scope='session')
+def local_salt_client():
+ return utils.init_salt_client()
+
+
+# TODO: fix
+# should not be executed on any test run
+nodes = utils.get_pairs()
+hw_nodes = utils.get_hw_pairs()
+
+
+@pytest.fixture(scope='session', params=nodes.values(), ids=nodes.keys())
+def pair(request):
+ return request.param
+
+
+@pytest.fixture(scope='session', params=hw_nodes.values(), ids=hw_nodes.keys())
+def hw_pair(request):
+ return request.param
+
+
+@pytest.fixture(scope='session')
+def openstack_clients(local_salt_client):
+ nodes_info = local_salt_client.cmd(
+ 'keystone:server', 'pillar.get',
+ ['keystone:server'],
+ expr_form='pillar')
+    if len(nodes_info) < 1:
+ pytest.skip("No keystone server found")
+ return False
+ keystone = nodes_info[nodes_info.keys()[0]]
+ url = 'http://{ip}:{port}/'.format(ip=keystone['bind']['public_address'],
+ port=keystone['bind']['public_port'])
+ return os_client.OfficialClientManager(
+ username=keystone['admin_name'],
+ password=keystone['admin_password'],
+ tenant_name=keystone['admin_tenant'],
+ auth_url=url,
+ cert=False,
+ domain='Default',
+ )
+
+
+@pytest.fixture(scope='session')
+def os_resources(openstack_clients):
+ os_actions = os_client.OSCliActions(openstack_clients)
+ os_resource = {}
+ config = utils.get_configuration()
+ image_name = config.get('image_name') or ['Ubuntu']
+
+ os_images_list = [image.id for image in openstack_clients.image.images.list(filters={'name': image_name})]
+    if len(os_images_list) == 0:
+ pytest.skip("No images with name {}. This name can be redefined with 'image_name' env var ".format(image_name))
+
+ os_resource['image_id'] = str(os_images_list[0])
+
+ os_resource['flavor_id'] = [flavor.id for flavor in openstack_clients.compute.flavors.list() if flavor.name == 'spt-test']
+ if not os_resource['flavor_id']:
+ os_resource['flavor_id'] = os_actions.create_flavor('spt-test', 1536, 1, 3).id
+ else:
+ os_resource['flavor_id'] = str(os_resource['flavor_id'][0])
+
+ os_resource['sec_group'] = os_actions.create_sec_group()
+ os_resource['keypair'] = openstack_clients.compute.keypairs.create('spt-test-{}'.format(random.randrange(100, 999)))
+ os_resource['net1'] = os_actions.create_network_resources()
+ os_resource['ext_net'] = os_actions.get_external_network()
+ adm_tenant = os_actions.get_admin_tenant()
+ os_resource['router'] = os_actions.create_router(os_resource['ext_net'], adm_tenant.id)
+ os_resource['net2'] = os_actions.create_network(adm_tenant.id)
+ os_resource['subnet2'] = os_actions.create_subnet(os_resource['net2'], adm_tenant.id, '10.2.7.0/24')
+ for subnet in openstack_clients.network.list_subnets()['subnets']:
+ if subnet['network_id'] == os_resource['net1']['id']:
+ os_resource['subnet1'] = subnet['id']
+
+ openstack_clients.network.add_interface_router(os_resource['router']['id'], {'subnet_id': os_resource['subnet1']})
+ openstack_clients.network.add_interface_router(os_resource['router']['id'], {'subnet_id': os_resource['subnet2']['id']})
+ yield os_resource
+ # time.sleep(5)
+ openstack_clients.network.remove_interface_router(os_resource['router']['id'], {'subnet_id': os_resource['subnet1']})
+ openstack_clients.network.remove_interface_router(os_resource['router']['id'], {'subnet_id': os_resource['subnet2']['id']})
+ openstack_clients.network.remove_gateway_router(os_resource['router']['id'])
+ time.sleep(5)
+ openstack_clients.network.delete_router(os_resource['router']['id'])
+ time.sleep(5)
+ # openstack_clients.network.delete_subnet(subnet1['id'])
+ openstack_clients.network.delete_network(os_resource['net1']['id'])
+ openstack_clients.network.delete_network(os_resource['net2']['id'])
+
+ openstack_clients.compute.security_groups.delete(os_resource['sec_group'].id)
+ openstack_clients.compute.keypairs.delete(os_resource['keypair'].name)
+
+ openstack_clients.compute.flavors.delete(os_resource['flavor_id'])
diff --git a/test_set/cvp-spt/global_config.yaml b/test_set/cvp-spt/global_config.yaml
new file mode 100644
index 0000000..9a8e738
--- /dev/null
+++ b/test_set/cvp-spt/global_config.yaml
@@ -0,0 +1,31 @@
+---
+# MANDATORY: Credentials for Salt Master
+# SALT_URL should consist of url and port.
+# For example: http://10.0.0.1:6969
+# 6969 - default Salt Master port to listen
+# Can be found on cfg* node using
+# "salt-call pillar.get _param:salt_master_host"
+# and "salt-call pillar.get _param:salt_master_port"
+# or "salt-call pillar.get _param:jenkins_salt_api_url"
+# SALT_USERNAME by default: salt
+# It can be verified with "salt-call shadow.info salt"
+# SALT_PASSWORD you can find on cfg* node using
+# "salt-call pillar.get _param:salt_api_password"
+# or "grep -r salt_api_password /srv/salt/reclass/classes"
+SALT_URL: <URL>
+SALT_USERNAME: <USERNAME>
+SALT_PASSWORD: <PASSWORD>
+
+# How many seconds to wait for salt-minion to respond
+salt_timeout: 1
+
+image_name: "Ubuntu"
+skipped_nodes: []
+# example for Jenkins: networks=net1,net2
+networks: "10.101.0.0/24"
+external_network: ''
+HW_NODES: []
+CMP_HOSTS: []
+nova_timeout: 30
+iperf_prep_string: "sudo /bin/bash -c 'echo \"91.189.88.161 archive.ubuntu.com\" >> /etc/hosts'"
+IMAGE_SIZE_MB: 2000
diff --git a/test_set/cvp-spt/pytest.ini b/test_set/cvp-spt/pytest.ini
new file mode 100644
index 0000000..32f15a2
--- /dev/null
+++ b/test_set/cvp-spt/pytest.ini
@@ -0,0 +1,2 @@
+[pytest]
+norecursedirs = venv
\ No newline at end of file
diff --git a/test_set/cvp-spt/requirements.txt b/test_set/cvp-spt/requirements.txt
new file mode 100644
index 0000000..55011ce
--- /dev/null
+++ b/test_set/cvp-spt/requirements.txt
@@ -0,0 +1,10 @@
+paramiko==2.0.0 # LGPLv2.1+
+pytest>=3.0.4 # MIT
+python-cinderclient>=1.6.0,!=1.7.0,!=1.7.1 # Apache-2.0
+python-glanceclient>=2.5.0 # Apache-2.0
+python-keystoneclient>=3.8.0 # Apache-2.0
+python-neutronclient>=5.1.0 # Apache-2.0
+python-novaclient==7.1.0
+PyYAML>=3.12 # MIT
+requests>=2.10.0,!=2.12.2 # Apache-2.0
+texttable==1.2.0
diff --git a/cvp-sanity/cvp_checks/tests/__init__.py b/test_set/cvp-spt/tests/__init__.py
similarity index 100%
copy from cvp-sanity/cvp_checks/tests/__init__.py
copy to test_set/cvp-spt/tests/__init__.py
diff --git a/test_set/cvp-spt/tests/test_glance.py b/test_set/cvp-spt/tests/test_glance.py
new file mode 100644
index 0000000..5514944
--- /dev/null
+++ b/test_set/cvp-spt/tests/test_glance.py
@@ -0,0 +1,88 @@
+import pytest
+import time
+import subprocess
+import utils
+
+
+def is_parsable(value, to):
+ """
+ Check if value can be converted into some type
+ :param value: input value that should be converted
+ :param to: type of output value like int, float. It's not a string!
+ :return: bool
+ """
+ try:
+ to(value)
+ except:
+ return False
+ return True
+
+
+@pytest.fixture
+def create_image():
+ image_size_megabytes = utils.get_configuration().get("IMAGE_SIZE_MB")
+ create_file_cmdline = 'dd if=/dev/zero of=/tmp/image_mk_framework.dd bs=1M count={image_size}'.format(
+ image_size=image_size_megabytes)
+
+ is_cmd_successful = subprocess.call(create_file_cmdline.split()) == 0
+ yield is_cmd_successful
+ # teardown
+ subprocess.call('rm -f /tmp/image_mk_framework.dd'.split())
+ subprocess.call('rm -f /tmp/image_mk_framework.download'.split())
+
+
+def test_speed_glance(create_image, openstack_clients, record_property):
+ """
+ Simplified Performance Tests Download / upload Glance
+ 1. Create file with random data (dd)
+ 2. Upload data as image to glance.
+ 3. Download image.
+ 4. Measure download/upload speed and print them into stdout
+ """
+ image_size_megabytes = utils.get_configuration().get("IMAGE_SIZE_MB")
+ if not is_parsable(image_size_megabytes, int):
+ pytest.fail("Can't convert IMAGE_SIZE_MB={} to 'int'".format(image_size_megabytes))
+ image_size_megabytes = int(image_size_megabytes)
+ if not create_image:
+ pytest.skip("Can't create image, maybe there is lack of disk space to create file {}MB".
+ format(image_size_megabytes))
+ try:
+ image = openstack_clients.image.images.create(
+ name="test_image",
+ disk_format='iso',
+ container_format='bare')
+ except BaseException as e:
+ pytest.fail("Can't create image in Glance. Occurred error: {}".format(e))
+
+    # FIXME: An error may happen while executing images.upload:
+ # CommunicationError: Error finding address for
+ # http://os-ctl-vip.harhipova-cicd-os-test.local:9292/v2/images/8bce33dd-9837-4646-b747-7f7f5ce01092/file: Unable to establish connection to http://os-ctl-vip.harhipova-cicd-os-test.local:9292/v2/images/8bce33dd-9837-4646-b747-7f7f5ce01092/file: [Errno 32] Broken pipe
+ # This may happen because of low disk space on ctl node or old cryptography package
+ # (will be fixed after upgrading to Python3)
+ start_time = time.time()
+ try:
+ openstack_clients.image.images.upload(
+ image.id,
+ image_data=open("/tmp/image_mk_framework.dd", 'rb'))
+ except BaseException as e:
+ pytest.fail("Can't upload image in Glance. Occurred error: {}".format(e))
+ end_time = time.time()
+
+ speed_upload = image_size_megabytes / (end_time - start_time)
+
+ start_time = time.time()
+ # it creates new file /tmp/image_mk_framework.download . It should be removed in teardown
+ with open("/tmp/image_mk_framework.download", 'wb') as image_file:
+ for item in openstack_clients.image.images.data(image.id):
+ image_file.write(item)
+ end_time = time.time()
+
+ speed_download = image_size_megabytes / (end_time - start_time)
+
+ openstack_clients.image.images.delete(image.id)
+ record_property("Upload", speed_upload)
+ record_property("Download", speed_download)
+
+ print("++++++++++++++++++++++++++++++++++++++++")
+ print('upload - {} Mb/s'.format(speed_upload))
+ print('download - {} Mb/s'.format(speed_download))
diff --git a/test_set/cvp-spt/tests/test_hw2hw.py b/test_set/cvp-spt/tests/test_hw2hw.py
new file mode 100644
index 0000000..625fed8
--- /dev/null
+++ b/test_set/cvp-spt/tests/test_hw2hw.py
@@ -0,0 +1,52 @@
+#!/usr/bin/env python
+import itertools
+import re
+import os
+import yaml
+import requests
+import utils
+from utils import helpers
+from netaddr import IPNetwork, IPAddress
+
+
+def test_hw2hw(local_salt_client, hw_pair, record_property):
+ helpp = helpers.helpers(local_salt_client)
+ config = utils.get_configuration()
+ nodes = local_salt_client.cmd(expr_form='compound', tgt=str(hw_pair[0]+' or '+hw_pair[1]),
+ fun='network.interfaces')
+ short_name = []
+ short_name.append(hw_pair[0].split('.')[0])
+ short_name.append(hw_pair[1].split('.')[0])
+ nets = config.get('networks').split(',')
+ local_salt_client.cmd(expr_form='compound', tgt=str(hw_pair[0]+' or '+hw_pair[1]),
+ fun='cmd.run', param=['nohup iperf -s > file 2>&1 &'])
+ global_results = []
+ for net in nets:
+ for interf in nodes[hw_pair[0]]:
+ if 'inet' not in nodes[hw_pair[0]][interf].keys():
+ continue
+ ip = nodes[hw_pair[0]][interf]['inet'][0]['address']
+ if (IPAddress(ip) in IPNetwork(net)) and (nodes[hw_pair[0]][interf]['inet'][0]['broadcast']):
+ for interf2 in nodes[hw_pair[1]]:
+ if 'inet' not in nodes[hw_pair[1]][interf2].keys():
+ continue
+ ip2 = nodes[hw_pair[1]][interf2]['inet'][0]['address']
+ if (IPAddress(ip2) in IPNetwork(net)) and (nodes[hw_pair[1]][interf2]['inet'][0]['broadcast']):
+ print "Will IPERF between {0} and {1}".format(ip,ip2)
+ try:
+ res = helpp.start_iperf_between_hosts(global_results, hw_pair[0], hw_pair[1],
+ ip, ip2, net)
+ record_property("1-worst {0}-{1}".format(short_name[0],short_name[1]), res[0] if res[0] < res[2] else res[2])
+ record_property("1-best {0}-{1}".format(short_name[0],short_name[1]), res[0] if res[0] > res[2] else res[2])
+                            record_property("10-worst {0}-{1}".format(short_name[0], short_name[1]), res[1] if res[1] < res[3] else res[3])
+                            record_property("10-best {0}-{1}".format(short_name[0], short_name[1]), res[1] if res[1] > res[3] else res[3])
+ print "Measurement between {} and {} " \
+ "has been finished".format(hw_pair[0],
+ hw_pair[1])
+ except Exception as e:
+ print "Failed for {0} {1}".format(
+ hw_pair[0], hw_pair[1])
+ print e
+ local_salt_client.cmd(expr_form='compound', tgt=str(hw_pair[0]+' or '+hw_pair[1]),
+ fun='cmd.run', param=['killall -9 iperf'])
+ helpp.draw_table_with_results(global_results)
diff --git a/test_set/cvp-spt/tests/test_vm2vm.py b/test_set/cvp-spt/tests/test_vm2vm.py
new file mode 100644
index 0000000..fc93347
--- /dev/null
+++ b/test_set/cvp-spt/tests/test_vm2vm.py
@@ -0,0 +1,108 @@
+import os
+import random
+import time
+import pytest
+import utils
+from utils import os_client
+from utils import ssh
+
+
+def test_vm2vm(openstack_clients, pair, os_resources, record_property):
+ os_actions = os_client.OSCliActions(openstack_clients)
+ config = utils.get_configuration()
+ timeout = int(config.get('nova_timeout', 30))
+ try:
+ zone1 = [service.zone for service in openstack_clients.compute.services.list() if service.host == pair[0]]
+ zone2 = [service.zone for service in openstack_clients.compute.services.list() if service.host == pair[1]]
+ vm1 = os_actions.create_basic_server(os_resources['image_id'],
+ os_resources['flavor_id'],
+ os_resources['net1'],
+ '{0}:{1}'.format(zone1[0],pair[0]),
+ [os_resources['sec_group'].name],
+ os_resources['keypair'].name)
+
+ vm2 = os_actions.create_basic_server(os_resources['image_id'],
+ os_resources['flavor_id'],
+ os_resources['net1'],
+ '{0}:{1}'.format(zone1[0],pair[0]),
+ [os_resources['sec_group'].name],
+ os_resources['keypair'].name)
+
+ vm3 = os_actions.create_basic_server(os_resources['image_id'],
+ os_resources['flavor_id'],
+ os_resources['net1'],
+ '{0}:{1}'.format(zone2[0],pair[1]),
+ [os_resources['sec_group'].name],
+ os_resources['keypair'].name)
+
+ vm4 = os_actions.create_basic_server(os_resources['image_id'],
+ os_resources['flavor_id'],
+ os_resources['net2'],
+ '{0}:{1}'.format(zone2[0],pair[1]),
+ [os_resources['sec_group'].name],
+ os_resources['keypair'].name)
+
+ vm_info = []
+ vms = []
+ vms.extend([vm1,vm2,vm3,vm4])
+ fips = []
+ time.sleep(5)
+ for i in range(4):
+ fip = openstack_clients.compute.floating_ips.create(os_resources['ext_net']['name'])
+ fips.append(fip.id)
+ status = openstack_clients.compute.servers.get(vms[i]).status
+ if status != 'ACTIVE':
+ print("VM #{0} {1} is not ready. Status {2}".format(i,vms[i].id,status))
+ time.sleep(timeout)
+ status = openstack_clients.compute.servers.get(vms[i]).status
+ if status != 'ACTIVE':
+ raise Exception('VM is not ready')
+ vms[i].add_floating_ip(fip)
+ private_address = vms[i].addresses[vms[i].addresses.keys()[0]][0]['addr']
+ time.sleep(5)
+ try:
+ ssh.prepare_iperf(fip.ip,private_key=os_resources['keypair'].private_key)
+ except Exception as e:
+ print(e)
+ print("ssh.prepare_iperf was not successful, retry after {} sec".format(timeout))
+ time.sleep(timeout)
+ ssh.prepare_iperf(fip.ip,private_key=os_resources['keypair'].private_key)
+ vm_info.append({'vm': vms[i], 'fip': fip.ip, 'private_address': private_address})
+
+ transport1 = ssh.SSHTransport(vm_info[0]['fip'], 'ubuntu', password='dd', private_key=os_resources['keypair'].private_key)
+
+ result1 = transport1.exec_command('iperf -c {} -t 60 | tail -n 1'.format(vm_info[1]['private_address']))
+ print(' '.join(result1.split()[-2::]))
+
+ record_property("same {0}-{1}".format(zone1[0],zone2[0]), ' '.join(result1.split()[-2::]))
+ result2 = transport1.exec_command('iperf -c {} -t 60 | tail -n 1'.format(vm_info[2]['private_address']))
+ print(' '.join(result2.split()[-2::]))
+
+ record_property("diff host {0}-{1}".format(zone1[0],zone2[0]), ' '.join(result2.split()[-2::]))
+ result3 = transport1.exec_command('iperf -c {} -P 10 -t 60 | tail -n 1'.format(vm_info[2]['private_address']))
+ print(' '.join(result3.split()[-2::]))
+
+ record_property("dif host 10 threads {0}-{1}".format(zone1[0],zone2[0]), ' '.join(result3.split()[-2::]))
+ result4 = transport1.exec_command('iperf -c {} -t 60 | tail -n 1'.format(vm_info[2]['fip']))
+ print(' '.join(result4.split()[-2::]))
+
+ record_property("diff host fip {0}-{1}".format(zone1[0],zone2[0]), ' '.join(result4.split()[-2::]))
+ result5 = transport1.exec_command('iperf -c {} -t 60 | tail -n 1'.format(vm_info[3]['private_address']))
+ print(' '.join(result5.split()[-2::]))
+
+ record_property("diff host, diff net {0}-{1}".format(zone1[0],zone2[0]), ' '.join(result5.split()[-2::]))
+
+ print("Remove VMs")
+ for vm in vms:
+ openstack_clients.compute.servers.delete(vm)
+ print("Remove FIPs")
+ for fip in fips:
+ openstack_clients.compute.floating_ips.delete(fip)
+ except Exception as e:
+ print(e)
+ print("Something went wrong")
+ for vm in vms:
+ openstack_clients.compute.servers.delete(vm)
+ for fip in fips:
+ openstack_clients.compute.floating_ips.delete(fip)
+ pytest.fail("Something went wrong")
diff --git a/test_set/cvp-spt/utils/__init__.py b/test_set/cvp-spt/utils/__init__.py
new file mode 100644
index 0000000..36cf239
--- /dev/null
+++ b/test_set/cvp-spt/utils/__init__.py
@@ -0,0 +1,111 @@
+import os
+import yaml
+import requests
+import re
+import sys, traceback
+import itertools
+import helpers
+from utils import os_client
+
+
+class salt_remote:
+ def cmd(self, tgt, fun, param=None, expr_form=None, tgt_type=None):
+ config = get_configuration()
+ url = config['SALT_URL']
+ proxies = {"http": None, "https": None}
+ headers = {'Accept': 'application/json'}
+ login_payload = {'username': config['SALT_USERNAME'],
+ 'password': config['SALT_PASSWORD'], 'eauth': 'pam'}
+ accept_key_payload = {'fun': fun, 'tgt': tgt, 'client': 'local',
+ 'expr_form': expr_form, 'tgt_type': tgt_type,
+ 'timeout': config['salt_timeout']}
+ if param:
+ accept_key_payload['arg'] = param
+
+ try:
+ login_request = requests.post(os.path.join(url, 'login'),
+ headers=headers, data=login_payload,
+ proxies=proxies)
+ if login_request.ok:
+ request = requests.post(url, headers=headers,
+ data=accept_key_payload,
+ cookies=login_request.cookies,
+ proxies=proxies)
+ return request.json()['return'][0]
+ except Exception:
+ print "\033[91m\nConnection to SaltMaster " \
+ "was not established.\n" \
+ "Please make sure that you " \
+ "provided correct credentials.\033[0m\n"
+ traceback.print_exc(file=sys.stdout)
+ sys.exit()
+
+
+def init_salt_client():
+ local = salt_remote()
+ return local
+
+
+def compile_pairs(nodes):
+    result = {}
+    if len(nodes) % 2 != 0:
+        nodes.pop(1)
+ pairs = zip(*[iter(nodes)]*2)
+ for pair in pairs:
+ result[pair[0]+'<>'+pair[1]] = pair
+ return result
+
+
+def get_pairs():
+ # TODO
+ # maybe collect cmp from nova service-list
+ config = get_configuration()
+ local_salt_client = init_salt_client()
+ cmp_hosts = config.get('CMP_HOSTS') or []
+ skipped_nodes = config.get('skipped_nodes') or []
+ if skipped_nodes:
+ print "Notice: {0} nodes will be skipped for vm2vm test".format(skipped_nodes)
+ if not cmp_hosts:
+ nodes = local_salt_client.cmd(
+ 'I@nova:compute',
+ 'test.ping',
+ expr_form='compound')
+ cmp_hosts = [node.split('.')[0] for node in nodes.keys() if node not in skipped_nodes]
+ return compile_pairs(cmp_hosts)
+
+
+def get_hw_pairs():
+ config = get_configuration()
+ local_salt_client = init_salt_client()
+ hw_nodes = config.get('HW_NODES') or []
+ skipped_nodes = config.get('skipped_nodes') or []
+ if skipped_nodes:
+ print "Notice: {0} nodes will be skipped for hw2hw test".format(skipped_nodes)
+ if not hw_nodes:
+ nodes = local_salt_client.cmd(
+ 'I@salt:control or I@nova:compute',
+ 'test.ping',
+ expr_form='compound')
+ hw_nodes = [node for node in nodes.keys() if node not in skipped_nodes]
+ print local_salt_client.cmd(expr_form='compound', tgt="L@"+','.join(hw_nodes),
+ fun='pkg.install', param=['iperf'])
+ return compile_pairs(hw_nodes)
+
+def get_configuration():
+    """Return the configuration for the environment and,
+    if specified, for the test."""
+
+ global_config_file = os.path.join(
+ os.path.dirname(os.path.abspath(__file__)), "../global_config.yaml")
+ with open(global_config_file, 'r') as file:
+ global_config = yaml.load(file)
+ for param in global_config.keys():
+ if param in os.environ.keys():
+ if ',' in os.environ[param]:
+ global_config[param] = []
+ for item in os.environ[param].split(','):
+ global_config[param].append(item)
+ else:
+ global_config[param] = os.environ[param]
+
+ return global_config
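
`get_pairs()` and `get_hw_pairs()` reduce the host list to non-overlapping pairs via `compile_pairs()`; when the list length is odd, one host is dropped first. A quick standalone run of equivalent pairing logic on made-up compute hosts:

```python
# Same pairing behaviour as compile_pairs(), shown on made-up compute hosts.
def compile_pairs(nodes):
    result = {}
    if len(nodes) % 2 != 0:
        nodes.pop(1)                      # drop one node to get an even count
    for pair in zip(nodes[0::2], nodes[1::2]):
        result[pair[0] + '<>' + pair[1]] = pair
    return result

print(compile_pairs(['cmp001', 'cmp002', 'cmp003', 'cmp004']))
# -> {'cmp001<>cmp002': ('cmp001', 'cmp002'),
#     'cmp003<>cmp004': ('cmp003', 'cmp004')}  (key order may vary)
```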
diff --git a/test_set/cvp-spt/utils/helpers.py b/test_set/cvp-spt/utils/helpers.py
new file mode 100644
index 0000000..97a3bae
--- /dev/null
+++ b/test_set/cvp-spt/utils/helpers.py
@@ -0,0 +1,79 @@
+import texttable as tt
+
+class helpers(object):
+ def __init__(self, local_salt_client):
+ self.local_salt_client = local_salt_client
+
+ def start_iperf_between_hosts(self, global_results, node_i, node_j, ip_i, ip_j, net_name):
+ result = []
+ direct_raw_results = self.start_iperf_client(node_i, ip_j)
+ result.append(direct_raw_results)
+ print "1 forward"
+ forward = "1 thread:\n"
+ forward += direct_raw_results + " Gbits/sec"
+
+ direct_raw_results = self.start_iperf_client(node_i, ip_j, 10)
+ result.append(direct_raw_results)
+ print "10 forward"
+ forward += "\n\n10 thread:\n"
+ forward += direct_raw_results + " Gbits/sec"
+
+ reverse_raw_results = self.start_iperf_client(node_j, ip_i)
+ result.append(reverse_raw_results)
+ print "1 backward"
+ backward = "1 thread:\n"
+ backward += reverse_raw_results + " Gbits/sec"
+
+ reverse_raw_results = self.start_iperf_client(node_j, ip_i, 10)
+ result.append(reverse_raw_results)
+ print "10 backward"
+ backward += "\n\n10 thread:\n"
+ backward += reverse_raw_results + " Gbits/sec"
+ global_results.append([node_i, node_j,
+ net_name, forward, backward])
+
+ self.kill_iperf_processes(node_i)
+ self.kill_iperf_processes(node_j)
+ return result
+
+ def draw_table_with_results(self, global_results):
+ tab = tt.Texttable()
+ header = [
+ 'node name 1',
+ 'node name 2',
+ 'network',
+ 'bandwidth >',
+ 'bandwidth <',
+ ]
+ tab.set_cols_align(['l', 'l', 'l', 'l', 'l'])
+ tab.set_cols_width([27, 27, 15, 20, 20])
+ tab.header(header)
+ for row in global_results:
+ tab.add_row(row)
+ s = tab.draw()
+ print s
+
+ def start_iperf_client(self, minion_name, target_ip, thread_count=None):
+ iperf_command = 'timeout --kill-after=20 19 iperf -c {0}'.format(target_ip)
+ if thread_count:
+ iperf_command += ' -P {0}'.format(thread_count)
+ output = self.local_salt_client.cmd(tgt=minion_name,
+ fun='cmd.run',
+ param=[iperf_command])
+ # self.kill_iperf_processes(minion_name)
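+ # The last line of the iperf client output ends with "<value> <unit>";
+ # the parsing below keeps those two fields and converts Mbits/sec
+ # results to Gbits/sec (factor 0.001).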
+ try:
+ result = output.values()[0].split('\n')[-1].split(' ')[-2:]
+ if result[1] == 'Mbits/sec':
+ return str(float(result[0])*0.001)
+ if result[1] != 'Gbits/sec':
+ return "0"
+ return result[0]
+ except Exception:
+ print "No iperf result between {} and {} (maybe they don't have connectivity)".format(minion_name, target_ip)
+ return "0"
+
+
+ def kill_iperf_processes(self, minion_name):
+ kill_command = "for pid in $(pgrep iperf); do kill $pid; done"
+ output = self.local_salt_client.cmd(tgt=minion_name,
+ fun='cmd.run',
+ param=[kill_command])
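+
+# Minimal usage sketch (illustrative; node names and IPs are placeholders and an
+# initialised salt client with iperf available on both minions is assumed):
+#   h = helpers(utils.init_salt_client())
+#   rows = []
+#   h.start_iperf_between_hosts(rows, 'cmp001', 'cmp002', '10.0.0.1', '10.0.0.2', 'control')
+#   h.draw_table_with_results(rows)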
diff --git a/test_set/cvp-spt/utils/os_client.py b/test_set/cvp-spt/utils/os_client.py
new file mode 100644
index 0000000..fb84265
--- /dev/null
+++ b/test_set/cvp-spt/utils/os_client.py
@@ -0,0 +1,399 @@
+from cinderclient import client as cinder_client
+from glanceclient import client as glance_client
+from keystoneauth1 import identity as keystone_identity
+from keystoneauth1 import session as keystone_session
+from keystoneclient.v3 import client as keystone_client
+from neutronclient.v2_0 import client as neutron_client
+from novaclient import client as novaclient
+
+import os
+import random
+import time
+import utils
+
+class OfficialClientManager(object):
+ """Manager that provides access to the official python clients for
+ calling various OpenStack APIs.
+ """
+
+ CINDERCLIENT_VERSION = 3
+ GLANCECLIENT_VERSION = 2
+ KEYSTONECLIENT_VERSION = 3
+ NEUTRONCLIENT_VERSION = 2
+ NOVACLIENT_VERSION = 2
+ INTERFACE = 'admin'
+ if "OS_ENDPOINT_TYPE" in os.environ.keys():
+ INTERFACE = os.environ["OS_ENDPOINT_TYPE"]
+
+ def __init__(self, username=None, password=None,
+ tenant_name=None, auth_url=None, endpoint_type="internalURL",
+ cert=False, domain="Default", **kwargs):
+ self.traceback = ""
+
+ self.client_attr_names = [
+ "auth",
+ "compute",
+ "network",
+ "volume",
+ "image",
+ ]
+ self.username = username
+ self.password = password
+ self.tenant_name = tenant_name
+ self.project_name = tenant_name
+ self.auth_url = auth_url
+ self.endpoint_type = endpoint_type
+ self.cert = cert
+ self.domain = domain
+ self.kwargs = kwargs
+
+ # Lazy clients
+ self._auth = None
+ self._compute = None
+ self._network = None
+ self._volume = None
+ self._image = None
+
+ @classmethod
+ def _get_auth_session(cls, username=None, password=None,
+ tenant_name=None, auth_url=None, cert=None,
+ domain='Default'):
+ if None in (username, password, tenant_name):
+ print(username, password, tenant_name)
+ msg = ("Missing required credentials for identity client. "
+ "username: {username}, password: {password}, "
+ "tenant_name: {tenant_name}").format(
+ username=username,
+ password=password,
+ tenant_name=tenant_name, )
+ raise RuntimeError(msg)
+
+ if cert and "https" not in auth_url:
+ auth_url = auth_url.replace("http", "https")
+
+ if cls.KEYSTONECLIENT_VERSION == (2, 0):
+ # auth_url = "{}{}".format(auth_url, "v2.0/")
+ auth = keystone_identity.v2.Password(
+ username=username,
+ password=password,
+ auth_url=auth_url,
+ tenant_name=tenant_name)
+ else:
+ auth_url = "{}{}".format(auth_url, "/v3")
+ auth = keystone_identity.v3.Password(
+ auth_url=auth_url,
+ user_domain_name=domain,
+ username=username,
+ password=password,
+ project_domain_name=domain,
+ project_name=tenant_name)
+
+ auth_session = keystone_session.Session(auth=auth, verify=cert)
+ # auth_session.get_auth_headers()
+ return auth_session
+
+ @classmethod
+ def get_auth_client(cls, username=None, password=None,
+ tenant_name=None, auth_url=None, cert=None,
+ domain='Default', **kwargs):
+ session = cls._get_auth_session(
+ username=username,
+ password=password,
+ tenant_name=tenant_name,
+ auth_url=auth_url,
+ cert=cert,
+ domain=domain)
+ keystone = keystone_client.Client(version=cls.KEYSTONECLIENT_VERSION,
+ session=session, **kwargs)
+ keystone.management_url = auth_url
+ return keystone
+
+ @classmethod
+ def get_compute_client(cls, username=None, password=None,
+ tenant_name=None, auth_url=None, cert=None,
+ domain='Default', **kwargs):
+ session = cls._get_auth_session(
+ username=username, password=password, tenant_name=tenant_name,
+ auth_url=auth_url, cert=cert, domain=domain)
+ service_type = 'compute'
+ compute_client = novaclient.Client(
+ version=cls.NOVACLIENT_VERSION, session=session,
+ service_type=service_type, os_cache=False, **kwargs)
+ return compute_client
+
+ @classmethod
+ def get_network_client(cls, username=None, password=None,
+ tenant_name=None, auth_url=None, cert=None,
+ domain='Default', **kwargs):
+ session = cls._get_auth_session(
+ username=username, password=password, tenant_name=tenant_name,
+ auth_url=auth_url, cert=cert, domain=domain)
+ service_type = 'network'
+ return neutron_client.Client(
+ service_type=service_type, session=session, interface=cls.INTERFACE, **kwargs)
+
+ @classmethod
+ def get_volume_client(cls, username=None, password=None,
+ tenant_name=None, auth_url=None, cert=None,
+ domain='Default', **kwargs):
+ session = cls._get_auth_session(
+ username=username, password=password, tenant_name=tenant_name,
+ auth_url=auth_url, cert=cert, domain=domain)
+ service_type = 'volume'
+ return cinder_client.Client(
+ version=cls.CINDERCLIENT_VERSION,
+ service_type=service_type,
+ interface=cls.INTERFACE,
+ session=session, **kwargs)
+
+ @classmethod
+ def get_image_client(cls, username=None, password=None,
+ tenant_name=None, auth_url=None, cert=None,
+ domain='Default', **kwargs):
+ session = cls._get_auth_session(
+ username=username, password=password, tenant_name=tenant_name,
+ auth_url=auth_url, cert=cert, domain=domain)
+ service_type = 'image'
+ return glance_client.Client(
+ version=cls.GLANCECLIENT_VERSION,
+ service_type=service_type,
+ session=session, interface=cls.INTERFACE,
+ **kwargs)
+
+ @property
+ def auth(self):
+ if self._auth is None:
+ self._auth = self.get_auth_client(
+ self.username, self.password, self.tenant_name, self.auth_url,
+ self.cert, self.domain, endpoint_type=self.endpoint_type
+ )
+ return self._auth
+
+ @property
+ def compute(self):
+ if self._compute is None:
+ self._compute = self.get_compute_client(
+ self.username, self.password, self.tenant_name, self.auth_url,
+ self.cert, self.domain, endpoint_type=self.endpoint_type
+ )
+ return self._compute
+
+ @property
+ def network(self):
+ if self._network is None:
+ self._network = self.get_network_client(
+ self.username, self.password, self.tenant_name, self.auth_url,
+ self.cert, self.domain, endpoint_type=self.endpoint_type
+ )
+ return self._network
+
+ @property
+ def volume(self):
+ if self._volume is None:
+ self._volume = self.get_volume_client(
+ self.username, self.password, self.tenant_name, self.auth_url,
+ self.cert, self.domain, endpoint_type=self.endpoint_type
+ )
+ return self._volume
+
+ @property
+ def image(self):
+ if self._image is None:
+ self._image = self.get_image_client(
+ self.username, self.password, self.tenant_name, self.auth_url,
+ self.cert, self.domain
+ )
+ return self._image
+
+
+class OSCliActions(object):
+ def __init__(self, os_clients):
+ self.os_clients = os_clients
+
+ def get_admin_tenant(self):
+ # TODO: Keystone v3 doesn't have a tenants attribute
+ return self.os_clients.auth.projects.find(name="admin")
+
+ # TODO: refactor
+ def get_cirros_image(self):
+ images_list = list(self.os_clients.image.images.list(name='TestVM'))
+ if images_list:
+ image = images_list[0]
+ else:
+ image = self.os_clients.image.images.create(
+ name="TestVM",
+ disk_format='qcow2',
+ container_format='bare')
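+ # NOTE: file_cache and settings.CIRROS_QCOW2_URL are not defined in this
+ # module; this fallback branch assumes they are provided by the calling
+ # environment (see the refactor TODO above).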
+ with file_cache.get_file(settings.CIRROS_QCOW2_URL) as f:
+ self.os_clients.image.images.upload(image.id, f)
+ return image
+
+ def get_internal_network(self):
+ networks = [
+ net for net in self.os_clients.network.list_networks()["networks"]
+ if net["admin_state_up"] and not net["router:external"] and
+ len(net["subnets"])
+ ]
+ if networks:
+ net = networks[0]
+ else:
+ net = self.create_network_resources()
+ return net
+
+ def get_external_network(self):
+ config = utils.get_configuration()
+ ext_net = config.get('external_network') or ''
+ if not ext_net:
+ networks = [
+ net for net in self.os_clients.network.list_networks()["networks"]
+ if net["admin_state_up"] and net["router:external"] and
+ len(net["subnets"])
+ ]
+ if networks:
+ ext_net = networks[0]
+ else:
+ ext_net = self.create_fake_external_network()
+ return ext_net
+
+ def create_flavor(self, name, ram=256, vcpus=1, disk=2):
+ return self.os_clients.compute.flavors.create(name, ram, vcpus, disk)
+
+ def create_sec_group(self, rulesets=None):
+ if rulesets is None:
+ rulesets = [
+ {
+ # ssh
+ 'ip_protocol': 'tcp',
+ 'from_port': 22,
+ 'to_port': 22,
+ 'cidr': '0.0.0.0/0',
+ },
+ {
+ # iperf
+ 'ip_protocol': 'tcp',
+ 'from_port': 5001,
+ 'to_port': 5001,
+ 'cidr': '0.0.0.0/0',
+ },
+ {
+ # ping
+ 'ip_protocol': 'icmp',
+ 'from_port': -1,
+ 'to_port': -1,
+ 'cidr': '0.0.0.0/0',
+ }
+ ]
+ sg_name = "spt-test-secgroup-{}".format(random.randrange(100, 999))
+ sg_desc = sg_name + " SPT"
+ secgroup = self.os_clients.compute.security_groups.create(
+ sg_name, sg_desc)
+ for ruleset in rulesets:
+ self.os_clients.compute.security_group_rules.create(
+ secgroup.id, **ruleset)
+ return secgroup
+
+
+ def wait(self, predicate, interval=5, timeout=60, timeout_msg="Waiting timed out"):
+ start_time = time.time()
+ if not timeout:
+ return predicate()
+ while not predicate():
+ if start_time + timeout < time.time():
+ raise RuntimeError(timeout_msg)
+
+ seconds_to_sleep = max(
+ 0,
+ min(interval, start_time + timeout - time.time()))
+ time.sleep(seconds_to_sleep)
+
+ return timeout + start_time - time.time()
+
+ def create_basic_server(self, image=None, flavor=None, net=None,
+ availability_zone=None, sec_groups=(),
+ keypair=None,
+ wait_timeout=3 * 60):
+ os_conn = self.os_clients
+ image = image or self.get_cirros_image()
+ flavor = flavor or self.get_micro_flavor()
+ net = net or self.get_internal_network()
+ kwargs = {}
+ if sec_groups:
+ kwargs['security_groups'] = sec_groups
+ server = os_conn.compute.servers.create(
+ "spt-test-server-{}".format(random.randrange(100, 999)),
+ image, flavor, nics=[{"net-id": net["id"]}],
+ availability_zone=availability_zone, key_name=keypair, **kwargs)
+ # TODO
+ #if wait_timeout:
+ # self.wait(
+ # lambda: os_conn.compute.servers.get(server).status == "ACTIVE",
+ # timeout=wait_timeout,
+ # timeout_msg=(
+ # "Create server {!r} failed by timeout. "
+ # "Please, take a look at OpenStack logs".format(server.id)))
+ return server
+
+ def create_network(self, tenant_id):
+ net_name = "spt-test-net-{}".format(random.randrange(100, 999))
+ net_body = {
+ 'network': {
+ 'name': net_name,
+ 'tenant_id': tenant_id
+ }
+ }
+ net = self.os_clients.network.create_network(net_body)['network']
+ return net
+ #yield net
+ #self.os_clients.network.delete_network(net['id'])
+
+ def create_subnet(self, net, tenant_id, cidr=None):
+ subnet_name = "spt-test-subnet-{}".format(random.randrange(100, 999))
+ subnet_body = {
+ 'subnet': {
+ "name": subnet_name,
+ 'network_id': net['id'],
+ 'ip_version': 4,
+ 'cidr': cidr if cidr else '10.1.7.0/24',
+ 'tenant_id': tenant_id
+ }
+ }
+ subnet = self.os_clients.network.create_subnet(subnet_body)['subnet']
+ return subnet
+ #yield subnet
+ #self.os_clients.network.delete_subnet(subnet['id'])
+
+ def create_router(self, ext_net, tenant_id):
+ name = 'spt-test-router-{}'.format(random.randrange(100, 999))
+ router_body = {
+ 'router': {
+ 'name': name,
+ 'external_gateway_info': {
+ 'network_id': ext_net['id']
+ },
+ 'tenant_id': tenant_id
+ }
+ }
+ router = self.os_clients.network.create_router(router_body)['router']
+ return router
+ #yield router
+ #self.os_clients.network.delete_router(router['id'])
+
+ def create_network_resources(self):
+ tenant_id = self.get_admin_tenant().id
+ ext_net = self.get_external_network()
+ net = self.create_network(tenant_id)
+ subnet = self.create_subnet(net, tenant_id)
+ #router = self.create_router(ext_net, tenant_id)
+ #self.os_clients.network.add_interface_router(
+ # router['id'], {'subnet_id': subnet['id']})
+
+ private_net_id = net['id']
+ # floating_ip_pool = ext_net['id']
+
+ return net
+ #yield private_net_id, floating_ip_pool
+ #yield private_net_id
+ #
+ #self.os_clients.network.remove_interface_router(
+ # router['id'], {'subnet_id': subnet['id']})
+ #self.os_clients.network.remove_gateway_router(router['id'])
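+
+# Illustrative usage only (never executed here); all credential values below are
+# placeholders and assume a reachable Keystone plus an existing TestVM image:
+#   clients = OfficialClientManager(username='admin', password='admin_pass',
+#                                   tenant_name='admin',
+#                                   auth_url='http://10.0.0.10:5000')
+#   actions = OSCliActions(clients)
+#   flavor = actions.create_flavor('spt-test-flavor')
+#   server = actions.create_basic_server(flavor=flavor)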
diff --git a/test_set/cvp-spt/utils/ssh.py b/test_set/cvp-spt/utils/ssh.py
new file mode 100644
index 0000000..66551eb
--- /dev/null
+++ b/test_set/cvp-spt/utils/ssh.py
@@ -0,0 +1,140 @@
+import cStringIO
+import logging
+import select
+import utils
+import paramiko
+
+
+logger = logging.getLogger(__name__)
+
+# Suppress paramiko logging
+logging.getLogger("paramiko").setLevel(logging.WARNING)
+
+
+class SSHTransport(object):
+ def __init__(self, address, username, password=None,
+ private_key=None, look_for_keys=False, *args, **kwargs):
+
+ self.address = address
+ self.username = username
+ self.password = password
+ if private_key is not None:
+ self.private_key = paramiko.RSAKey.from_private_key(
+ cStringIO.StringIO(private_key))
+ else:
+ self.private_key = None
+
+ self.look_for_keys = look_for_keys
+ self.buf_size = 1024
+ self.channel_timeout = 10.0
+
+ def _get_ssh_connection(self):
+ ssh = paramiko.SSHClient()
+ ssh.set_missing_host_key_policy(
+ paramiko.AutoAddPolicy())
+ ssh.connect(self.address, username=self.username,
+ password=self.password, pkey=self.private_key,
+ timeout=self.channel_timeout)
+ logger.debug("Successfully connected to: {0}".format(self.address))
+ return ssh
+
+ def _get_sftp_connection(self):
+ transport = paramiko.Transport((self.address, 22))
+ transport.connect(username=self.username,
+ password=self.password,
+ pkey=self.private_key)
+
+ return paramiko.SFTPClient.from_transport(transport)
+
+ def exec_sync(self, cmd):
+ logger.debug("Executing {0} on host {1}".format(cmd, self.address))
+ ssh = self._get_ssh_connection()
+ transport = ssh.get_transport()
+ channel = transport.open_session()
+ channel.fileno()
+ channel.exec_command(cmd)
+ channel.shutdown_write()
+ out_data = []
+ err_data = []
+ poll = select.poll()
+ poll.register(channel, select.POLLIN)
+
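+ # Drain stdout and stderr in buf_size chunks until the remote command
+ # closes the channel and no more data is pending.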
+ while True:
+ ready = poll.poll(self.channel_timeout)
+ if not any(ready):
+ continue
+ if not ready[0]:
+ continue
+ out_chunk = err_chunk = None
+ if channel.recv_ready():
+ out_chunk = channel.recv(self.buf_size)
+ out_data += out_chunk,
+ if channel.recv_stderr_ready():
+ err_chunk = channel.recv_stderr(self.buf_size)
+ err_data += err_chunk,
+ if channel.closed and not err_chunk and not out_chunk:
+ break
+ exit_status = channel.recv_exit_status()
+ logger.debug("Command {0} executed with status: {1}"
+ .format(cmd, exit_status))
+ return (
+ exit_status, ''.join(out_data).strip(), ''.join(err_data).strip())
+
+ def exec_command(self, cmd):
+ exit_status, stdout, stderr = self.exec_sync(cmd)
+ return stdout
+
+ def check_call(self, command, error_info=None, expected=None,
+ raise_on_err=True):
+ """Execute command and check for return code
+ :type command: str
+ :type error_info: str
+ :type expected: list
+ :type raise_on_err: bool
+ :rtype: ExecResult
+ :raises: DevopsCalledProcessError
+ """
+ if expected is None:
+ expected = [0]
+ ret = self.exec_sync(command)
+ exit_code, stdout_str, stderr_str = ret
+ if exit_code not in expected:
+ message = (
+ "{append}Command '{cmd}' returned exit code {code} while "
+ "expected {expected}\n"
+ "\tSTDOUT:\n"
+ "{stdout}"
+ "\n\tSTDERR:\n"
+ "{stderr}".format(
+ append=error_info + '\n' if error_info else '',
+ cmd=command,
+ code=exit_code,
+ expected=expected,
+ stdout=stdout_str,
+ stderr=stderr_str
+ ))
+ logger.error(message)
+ if raise_on_err:
+ exit()
+ return ret
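+
+ # Illustrative call (the command string is a placeholder):
+ #   exit_code, out, err = transport.check_call('uname -a', error_info='node check')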
+
+ def put_file(self, source_path, destination_path):
+ sftp = self._get_sftp_connection()
+ sftp.put(source_path, destination_path)
+ sftp.close()
+
+ def get_file(self, source_path, destination_path):
+ sftp = self._get_sftp_connection()
+ sftp.get(source_path, destination_path)
+ sftp.close()
+
+
+class prepare_iperf(object):
+
+ def __init__(self, fip, user='ubuntu', password='password', private_key=None):
+ transport = SSHTransport(fip, user, password, private_key)
+ config = utils.get_configuration()
+ preparation_cmd = config.get('iperf_prep_string') or ['']
+ # iperf_prep_string may be a list (e.g. after a comma-separated env override),
+ # so join it into a single shell command before executing it
+ if isinstance(preparation_cmd, list):
+ preparation_cmd = ' ; '.join(preparation_cmd)
+ transport.exec_command(preparation_cmd)
+ transport.exec_command('sudo apt-get update; sudo apt-get install -y iperf')
+ transport.exec_command('nohup iperf -s > file 2>&1 &')
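+
+# Illustrative only: the floating IP and key below are placeholders supplied by
+# the test environment; prepare_iperf installs iperf over SSH and leaves an
+# iperf server running in the background on the target VM:
+#   prepare_iperf('172.16.10.101', user='ubuntu', private_key=private_key_string)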