Merge "Improve test_implied_domain_roles"
diff --git a/README.rst b/README.rst
index 281516b..3d7c804 100644
--- a/README.rst
+++ b/README.rst
@@ -209,15 +209,9 @@
Python 3.x
----------
-Starting during the Liberty release development cycle work began on enabling
-Tempest to run under both Python 2.7 and Python 3.4. Tempest strives to fully
-support running with Python 3.4 and newer. A gating unit test job was added to
-also run Tempest's unit tests under Python 3. This means that the Tempest
-code at least imports under Python 3.4 and things that have unit test coverage
-will work on Python 3.4. However, because large parts of Tempest are
-self-verifying there might be uncaught issues running on Python 3. So until
-there is a gating job which does a full Tempest run using Python 3 there
-isn't any guarantee that running Tempest under Python 3 is bug free.
+Starting during the Pike cycle, Tempest has a gating CI job that runs Tempest
+with Python 3. Any Tempest release after 15.0.0 should fully support running
+under Python 3 as well as Python 2.7.
Legacy run method
-----------------
@@ -263,9 +257,7 @@
$ testr run tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_reboot_non_existent_server
-Alternatively, you can use the run_tempest.sh script which will create a venv
-and run the tests or use tox to do the same. Tox also contains several existing
-job configurations. For example::
+Tox also contains several existing job configurations. For example::
$ tox -efull
diff --git a/doc/source/library/clients.rst b/doc/source/library/clients.rst
index 086cfc9..0f4ba4c 100644
--- a/doc/source/library/clients.rst
+++ b/doc/source/library/clients.rst
@@ -16,9 +16,18 @@
The ``ServiceClients`` class provides a convenient way to get access to all
available service clients initialized with a provided set of credentials.
-------------------
-The clients module
-------------------
+-----------------------------
+The clients management module
+-----------------------------
.. automodule:: tempest.lib.services.clients
:members:
+
+------------------------------
+Compute service client modules
+------------------------------
+
+.. toctree::
+ :maxdepth: 2
+
+ service_clients/compute_clients
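
As a rough usage sketch of the clients management module documented above (not
part of this change; the Keystone URL and account values below are
placeholders), a ``ServiceClients`` instance can be built from explicitly
constructed credentials::

    # Sketch only: endpoint and credential values are made up.
    from tempest.lib import auth
    from tempest.lib.services import clients

    creds = auth.get_credentials(
        auth_url='http://keystone.example.com/v3',
        fill_in=False, identity_version='v3',
        username='demo', password='secret', project_name='demo',
        user_domain_name='Default', project_domain_name='Default')

    # Every registered service client is then reachable via this object.
    service_clients = clients.ServiceClients(
        creds, identity_uri='http://keystone.example.com/v3')
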
diff --git a/doc/source/library/service_clients/compute_clients.rst b/doc/source/library/service_clients/compute_clients.rst
new file mode 100644
index 0000000..4ca55d4
--- /dev/null
+++ b/doc/source/library/service_clients/compute_clients.rst
@@ -0,0 +1,7 @@
+.. _servers_client:
+
+Compute Client Usage
+====================
+
+.. automodule:: tempest.lib.services.compute.servers_client
+ :members:
diff --git a/doc/source/test-removal.rst b/doc/source/test-removal.rst
index 79a5846..4757dc4 100644
--- a/doc/source/test-removal.rst
+++ b/doc/source/test-removal.rst
@@ -38,8 +38,10 @@
#. The test proposed for removal has a failure rate < 0.50% in the gate over
the past release (the value and interval will likely be adjusted in the
future)
- #. There must not be an external user/consumer of tempest that depends on the
- test proposed for removal
+
+ .. _`prong #3`:
+ #. There must not be an external user/consumer of tempest
+ that depends on the test proposed for removal
The answers to 1 and 2 are easy to verify. For 1 just provide a link to the new
test location. If you are linking to the tempest removal patch please also put
@@ -133,6 +135,10 @@
#. A revert for a patch which added a broken test, or testing which didn't
actually run in the gate (basically any revert for something which
shouldn't have been added)
+ #. Tests that would become out of scope as a consequence of an API change,
+ as described in `API Compatibility`_.
+ Such tests cannot live in Tempest because of the branchless nature of
+      Tempest. Such tests must still honor `prong #3`_.
For the first exception type the only types of testing in tree which have been
declared out of scope at this point are:
@@ -149,7 +155,7 @@
Tempest Scope
^^^^^^^^^^^^^
-Also starting in the liberty cycle tempest has defined a set of projects which
+Starting in the liberty cycle tempest has defined a set of projects which
are defined as in scope for direct testing in tempest. As of today that list
is:
@@ -166,3 +172,17 @@
to maintain continuity after migrating the tests out of tempest.
.. _tempest plugin mechanism: http://docs.openstack.org/developer/tempest/plugin.html
+
+API Compatibility
+"""""""""""""""""
+
+If an API introduces a non-discoverable, backward incompatible change, and
+that change is not backported to all versions supported by Tempest, tests for
+that API can no longer live in Tempest.
+This is because tests would not be able to know or control which API response
+to expect, and thus would not be able to enforce a specific behavior.
+
+If a test that exists in Tempest would meet these criteria as a consequence of
+a change, the test must be removed according to the procedure discussed in
+this document. The API change should not be merged until all conditions
+required for test removal can be met.
\ No newline at end of file
diff --git a/releasenotes/notes/12.2.0-clients_module-16f3025f515bf9ec.yaml b/releasenotes/notes/12.2.0-clients_module-16f3025f515bf9ec.yaml
index 484d543..d07448a 100644
--- a/releasenotes/notes/12.2.0-clients_module-16f3025f515bf9ec.yaml
+++ b/releasenotes/notes/12.2.0-clients_module-16f3025f515bf9ec.yaml
@@ -4,7 +4,7 @@
plugins to declare and automatically register any service client defined
in the plugin.
- tempest.lib exposes a new stable interface, the clients module and
- ServiceClients class, which provides a convinient way for plugin tests to
+ ServiceClients class, which provides a convenient way for plugin tests to
access service clients defined in Tempest as well as service clients
defined in all loaded plugins.
The new ServiceClients class only exposes for now the service clients
diff --git a/releasenotes/notes/13.0.0-move-call-until-true-to-tempest-lib-c9ea70dd6fe9bd15.yaml b/releasenotes/notes/13.0.0-move-call-until-true-to-tempest-lib-c9ea70dd6fe9bd15.yaml
index 543cf7b..52c04af 100644
--- a/releasenotes/notes/13.0.0-move-call-until-true-to-tempest-lib-c9ea70dd6fe9bd15.yaml
+++ b/releasenotes/notes/13.0.0-move-call-until-true-to-tempest-lib-c9ea70dd6fe9bd15.yaml
@@ -2,4 +2,4 @@
deprecations:
- The ``call_until_true`` function is moved from the ``tempest.test`` module
to the ``tempest.lib.common.utils.test_utils`` module. Backward
- compatibilty is preserved until Ocata.
+ compatibility is preserved until Ocata.
diff --git a/releasenotes/notes/13.0.0-tempest-cleanup-nostandalone-39df2aafb2545d35.yaml b/releasenotes/notes/13.0.0-tempest-cleanup-nostandalone-39df2aafb2545d35.yaml
index 20f310d..813e47f 100644
--- a/releasenotes/notes/13.0.0-tempest-cleanup-nostandalone-39df2aafb2545d35.yaml
+++ b/releasenotes/notes/13.0.0-tempest-cleanup-nostandalone-39df2aafb2545d35.yaml
@@ -1,5 +1,5 @@
---
upgrade:
- - the already depreacted tempest-cleanup standalone command has been
+ - the already deprecated tempest-cleanup standalone command has been
removed. The corresponding functionalities can be accessed through
the unified `tempest` command (`tempest cleanup`).
diff --git a/releasenotes/notes/14.0.0-remo-stress-tests-81052b211ad95d2e.yaml b/releasenotes/notes/14.0.0-remo-stress-tests-81052b211ad95d2e.yaml
index aa3a78e..389b29f 100644
--- a/releasenotes/notes/14.0.0-remo-stress-tests-81052b211ad95d2e.yaml
+++ b/releasenotes/notes/14.0.0-remo-stress-tests-81052b211ad95d2e.yaml
@@ -1,4 +1,13 @@
---
+prelude: >
+  This release marks the end of Liberty release support in Tempest.
upgrade:
- The Stress tests framework and all the stress tests have been removed.
+other:
+ - |
+ OpenStack releases supported at this time are **Mitaka** and **Newton**.
+ The release under current development as of this tag is Ocata, meaning that
+ every Tempest commit is also tested against master during the Ocata cycle.
+ However, this does not necessarily mean that using Tempest as of this tag
+    will work against an Ocata (or future release) cloud.
diff --git a/releasenotes/notes/add-identity-v3-clients-as-a-library-d34b4fdf376984ad.yaml b/releasenotes/notes/15.0.0-add-identity-v3-clients-as-a-library-d34b4fdf376984ad.yaml
similarity index 100%
rename from releasenotes/notes/add-identity-v3-clients-as-a-library-d34b4fdf376984ad.yaml
rename to releasenotes/notes/15.0.0-add-identity-v3-clients-as-a-library-d34b4fdf376984ad.yaml
diff --git a/releasenotes/notes/add-image-clients-tests-49dbc0a0a4281a77.yaml b/releasenotes/notes/15.0.0-add-image-clients-tests-49dbc0a0a4281a77.yaml
similarity index 100%
rename from releasenotes/notes/add-image-clients-tests-49dbc0a0a4281a77.yaml
rename to releasenotes/notes/15.0.0-add-image-clients-tests-49dbc0a0a4281a77.yaml
diff --git a/releasenotes/notes/add-implied-roles-to-roles-client-library-edf96408ad9ba82e.yaml b/releasenotes/notes/15.0.0-add-implied-roles-to-roles-client-library-edf96408ad9ba82e.yaml
similarity index 100%
rename from releasenotes/notes/add-implied-roles-to-roles-client-library-edf96408ad9ba82e.yaml
rename to releasenotes/notes/15.0.0-add-implied-roles-to-roles-client-library-edf96408ad9ba82e.yaml
diff --git a/releasenotes/notes/add-snapshot-manage-client-as-library-a76ffdba9d8d01cb.yaml b/releasenotes/notes/15.0.0-add-snapshot-manage-client-as-library-a76ffdba9d8d01cb.yaml
similarity index 100%
rename from releasenotes/notes/add-snapshot-manage-client-as-library-a76ffdba9d8d01cb.yaml
rename to releasenotes/notes/15.0.0-add-snapshot-manage-client-as-library-a76ffdba9d8d01cb.yaml
diff --git a/releasenotes/notes/deprecate-allow_port_security_disabled-option-2d3d87f6bd11d03a.yaml b/releasenotes/notes/15.0.0-deprecate-allow_port_security_disabled-option-2d3d87f6bd11d03a.yaml
similarity index 100%
rename from releasenotes/notes/deprecate-allow_port_security_disabled-option-2d3d87f6bd11d03a.yaml
rename to releasenotes/notes/15.0.0-deprecate-allow_port_security_disabled-option-2d3d87f6bd11d03a.yaml
diff --git a/releasenotes/notes/deprecate-identity-feature-enabled.reseller-84800a8232fe217f.yaml b/releasenotes/notes/15.0.0-deprecate-identity-feature-enabled.reseller-84800a8232fe217f.yaml
similarity index 100%
rename from releasenotes/notes/deprecate-identity-feature-enabled.reseller-84800a8232fe217f.yaml
rename to releasenotes/notes/15.0.0-deprecate-identity-feature-enabled.reseller-84800a8232fe217f.yaml
diff --git a/releasenotes/notes/deprecate-volume_feature_enabled.volume_services-dbe024ea067d5ab2.yaml b/releasenotes/notes/15.0.0-deprecate-volume_feature_enabled.volume_services-dbe024ea067d5ab2.yaml
similarity index 100%
rename from releasenotes/notes/deprecate-volume_feature_enabled.volume_services-dbe024ea067d5ab2.yaml
rename to releasenotes/notes/15.0.0-deprecate-volume_feature_enabled.volume_services-dbe024ea067d5ab2.yaml
diff --git a/releasenotes/notes/jsonschema-validator-2377ba131e12d3c7.yaml b/releasenotes/notes/15.0.0-jsonschema-validator-2377ba131e12d3c7.yaml
similarity index 100%
rename from releasenotes/notes/jsonschema-validator-2377ba131e12d3c7.yaml
rename to releasenotes/notes/15.0.0-jsonschema-validator-2377ba131e12d3c7.yaml
diff --git a/releasenotes/notes/15.0.0-remove-deprecated-compute-microversion-config-options-eaee6a7d2f8390a8.yaml b/releasenotes/notes/15.0.0-remove-deprecated-compute-microversion-config-options-eaee6a7d2f8390a8.yaml
new file mode 100644
index 0000000..b1c0c62
--- /dev/null
+++ b/releasenotes/notes/15.0.0-remove-deprecated-compute-microversion-config-options-eaee6a7d2f8390a8.yaml
@@ -0,0 +1,9 @@
+---
+upgrade:
+  - The deprecated compute microversion config options from the
+    'compute-feature-enabled' group have been removed. Those config options
+    are available under the 'compute' group to configure the min and max
+    microversion for the compute service.
+
+ * CONF.compute.min_microversion
+ * CONF.compute.max_microversion
diff --git a/releasenotes/notes/15.0.0-remove-deprecated-compute-validation-config-options-e3d1b89ce074d71c.yaml b/releasenotes/notes/15.0.0-remove-deprecated-compute-validation-config-options-e3d1b89ce074d71c.yaml
new file mode 100644
index 0000000..104bf27
--- /dev/null
+++ b/releasenotes/notes/15.0.0-remove-deprecated-compute-validation-config-options-e3d1b89ce074d71c.yaml
@@ -0,0 +1,25 @@
+---
+prelude: >
+  This release marks the start of Ocata release support in Tempest.
+upgrade:
+ - |
+    The following deprecated config options from the compute group have been
+    removed. The corresponding config options are already available in the
+    validation group.
+
+ - ``compute.use_floatingip_for_ssh`` (available as ``validation.connect_method``)
+ - ``compute.ssh_auth_method`` (available as ``validation.auth_method``)
+ - ``compute.image_ssh_password`` (available as ``validation.image_ssh_password``)
+ - ``compute.ssh_shell_prologue`` (available as ``validation.ssh_shell_prologue``)
+    - ``compute.ping_size`` (available as ``validation.ping_size``)
+    - ``compute.ping_count`` (available as ``validation.ping_count``)
+    - ``compute.floating_ip_range`` (available as ``validation.floating_ip_range``)
+other:
+ - |
+ OpenStack releases supported at this time are **Mitaka**, **Newton**,
+ and **Ocata**.
+
+ The release under current development as of this tag is Pike,
+ meaning that every Tempest commit is also tested against master during
+ the Pike cycle. However, this does not necessarily mean that using
+    Tempest as of this tag will work against a Pike (or future release)
+ cloud.
diff --git a/releasenotes/notes/15.0.0-remove-deprecated-input-scenario-config-options-414e0c5442e967e9.yaml b/releasenotes/notes/15.0.0-remove-deprecated-input-scenario-config-options-414e0c5442e967e9.yaml
new file mode 100644
index 0000000..371c061
--- /dev/null
+++ b/releasenotes/notes/15.0.0-remove-deprecated-input-scenario-config-options-414e0c5442e967e9.yaml
@@ -0,0 +1,6 @@
+---
+upgrade:
+ - The deprecated input-scenario config options and group
+ have been removed.
+    The input scenarios functionality was already removed from Tempest, and
+    this release removes the corresponding config options as well.
diff --git a/releasenotes/notes/15.0.0-remove-deprecated-network-config-options-f9ce276231578fe6.yaml b/releasenotes/notes/15.0.0-remove-deprecated-network-config-options-f9ce276231578fe6.yaml
new file mode 100644
index 0000000..e445fb3
--- /dev/null
+++ b/releasenotes/notes/15.0.0-remove-deprecated-network-config-options-f9ce276231578fe6.yaml
@@ -0,0 +1,11 @@
+---
+upgrade:
+ - |
+    The following deprecated network config options have been removed.
+    They had already been renamed to the more meaningful names listed below.
+
+ - ``tenant_network_cidr`` (removed) -> ``project_network_cidr``
+ - ``tenant_network_mask_bits`` (removed) -> ``project_network_mask_bits``
+ - ``tenant_network_v6_cidr`` (removed) -> ``project_network_v6_cidr``
+ - ``tenant_network_v6_mask_bits`` (removed) -> ``project_network_v6_mask_bits``
+ - ``tenant_networks_reachable`` (removed) -> ``project_networks_reachable``
diff --git a/releasenotes/notes/add-list-security-groups-by-servers-to-servers-client-library-088df48f6d81f4be.yaml b/releasenotes/notes/add-list-security-groups-by-servers-to-servers-client-library-088df48f6d81f4be.yaml
new file mode 100644
index 0000000..67f9541
--- /dev/null
+++ b/releasenotes/notes/add-list-security-groups-by-servers-to-servers-client-library-088df48f6d81f4be.yaml
@@ -0,0 +1,6 @@
+---
+features:
+ - |
+ Add the list security groups by server API to the servers_client
+    library. This makes it possible to list the security groups applied
+    to a server instance.
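
A minimal usage sketch of the new call (not part of this change; the method
name follows the release note wording) inside a compute API test could look
like::

    from tempest.api.compute import base


    class SecurityGroupsByServerExample(base.BaseV2ComputeTest):

        def test_list_security_groups_for_server(self):
            # Boot a server and list the security groups applied to it.
            server = self.create_test_server(wait_until='ACTIVE')
            sec_groups = self.servers_client.list_security_groups_by_server(
                server['id'])['security_groups']
            self.assertNotEmpty(sec_groups)
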
diff --git a/releasenotes/notes/create-server-tags-client-8c0042a77e859af6.yaml b/releasenotes/notes/create-server-tags-client-8c0042a77e859af6.yaml
new file mode 100644
index 0000000..9927971
--- /dev/null
+++ b/releasenotes/notes/create-server-tags-client-8c0042a77e859af6.yaml
@@ -0,0 +1,8 @@
+---
+features:
+ - |
+ Add server tags APIs to the servers_client library.
+    This makes it possible to update, delete and check the
+    existence of a tag on a server, as well as to update and
+    delete all tags on a server.
+
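
A sketch of how the new calls might be exercised (not part of this change; the
method names are assumptions based on the release note, and server tags
require a compute microversion of 2.26 or later)::

    from tempest.api.compute import base


    class ServerTagsExample(base.BaseV2ComputeTest):
        min_microversion = '2.26'

        def test_server_tag_lifecycle(self):
            server = self.create_test_server(wait_until='ACTIVE')
            # Add a single tag, check that it exists, then delete it.
            self.servers_client.update_tag(server['id'], 'db')
            self.servers_client.check_tag_existence(server['id'], 'db')
            self.servers_client.delete_tag(server['id'], 'db')
            # Replace the whole tag list, then remove all tags.
            self.servers_client.update_all_tags(server['id'], ['web', 'prod'])
            self.servers_client.delete_all_tags(server['id'])
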
diff --git a/releasenotes/notes/deprecate-skip_unless_config-decorator-64c32d588043ab12.yaml b/releasenotes/notes/deprecate-skip_unless_config-decorator-64c32d588043ab12.yaml
new file mode 100644
index 0000000..6285ea6
--- /dev/null
+++ b/releasenotes/notes/deprecate-skip_unless_config-decorator-64c32d588043ab12.yaml
@@ -0,0 +1,5 @@
+---
+deprecations:
+ - The ``skip_unless_config`` and ``skip_if_config`` decorators in the
+ ``config`` module have been deprecated and will be removed in the Queens
+    dev cycle. Use ``testtools.skipUnless`` (or a variation of it) instead.
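
A sketch of the suggested replacement (not part of this change), using the
existing ``CONF.compute_feature_enabled.resize`` option that appears elsewhere
in this patch::

    import testtools

    from tempest.api.compute import base
    from tempest import config

    CONF = config.CONF


    class ResizeExampleTest(base.BaseV2ComputeTest):

        @testtools.skipUnless(CONF.compute_feature_enabled.resize,
                              'Resize is not available.')
        def test_resize_server_example(self):
            pass  # test body omitted in this sketch
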
diff --git a/releasenotes/notes/remove-call_until_true-of-test-de9c13bc8f969921.yaml b/releasenotes/notes/remove-call_until_true-of-test-de9c13bc8f969921.yaml
new file mode 100644
index 0000000..5670821
--- /dev/null
+++ b/releasenotes/notes/remove-call_until_true-of-test-de9c13bc8f969921.yaml
@@ -0,0 +1,6 @@
+---
+upgrade:
+  - The *call_until_true* function of the *test* module has been removed
+    because it was marked as deprecated and Tempest now provides it from
+    *test_utils* as a stable interface instead. Please switch to
+    *test_utils.call_until_true* if necessary.
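
For reference, a minimal sketch (not part of this change) of the stable
interface: ``call_until_true`` repeatedly calls the given predicate until it
returns ``True`` or ``duration`` seconds elapse, sleeping ``sleep_for``
seconds between attempts::

    from tempest.lib.common.utils import test_utils


    def _server_is_active():
        # Hypothetical predicate; a real test would poll the compute API here.
        return True

    # Returns True if the predicate succeeded within the duration.
    reached = test_utils.call_until_true(_server_is_active,
                                         duration=60, sleep_for=5)
    assert reached, 'condition was not met within 60 seconds'
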
diff --git a/releasenotes/source/index.rst b/releasenotes/source/index.rst
index 242d133..cea76b4 100644
--- a/releasenotes/source/index.rst
+++ b/releasenotes/source/index.rst
@@ -6,6 +6,7 @@
:maxdepth: 1
unreleased
+ v15.0.0
v14.0.0
v13.0.0
v12.0.0
diff --git a/releasenotes/source/v15.0.0.rst b/releasenotes/source/v15.0.0.rst
new file mode 100644
index 0000000..2ee1894
--- /dev/null
+++ b/releasenotes/source/v15.0.0.rst
@@ -0,0 +1,6 @@
+=====================
+v15.0.0 Release Notes
+=====================
+
+.. release-notes:: 15.0.0 Release Notes
+ :version: 15.0.0
diff --git a/requirements.txt b/requirements.txt
index d9a9ebb..124da7a 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -12,7 +12,7 @@
oslo.config!=3.18.0,>=3.14.0 # Apache-2.0
oslo.log>=3.11.0 # Apache-2.0
oslo.serialization>=1.10.0 # Apache-2.0
-oslo.utils>=3.18.0 # Apache-2.0
+oslo.utils>=3.20.0 # Apache-2.0
six>=1.9.0 # MIT
fixtures>=3.0.0 # Apache-2.0/BSD
PyYAML>=3.10.0 # MIT
diff --git a/run_tempest.sh b/run_tempest.sh
deleted file mode 100755
index 414146b..0000000
--- a/run_tempest.sh
+++ /dev/null
@@ -1,135 +0,0 @@
-#!/usr/bin/env bash
-
-echo "WARNING: This script is deprecated and will be removed in the near future. Please migrate to tempest run or another method of launching a test runner"
-
-function usage {
- echo "Usage: $0 [OPTION]..."
- echo "Run Tempest test suite"
- echo ""
- echo " -V, --virtual-env Always use virtualenv. Install automatically if not present"
- echo " -N, --no-virtual-env Don't use virtualenv. Run tests in local environment"
- echo " -n, --no-site-packages Isolate the virtualenv from the global Python environment"
- echo " -f, --force Force a clean re-build of the virtual environment. Useful when dependencies have been added."
- echo " -u, --update Update the virtual environment with any newer package versions"
- echo " -s, --smoke Only run smoke tests"
- echo " -t, --serial Run testr serially"
- echo " -C, --config Config file location"
- echo " -h, --help Print this usage message"
- echo " -d, --debug Run tests with testtools instead of testr. This allows you to use PDB"
- echo " -- [TESTROPTIONS] After the first '--' you can pass arbitrary arguments to testr "
-}
-
-testrargs=""
-venv=${VENV:-.venv}
-with_venv=tools/with_venv.sh
-serial=0
-always_venv=0
-never_venv=0
-no_site_packages=0
-debug=0
-force=0
-wrapper=""
-config_file=""
-update=0
-
-if ! options=$(getopt -o VNnfusthdC:lL: -l virtual-env,no-virtual-env,no-site-packages,force,update,smoke,serial,help,debug,config: -- "$@")
-then
- # parse error
- usage
- exit 1
-fi
-
-eval set -- $options
-first_uu=yes
-while [ $# -gt 0 ]; do
- case "$1" in
- -h|--help) usage; exit;;
- -V|--virtual-env) always_venv=1; never_venv=0;;
- -N|--no-virtual-env) always_venv=0; never_venv=1;;
- -n|--no-site-packages) no_site_packages=1;;
- -f|--force) force=1;;
- -u|--update) update=1;;
- -d|--debug) debug=1;;
- -C|--config) config_file=$2; shift;;
- -s|--smoke) testrargs+="smoke";;
- -t|--serial) serial=1;;
- --) [ "yes" == "$first_uu" ] || testrargs="$testrargs $1"; first_uu=no ;;
- *) testrargs="$testrargs $1";;
- esac
- shift
-done
-
-if [ -n "$config_file" ]; then
- config_file=`readlink -f "$config_file"`
- export TEMPEST_CONFIG_DIR=`dirname "$config_file"`
- export TEMPEST_CONFIG=`basename "$config_file"`
-fi
-
-cd `dirname "$0"`
-
-if [ $no_site_packages -eq 1 ]; then
- installvenvopts="--no-site-packages"
-fi
-
-function testr_init {
- if [ ! -d .testrepository ]; then
- ${wrapper} testr init
- fi
-}
-
-function run_tests {
- testr_init
- ${wrapper} find . -type f -name "*.pyc" -delete
- export OS_TEST_PATH=./tempest/test_discover
- if [ $debug -eq 1 ]; then
- if [ "$testrargs" = "" ]; then
- testrargs="discover ./tempest/test_discover"
- fi
- ${wrapper} python -m testtools.run $testrargs
- return $?
- fi
-
- if [ $serial -eq 1 ]; then
- ${wrapper} testr run --subunit $testrargs | ${wrapper} subunit-trace -n -f
- else
- ${wrapper} testr run --parallel --subunit $testrargs | ${wrapper} subunit-trace -n -f
- fi
-}
-
-if [ $never_venv -eq 0 ]
-then
- # Remove the virtual environment if --force used
- if [ $force -eq 1 ]; then
- echo "Cleaning virtualenv..."
- rm -rf ${venv}
- fi
- if [ $update -eq 1 ]; then
- echo "Updating virtualenv..."
- virtualenv $installvenvopts $venv
- $venv/bin/pip install -U -r requirements.txt
- fi
- if [ -e ${venv} ]; then
- wrapper="${with_venv}"
- else
- if [ $always_venv -eq 1 ]; then
- # Automatically install the virtualenv
- virtualenv $installvenvopts $venv
- wrapper="${with_venv}"
- ${wrapper} pip install -U -r requirements.txt
- else
- echo -e "No virtual environment found...create one? (Y/n) \c"
- read use_ve
- if [ "x$use_ve" = "xY" -o "x$use_ve" = "x" -o "x$use_ve" = "xy" ]; then
- # Install the virtualenv and run the test suite in it
- virtualenv $installvenvopts $venv
- wrapper=${with_venv}
- ${wrapper} pip install -U -r requirements.txt
- fi
- fi
- fi
-fi
-
-run_tests
-retval=$?
-
-exit $retval
diff --git a/run_tests.sh b/run_tests.sh
deleted file mode 100755
index a856bb4..0000000
--- a/run_tests.sh
+++ /dev/null
@@ -1,193 +0,0 @@
-#!/usr/bin/env bash
-
-function usage {
- echo "Usage: $0 [OPTION]..."
- echo "Run Tempest unit tests"
- echo ""
- echo " -V, --virtual-env Always use virtualenv. Install automatically if not present"
- echo " -N, --no-virtual-env Don't use virtualenv. Run tests in local environment"
- echo " -n, --no-site-packages Isolate the virtualenv from the global Python environment"
- echo " -f, --force Force a clean re-build of the virtual environment. Useful when dependencies have been added."
- echo " -u, --update Update the virtual environment with any newer package versions"
- echo " -t, --serial Run testr serially"
- echo " -p, --pep8 Just run pep8"
- echo " -c, --coverage Generate coverage report"
- echo " -h, --help Print this usage message"
- echo " -d, --debug Run tests with testtools instead of testr. This allows you to use PDB"
- echo " -- [TESTROPTIONS] After the first '--' you can pass arbitrary arguments to testr "
-}
-
-function deprecation_warning {
- cat <<EOF
--------------------------------------------------------------------------
-WARNING: run_tests.sh is deprecated and this script will be removed after
-the Newton release. All tests should be run through testr/ostestr or tox.
-
-To run style checks:
-
- tox -e pep8
-
-To run python 2.7 unit tests
-
- tox -e py27
-
-To run unit tests and generate coverage report
-
- tox -e cover
-
-To run a subset of any of these tests:
-
- tox -e py27 someregex
-
- i.e.: tox -e py27 test_servers
-
-Additional tox targets are available in tox.ini. For more information
-see:
-http://docs.openstack.org/project-team-guide/project-setup/python.html
-
-NOTE: if you want to use testr to run tests, you can instead use:
-
- OS_TEST_PATH=./tempest/tests testr run
-
-Documentation on using testr directly can be found at
-http://testrepository.readthedocs.org/en/latest/MANUAL.html
--------------------------------------------------------------------------
-EOF
-}
-
-testrargs=""
-just_pep8=0
-venv=${VENV:-.venv}
-with_venv=tools/with_venv.sh
-serial=0
-always_venv=0
-never_venv=0
-no_site_packages=0
-debug=0
-force=0
-coverage=0
-wrapper=""
-config_file=""
-update=0
-
-deprecation_warning
-
-if ! options=$(getopt -o VNnfuctphd -l virtual-env,no-virtual-env,no-site-packages,force,update,serial,coverage,pep8,help,debug -- "$@")
-then
- # parse error
- usage
- exit 1
-fi
-
-eval set -- $options
-first_uu=yes
-while [ $# -gt 0 ]; do
- case "$1" in
- -h|--help) usage; exit;;
- -V|--virtual-env) always_venv=1; never_venv=0;;
- -N|--no-virtual-env) always_venv=0; never_venv=1;;
- -n|--no-site-packages) no_site_packages=1;;
- -f|--force) force=1;;
- -u|--update) update=1;;
- -d|--debug) debug=1;;
- -p|--pep8) let just_pep8=1;;
- -c|--coverage) coverage=1;;
- -t|--serial) serial=1;;
- --) [ "yes" == "$first_uu" ] || testrargs="$testrargs $1"; first_uu=no ;;
- *) testrargs="$testrargs $1";;
- esac
- shift
-done
-
-
-cd `dirname "$0"`
-
-if [ $no_site_packages -eq 1 ]; then
- installvenvopts="--no-site-packages"
-fi
-
-function testr_init {
- if [ ! -d .testrepository ]; then
- ${wrapper} testr init
- fi
-}
-
-function run_tests {
- testr_init
- ${wrapper} find . -type f -name "*.pyc" -delete
- export OS_TEST_PATH=./tempest/tests
- if [ $debug -eq 1 ]; then
- if [ "$testrargs" = "" ]; then
- testrargs="discover ./tempest/tests"
- fi
- ${wrapper} python -m testtools.run $testrargs
- return $?
- fi
-
- if [ $coverage -eq 1 ]; then
- ${wrapper} python setup.py test --coverage
- return $?
- fi
-
- if [ $serial -eq 1 ]; then
- ${wrapper} testr run --subunit $testrargs | ${wrapper} subunit-trace -n -f
- else
- ${wrapper} testr run --parallel --subunit $testrargs | ${wrapper} subunit-trace -n -f
- fi
-}
-
-function run_pep8 {
- echo "Running flake8 ..."
- if [ $never_venv -eq 1 ]; then
- echo "**WARNING**:" >&2
- echo "Running flake8 without virtual env may miss OpenStack HACKING detection" >&2
- fi
- ${wrapper} flake8
-}
-
-if [ $never_venv -eq 0 ]
-then
- # Remove the virtual environment if --force used
- if [ $force -eq 1 ]; then
- echo "Cleaning virtualenv..."
- rm -rf ${venv}
- fi
- if [ $update -eq 1 ]; then
- echo "Updating virtualenv..."
- virtualenv $installvenvopts $venv
- $venv/bin/pip install -U -r requirements.txt -r test-requirements.txt
- fi
- if [ -e ${venv} ]; then
- wrapper="${with_venv}"
- else
- if [ $always_venv -eq 1 ]; then
- # Automatically install the virtualenv
- virtualenv $installvenvopts $venv
- wrapper="${with_venv}"
- ${wrapper} pip install -U -r requirements.txt -r test-requirements.txt
- else
- echo -e "No virtual environment found...create one? (Y/n) \c"
- read use_ve
- if [ "x$use_ve" = "xY" -o "x$use_ve" = "x" -o "x$use_ve" = "xy" ]; then
- # Install the virtualenv and run the test suite in it
- virtualenv $installvenvopts $venv
- wrapper=${with_venv}
- ${wrapper} pip install -U -r requirements.txt -r test-requirements.txt
- fi
- fi
- fi
-fi
-
-if [ $just_pep8 -eq 1 ]; then
- run_pep8
- exit
-fi
-
-run_tests
-retval=$?
-
-if [ -z "$testrargs" ]; then
- run_pep8
-fi
-
-exit $retval
diff --git a/tempest/api/compute/admin/test_aggregates_negative.py b/tempest/api/compute/admin/test_aggregates_negative.py
index deded73..00107cd 100644
--- a/tempest/api/compute/admin/test_aggregates_negative.py
+++ b/tempest/api/compute/admin/test_aggregates_negative.py
@@ -34,7 +34,6 @@
def resource_setup(cls):
super(AggregatesAdminNegativeTestJSON, cls).resource_setup()
cls.aggregate_name_prefix = 'test_aggregate'
- cls.az_name_prefix = 'test_az'
hosts_all = cls.os_adm.hosts_client.list_hosts()['hosts']
hosts = ([host['host_name']
diff --git a/tempest/api/compute/admin/test_flavors.py b/tempest/api/compute/admin/test_flavors.py
index 3fd1612..c3c88a5 100644
--- a/tempest/api/compute/admin/test_flavors.py
+++ b/tempest/api/compute/admin/test_flavors.py
@@ -33,12 +33,6 @@
raise cls.skipException(msg)
@classmethod
- def setup_clients(cls):
- super(FlavorsAdminTestJSON, cls).setup_clients()
- cls.client = cls.os_adm.flavors_client
- cls.user_client = cls.os.flavors_client
-
- @classmethod
def resource_setup(cls):
super(FlavorsAdminTestJSON, cls).resource_setup()
@@ -50,50 +44,22 @@
cls.swap = 1024
cls.rxtx = 2
- def flavor_clean_up(self, flavor_id):
- self.client.delete_flavor(flavor_id)
- self.client.wait_for_resource_deletion(flavor_id)
-
- def _create_flavor(self, flavor_id):
- # Create a flavor and ensure it is listed
- # This operation requires the user to have 'admin' role
- flavor_name = data_utils.rand_name(self.flavor_name_prefix)
-
- # Create the flavor
- flavor = self.client.create_flavor(name=flavor_name,
- ram=self.ram, vcpus=self.vcpus,
- disk=self.disk,
- id=flavor_id,
- ephemeral=self.ephemeral,
- swap=self.swap,
- rxtx_factor=self.rxtx)['flavor']
- self.addCleanup(self.flavor_clean_up, flavor['id'])
- self.assertEqual(flavor['name'], flavor_name)
- self.assertEqual(flavor['vcpus'], self.vcpus)
- self.assertEqual(flavor['disk'], self.disk)
- self.assertEqual(flavor['ram'], self.ram)
- self.assertEqual(flavor['swap'], self.swap)
- self.assertEqual(flavor['rxtx_factor'], self.rxtx)
- self.assertEqual(flavor['OS-FLV-EXT-DATA:ephemeral'],
- self.ephemeral)
- self.assertEqual(flavor['os-flavor-access:is_public'], True)
-
- # Verify flavor is retrieved
- flavor = self.client.show_flavor(flavor['id'])['flavor']
- self.assertEqual(flavor['name'], flavor_name)
-
- return flavor['id']
-
@decorators.idempotent_id('8b4330e1-12c4-4554-9390-e6639971f086')
def test_create_flavor_with_int_id(self):
flavor_id = data_utils.rand_int_id(start=1000)
- new_flavor_id = self._create_flavor(flavor_id)
+ new_flavor_id = self.create_flavor(ram=self.ram,
+ vcpus=self.vcpus,
+ disk=self.disk,
+ id=flavor_id)['id']
self.assertEqual(new_flavor_id, str(flavor_id))
@decorators.idempotent_id('94c9bb4e-2c2a-4f3c-bb1f-5f0daf918e6d')
def test_create_flavor_with_uuid_id(self):
flavor_id = data_utils.rand_uuid()
- new_flavor_id = self._create_flavor(flavor_id)
+ new_flavor_id = self.create_flavor(ram=self.ram,
+ vcpus=self.vcpus,
+ disk=self.disk,
+ id=flavor_id)['id']
self.assertEqual(new_flavor_id, flavor_id)
@decorators.idempotent_id('f83fe669-6758-448a-a85e-32d351f36fe0')
@@ -101,7 +67,10 @@
# If nova receives a request with None as flavor_id,
# nova generates flavor_id of uuid.
flavor_id = None
- new_flavor_id = self._create_flavor(flavor_id)
+ new_flavor_id = self.create_flavor(ram=self.ram,
+ vcpus=self.vcpus,
+ disk=self.disk,
+ id=flavor_id)['id']
self.assertEqual(new_flavor_id, str(uuid.UUID(new_flavor_id)))
@decorators.idempotent_id('8261d7b0-be58-43ec-a2e5-300573c3f6c5')
@@ -109,24 +78,19 @@
# Create a flavor and ensure it's details are listed
# This operation requires the user to have 'admin' role
flavor_name = data_utils.rand_name(self.flavor_name_prefix)
- new_flavor_id = data_utils.rand_int_id(start=1000)
# Create the flavor
- flavor = self.client.create_flavor(name=flavor_name,
- ram=self.ram, vcpus=self.vcpus,
- disk=self.disk,
- id=new_flavor_id,
- ephemeral=self.ephemeral,
- swap=self.swap,
- rxtx_factor=self.rxtx)['flavor']
- self.addCleanup(self.flavor_clean_up, flavor['id'])
- flag = False
- # Verify flavor is retrieved
- flavors = self.client.list_flavors(detail=True)['flavors']
- for flavor in flavors:
- if flavor['name'] == flavor_name:
- flag = True
- self.assertTrue(flag)
+ self.create_flavor(name=flavor_name,
+ ram=self.ram, vcpus=self.vcpus,
+ disk=self.disk,
+ ephemeral=self.ephemeral,
+ swap=self.swap,
+ rxtx_factor=self.rxtx)
+
+ # Check if flavor is present in list
+ flavors_list = self.admin_flavors_client.list_flavors(
+ detail=True)['flavors']
+ self.assertIn(flavor_name, [f['name'] for f in flavors_list])
@decorators.idempotent_id('63dc64e6-2e79-4fdf-868f-85500d308d66')
def test_create_list_flavor_without_extra_data(self):
@@ -137,18 +101,17 @@
# check some extensions for the flavor create/show/detail response
self.assertEqual(flavor['swap'], '')
self.assertEqual(int(flavor['rxtx_factor']), 1)
- self.assertEqual(int(flavor['OS-FLV-EXT-DATA:ephemeral']), 0)
+ self.assertEqual(flavor['OS-FLV-EXT-DATA:ephemeral'], 0)
self.assertEqual(flavor['os-flavor-access:is_public'], True)
flavor_name = data_utils.rand_name(self.flavor_name_prefix)
new_flavor_id = data_utils.rand_int_id(start=1000)
# Create the flavor
- flavor = self.client.create_flavor(name=flavor_name,
- ram=self.ram, vcpus=self.vcpus,
- disk=self.disk,
- id=new_flavor_id)['flavor']
- self.addCleanup(self.flavor_clean_up, flavor['id'])
+ flavor = self.create_flavor(name=flavor_name,
+ ram=self.ram, vcpus=self.vcpus,
+ disk=self.disk,
+ id=new_flavor_id)
self.assertEqual(flavor['name'], flavor_name)
self.assertEqual(flavor['ram'], self.ram)
self.assertEqual(flavor['vcpus'], self.vcpus)
@@ -157,18 +120,17 @@
verify_flavor_response_extension(flavor)
# Verify flavor is retrieved
- flavor = self.client.show_flavor(new_flavor_id)['flavor']
+ flavor = self.admin_flavors_client.show_flavor(new_flavor_id)['flavor']
self.assertEqual(flavor['name'], flavor_name)
verify_flavor_response_extension(flavor)
# Check if flavor is present in list
- flag = False
- flavors = self.user_client.list_flavors(detail=True)['flavors']
- for flavor in flavors:
- if flavor['name'] == flavor_name:
- verify_flavor_response_extension(flavor)
- flag = True
- self.assertTrue(flag)
+ flavors_list = [
+ f for f in self.flavors_client.list_flavors(detail=True)['flavors']
+ if f['name'] == flavor_name
+ ]
+ self.assertNotEmpty(flavors_list)
+ verify_flavor_response_extension(flavors_list[0])
@decorators.idempotent_id('be6cc18c-7c5d-48c0-ac16-17eaf03c54eb')
def test_list_non_public_flavor(self):
@@ -177,44 +139,27 @@
# tenant is not automatically added access list.
# This operation requires the user to have 'admin' role
flavor_name = data_utils.rand_name(self.flavor_name_prefix)
- new_flavor_id = data_utils.rand_int_id(start=1000)
# Create the flavor
- flavor = self.client.create_flavor(name=flavor_name,
- ram=self.ram, vcpus=self.vcpus,
- disk=self.disk,
- id=new_flavor_id,
- is_public="False")['flavor']
- self.addCleanup(self.flavor_clean_up, flavor['id'])
- # Verify flavor is retrieved
- flag = False
- flavors = self.client.list_flavors(detail=True)['flavors']
- for flavor in flavors:
- if flavor['name'] == flavor_name:
- flag = True
- self.assertFalse(flag)
+ self.create_flavor(name=flavor_name,
+ ram=self.ram, vcpus=self.vcpus,
+ disk=self.disk,
+ is_public="False")
+ # Verify flavor is not retrieved
+ flavors_list = self.admin_flavors_client.list_flavors(
+ detail=True)['flavors']
+ self.assertNotIn(flavor_name, [f['name'] for f in flavors_list])
# Verify flavor is not retrieved with other user
- flag = False
- flavors = self.user_client.list_flavors(detail=True)['flavors']
- for flavor in flavors:
- if flavor['name'] == flavor_name:
- flag = True
- self.assertFalse(flag)
+ flavors_list = self.flavors_client.list_flavors(detail=True)['flavors']
+ self.assertNotIn(flavor_name, [f['name'] for f in flavors_list])
@decorators.idempotent_id('bcc418ef-799b-47cc-baa1-ce01368b8987')
def test_create_server_with_non_public_flavor(self):
# Create a flavor with os-flavor-access:is_public false
- flavor_name = data_utils.rand_name(self.flavor_name_prefix)
- new_flavor_id = data_utils.rand_int_id(start=1000)
-
- # Create the flavor
- flavor = self.client.create_flavor(name=flavor_name,
- ram=self.ram, vcpus=self.vcpus,
- disk=self.disk,
- id=new_flavor_id,
- is_public="False")['flavor']
- self.addCleanup(self.flavor_clean_up, flavor['id'])
+ flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+ disk=self.disk,
+ is_public="False")
# Verify flavor is not used by other user
self.assertRaises(lib_exc.BadRequest,
@@ -227,60 +172,40 @@
# Create a Flavor with public access.
# Try to List/Get flavor with another user
flavor_name = data_utils.rand_name(self.flavor_name_prefix)
- new_flavor_id = data_utils.rand_int_id(start=1000)
# Create the flavor
- flavor = self.client.create_flavor(name=flavor_name,
- ram=self.ram, vcpus=self.vcpus,
- disk=self.disk,
- id=new_flavor_id,
- is_public="True")['flavor']
- self.addCleanup(self.flavor_clean_up, flavor['id'])
- flag = False
- self.new_client = self.flavors_client
+ self.create_flavor(name=flavor_name,
+ ram=self.ram, vcpus=self.vcpus,
+ disk=self.disk,
+ is_public="True")
# Verify flavor is retrieved with new user
- flavors = self.new_client.list_flavors(detail=True)['flavors']
- for flavor in flavors:
- if flavor['name'] == flavor_name:
- flag = True
- self.assertTrue(flag)
+ flavors_list = self.flavors_client.list_flavors(detail=True)['flavors']
+ self.assertIn(flavor_name, [f['name'] for f in flavors_list])
@decorators.idempotent_id('fb9cbde6-3a0e-41f2-a983-bdb0a823c44e')
def test_is_public_string_variations(self):
- flavor_id_not_public = data_utils.rand_int_id(start=1000)
flavor_name_not_public = data_utils.rand_name(self.flavor_name_prefix)
- flavor_id_public = data_utils.rand_int_id(start=1000)
flavor_name_public = data_utils.rand_name(self.flavor_name_prefix)
# Create a non public flavor
- flavor = self.client.create_flavor(name=flavor_name_not_public,
- ram=self.ram, vcpus=self.vcpus,
- disk=self.disk,
- id=flavor_id_not_public,
- is_public="False")['flavor']
- self.addCleanup(self.flavor_clean_up, flavor['id'])
+ self.create_flavor(name=flavor_name_not_public,
+ ram=self.ram, vcpus=self.vcpus,
+ disk=self.disk,
+ is_public="False")
# Create a public flavor
- flavor = self.client.create_flavor(name=flavor_name_public,
- ram=self.ram, vcpus=self.vcpus,
- disk=self.disk,
- id=flavor_id_public,
- is_public="True")['flavor']
- self.addCleanup(self.flavor_clean_up, flavor['id'])
-
- def _flavor_lookup(flavors, flavor_name):
- for flavor in flavors:
- if flavor['name'] == flavor_name:
- return flavor
- return None
+ self.create_flavor(name=flavor_name_public,
+ ram=self.ram, vcpus=self.vcpus,
+ disk=self.disk,
+ is_public="True")
def _test_string_variations(variations, flavor_name):
for string in variations:
params = {'is_public': string}
- flavors = (self.client.list_flavors(detail=True, **params)
+ flavors = (self.admin_flavors_client.list_flavors(detail=True,
+ **params)
['flavors'])
- flavor = _flavor_lookup(flavors, flavor_name)
- self.assertIsNotNone(flavor)
+ self.assertIn(flavor_name, [f['name'] for f in flavors])
_test_string_variations(['f', 'false', 'no', '0'],
flavor_name_not_public)
@@ -290,17 +215,11 @@
@decorators.idempotent_id('3b541a2e-2ac2-4b42-8b8d-ba6e22fcd4da')
def test_create_flavor_using_string_ram(self):
- flavor_name = data_utils.rand_name(self.flavor_name_prefix)
new_flavor_id = data_utils.rand_int_id(start=1000)
ram = "1024"
- flavor = self.client.create_flavor(name=flavor_name,
- ram=ram, vcpus=self.vcpus,
- disk=self.disk,
- id=new_flavor_id)['flavor']
- self.addCleanup(self.flavor_clean_up, flavor['id'])
- self.assertEqual(flavor['name'], flavor_name)
- self.assertEqual(flavor['vcpus'], self.vcpus)
- self.assertEqual(flavor['disk'], self.disk)
+ flavor = self.create_flavor(ram=ram, vcpus=self.vcpus,
+ disk=self.disk,
+ id=new_flavor_id)
self.assertEqual(flavor['ram'], int(ram))
self.assertEqual(int(flavor['id']), new_flavor_id)
diff --git a/tempest/api/compute/admin/test_flavors_access.py b/tempest/api/compute/admin/test_flavors_access.py
index 38ff4c0..5a38acc 100644
--- a/tempest/api/compute/admin/test_flavors_access.py
+++ b/tempest/api/compute/admin/test_flavors_access.py
@@ -14,7 +14,6 @@
# under the License.
from tempest.api.compute import base
-from tempest.common.utils import data_utils
from tempest.lib import decorators
from tempest import test
@@ -33,17 +32,11 @@
raise cls.skipException(msg)
@classmethod
- def setup_clients(cls):
- super(FlavorsAccessTestJSON, cls).setup_clients()
- cls.client = cls.os_adm.flavors_client
-
- @classmethod
def resource_setup(cls):
super(FlavorsAccessTestJSON, cls).resource_setup()
# Non admin tenant ID
cls.tenant_id = cls.flavors_client.tenant_id
- cls.flavor_name_prefix = 'test_flavor_access_'
cls.ram = 512
cls.vcpus = 1
cls.disk = 10
@@ -52,49 +45,37 @@
def test_flavor_access_list_with_private_flavor(self):
# Test to make sure that list flavor access on a newly created
# private flavor will return an empty access list
- flavor_name = data_utils.rand_name(self.flavor_name_prefix)
- new_flavor_id = data_utils.rand_int_id(start=1000)
- new_flavor = self.client.create_flavor(name=flavor_name,
- ram=self.ram, vcpus=self.vcpus,
- disk=self.disk,
- id=new_flavor_id,
- is_public='False')['flavor']
- self.addCleanup(self.client.delete_flavor, new_flavor['id'])
- flavor_access = (self.client.list_flavor_access(new_flavor_id)
- ['flavor_access'])
+ flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+ disk=self.disk, is_public='False')
+
+ flavor_access = (self.admin_flavors_client.list_flavor_access(
+ flavor['id'])['flavor_access'])
self.assertEqual(len(flavor_access), 0, str(flavor_access))
@decorators.idempotent_id('59e622f6-bdf6-45e3-8ba8-fedad905a6b4')
def test_flavor_access_add_remove(self):
# Test to add and remove flavor access to a given tenant.
- flavor_name = data_utils.rand_name(self.flavor_name_prefix)
- new_flavor_id = data_utils.rand_int_id(start=1000)
- new_flavor = self.client.create_flavor(name=flavor_name,
- ram=self.ram, vcpus=self.vcpus,
- disk=self.disk,
- id=new_flavor_id,
- is_public='False')['flavor']
- self.addCleanup(self.client.delete_flavor, new_flavor['id'])
+ flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+ disk=self.disk, is_public='False')
+
# Add flavor access to a tenant.
resp_body = {
"tenant_id": str(self.tenant_id),
- "flavor_id": str(new_flavor['id']),
+ "flavor_id": str(flavor['id']),
}
- add_body = (self.client.add_flavor_access(new_flavor['id'],
- self.tenant_id)
- ['flavor_access'])
+ add_body = (self.admin_flavors_client.add_flavor_access(
+ flavor['id'], self.tenant_id)['flavor_access'])
self.assertIn(resp_body, add_body)
# The flavor is present in list.
flavors = self.flavors_client.list_flavors(detail=True)['flavors']
- self.assertIn(new_flavor['id'], map(lambda x: x['id'], flavors))
+ self.assertIn(flavor['id'], map(lambda x: x['id'], flavors))
# Remove flavor access from a tenant.
- remove_body = (self.client.remove_flavor_access(new_flavor['id'],
- self.tenant_id)
- ['flavor_access'])
+ remove_body = (self.admin_flavors_client.remove_flavor_access(
+ flavor['id'], self.tenant_id)['flavor_access'])
self.assertNotIn(resp_body, remove_body)
# The flavor is not present in list.
flavors = self.flavors_client.list_flavors(detail=True)['flavors']
- self.assertNotIn(new_flavor['id'], map(lambda x: x['id'], flavors))
+ self.assertNotIn(flavor['id'], map(lambda x: x['id'], flavors))
diff --git a/tempest/api/compute/admin/test_flavors_access_negative.py b/tempest/api/compute/admin/test_flavors_access_negative.py
index 2719cc4..12e4587 100644
--- a/tempest/api/compute/admin/test_flavors_access_negative.py
+++ b/tempest/api/compute/admin/test_flavors_access_negative.py
@@ -14,7 +14,6 @@
# under the License.
from tempest.api.compute import base
-from tempest.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
from tempest import test
@@ -26,6 +25,8 @@
Add and remove Flavor Access require admin privileges.
"""
+ credentials = ['primary', 'admin', 'alt']
+
@classmethod
def skip_checks(cls):
super(FlavorsAccessNegativeTestJSON, cls).skip_checks()
@@ -34,16 +35,10 @@
raise cls.skipException(msg)
@classmethod
- def setup_clients(cls):
- super(FlavorsAccessNegativeTestJSON, cls).setup_clients()
- cls.client = cls.os_adm.flavors_client
-
- @classmethod
def resource_setup(cls):
super(FlavorsAccessNegativeTestJSON, cls).resource_setup()
cls.tenant_id = cls.flavors_client.tenant_id
- cls.flavor_name_prefix = 'test_flavor_access_'
cls.ram = 512
cls.vcpus = 1
cls.disk = 10
@@ -52,96 +47,69 @@
@decorators.idempotent_id('0621c53e-d45d-40e7-951d-43e5e257b272')
def test_flavor_access_list_with_public_flavor(self):
# Test to list flavor access with exceptions by querying public flavor
- flavor_name = data_utils.rand_name(self.flavor_name_prefix)
- new_flavor_id = data_utils.rand_int_id(start=1000)
- new_flavor = self.client.create_flavor(name=flavor_name,
- ram=self.ram, vcpus=self.vcpus,
- disk=self.disk,
- id=new_flavor_id,
- is_public='True')['flavor']
- self.addCleanup(self.client.delete_flavor, new_flavor['id'])
+ flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+ disk=self.disk, is_public='True')
self.assertRaises(lib_exc.NotFound,
- self.client.list_flavor_access,
- new_flavor_id)
+ self.admin_flavors_client.list_flavor_access,
+ flavor['id'])
@test.attr(type=['negative'])
@decorators.idempotent_id('41eaaade-6d37-4f28-9c74-f21b46ca67bd')
def test_flavor_non_admin_add(self):
# Test to add flavor access as a user without admin privileges.
- flavor_name = data_utils.rand_name(self.flavor_name_prefix)
- new_flavor_id = data_utils.rand_int_id(start=1000)
- new_flavor = self.client.create_flavor(name=flavor_name,
- ram=self.ram, vcpus=self.vcpus,
- disk=self.disk,
- id=new_flavor_id,
- is_public='False')['flavor']
- self.addCleanup(self.client.delete_flavor, new_flavor['id'])
+ flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+ disk=self.disk, is_public='False')
self.assertRaises(lib_exc.Forbidden,
self.flavors_client.add_flavor_access,
- new_flavor['id'],
+ flavor['id'],
self.tenant_id)
@test.attr(type=['negative'])
@decorators.idempotent_id('073e79a6-c311-4525-82dc-6083d919cb3a')
def test_flavor_non_admin_remove(self):
# Test to remove flavor access as a user without admin privileges.
- flavor_name = data_utils.rand_name(self.flavor_name_prefix)
- new_flavor_id = data_utils.rand_int_id(start=1000)
- new_flavor = self.client.create_flavor(name=flavor_name,
- ram=self.ram, vcpus=self.vcpus,
- disk=self.disk,
- id=new_flavor_id,
- is_public='False')['flavor']
- self.addCleanup(self.client.delete_flavor, new_flavor['id'])
+ flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+ disk=self.disk, is_public='False')
+
# Add flavor access to a tenant.
- self.client.add_flavor_access(new_flavor['id'], self.tenant_id)
- self.addCleanup(self.client.remove_flavor_access,
- new_flavor['id'], self.tenant_id)
+ self.admin_flavors_client.add_flavor_access(flavor['id'],
+ self.tenant_id)
+ self.addCleanup(self.admin_flavors_client.remove_flavor_access,
+ flavor['id'], self.tenant_id)
self.assertRaises(lib_exc.Forbidden,
self.flavors_client.remove_flavor_access,
- new_flavor['id'],
+ flavor['id'],
self.tenant_id)
@test.attr(type=['negative'])
@decorators.idempotent_id('f3592cc0-0306-483c-b210-9a7b5346eddc')
def test_add_flavor_access_duplicate(self):
# Create a new flavor.
- flavor_name = data_utils.rand_name(self.flavor_name_prefix)
- new_flavor_id = data_utils.rand_int_id(start=1000)
- new_flavor = self.client.create_flavor(name=flavor_name,
- ram=self.ram, vcpus=self.vcpus,
- disk=self.disk,
- id=new_flavor_id,
- is_public='False')['flavor']
- self.addCleanup(self.client.delete_flavor, new_flavor['id'])
+ flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+ disk=self.disk, is_public='False')
# Add flavor access to a tenant.
- self.client.add_flavor_access(new_flavor['id'], self.tenant_id)
- self.addCleanup(self.client.remove_flavor_access,
- new_flavor['id'], self.tenant_id)
+ self.admin_flavors_client.add_flavor_access(flavor['id'],
+ self.tenant_id)
+ self.addCleanup(self.admin_flavors_client.remove_flavor_access,
+ flavor['id'], self.tenant_id)
# An exception should be raised when adding flavor access to the same
# tenant
self.assertRaises(lib_exc.Conflict,
- self.client.add_flavor_access,
- new_flavor['id'],
+ self.admin_flavors_client.add_flavor_access,
+ flavor['id'],
self.tenant_id)
@test.attr(type=['negative'])
@decorators.idempotent_id('1f710927-3bc7-4381-9f82-0ca6e42644b7')
def test_remove_flavor_access_not_found(self):
# Create a new flavor.
- flavor_name = data_utils.rand_name(self.flavor_name_prefix)
- new_flavor_id = data_utils.rand_int_id(start=1000)
- new_flavor = self.client.create_flavor(name=flavor_name,
- ram=self.ram, vcpus=self.vcpus,
- disk=self.disk,
- id=new_flavor_id,
- is_public='False')['flavor']
- self.addCleanup(self.client.delete_flavor, new_flavor['id'])
+ flavor = self.create_flavor(ram=self.ram, vcpus=self.vcpus,
+ disk=self.disk, is_public='False')
# An exception should be raised when flavor access is not found
self.assertRaises(lib_exc.NotFound,
- self.client.remove_flavor_access,
- new_flavor['id'],
- data_utils.rand_uuid())
+ self.admin_flavors_client.remove_flavor_access,
+ flavor['id'],
+ self.os_alt.servers_client.tenant_id)
diff --git a/tempest/api/compute/admin/test_flavors_extra_specs.py b/tempest/api/compute/admin/test_flavors_extra_specs.py
index 70e662c..ee1e3a0 100644
--- a/tempest/api/compute/admin/test_flavors_extra_specs.py
+++ b/tempest/api/compute/admin/test_flavors_extra_specs.py
@@ -34,11 +34,6 @@
raise cls.skipException(msg)
@classmethod
- def setup_clients(cls):
- super(FlavorsExtraSpecsTestJSON, cls).setup_clients()
- cls.client = cls.os_adm.flavors_client
-
- @classmethod
def resource_setup(cls):
super(FlavorsExtraSpecsTestJSON, cls).resource_setup()
flavor_name = data_utils.rand_name('test_flavor')
@@ -46,22 +41,23 @@
vcpus = 1
disk = 10
ephemeral = 10
- cls.new_flavor_id = data_utils.rand_int_id(start=1000)
+ new_flavor_id = data_utils.rand_int_id(start=1000)
swap = 1024
rxtx = 1
# Create a flavor so as to set/get/unset extra specs
- cls.flavor = cls.client.create_flavor(name=flavor_name,
- ram=ram, vcpus=vcpus,
- disk=disk,
- id=cls.new_flavor_id,
- ephemeral=ephemeral,
- swap=swap,
- rxtx_factor=rxtx)['flavor']
+ cls.flavor = cls.admin_flavors_client.create_flavor(
+ name=flavor_name,
+ ram=ram, vcpus=vcpus,
+ disk=disk,
+ id=new_flavor_id,
+ ephemeral=ephemeral,
+ swap=swap,
+ rxtx_factor=rxtx)['flavor']
@classmethod
def resource_cleanup(cls):
- cls.client.delete_flavor(cls.flavor['id'])
- cls.client.wait_for_resource_deletion(cls.flavor['id'])
+ cls.admin_flavors_client.delete_flavor(cls.flavor['id'])
+ cls.admin_flavors_client.wait_for_resource_deletion(cls.flavor['id'])
super(FlavorsExtraSpecsTestJSON, cls).resource_cleanup()
@decorators.idempotent_id('0b2f9d4b-1ca2-4b99-bb40-165d4bb94208')
@@ -71,46 +67,47 @@
# Assigning extra specs values that are to be set
specs = {"key1": "value1", "key2": "value2"}
# SET extra specs to the flavor created in setUp
- set_body = self.client.set_flavor_extra_spec(self.flavor['id'],
- **specs)['extra_specs']
+ set_body = self.admin_flavors_client.set_flavor_extra_spec(
+ self.flavor['id'], **specs)['extra_specs']
self.assertEqual(set_body, specs)
# GET extra specs and verify
- get_body = (self.client.list_flavor_extra_specs(self.flavor['id'])
- ['extra_specs'])
+ get_body = (self.admin_flavors_client.list_flavor_extra_specs(
+ self.flavor['id'])['extra_specs'])
self.assertEqual(get_body, specs)
# UPDATE the value of the extra specs key1
update_body = \
- self.client.update_flavor_extra_spec(self.flavor['id'],
- "key1",
- key1="value")
+ self.admin_flavors_client.update_flavor_extra_spec(
+ self.flavor['id'], "key1", key1="value")
self.assertEqual({"key1": "value"}, update_body)
# GET extra specs and verify the value of the key2
# is the same as before
- get_body = (self.client.list_flavor_extra_specs(self.flavor['id'])
- ['extra_specs'])
+ get_body = (self.admin_flavors_client.list_flavor_extra_specs(
+ self.flavor['id'])['extra_specs'])
self.assertEqual(get_body, {"key1": "value", "key2": "value2"})
# UNSET extra specs that were set in this test
- self.client.unset_flavor_extra_spec(self.flavor['id'], "key1")
- self.client.unset_flavor_extra_spec(self.flavor['id'], "key2")
+ self.admin_flavors_client.unset_flavor_extra_spec(self.flavor['id'],
+ "key1")
+ self.admin_flavors_client.unset_flavor_extra_spec(self.flavor['id'],
+ "key2")
@decorators.idempotent_id('a99dad88-ae1c-4fba-aeb4-32f898218bd0')
def test_flavor_non_admin_get_all_keys(self):
specs = {"key1": "value1", "key2": "value2"}
- self.client.set_flavor_extra_spec(self.flavor['id'], **specs)
- body = (self.flavors_client.list_flavor_extra_specs(self.flavor['id'])
- ['extra_specs'])
+ self.admin_flavors_client.set_flavor_extra_spec(self.flavor['id'],
+ **specs)
+ body = (self.flavors_client.list_flavor_extra_specs(
+ self.flavor['id'])['extra_specs'])
for key in specs:
self.assertEqual(body[key], specs[key])
@decorators.idempotent_id('12805a7f-39a3-4042-b989-701d5cad9c90')
def test_flavor_non_admin_get_specific_key(self):
- body = self.client.set_flavor_extra_spec(self.flavor['id'],
- key1="value1",
- key2="value2")['extra_specs']
+ body = self.admin_flavors_client.set_flavor_extra_spec(
+ self.flavor['id'], key1="value1", key2="value2")['extra_specs']
self.assertEqual(body['key1'], 'value1')
self.assertIn('key2', body)
body = self.flavors_client.show_flavor_extra_spec(
diff --git a/tempest/api/compute/admin/test_flavors_extra_specs_negative.py b/tempest/api/compute/admin/test_flavors_extra_specs_negative.py
index 767a1ca..dab83e5 100644
--- a/tempest/api/compute/admin/test_flavors_extra_specs_negative.py
+++ b/tempest/api/compute/admin/test_flavors_extra_specs_negative.py
@@ -35,11 +35,6 @@
raise cls.skipException(msg)
@classmethod
- def setup_clients(cls):
- super(FlavorsExtraSpecsNegativeTestJSON, cls).setup_clients()
- cls.client = cls.os_adm.flavors_client
-
- @classmethod
def resource_setup(cls):
super(FlavorsExtraSpecsNegativeTestJSON, cls).resource_setup()
@@ -48,22 +43,23 @@
vcpus = 1
disk = 10
ephemeral = 10
- cls.new_flavor_id = data_utils.rand_int_id(start=1000)
+ new_flavor_id = data_utils.rand_int_id(start=1000)
swap = 1024
rxtx = 1
# Create a flavor
- cls.flavor = cls.client.create_flavor(name=flavor_name,
- ram=ram, vcpus=vcpus,
- disk=disk,
- id=cls.new_flavor_id,
- ephemeral=ephemeral,
- swap=swap,
- rxtx_factor=rxtx)['flavor']
+ cls.flavor = cls.admin_flavors_client.create_flavor(
+ name=flavor_name,
+ ram=ram, vcpus=vcpus,
+ disk=disk,
+ id=new_flavor_id,
+ ephemeral=ephemeral,
+ swap=swap,
+ rxtx_factor=rxtx)['flavor']
@classmethod
def resource_cleanup(cls):
- cls.client.delete_flavor(cls.flavor['id'])
- cls.client.wait_for_resource_deletion(cls.flavor['id'])
+ cls.admin_flavors_client.delete_flavor(cls.flavor['id'])
+ cls.admin_flavors_client.wait_for_resource_deletion(cls.flavor['id'])
super(FlavorsExtraSpecsNegativeTestJSON, cls).resource_cleanup()
@test.attr(type=['negative'])
@@ -79,7 +75,7 @@
@decorators.idempotent_id('1ebf4ef8-759e-48fe-a801-d451d80476fb')
def test_flavor_non_admin_update_specific_key(self):
# non admin user is not allowed to update flavor extra spec
- body = self.client.set_flavor_extra_spec(
+ body = self.admin_flavors_client.set_flavor_extra_spec(
self.flavor['id'], key1="value1", key2="value2")['extra_specs']
self.assertEqual(body['key1'], 'value1')
self.assertRaises(lib_exc.Forbidden,
@@ -92,8 +88,8 @@
@test.attr(type=['negative'])
@decorators.idempotent_id('28f12249-27c7-44c1-8810-1f382f316b11')
def test_flavor_non_admin_unset_keys(self):
- self.client.set_flavor_extra_spec(self.flavor['id'],
- key1="value1", key2="value2")
+ self.admin_flavors_client.set_flavor_extra_spec(
+ self.flavor['id'], key1="value1", key2="value2")
self.assertRaises(lib_exc.Forbidden,
self.flavors_client.unset_flavor_extra_spec,
@@ -104,7 +100,7 @@
@decorators.idempotent_id('440b9f3f-3c7f-4293-a106-0ceda350f8de')
def test_flavor_unset_nonexistent_key(self):
self.assertRaises(lib_exc.NotFound,
- self.client.unset_flavor_extra_spec,
+ self.admin_flavors_client.unset_flavor_extra_spec,
self.flavor['id'],
'nonexistent_key')
@@ -121,7 +117,7 @@
def test_flavor_update_mismatch_key(self):
# the key will be updated should be match the key in the body
self.assertRaises(lib_exc.BadRequest,
- self.client.update_flavor_extra_spec,
+ self.admin_flavors_client.update_flavor_extra_spec,
self.flavor['id'],
"key2",
key1="value")
@@ -131,7 +127,7 @@
def test_flavor_update_more_key(self):
# there should be just one item in the request body
self.assertRaises(lib_exc.BadRequest,
- self.client.update_flavor_extra_spec,
+ self.admin_flavors_client.update_flavor_extra_spec,
self.flavor['id'],
"key1",
key1="value",
diff --git a/tempest/api/compute/admin/test_live_migration.py b/tempest/api/compute/admin/test_live_migration.py
index 39797f7..3ffd238 100644
--- a/tempest/api/compute/admin/test_live_migration.py
+++ b/tempest/api/compute/admin/test_live_migration.py
@@ -45,7 +45,6 @@
def setup_clients(cls):
super(LiveBlockMigrationTestJSON, cls).setup_clients()
cls.admin_hosts_client = cls.os_adm.hosts_client
- cls.admin_servers_client = cls.os_adm.servers_client
cls.admin_migration_client = cls.os_adm.migrations_client
@classmethod
diff --git a/tempest/api/compute/admin/test_migrations.py b/tempest/api/compute/admin/test_migrations.py
index 21f5c68..18655cb 100644
--- a/tempest/api/compute/admin/test_migrations.py
+++ b/tempest/api/compute/admin/test_migrations.py
@@ -30,8 +30,6 @@
def setup_clients(cls):
super(MigrationsAdminTest, cls).setup_clients()
cls.client = cls.os_adm.migrations_client
- cls.flavors_admin_client = cls.os_adm.flavors_client
- cls.admin_servers_client = cls.os_adm.servers_client
@decorators.idempotent_id('75c0b83d-72a0-4cf8-a153-631e83e7d53f')
def test_list_migrations(self):
@@ -55,8 +53,8 @@
def _flavor_clean_up(self, flavor_id):
try:
- self.flavors_admin_client.delete_flavor(flavor_id)
- self.flavors_admin_client.wait_for_resource_deletion(flavor_id)
+ self.admin_flavors_client.delete_flavor(flavor_id)
+ self.admin_flavors_client.wait_for_resource_deletion(flavor_id)
except exceptions.NotFound:
pass
@@ -69,9 +67,9 @@
# First we have to create a flavor that we can delete so make a copy
# of the normal flavor from which we'd create a server.
- flavor = self.flavors_admin_client.show_flavor(
+ flavor = self.admin_flavors_client.show_flavor(
self.flavor_ref)['flavor']
- flavor = self.flavors_admin_client.create_flavor(
+ flavor = self.admin_flavors_client.create_flavor(
name=data_utils.rand_name('test_resize_flavor_'),
ram=flavor['ram'],
disk=flavor['disk'],
diff --git a/tempest/api/compute/admin/test_quotas_negative.py b/tempest/api/compute/admin/test_quotas_negative.py
index 0850205..ca8382f 100644
--- a/tempest/api/compute/admin/test_quotas_negative.py
+++ b/tempest/api/compute/admin/test_quotas_negative.py
@@ -87,6 +87,7 @@
@decorators.skip_because(bug="1186354",
condition=CONF.service_available.neutron)
+ @test.attr(type=['negative'])
@decorators.idempotent_id('7c6c8f3b-2bf6-4918-b240-57b136a66aa0')
@test.services('network')
def test_security_groups_exceed_limit(self):
diff --git a/tempest/api/compute/admin/test_servers_negative.py b/tempest/api/compute/admin/test_servers_negative.py
index 1283629..adb49a5 100644
--- a/tempest/api/compute/admin/test_servers_negative.py
+++ b/tempest/api/compute/admin/test_servers_negative.py
@@ -34,7 +34,6 @@
super(ServersAdminNegativeTestJSON, cls).setup_clients()
cls.client = cls.os_adm.servers_client
cls.non_adm_client = cls.servers_client
- cls.flavors_client = cls.os_adm.flavors_client
cls.quotas_client = cls.os_adm.quotas_client
@classmethod
@@ -42,21 +41,9 @@
super(ServersAdminNegativeTestJSON, cls).resource_setup()
cls.tenant_id = cls.client.tenant_id
- cls.s1_name = data_utils.rand_name(cls.__name__ + '-server')
- server = cls.create_test_server(name=cls.s1_name,
- wait_until='ACTIVE')
+ server = cls.create_test_server(wait_until='ACTIVE')
cls.s1_id = server['id']
- def _get_unused_flavor_id(self):
- flavor_id = data_utils.rand_int_id(start=1000)
- while True:
- try:
- self.flavors_client.show_flavor(flavor_id)
- except lib_exc.NotFound:
- break
- flavor_id = data_utils.rand_int_id(start=1000)
- return flavor_id
-
@decorators.idempotent_id('28dcec23-f807-49da-822c-56a92ea3c687')
@testtools.skipUnless(CONF.compute_feature_enabled.resize,
'Resize not available.')
@@ -64,22 +51,16 @@
def test_resize_server_using_overlimit_ram(self):
# NOTE(mriedem): Avoid conflicts with os-quota-class-sets tests.
self.useFixture(fixtures.LockFixture('compute_quotas'))
- flavor_name = data_utils.rand_name("flavor")
- flavor_id = self._get_unused_flavor_id()
quota_set = self.quotas_client.show_quota_set(
self.tenant_id)['quota_set']
- ram = int(quota_set['ram'])
+ ram = quota_set['ram']
if ram == -1:
raise self.skipException("ram quota set is -1,"
" cannot test overlimit")
ram += 1
vcpus = 1
disk = 5
- flavor_ref = self.flavors_client.create_flavor(name=flavor_name,
- ram=ram, vcpus=vcpus,
- disk=disk,
- id=flavor_id)['flavor']
- self.addCleanup(self.flavors_client.delete_flavor, flavor_id)
+ flavor_ref = self.create_flavor(ram=ram, vcpus=vcpus, disk=disk)
self.assertRaises((lib_exc.Forbidden, lib_exc.OverLimit),
self.client.resize_server,
self.servers[0]['id'],
@@ -92,22 +73,16 @@
def test_resize_server_using_overlimit_vcpus(self):
# NOTE(mriedem): Avoid conflicts with os-quota-class-sets tests.
self.useFixture(fixtures.LockFixture('compute_quotas'))
- flavor_name = data_utils.rand_name("flavor")
- flavor_id = self._get_unused_flavor_id()
quota_set = self.quotas_client.show_quota_set(
self.tenant_id)['quota_set']
- vcpus = int(quota_set['cores'])
+ vcpus = quota_set['cores']
if vcpus == -1:
raise self.skipException("cores quota set is -1,"
" cannot test overlimit")
vcpus += 1
ram = 512
disk = 5
- flavor_ref = self.flavors_client.create_flavor(name=flavor_name,
- ram=ram, vcpus=vcpus,
- disk=disk,
- id=flavor_id)['flavor']
- self.addCleanup(self.flavors_client.delete_flavor, flavor_id)
+ flavor_ref = self.create_flavor(ram=ram, vcpus=vcpus, disk=disk)
self.assertRaises((lib_exc.Forbidden, lib_exc.OverLimit),
self.client.resize_server,
self.servers[0]['id'],
diff --git a/tempest/api/compute/admin/test_volume_swap.py b/tempest/api/compute/admin/test_volume_swap.py
index 5f2444a..45472df 100644
--- a/tempest/api/compute/admin/test_volume_swap.py
+++ b/tempest/api/compute/admin/test_volume_swap.py
@@ -38,12 +38,6 @@
if not CONF.compute_feature_enabled.swap_volume:
raise cls.skipException("Swapping volumes is not supported.")
- @classmethod
- def setup_clients(cls):
- super(TestVolumeSwap, cls).setup_clients()
- # We need the admin client for performing the update (swap) volume call
- cls.servers_admin_client = cls.os_adm.servers_client
-
@decorators.idempotent_id('1769f00d-a693-4d67-a631-6a3496773813')
@test.services('volume')
def test_volume_swap(self):
@@ -58,12 +52,12 @@
# Attach "volume1" to server
self.attach_volume(server, volume1)
# Swap volume from "volume1" to "volume2"
- self.servers_admin_client.update_attached_volume(
+ self.admin_servers_client.update_attached_volume(
server['id'], volume1['id'], volumeId=volume2['id'])
- waiters.wait_for_volume_status(self.volumes_client,
- volume1['id'], 'available')
- waiters.wait_for_volume_status(self.volumes_client,
- volume2['id'], 'in-use')
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume1['id'], 'available')
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume2['id'], 'in-use')
self.addCleanup(self.servers_client.detach_volume,
server['id'], volume2['id'])
# Verify "volume2" is attached to the server
diff --git a/tempest/api/compute/admin/test_volumes_negative.py b/tempest/api/compute/admin/test_volumes_negative.py
index 1f85c18..905bc3d 100644
--- a/tempest/api/compute/admin/test_volumes_negative.py
+++ b/tempest/api/compute/admin/test_volumes_negative.py
@@ -32,25 +32,22 @@
raise cls.skipException(skip_msg)
@classmethod
- def setup_clients(cls):
- super(VolumesAdminNegativeTest, cls).setup_clients()
- cls.servers_admin_client = cls.os_adm.servers_client
-
- @classmethod
def resource_setup(cls):
super(VolumesAdminNegativeTest, cls).resource_setup()
cls.server = cls.create_test_server(wait_until='ACTIVE')
+ @test.attr(type=['negative'])
@decorators.idempotent_id('309b5ecd-0585-4a7e-a36f-d2b2bf55259d')
def test_update_attached_volume_with_nonexistent_volume_in_uri(self):
volume = self.create_volume()
nonexistent_volume = data_utils.rand_uuid()
self.assertRaises(lib_exc.NotFound,
- self.servers_admin_client.update_attached_volume,
+ self.admin_servers_client.update_attached_volume,
self.server['id'], nonexistent_volume,
volumeId=volume['id'])
@test.related_bug('1629110', status_code=400)
+ @test.attr(type=['negative'])
@decorators.idempotent_id('7dcac15a-b107-46d3-a5f6-cb863f4e454a')
def test_update_attached_volume_with_nonexistent_volume_in_body(self):
volume = self.create_volume()
@@ -58,6 +55,6 @@
nonexistent_volume = data_utils.rand_uuid()
self.assertRaises(lib_exc.BadRequest,
- self.servers_admin_client.update_attached_volume,
+ self.admin_servers_client.update_attached_volume,
self.server['id'], volume['id'],
volumeId=nonexistent_volume)
diff --git a/tempest/api/compute/base.py b/tempest/api/compute/base.py
index c3c5460..55cc293 100644
--- a/tempest/api/compute/base.py
+++ b/tempest/api/compute/base.py
@@ -326,6 +326,10 @@
raise
image = cls.compute_images_client.show_image(image_id)['image']
+ if kwargs['wait_until'] == 'ACTIVE':
+ if kwargs.get('wait_for_server', True):
+ waiters.wait_for_server_status(cls.servers_client,
+ server_id, 'ACTIVE')
return image
@classmethod
@@ -406,8 +410,8 @@
kwargs['imageRef'] = image_ref
volume = cls.volumes_client.create_volume(**kwargs)['volume']
cls.volumes.append(volume)
- waiters.wait_for_volume_status(cls.volumes_client,
- volume['id'], 'available')
+ waiters.wait_for_volume_resource_status(cls.volumes_client,
+ volume['id'], 'available')
return volume
@classmethod
@@ -441,20 +445,21 @@
attach_kwargs = dict(volumeId=volume['id'])
if device:
attach_kwargs['device'] = device
- self.servers_client.attach_volume(
- server['id'], **attach_kwargs)
+ attachment = self.servers_client.attach_volume(
+ server['id'], **attach_kwargs)['volumeAttachment']
# On teardown detach the volume and wait for it to be available. This
# is so we don't error out when trying to delete the volume during
# teardown.
- self.addCleanup(waiters.wait_for_volume_status,
+ self.addCleanup(waiters.wait_for_volume_resource_status,
self.volumes_client, volume['id'], 'available')
# Ignore 404s on detach in case the server is deleted or the volume
# is already detached.
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
self.servers_client.detach_volume,
server['id'], volume['id'])
- waiters.wait_for_volume_status(self.volumes_client,
- volume['id'], 'in-use')
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'in-use')
+ return attachment
class BaseV2ComputeAdminTest(BaseV2ComputeTest):
@@ -467,3 +472,18 @@
super(BaseV2ComputeAdminTest, cls).setup_clients()
cls.availability_zone_admin_client = (
cls.os_adm.availability_zone_client)
+ cls.admin_flavors_client = cls.os_adm.flavors_client
+ cls.admin_servers_client = cls.os_adm.servers_client
+
+ def create_flavor(self, ram, vcpus, disk, name=None,
+ is_public='True', **kwargs):
+ if name is None:
+ name = data_utils.rand_name(self.__class__.__name__ + "-flavor")
+ id = kwargs.pop('id', data_utils.rand_int_id(start=1000))
+ client = self.admin_flavors_client
+ flavor = client.create_flavor(
+ ram=ram, vcpus=vcpus, disk=disk, name=name,
+ id=id, is_public=is_public, **kwargs)['flavor']
+ self.addCleanup(client.wait_for_resource_deletion, flavor['id'])
+ self.addCleanup(client.delete_flavor, flavor['id'])
+ return flavor
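
The new ``create_flavor`` helper above generates a flavor name and id when none are supplied and registers its own cleanups; because ``addCleanup`` runs last-in-first-out, the flavor is deleted first and the deletion is then waited on. A hypothetical caller (the class name and sizes are arbitrary)::

    from tempest.api.compute import base


    class FlavorSketch(base.BaseV2ComputeAdminTest):

        def test_create_small_flavor(self):
            # No explicit cleanup needed; the helper schedules delete_flavor
            # and wait_for_resource_deletion itself.
            flavor = self.create_flavor(ram=128, vcpus=1, disk=1)
            self.assertEqual(128, flavor['ram'])
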
diff --git a/tempest/api/compute/flavors/test_flavors.py b/tempest/api/compute/flavors/test_flavors.py
index 546667f..89051c1 100644
--- a/tempest/api/compute/flavors/test_flavors.py
+++ b/tempest/api/compute/flavors/test_flavors.py
@@ -22,17 +22,12 @@
_min_disk = 'minDisk'
_min_ram = 'minRam'
- @classmethod
- def setup_clients(cls):
- super(FlavorsV2TestJSON, cls).setup_clients()
- cls.client = cls.flavors_client
-
@test.attr(type='smoke')
@decorators.idempotent_id('e36c0eaa-dff5-4082-ad1f-3f9a80aa3f59')
def test_list_flavors(self):
# List of all flavors should contain the expected flavor
- flavors = self.client.list_flavors()['flavors']
- flavor = self.client.show_flavor(self.flavor_ref)['flavor']
+ flavors = self.flavors_client.list_flavors()['flavors']
+ flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
flavor_min_detail = {'id': flavor['id'], 'links': flavor['links'],
'name': flavor['name']}
self.assertIn(flavor_min_detail, flavors)
@@ -40,89 +35,93 @@
@decorators.idempotent_id('6e85fde4-b3cd-4137-ab72-ed5f418e8c24')
def test_list_flavors_with_detail(self):
# Detailed list of all flavors should contain the expected flavor
- flavors = self.client.list_flavors(detail=True)['flavors']
- flavor = self.client.show_flavor(self.flavor_ref)['flavor']
+ flavors = self.flavors_client.list_flavors(detail=True)['flavors']
+ flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
self.assertIn(flavor, flavors)
@test.attr(type='smoke')
@decorators.idempotent_id('1f12046b-753d-40d2-abb6-d8eb8b30cb2f')
def test_get_flavor(self):
# The expected flavor details should be returned
- flavor = self.client.show_flavor(self.flavor_ref)['flavor']
+ flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
self.assertEqual(self.flavor_ref, flavor['id'])
@decorators.idempotent_id('8d7691b3-6ed4-411a-abc9-2839a765adab')
def test_list_flavors_limit_results(self):
# Only the expected number of flavors should be returned
params = {'limit': 1}
- flavors = self.client.list_flavors(**params)['flavors']
+ flavors = self.flavors_client.list_flavors(**params)['flavors']
self.assertEqual(1, len(flavors))
@decorators.idempotent_id('b26f6327-2886-467a-82be-cef7a27709cb')
def test_list_flavors_detailed_limit_results(self):
# Only the expected number of flavors (detailed) should be returned
params = {'limit': 1}
- flavors = self.client.list_flavors(detail=True, **params)['flavors']
+ flavors = self.flavors_client.list_flavors(detail=True,
+ **params)['flavors']
self.assertEqual(1, len(flavors))
@decorators.idempotent_id('e800f879-9828-4bd0-8eae-4f17189951fb')
def test_list_flavors_using_marker(self):
# The list of flavors should start from the provided marker
- flavor = self.client.show_flavor(self.flavor_ref)['flavor']
+ flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
params = {'marker': flavor_id}
- flavors = self.client.list_flavors(**params)['flavors']
+ flavors = self.flavors_client.list_flavors(**params)['flavors']
self.assertFalse(any([i for i in flavors if i['id'] == flavor_id]),
'The list of flavors did not start after the marker.')
@decorators.idempotent_id('6db2f0c0-ddee-4162-9c84-0703d3dd1107')
def test_list_flavors_detailed_using_marker(self):
# The list of flavors should start from the provided marker
- flavor = self.client.show_flavor(self.flavor_ref)['flavor']
+ flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
params = {'marker': flavor_id}
- flavors = self.client.list_flavors(detail=True, **params)['flavors']
+ flavors = self.flavors_client.list_flavors(detail=True,
+ **params)['flavors']
self.assertFalse(any([i for i in flavors if i['id'] == flavor_id]),
'The list of flavors did not start after the marker.')
@decorators.idempotent_id('3df2743e-3034-4e57-a4cb-b6527f6eac79')
def test_list_flavors_detailed_filter_by_min_disk(self):
# The detailed list of flavors should be filtered by disk space
- flavor = self.client.show_flavor(self.flavor_ref)['flavor']
+ flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
params = {self._min_disk: flavor['disk'] + 1}
- flavors = self.client.list_flavors(detail=True, **params)['flavors']
+ flavors = self.flavors_client.list_flavors(detail=True,
+ **params)['flavors']
self.assertFalse(any([i for i in flavors if i['id'] == flavor_id]))
@decorators.idempotent_id('09fe7509-b4ee-4b34-bf8b-39532dc47292')
def test_list_flavors_detailed_filter_by_min_ram(self):
# The detailed list of flavors should be filtered by RAM
- flavor = self.client.show_flavor(self.flavor_ref)['flavor']
+ flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
params = {self._min_ram: flavor['ram'] + 1}
- flavors = self.client.list_flavors(detail=True, **params)['flavors']
+ flavors = self.flavors_client.list_flavors(detail=True,
+ **params)['flavors']
self.assertFalse(any([i for i in flavors if i['id'] == flavor_id]))
@decorators.idempotent_id('10645a4d-96f5-443f-831b-730711e11dd4')
def test_list_flavors_filter_by_min_disk(self):
# The list of flavors should be filtered by disk space
- flavor = self.client.show_flavor(self.flavor_ref)['flavor']
+ flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
params = {self._min_disk: flavor['disk'] + 1}
- flavors = self.client.list_flavors(**params)['flavors']
+ flavors = self.flavors_client.list_flavors(**params)['flavors']
self.assertFalse(any([i for i in flavors if i['id'] == flavor_id]))
@decorators.idempotent_id('935cf550-e7c8-4da6-8002-00f92d5edfaa')
def test_list_flavors_filter_by_min_ram(self):
# The list of flavors should be filtered by RAM
- flavor = self.client.show_flavor(self.flavor_ref)['flavor']
+ flavor = self.flavors_client.show_flavor(self.flavor_ref)['flavor']
flavor_id = flavor['id']
params = {self._min_ram: flavor['ram'] + 1}
- flavors = self.client.list_flavors(**params)['flavors']
+ flavors = self.flavors_client.list_flavors(**params)['flavors']
self.assertFalse(any([i for i in flavors if i['id'] == flavor_id]))
diff --git a/tempest/api/compute/flavors/test_flavors_negative.py b/tempest/api/compute/flavors/test_flavors_negative.py
new file mode 100644
index 0000000..b313f44
--- /dev/null
+++ b/tempest/api/compute/flavors/test_flavors_negative.py
@@ -0,0 +1,90 @@
+# Copyright 2017 Red Hat, Inc.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import random
+
+import six
+
+from tempest.api.compute import base
+from tempest.common import image as common_image
+from tempest.common.utils import data_utils
+from tempest import config
+from tempest.lib import decorators
+from tempest.lib import exceptions as lib_exc
+from tempest import test
+
+CONF = config.CONF
+
+
+class FlavorsV2NegativeTest(base.BaseV2ComputeTest):
+
+ @classmethod
+ def setup_clients(cls):
+ super(FlavorsV2NegativeTest, cls).setup_clients()
+ if CONF.image_feature_enabled.api_v1:
+ cls.images_client = cls.os.image_client
+ elif CONF.image_feature_enabled.api_v2:
+ cls.images_client = cls.os.image_client_v2
+ else:
+ raise lib_exc.InvalidConfiguration(
+ 'Either api_v1 or api_v2 must be True in '
+ '[image-feature-enabled].')
+
+ @test.attr(type=['negative'])
+ @test.services('image')
+ @decorators.idempotent_id('90f0d93a-91c1-450c-91e6-07d18172cefe')
+ def test_boot_with_low_ram(self):
+ """Try boot a vm with lower than min ram
+
+ Create an image with min_ram value
+ Try to create server with flavor of insufficient ram size from
+ that image
+ """
+ flavor = self.flavors_client.show_flavor(
+ CONF.compute.flavor_ref)['flavor']
+ min_img_ram = flavor['ram'] + 1
+ size = random.randint(1024, 4096)
+ image_file = six.BytesIO(data_utils.random_bytes(size))
+ params = {
+ 'name': data_utils.rand_name('image'),
+ 'container_format': CONF.image.container_formats[0],
+ 'disk_format': CONF.image.disk_formats[0],
+ 'min_ram': min_img_ram
+ }
+
+ if CONF.image_feature_enabled.api_v1:
+ params.update({'is_public': False})
+ params = {'headers': common_image.image_meta_to_headers(**params)}
+ else:
+ params.update({'visibility': 'private'})
+
+ image = self.images_client.create_image(**params)
+ image = image['image'] if 'image' in image else image
+ self.addCleanup(self.images_client.delete_image, image['id'])
+
+ if CONF.image_feature_enabled.api_v1:
+ self.images_client.update_image(image['id'], data=image_file)
+ else:
+ self.images_client.store_image_file(image['id'], data=image_file)
+
+ self.assertEqual(min_img_ram, image['min_ram'])
+
+ # Try to create server with flavor of insufficient ram size
+ self.assertRaisesRegexp(lib_exc.BadRequest,
+ "Flavor's memory is too small for "
+ "requested image",
+ self.create_test_server,
+ image_id=image['id'],
+ flavor=flavor['id'])
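
The new negative test works by setting the image's ``min_ram`` one megabyte above the flavor's RAM, so the flavor can never satisfy the image and Nova must answer with ``BadRequest``. A standalone illustration of that precondition (the helper name is hypothetical, not part of Tempest)::

    def flavor_fits_image(flavor_ram_mb, image_min_ram_mb):
        # A boot request is only acceptable when the flavor offers at least
        # as much RAM as the image demands.
        return flavor_ram_mb >= image_min_ram_mb

    # flavor['ram'] + 1 (as used in the test) always fails this check.
    assert not flavor_fits_image(flavor_ram_mb=512, image_min_ram_mb=513)
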
diff --git a/tempest/api/compute/floating_ips/test_floating_ips_actions.py b/tempest/api/compute/floating_ips/test_floating_ips_actions.py
index 4d8416f..2769245 100644
--- a/tempest/api/compute/floating_ips/test_floating_ips_actions.py
+++ b/tempest/api/compute/floating_ips/test_floating_ips_actions.py
@@ -13,6 +13,8 @@
# License for the specific language governing permissions and limitations
# under the License.
+import testtools
+
from tempest.api.compute.floating_ips import base
from tempest import config
from tempest.lib.common.utils import test_utils
@@ -86,6 +88,8 @@
@decorators.idempotent_id('307efa27-dc6f-48a0-8cd2-162ce3ef0b52')
@test.services('network')
+ @testtools.skipUnless(CONF.network.public_network_id,
+ 'The public_network_id option must be specified.')
def test_associate_disassociate_floating_ip(self):
        # Positive test: Associate and disassociate the provided floating IP
# to a specific server should be successful
@@ -107,6 +111,8 @@
@decorators.idempotent_id('6edef4b2-aaf1-4abc-bbe3-993e2561e0fe')
@test.services('network')
+ @testtools.skipUnless(CONF.network.public_network_id,
+ 'The public_network_id option must be specified.')
def test_associate_already_associated_floating_ip(self):
        # Positive test: Association of an already associated floating IP
        # to a specific server should change the floating IP's association
diff --git a/tempest/api/compute/images/test_images.py b/tempest/api/compute/images/test_images.py
index d9db0b5..a0c860a 100644
--- a/tempest/api/compute/images/test_images.py
+++ b/tempest/api/compute/images/test_images.py
@@ -60,6 +60,7 @@
snapshot_name = data_utils.rand_name('test-snap')
image = self.create_image_from_server(server['id'],
name=snapshot_name,
- wait_until='ACTIVE')
+ wait_until='ACTIVE',
+ wait_for_server=False)
self.addCleanup(self.client.delete_image, image['id'])
self.assertEqual(snapshot_name, image['name'])
diff --git a/tempest/api/compute/limits/test_absolute_limits_negative.py b/tempest/api/compute/limits/test_absolute_limits_negative.py
index b9ae0c6..21b4b1c 100644
--- a/tempest/api/compute/limits/test_absolute_limits_negative.py
+++ b/tempest/api/compute/limits/test_absolute_limits_negative.py
@@ -41,11 +41,11 @@
max_meta = limits['absolute']['maxImageMeta']
# No point in running this test if there is no limit.
- if int(max_meta) == -1:
+ if max_meta == -1:
raise self.skipException('no limit for maxImageMeta')
        # Creating a server should fail since we exceed the metadata limit
- max_meta_data = int(max_meta) + 1
+ max_meta_data = max_meta + 1
meta_data = {}
for xx in range(max_meta_data):
diff --git a/tempest/api/compute/security_groups/test_security_group_rules.py b/tempest/api/compute/security_groups/test_security_group_rules.py
index 7658848..b82fa3b 100644
--- a/tempest/api/compute/security_groups/test_security_group_rules.py
+++ b/tempest/api/compute/security_groups/test_security_group_rules.py
@@ -31,7 +31,6 @@
@classmethod
def resource_setup(cls):
super(SecurityGroupRulesTestJSON, cls).resource_setup()
- cls.neutron_available = CONF.service_available.neutron
cls.ip_protocol = 'tcp'
cls.from_port = 22
cls.to_port = 22
diff --git a/tempest/api/compute/security_groups/test_security_groups.py b/tempest/api/compute/security_groups/test_security_groups.py
index e070336..e90a1fc 100644
--- a/tempest/api/compute/security_groups/test_security_groups.py
+++ b/tempest/api/compute/security_groups/test_security_groups.py
@@ -144,3 +144,31 @@
['security_group'])
self.assertEqual(s_new_name, fetched_group['name'])
self.assertEqual(s_new_des, fetched_group['description'])
+
+ @decorators.idempotent_id('79517d60-535a-438f-af3d-e6feab1cbea7')
+ @test.services('network')
+ def test_list_security_groups_by_server(self):
+ # Create a couple security groups that we will use
+ # for the server resource this test creates
+ sg = self.create_security_group()
+ sg2 = self.create_security_group()
+ assigned_security_groups_ids = [sg['id'], sg2['id']]
+ # Create server and add the security group created
+ # above to the server we just created
+ server_id = self.create_test_server(wait_until='ACTIVE')['id']
+ # add security groups to server
+ self.servers_client.add_security_group(server_id, name=sg['name'])
+ self.servers_client.add_security_group(server_id, name=sg2['name'])
+
+ # list security groups for a server
+ fetched_groups = (
+ self.servers_client.list_security_groups_by_server(
+ server_id)['security_groups'])
+ fetched_security_groups_ids = [i['id'] for i in fetched_groups]
+ # verifying the security groups ids in list
+ missing_security_groups =\
+ [p for p in assigned_security_groups_ids
+ if p not in fetched_security_groups_ids]
+ self.assertEmpty(missing_security_groups,
+ "Failed to find security_groups %s in fetched list" %
+ ', '.join(missing_security_groups))
diff --git a/tempest/api/compute/security_groups/test_security_groups_negative.py b/tempest/api/compute/security_groups/test_security_groups_negative.py
index ad18861..48bb1b6 100644
--- a/tempest/api/compute/security_groups/test_security_groups_negative.py
+++ b/tempest/api/compute/security_groups/test_security_groups_negative.py
@@ -32,11 +32,6 @@
super(SecurityGroupsNegativeTestJSON, cls).setup_clients()
cls.client = cls.security_groups_client
- @classmethod
- def resource_setup(cls):
- super(SecurityGroupsNegativeTestJSON, cls).resource_setup()
- cls.neutron_available = CONF.service_available.neutron
-
@test.attr(type=['negative'])
@decorators.idempotent_id('673eaec1-9b3e-48ed-bdf1-2786c1b9661c')
@test.services('network')
diff --git a/tempest/api/compute/servers/test_attach_interfaces.py b/tempest/api/compute/servers/test_attach_interfaces.py
index 9bba733..e0c8887 100644
--- a/tempest/api/compute/servers/test_attach_interfaces.py
+++ b/tempest/api/compute/servers/test_attach_interfaces.py
@@ -79,7 +79,6 @@
def _check_interface(self, iface, port_id=None, network_id=None,
fixed_ip=None, mac_addr=None):
- self.assertIn('port_state', iface)
if port_id:
self.assertEqual(iface['port_id'], port_id)
if network_id:
diff --git a/tempest/api/compute/servers/test_create_server.py b/tempest/api/compute/servers/test_create_server.py
index 5ddae5e..fd5e50e 100644
--- a/tempest/api/compute/servers/test_create_server.py
+++ b/tempest/api/compute/servers/test_create_server.py
@@ -51,7 +51,7 @@
cls.name = data_utils.rand_name(cls.__name__ + '-server')
cls.password = data_utils.rand_password()
disk_config = cls.disk_config
- cls.server_initial = cls.create_test_server(
+ server_initial = cls.create_test_server(
validatable=True,
wait_until='ACTIVE',
name=cls.name,
@@ -60,7 +60,7 @@
accessIPv6=cls.accessIPv6,
disk_config=disk_config,
adminPass=cls.password)
- cls.server = (cls.client.show_server(cls.server_initial['id'])
+ cls.server = (cls.client.show_server(server_initial['id'])
['server'])
def _create_net_subnet_ret_net_from_cidr(self, cidr):
@@ -236,7 +236,6 @@
@classmethod
def setup_clients(cls):
super(ServersWithSpecificFlavorTestJSON, cls).setup_clients()
- cls.flavor_client = cls.os_adm.flavors_client
cls.client = cls.servers_client
@classmethod
@@ -254,7 +253,6 @@
self.flavor_ref)['flavor']
def create_flavor_with_ephemeral(ephem_disk):
- flavor_id = data_utils.rand_int_id(start=1000)
name = 'flavor_with_ephemeral_%s' % ephem_disk
flavor_name = data_utils.rand_name(name)
@@ -263,17 +261,10 @@
disk = flavor_base['disk']
# Create a flavor with ephemeral disk
- flavor = self.flavor_client.create_flavor(
- name=flavor_name, ram=ram, vcpus=vcpus, disk=disk,
- id=flavor_id, ephemeral=ephem_disk)['flavor']
- self.addCleanup(flavor_clean_up, flavor['id'])
-
+ flavor = self.create_flavor(name=flavor_name, ram=ram, vcpus=vcpus,
+ disk=disk, ephemeral=ephem_disk)
return flavor['id']
- def flavor_clean_up(flavor_id):
- self.flavor_client.delete_flavor(flavor_id)
- self.flavor_client.wait_for_resource_deletion(flavor_id)
-
flavor_with_eph_disk_id = create_flavor_with_ephemeral(ephem_disk=1)
flavor_no_eph_disk_id = create_flavor_with_ephemeral(ephem_disk=0)
diff --git a/tempest/api/compute/servers/test_delete_server.py b/tempest/api/compute/servers/test_delete_server.py
index 83b2e1b..8ed55e0 100644
--- a/tempest/api/compute/servers/test_delete_server.py
+++ b/tempest/api/compute/servers/test_delete_server.py
@@ -115,8 +115,8 @@
self.client.delete_server(server['id'])
waiters.wait_for_server_termination(self.client, server['id'])
- waiters.wait_for_volume_status(self.volumes_client,
- volume['id'], 'available')
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'available')
class DeleteServersAdminTestJSON(base.BaseV2ComputeAdminTest):
diff --git a/tempest/api/compute/servers/test_device_tagging.py b/tempest/api/compute/servers/test_device_tagging.py
index d9e83a6..5bcbdac 100644
--- a/tempest/api/compute/servers/test_device_tagging.py
+++ b/tempest/api/compute/servers/test_device_tagging.py
@@ -20,6 +20,7 @@
from tempest.common.utils import data_utils
from tempest.common.utils.linux import remote_client
from tempest import config
+from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions
from tempest import test
@@ -249,9 +250,9 @@
self.verify_device_metadata(md_json)
return True
- if not test.call_until_true(get_and_verify_metadata,
- CONF.compute.build_timeout,
- CONF.compute.build_interval):
+ if not test_utils.call_until_true(get_and_verify_metadata,
+ CONF.compute.build_timeout,
+ CONF.compute.build_interval):
raise exceptions.TimeoutException('Timeout while verifying '
'metadata on server.')
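
The hunk above switches from ``test.call_until_true`` to ``test_utils.call_until_true`` while keeping the same call shape: a boolean predicate, an overall timeout and a sleep interval between attempts. A rough sketch of the polling behaviour being relied on, not the library's actual implementation::

    import time

    def call_until_true_sketch(predicate, duration, sleep_for):
        # Call the predicate until it returns True or the time budget runs
        # out; the caller treats a False return as a timeout.
        deadline = time.time() + duration
        while time.time() < deadline:
            if predicate():
                return True
            time.sleep(sleep_for)
        return False
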
diff --git a/tempest/api/compute/servers/test_list_servers_negative.py b/tempest/api/compute/servers/test_list_servers_negative.py
index 594c5c9..3010caf 100644
--- a/tempest/api/compute/servers/test_list_servers_negative.py
+++ b/tempest/api/compute/servers/test_list_servers_negative.py
@@ -35,11 +35,9 @@
# by the test methods in this class. These
# servers are cleaned up automatically in the
# tearDownClass method of the super-class.
- cls.existing_fixtures = []
cls.deleted_fixtures = []
for _ in range(2):
srv = cls.create_test_server(wait_until='ACTIVE')
- cls.existing_fixtures.append(srv)
srv = cls.create_test_server(wait_until='ACTIVE')
cls.client.delete_server(srv['id'])
diff --git a/tempest/api/compute/servers/test_novnc.py b/tempest/api/compute/servers/test_novnc.py
index d10f370..3f6abab 100644
--- a/tempest/api/compute/servers/test_novnc.py
+++ b/tempest/api/compute/servers/test_novnc.py
@@ -1,4 +1,4 @@
-# Copyright 2016 OpenStack Foundation
+# Copyright 2016-2017 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -22,10 +22,15 @@
from tempest.api.compute import base
from tempest import config
-from tempest import test
+from tempest.lib import decorators
CONF = config.CONF
+if six.PY2:
+ ord_func = ord
+else:
+ ord_func = int
+
class NoVNCConsoleTestJSON(base.BaseV2ComputeTest):
@@ -60,14 +65,19 @@
resp = urllib3.PoolManager().request('GET', vnc_url)
# Make sure that the GET request was accepted by the novncproxy
self.assertEqual(resp.status, 200, 'Got a Bad HTTP Response on the '
- 'initial call: ' + str(resp.status))
+ 'initial call: ' + six.text_type(resp.status))
# Do some basic validation to make sure it is an expected HTML document
- self.assertTrue('<html>' in resp.data and '</html>' in resp.data,
- 'Not a valid html document in the response.')
+ resp_data = resp.data.decode()
+ self.assertIn('<html>', resp_data,
+ 'Not a valid html document in the response.')
+ self.assertIn('</html>', resp_data,
+ 'Not a valid html document in the response.')
# Just try to make sure we got JavaScript back for noVNC, since we
        # won't actually use it as we are not inside of a browser
- self.assertTrue('noVNC' in resp.data and '<script' in resp.data,
- 'Not a valid noVNC javascript html document.')
+ self.assertIn('noVNC', resp_data,
+ 'Not a valid noVNC javascript html document.')
+ self.assertIn('<script', resp_data,
+ 'Not a valid noVNC javascript html document.')
def _validate_rfb_negotiation(self):
"""Verify we can connect to novnc and do the websocket connection."""
@@ -82,14 +92,14 @@
int(data[8:11], base=10)))
self.assertTrue(version >= 3.3, 'Bad RFB Version: ' + str(version))
# Send our RFB version to the server, which we will just go with 3.3
- self._websocket.send_frame(str(data))
+ self._websocket.send_frame(data)
        # Get the server authentication type and make sure None is supported
data = self._websocket.receive_frame()
self.assertIsNotNone(data, 'Expected authentication type None.')
self.assertGreaterEqual(
len(data), 2, 'Expected authentication type None.')
self.assertIn(
- 1, [ord(data[i + 1]) for i in range(ord(data[0]))],
+ 1, [ord_func(data[i + 1]) for i in range(ord_func(data[0]))],
'Expected authentication type None.')
# Send to the server that we only support authentication type None
self._websocket.send_frame(six.int2byte(1))
@@ -98,7 +108,7 @@
self.assertEqual(
len(data), 4, 'Server did not think security was successful.')
self.assertEqual(
- [ord(i) for i in data], [0, 0, 0, 0],
+ [ord_func(i) for i in data], [0, 0, 0, 0],
'Server did not think security was successful.')
# Say to leave the desktop as shared as part of client initialization
self._websocket.send_frame(six.int2byte(1))
@@ -121,12 +131,12 @@
def _validate_websocket_upgrade(self):
self.assertTrue(
- self._websocket.response.startswith('HTTP/1.1 101 Switching '
- 'Protocols\r\n'),
+ self._websocket.response.startswith(b'HTTP/1.1 101 Switching '
+ b'Protocols\r\n'),
'Did not get the expected 101 on the websockify call: '
- + str(len(self._websocket.response)))
+ + six.text_type(self._websocket.response))
self.assertTrue(
- self._websocket.response.find('Server: WebSockify') > 0,
+ self._websocket.response.find(b'Server: WebSockify') > 0,
'Did not get the expected WebSocket HTTP Response.')
def _create_websocket(self, url):
@@ -137,7 +147,7 @@
# Turn the Socket into a WebSocket to do the communication
return _WebSocket(client_socket, url)
- @test.idempotent_id('c640fdff-8ab4-45a4-a5d8-7e6146cbd0dc')
+ @decorators.idempotent_id('c640fdff-8ab4-45a4-a5d8-7e6146cbd0dc')
def test_novnc(self):
body = self.client.get_vnc_console(self.server['id'],
type='novnc')['console']
@@ -151,7 +161,7 @@
# Validate the RFB Negotiation to determine if a valid VNC session
self._validate_rfb_negotiation()
- @test.idempotent_id('f9c79937-addc-4aaa-9e0e-841eef02aeb7')
+ @decorators.idempotent_id('f9c79937-addc-4aaa-9e0e-841eef02aeb7')
def test_novnc_bad_token(self):
body = self.client.get_vnc_console(self.server['id'],
type='novnc')['console']
@@ -187,8 +197,8 @@
# frames less than 125 bytes here (for the negotiation) and
# that only the 2nd byte contains the length, and since the
# server doesn't do masking, we can just read the data length
- if ord(header[1]) & 127 > 0:
- return self._socket.recv(ord(header[1]) & 127)
+ if ord_func(header[1]) & 127 > 0:
+ return self._socket.recv(ord_func(header[1]) & 127)
def send_frame(self, data):
"""Wrapper for sending data to add in the WebSocket frame format."""
@@ -205,7 +215,7 @@
frame_bytes.append(mask[i])
# Mask each of the actual data bytes that we are going to send
for i in range(len(data)):
- frame_bytes.append(ord(data[i]) ^ mask[i % 4])
+ frame_bytes.append(ord_func(data[i]) ^ mask[i % 4])
# Convert our integer list to a binary array of bytes
frame_bytes = struct.pack('!%iB' % len(frame_bytes), * frame_bytes)
self._socket.sendall(frame_bytes)
@@ -233,9 +243,9 @@
# We are choosing to use binary even though browser may do Base64
reqdata += 'Sec-WebSocket-Protocol: binary\r\n\r\n'
# Send the HTTP GET request and get the response back
- self._socket.sendall(reqdata)
+ self._socket.sendall(reqdata.encode('utf8'))
self.response = data = self._socket.recv(4096)
# Loop through & concatenate all of the data in the response body
- while len(data) > 0 and self.response.find('\r\n\r\n') < 0:
+ while len(data) > 0 and self.response.find(b'\r\n\r\n') < 0:
data = self._socket.recv(4096)
self.response += data
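
The ``ord_func`` shim added at the top of this file exists because indexing a byte string yields a one-character ``str`` on Python 2 (which needs ``ord``) but an ``int`` on Python 3. A quick illustration of the difference the shim hides::

    import six

    data = b'\x01\x02'
    if six.PY2:
        first = ord(data[0])   # data[0] is the str '\x01' under Python 2
    else:
        first = int(data[0])   # data[0] is already the int 1 under Python 3
    assert first == 1
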
diff --git a/tempest/api/compute/servers/test_server_addresses.py b/tempest/api/compute/servers/test_server_addresses.py
index dfda51b..cf4ed85 100644
--- a/tempest/api/compute/servers/test_server_addresses.py
+++ b/tempest/api/compute/servers/test_server_addresses.py
@@ -49,7 +49,7 @@
# We do not know the exact network configuration, but an instance
# should at least have a single public or private address
self.assertGreaterEqual(len(addresses), 1)
- for network_name, network_addresses in addresses.items():
+ for network_addresses in addresses.values():
self.assertGreaterEqual(len(network_addresses), 1)
for address in network_addresses:
self.assertTrue(address['addr'])
diff --git a/tempest/api/compute/servers/test_server_personality.py b/tempest/api/compute/servers/test_server_personality.py
index 957d24a..90b9da4 100644
--- a/tempest/api/compute/servers/test_server_personality.py
+++ b/tempest/api/compute/servers/test_server_personality.py
@@ -97,7 +97,7 @@
max_file_limit = limits['absolute']['maxPersonality']
if max_file_limit == -1:
raise self.skipException("No limit for personality files")
- for i in range(0, int(max_file_limit) + 1):
+ for i in range(0, max_file_limit + 1):
path = 'etc/test' + str(i) + '.txt'
personality.append({'path': path,
'contents': base64.encode_as_text(
@@ -117,7 +117,7 @@
if max_file_limit == -1:
raise self.skipException("No limit for personality files")
person = []
- for i in range(0, int(max_file_limit)):
+ for i in range(0, max_file_limit):
# NOTE(andreaf) The cirros disk image is blank before boot
# so we can only inject safely to /
path = '/test' + str(i) + '.txt'
diff --git a/tempest/api/compute/servers/test_server_rescue.py b/tempest/api/compute/servers/test_server_rescue.py
index 5db7f4f..75ba15c 100644
--- a/tempest/api/compute/servers/test_server_rescue.py
+++ b/tempest/api/compute/servers/test_server_rescue.py
@@ -13,6 +13,8 @@
# License for the specific language governing permissions and limitations
# under the License.
+import testtools
+
from tempest.api.compute import base
from tempest.common.utils import data_utils
from tempest.common import waiters
@@ -48,18 +50,16 @@
# Security group creation
cls.sg_name = data_utils.rand_name('sg')
- cls.sg_desc = data_utils.rand_name('sg-desc')
+ sg_desc = data_utils.rand_name('sg-desc')
cls.sg = cls.security_groups_client.create_security_group(
- name=cls.sg_name, description=cls.sg_desc)['security_group']
+ name=cls.sg_name, description=sg_desc)['security_group']
cls.sg_id = cls.sg['id']
cls.password = data_utils.rand_password()
# Server for positive tests
server = cls.create_test_server(adminPass=cls.password,
- wait_until='BUILD')
+ wait_until='ACTIVE')
cls.server_id = server['id']
- waiters.wait_for_server_status(cls.servers_client, cls.server_id,
- 'ACTIVE')
@classmethod
def resource_cleanup(cls):
@@ -85,6 +85,8 @@
'ACTIVE')
@decorators.idempotent_id('4842e0cf-e87d-4d9d-b61f-f4791da3cacc')
+ @testtools.skipUnless(CONF.network.public_network_id,
+ 'The public_network_id option must be specified.')
def test_rescued_vm_associate_dissociate_floating_ip(self):
# Rescue the server
self.servers_client.rescue_server(
diff --git a/tempest/api/compute/servers/test_server_tags.py b/tempest/api/compute/servers/test_server_tags.py
new file mode 100644
index 0000000..20e2cee
--- /dev/null
+++ b/tempest/api/compute/servers/test_server_tags.py
@@ -0,0 +1,108 @@
+# Copyright 2017 AT&T Corp.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import six
+
+from tempest.api.compute import base
+from tempest.common.utils import data_utils
+from tempest.lib import decorators
+from tempest import test
+
+
+class ServerTagsTestJSON(base.BaseV2ComputeTest):
+
+ min_microversion = '2.26'
+ max_microversion = 'latest'
+
+ @classmethod
+ def skip_checks(cls):
+ super(ServerTagsTestJSON, cls).skip_checks()
+ if not test.is_extension_enabled('os-server-tags', 'compute'):
+ msg = "os-server-tags extension is not enabled."
+ raise cls.skipException(msg)
+
+ @classmethod
+ def setup_clients(cls):
+ super(ServerTagsTestJSON, cls).setup_clients()
+ cls.client = cls.servers_client
+
+ @classmethod
+ def resource_setup(cls):
+ super(ServerTagsTestJSON, cls).resource_setup()
+ cls.server = cls.create_test_server(wait_until='ACTIVE')
+
+ def _update_server_tags(self, server_id, tags):
+ if not isinstance(tags, (list, tuple)):
+ tags = [tags]
+ for tag in tags:
+ self.client.update_tag(server_id, tag)
+ self.addCleanup(self.client.delete_all_tags, server_id)
+
+ @decorators.idempotent_id('8d95abe2-c658-4c42-9a44-c0258500306b')
+ def test_create_delete_tag(self):
+ # Check that no tags exist.
+ fetched_tags = self.client.list_tags(self.server['id'])['tags']
+ self.assertEmpty(fetched_tags)
+
+ # Add server tag to the server.
+ assigned_tag = data_utils.rand_name('tag')
+ self._update_server_tags(self.server['id'], assigned_tag)
+
+ # Check that added tag exists.
+ fetched_tags = self.client.list_tags(self.server['id'])['tags']
+ self.assertEqual([assigned_tag], fetched_tags)
+
+ # Remove assigned tag from server and check that it was removed.
+ self.client.delete_tag(self.server['id'], assigned_tag)
+ fetched_tags = self.client.list_tags(self.server['id'])['tags']
+ self.assertEmpty(fetched_tags)
+
+ @decorators.idempotent_id('a2c1af8c-127d-417d-974b-8115f7e3d831')
+ def test_update_all_tags(self):
+ # Add server tags to the server.
+ tags = [data_utils.rand_name('tag'), data_utils.rand_name('tag')]
+ self._update_server_tags(self.server['id'], tags)
+
+ # Replace tags with new tags and check that they are present.
+ new_tags = [data_utils.rand_name('tag'), data_utils.rand_name('tag')]
+ replaced_tags = self.client.update_all_tags(
+ self.server['id'], new_tags)['tags']
+ six.assertCountEqual(self, new_tags, replaced_tags)
+
+ # List the tags and check that the tags were replaced.
+ fetched_tags = self.client.list_tags(self.server['id'])['tags']
+ six.assertCountEqual(self, new_tags, fetched_tags)
+
+ @decorators.idempotent_id('a63b2a74-e918-4b7c-bcab-10c855f3a57e')
+ def test_delete_all_tags(self):
+ # Add server tags to the server.
+ assigned_tags = [data_utils.rand_name('tag'),
+ data_utils.rand_name('tag')]
+ self._update_server_tags(self.server['id'], assigned_tags)
+
+ # Delete tags from the server and check that they were deleted.
+ self.client.delete_all_tags(self.server['id'])
+ fetched_tags = self.client.list_tags(self.server['id'])['tags']
+ self.assertEmpty(fetched_tags)
+
+ @decorators.idempotent_id('81279a66-61c3-4759-b830-a2dbe64cbe08')
+ def test_check_tag_existence(self):
+ # Add server tag to the server.
+ assigned_tag = data_utils.rand_name('tag')
+ self._update_server_tags(self.server['id'], assigned_tag)
+
+ # Check that added tag exists. Throws a 404 if not found, else a 204,
+ # which was already checked by the schema validation.
+ self.client.check_tag_existence(self.server['id'], assigned_tag)
diff --git a/tempest/api/compute/servers/test_servers_negative.py b/tempest/api/compute/servers/test_servers_negative.py
index b22a434..1418b3f 100644
--- a/tempest/api/compute/servers/test_servers_negative.py
+++ b/tempest/api/compute/servers/test_servers_negative.py
@@ -176,7 +176,7 @@
self.assertRaises(lib_exc.NotFound,
self.client.rebuild_server,
- server['id'], self.image_ref_alt)
+ server['id'], self.image_ref)
@test.related_bug('1660878', status_code=409)
@test.attr(type=['negative'])
@@ -198,7 +198,7 @@
self.assertRaises(lib_exc.NotFound,
self.client.rebuild_server,
nonexistent_server,
- self.image_ref_alt)
+ self.image_ref)
@test.attr(type=['negative'])
@decorators.idempotent_id('fd57f159-68d6-4c2a-902b-03070828a87e')
diff --git a/tempest/api/compute/test_live_block_migration_negative.py b/tempest/api/compute/test_live_block_migration_negative.py
index 40d0746..01fd9ef 100644
--- a/tempest/api/compute/test_live_block_migration_negative.py
+++ b/tempest/api/compute/test_live_block_migration_negative.py
@@ -31,11 +31,6 @@
if not CONF.compute_feature_enabled.live_migration:
raise cls.skipException("Live migration is not enabled")
- @classmethod
- def setup_clients(cls):
- super(LiveBlockMigrationNegativeTestJSON, cls).setup_clients()
- cls.admin_servers_client = cls.os_adm.servers_client
-
def _migrate_server_to(self, server_id, dest_host):
bmflm = CONF.compute_feature_enabled.block_migration_for_live_migration
self.admin_servers_client.live_migrate_server(
diff --git a/tempest/api/compute/volumes/test_attach_volume.py b/tempest/api/compute/volumes/test_attach_volume.py
index cbe7178..73c7614 100644
--- a/tempest/api/compute/volumes/test_attach_volume.py
+++ b/tempest/api/compute/volumes/test_attach_volume.py
@@ -22,7 +22,6 @@
from tempest.common import waiters
from tempest import config
from tempest.lib import decorators
-from tempest.lib import exceptions as lib_exc
CONF = config.CONF
@@ -61,38 +60,14 @@
server['id'])['addresses']
return server
- def _detach_volume(self, server_id, volume_id):
- try:
- self.servers_client.detach_volume(server_id, volume_id)
- waiters.wait_for_volume_status(self.volumes_client,
- volume_id, 'available')
- except lib_exc.NotFound:
- LOG.warning("Unable to detach volume %s from server %s "
- "possibly it was already detached", volume_id,
- server_id)
-
- def _attach_volume(self, server_id, volume_id, device=None):
- # Attach the volume to the server
- kwargs = {'volumeId': volume_id}
- if device:
- kwargs.update({'device': '/dev/%s' % device})
- attachment = self.servers_client.attach_volume(
- server_id, **kwargs)['volumeAttachment']
- waiters.wait_for_volume_status(self.volumes_client,
- volume_id, 'in-use')
- self.addCleanup(self._detach_volume, server_id,
- volume_id)
-
- return attachment
-
@decorators.idempotent_id('52e9045a-e90d-4c0d-9087-79d657faffff')
def test_attach_detach_volume(self):
# Stop and Start a server with an attached volume, ensuring that
# the volume remains attached.
server = self._create_server()
volume = self.create_volume()
- attachment = self._attach_volume(server['id'], volume['id'],
- device=self.device)
+ attachment = self.attach_volume(server, volume,
+ device=('/dev/%s' % self.device))
self.servers_client.stop_server(server['id'])
waiters.wait_for_server_status(self.servers_client, server['id'],
@@ -115,7 +90,10 @@
device_name_to_match = '\n' + self.device + ' '
self.assertIn(device_name_to_match, disks)
- self._detach_volume(server['id'], attachment['volumeId'])
+ self.servers_client.detach_volume(server['id'], attachment['volumeId'])
+ waiters.wait_for_volume_resource_status(
+ self.volumes_client, attachment['volumeId'], 'available')
+
self.servers_client.stop_server(server['id'])
waiters.wait_for_server_status(self.servers_client, server['id'],
'SHUTOFF')
@@ -141,8 +119,8 @@
# List volume attachment of the server
server = self._create_server()
volume = self.create_volume()
- attachment = self._attach_volume(server['id'], volume['id'],
- device=self.device)
+ attachment = self.attach_volume(server, volume,
+ device=('/dev/%s' % self.device))
body = self.servers_client.list_volume_attachments(
server['id'])['volumeAttachments']
self.assertEqual(1, len(body))
@@ -165,8 +143,8 @@
server = self._create_server()
volume_1st = self.create_volume()
volume_2nd = self.create_volume()
- attachment_1st = self._attach_volume(server['id'], volume_1st['id'])
- attachment_2nd = self._attach_volume(server['id'], volume_2nd['id'])
+ attachment_1st = self.attach_volume(server, volume_1st)
+ attachment_2nd = self.attach_volume(server, volume_2nd)
body = self.servers_client.list_volume_attachments(
server['id'])['volumeAttachments']
@@ -253,8 +231,8 @@
volume = self.create_volume()
num_vol = self._count_volumes(server)
self._shelve_server(server)
- attachment = self._attach_volume(server['id'], volume['id'],
- device=self.device)
+ attachment = self.attach_volume(server, volume,
+ device=('/dev/%s' % self.device))
# Unshelve the instance and check that attached volume exists
self._unshelve_server_and_check_volumes(server, num_vol + 1)
@@ -279,9 +257,12 @@
volume = self.create_volume()
num_vol = self._count_volumes(server)
self._shelve_server(server)
- self._attach_volume(server['id'], volume['id'], device=self.device)
- # Detach the volume
- self._detach_volume(server['id'], volume['id'])
+
+ # Attach and then detach the volume
+ self.attach_volume(server, volume, device=('/dev/%s' % self.device))
+ self.servers_client.detach_volume(server['id'], volume['id'])
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'available')
# Unshelve the instance and check that we have the expected number of
# volume(s)
diff --git a/tempest/api/compute/volumes/test_volume_snapshots.py b/tempest/api/compute/volumes/test_volume_snapshots.py
index 3d5d23b..4b06867 100644
--- a/tempest/api/compute/volumes/test_volume_snapshots.py
+++ b/tempest/api/compute/volumes/test_volume_snapshots.py
@@ -54,9 +54,9 @@
display_name=s_name)['snapshot']
def delete_snapshot(snapshot_id):
- waiters.wait_for_snapshot_status(self.snapshots_client,
- snapshot_id,
- 'available')
+ waiters.wait_for_volume_resource_status(self.snapshots_client,
+ snapshot_id,
+ 'available')
# Delete snapshot
self.snapshots_client.delete_snapshot(snapshot_id)
self.snapshots_client.wait_for_resource_deletion(snapshot_id)
diff --git a/tempest/api/compute/volumes/test_volumes_get.py b/tempest/api/compute/volumes/test_volumes_get.py
index 63c247e..0eaa359 100644
--- a/tempest/api/compute/volumes/test_volumes_get.py
+++ b/tempest/api/compute/volumes/test_volumes_get.py
@@ -57,7 +57,8 @@
self.assertIsNotNone(volume['id'],
"Field volume id is empty or not found.")
# Wait for Volume status to become ACTIVE
- waiters.wait_for_volume_status(self.client, volume['id'], 'available')
+ waiters.wait_for_volume_resource_status(self.client, volume['id'],
+ 'available')
# GET Volume
fetched_volume = self.client.show_volume(volume['id'])['volume']
# Verification of details of fetched Volume
diff --git a/tempest/api/compute/volumes/test_volumes_list.py b/tempest/api/compute/volumes/test_volumes_list.py
index dd9d408..0d8214f 100644
--- a/tempest/api/compute/volumes/test_volumes_list.py
+++ b/tempest/api/compute/volumes/test_volumes_list.py
@@ -44,13 +44,11 @@
super(VolumesTestJSON, cls).resource_setup()
# Create 3 Volumes
cls.volume_list = []
- cls.volume_id_list = []
for _ in range(3):
metadata = {'Type': 'work'}
volume = cls.create_volume(metadata=metadata)
volume = cls.client.show_volume(volume['id'])['volume']
cls.volume_list.append(volume)
- cls.volume_id_list.append(volume['id'])
@decorators.idempotent_id('bc2dd1a0-15af-48e5-9990-f2e75a48325d')
def test_volume_list(self):
diff --git a/tempest/api/identity/admin/v2/test_endpoints.py b/tempest/api/identity/admin/v2/test_endpoints.py
index df55d2f..0ea2eb3 100644
--- a/tempest/api/identity/admin/v2/test_endpoints.py
+++ b/tempest/api/identity/admin/v2/test_endpoints.py
@@ -27,10 +27,10 @@
s_name = data_utils.rand_name('service')
s_type = data_utils.rand_name('type')
s_description = data_utils.rand_name('description')
- cls.service_data = cls.services_client.create_service(
+ service_data = cls.services_client.create_service(
name=s_name, type=s_type,
description=s_description)['OS-KSADM:service']
- cls.service_id = cls.service_data['id']
+ cls.service_id = service_data['id']
cls.service_ids.append(cls.service_id)
# Create endpoints so as to use for LIST and GET test cases
cls.setup_endpoints = list()
diff --git a/tempest/api/identity/admin/v3/test_domains_negative.py b/tempest/api/identity/admin/v3/test_domains_negative.py
index 280a5a8..4555a6a 100644
--- a/tempest/api/identity/admin/v3/test_domains_negative.py
+++ b/tempest/api/identity/admin/v3/test_domains_negative.py
@@ -14,7 +14,7 @@
# under the License.
from tempest.api.identity import base
-from tempest.lib.common.utils import data_utils
+from tempest.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
from tempest import test
diff --git a/tempest/api/identity/admin/v3/test_endpoints.py b/tempest/api/identity/admin/v3/test_endpoints.py
index 686743b..9a0b3e4 100644
--- a/tempest/api/identity/admin/v3/test_endpoints.py
+++ b/tempest/api/identity/admin/v3/test_endpoints.py
@@ -33,11 +33,10 @@
s_name = data_utils.rand_name('service')
s_type = data_utils.rand_name('type')
s_description = data_utils.rand_name('description')
- cls.service_data = (
+ service_data = (
cls.services_client.create_service(name=s_name, type=s_type,
description=s_description))
- cls.service_data = cls.service_data['service']
- cls.service_id = cls.service_data['id']
+ cls.service_id = service_data['service']['id']
cls.service_ids.append(cls.service_id)
# Create endpoints so as to use for LIST and GET test cases
cls.setup_endpoints = list()
diff --git a/tempest/api/identity/admin/v3/test_endpoints_negative.py b/tempest/api/identity/admin/v3/test_endpoints_negative.py
index 53c2b1f..8e00193 100644
--- a/tempest/api/identity/admin/v3/test_endpoints_negative.py
+++ b/tempest/api/identity/admin/v3/test_endpoints_negative.py
@@ -35,11 +35,11 @@
s_name = data_utils.rand_name('service')
s_type = data_utils.rand_name('type')
s_description = data_utils.rand_name('description')
- cls.service_data = (
+ service_data = (
cls.services_client.create_service(name=s_name, type=s_type,
description=s_description)
['service'])
- cls.service_id = cls.service_data['id']
+ cls.service_id = service_data['id']
cls.service_ids.append(cls.service_id)
@classmethod
diff --git a/tempest/api/identity/base.py b/tempest/api/identity/base.py
index d5897de..80e7936 100644
--- a/tempest/api/identity/base.py
+++ b/tempest/api/identity/base.py
@@ -67,7 +67,7 @@
return role[0]
def _create_test_user(self, **kwargs):
- if kwargs['password'] is None:
+ if kwargs.get('password', None) is None:
user_password = data_utils.rand_password()
kwargs['password'] = user_password
user = self.users_client.create_user(**kwargs)['user']
diff --git a/tempest/api/identity/v3/test_api_discovery.py b/tempest/api/identity/v3/test_api_discovery.py
index 74e9ec1..2eed3c8 100644
--- a/tempest/api/identity/v3/test_api_discovery.py
+++ b/tempest/api/identity/v3/test_api_discovery.py
@@ -14,6 +14,7 @@
# under the License.
from tempest.api.identity import base
+from tempest.lib import decorators
from tempest import test
@@ -21,7 +22,7 @@
"""Tests for API discovery features."""
@test.attr(type='smoke')
- @test.idempotent_id('b9232f5e-d9e5-4d97-b96c-28d3db4de1bd')
+ @decorators.idempotent_id('b9232f5e-d9e5-4d97-b96c-28d3db4de1bd')
def test_api_version_resources(self):
descr = self.non_admin_client.show_api_description()['version']
expected_resources = ('id', 'links', 'media-types', 'status',
@@ -32,7 +33,7 @@
self.assertIn(res, keys)
@test.attr(type='smoke')
- @test.idempotent_id('657c1970-4722-4189-8831-7325f3bc4265')
+ @decorators.idempotent_id('657c1970-4722-4189-8831-7325f3bc4265')
def test_api_media_types(self):
descr = self.non_admin_client.show_api_description()['version']
# Get MIME type bases and descriptions
@@ -47,7 +48,7 @@
self.assertIn(s_type, media_types)
@test.attr(type='smoke')
- @test.idempotent_id('8879a470-abfb-47bb-bb8d-5a7fd279ad1e')
+ @decorators.idempotent_id('8879a470-abfb-47bb-bb8d-5a7fd279ad1e')
def test_api_version_statuses(self):
descr = self.non_admin_client.show_api_description()['version']
status = descr['status'].lower()
diff --git a/tempest/api/identity/v3/test_projects.py b/tempest/api/identity/v3/test_projects.py
index 26cb90b..570be99 100644
--- a/tempest/api/identity/v3/test_projects.py
+++ b/tempest/api/identity/v3/test_projects.py
@@ -14,15 +14,15 @@
# under the License.
from tempest.api.identity import base
+from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
-from tempest import test
class IdentityV3ProjectsTest(base.BaseIdentityV3Test):
credentials = ['primary', 'alt']
- @test.idempotent_id('86128d46-e170-4644-866a-cc487f699e1d')
+ @decorators.idempotent_id('86128d46-e170-4644-866a-cc487f699e1d')
def test_list_projects_returns_only_authorized_projects(self):
alt_project_name =\
self.alt_manager.credentials.project_name
diff --git a/tempest/api/identity/v3/test_tokens.py b/tempest/api/identity/v3/test_tokens.py
index b410da6..1dc1df6 100644
--- a/tempest/api/identity/v3/test_tokens.py
+++ b/tempest/api/identity/v3/test_tokens.py
@@ -16,12 +16,12 @@
from oslo_utils import timeutils
import six
from tempest.api.identity import base
-from tempest import test
+from tempest.lib import decorators
class TokensV3Test(base.BaseIdentityV3Test):
- @test.idempotent_id('6f8e4436-fc96-4282-8122-e41df57197a9')
+ @decorators.idempotent_id('6f8e4436-fc96-4282-8122-e41df57197a9')
def test_create_token(self):
creds = self.os.credentials
diff --git a/tempest/api/identity/v3/test_users.py b/tempest/api/identity/v3/test_users.py
index 9592cb9..f263258 100644
--- a/tempest/api/identity/v3/test_users.py
+++ b/tempest/api/identity/v3/test_users.py
@@ -20,8 +20,8 @@
from tempest.api.identity import base
from tempest import config
from tempest.lib.common.utils import data_utils
+from tempest.lib import decorators
from tempest.lib import exceptions
-from tempest import test
CONF = config.CONF
@@ -78,7 +78,7 @@
time.sleep(1)
self.non_admin_users_client.auth_provider.set_auth()
- @test.idempotent_id('ad71bd23-12ad-426b-bb8b-195d2b635f27')
+ @decorators.idempotent_id('ad71bd23-12ad-426b-bb8b-195d2b635f27')
def test_user_update_own_password(self):
old_pass = self.creds.password
old_token = self.non_admin_client.token
@@ -103,7 +103,7 @@
@testtools.skipUnless(CONF.identity_feature_enabled.security_compliance,
'Security compliance not available.')
- @test.idempotent_id('941784ee-5342-4571-959b-b80dd2cea516')
+ @decorators.idempotent_id('941784ee-5342-4571-959b-b80dd2cea516')
def test_password_history_check_self_service_api(self):
old_pass = self.creds.password
new_pass1 = data_utils.rand_password()
@@ -133,7 +133,7 @@
@testtools.skipUnless(CONF.identity_feature_enabled.security_compliance,
'Security compliance not available.')
- @test.idempotent_id('a7ad8bbf-2cff-4520-8c1d-96332e151658')
+ @decorators.idempotent_id('a7ad8bbf-2cff-4520-8c1d-96332e151658')
def test_user_account_lockout(self):
password = self.creds.password
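
Side note rather than part of the patch itself: the identity v3 hunks above all make the same substitution, importing idempotent_id from tempest.lib.decorators instead of the top-level tempest.test module. A minimal sketch of the resulting shape, with a placeholder class name and UUID::

    from tempest.api.identity import base
    from tempest.lib import decorators


    class ExampleIdentityV3Test(base.BaseIdentityV3Test):
        """Hypothetical test showing the new decorator import path."""

        @decorators.idempotent_id('00000000-0000-0000-0000-000000000000')
        def test_example(self):
            # The test body is untouched by the migration; only the module
            # providing idempotent_id changes.
            pass
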
diff --git a/tempest/api/image/admin/v2/test_images.py b/tempest/api/image/admin/v2/test_images.py
index fc5ed79..11b595a 100644
--- a/tempest/api/image/admin/v2/test_images.py
+++ b/tempest/api/image/admin/v2/test_images.py
@@ -17,8 +17,8 @@
import testtools
from tempest.api.image import base
+from tempest.common.utils import data_utils
from tempest import config
-from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
diff --git a/tempest/api/image/v1/test_images.py b/tempest/api/image/v1/test_images.py
index a79c18c..756c78c 100644
--- a/tempest/api/image/v1/test_images.py
+++ b/tempest/api/image/v1/test_images.py
@@ -145,24 +145,24 @@
a_formats = ['ami', 'ari', 'aki']
(cls.container_format,
- cls.container_format_alt) = CONF.image.container_formats[:2]
+ container_format_alt) = CONF.image.container_formats[:2]
cls.disk_format, cls.disk_format_alt = CONF.image.disk_formats[:2]
if cls.container_format in a_formats:
cls.disk_format = cls.container_format
- if cls.container_format_alt in a_formats:
- cls.disk_format_alt = cls.container_format_alt
+ if container_format_alt in a_formats:
+ cls.disk_format_alt = container_format_alt
img1 = cls._create_remote_image('one', cls.container_format,
cls.disk_format)
- img2 = cls._create_remote_image('two', cls.container_format_alt,
+ img2 = cls._create_remote_image('two', container_format_alt,
cls.disk_format_alt)
img3 = cls._create_remote_image('dup', cls.container_format,
cls.disk_format)
img4 = cls._create_remote_image('dup', cls.container_format,
cls.disk_format)
- img5 = cls._create_standard_image('1', cls.container_format_alt,
+ img5 = cls._create_standard_image('1', container_format_alt,
cls.disk_format_alt, 42)
- img6 = cls._create_standard_image('2', cls.container_format_alt,
+ img6 = cls._create_standard_image('2', container_format_alt,
cls.disk_format_alt, 142)
img7 = cls._create_standard_image('33', cls.container_format,
cls.disk_format, 142)
diff --git a/tempest/api/image/v2/test_images.py b/tempest/api/image/v2/test_images.py
index 56b3517..2812c68 100644
--- a/tempest/api/image/v2/test_images.py
+++ b/tempest/api/image/v2/test_images.py
@@ -164,7 +164,6 @@
cls.client.store_image_file(image['id'], data=image_file)
# Keep the data of one test image so it can be used to filter lists
cls.test_data = image
- cls.test_data['size'] = size
return image['id']
diff --git a/tempest/api/network/admin/test_dhcp_agent_scheduler.py b/tempest/api/network/admin/test_dhcp_agent_scheduler.py
index 868771f..485c8f5 100644
--- a/tempest/api/network/admin/test_dhcp_agent_scheduler.py
+++ b/tempest/api/network/admin/test_dhcp_agent_scheduler.py
@@ -32,7 +32,7 @@
# Create a network and make sure it will be hosted by a
# dhcp agent: this is done by creating a regular port
cls.network = cls.create_network()
- cls.subnet = cls.create_subnet(cls.network)
+ cls.create_subnet(cls.network)
cls.port = cls.create_port(cls.network)
@decorators.idempotent_id('5032b1fe-eb42-4a64-8f3b-6e189d8b5c7d')
diff --git a/tempest/api/network/admin/test_external_networks_negative.py b/tempest/api/network/admin/test_external_networks_negative.py
index 743089a..770d91f 100644
--- a/tempest/api/network/admin/test_external_networks_negative.py
+++ b/tempest/api/network/admin/test_external_networks_negative.py
@@ -12,6 +12,7 @@
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
+import testtools
from tempest.api.network import base
from tempest import config
@@ -27,6 +28,8 @@
@test.attr(type=['negative'])
@decorators.idempotent_id('d402ae6c-0be0-4d8e-833b-a738895d98d0')
+ @testtools.skipUnless(CONF.network.public_network_id,
+ 'The public_network_id option must be specified.')
def test_create_port_with_precreated_floatingip_as_fixed_ip(self):
# NOTE: External networks can be used to create both floating-ip as
# well as instance-ip. So, creating an instance-ip with a value of a
diff --git a/tempest/api/network/admin/test_floating_ips_admin_actions.py b/tempest/api/network/admin/test_floating_ips_admin_actions.py
index c36323a..9a17817 100644
--- a/tempest/api/network/admin/test_floating_ips_admin_actions.py
+++ b/tempest/api/network/admin/test_floating_ips_admin_actions.py
@@ -31,6 +31,9 @@
if not test.is_extension_enabled('router', 'network'):
msg = "router extension not enabled."
raise cls.skipException(msg)
+ if not CONF.network.public_network_id:
+ msg = "The public_network_id option must be specified."
+ raise cls.skipException(msg)
@classmethod
def setup_clients(cls):
@@ -43,9 +46,9 @@
cls.ext_net_id = CONF.network.public_network_id
cls.floating_ip = cls.create_floatingip(cls.ext_net_id)
cls.network = cls.create_network()
- cls.subnet = cls.create_subnet(cls.network)
- cls.router = cls.create_router(external_network_id=cls.ext_net_id)
- cls.create_router_interface(cls.router['id'], cls.subnet['id'])
+ subnet = cls.create_subnet(cls.network)
+ router = cls.create_router(external_network_id=cls.ext_net_id)
+ cls.create_router_interface(router['id'], subnet['id'])
cls.port = cls.create_port(cls.network)
@decorators.idempotent_id('64f2100b-5471-4ded-b46c-ddeeeb4f231b')
diff --git a/tempest/api/network/admin/test_l3_agent_scheduler.py b/tempest/api/network/admin/test_l3_agent_scheduler.py
index 5a54ae0..e1970b9 100644
--- a/tempest/api/network/admin/test_l3_agent_scheduler.py
+++ b/tempest/api/network/admin/test_l3_agent_scheduler.py
@@ -79,7 +79,7 @@
cls.router['id'])['router'].get('distributed', False)
if cls.is_dvr_router:
cls.network = cls.create_network()
- cls.subnet = cls.create_subnet(cls.network)
+ cls.create_subnet(cls.network)
cls.port = cls.create_port(cls.network)
cls.routers_client.add_router_interface(
cls.router['id'], port_id=cls.port['id'])
diff --git a/tempest/api/network/admin/test_negative_quotas.py b/tempest/api/network/admin/test_negative_quotas.py
index 435e672..2c639da 100644
--- a/tempest/api/network/admin/test_negative_quotas.py
+++ b/tempest/api/network/admin/test_negative_quotas.py
@@ -39,6 +39,7 @@
msg = "quotas extension not enabled."
raise cls.skipException(msg)
+ @test.attr(type=['negative'])
@decorators.idempotent_id('644f4e1b-1bf9-4af0-9fd8-eb56ac0f51cf')
def test_network_quota_exceeding(self):
# Set the network quota to two
diff --git a/tempest/api/network/test_dhcp_ipv6.py b/tempest/api/network/test_dhcp_ipv6.py
index fa4010d..136f9e6 100644
--- a/tempest/api/network/test_dhcp_ipv6.py
+++ b/tempest/api/network/test_dhcp_ipv6.py
@@ -13,9 +13,10 @@
# License for the specific language governing permissions and limitations
# under the License.
-import netaddr
import random
+import netaddr
+
from tempest.api.network import base
from tempest.common.utils import data_utils
from tempest.common.utils import net_info
diff --git a/tempest/api/network/test_extra_dhcp_options.py b/tempest/api/network/test_extra_dhcp_options.py
index 52507f9..1156275 100644
--- a/tempest/api/network/test_extra_dhcp_options.py
+++ b/tempest/api/network/test_extra_dhcp_options.py
@@ -43,16 +43,16 @@
def resource_setup(cls):
super(ExtraDHCPOptionsTestJSON, cls).resource_setup()
cls.network = cls.create_network()
- cls.subnet = cls.create_subnet(cls.network)
+ cls.create_subnet(cls.network)
cls.port = cls.create_port(cls.network)
- cls.ip_tftp = ('123.123.123.123' if cls._ip_version == 4
- else '2015::dead')
- cls.ip_server = ('123.123.123.45' if cls._ip_version == 4
- else '2015::badd')
+ ip_tftp = ('123.123.123.123' if cls._ip_version == 4
+ else '2015::dead')
+ ip_server = ('123.123.123.45' if cls._ip_version == 4
+ else '2015::badd')
cls.extra_dhcp_opts = [
{'opt_value': 'pxelinux.0', 'opt_name': 'bootfile-name'},
- {'opt_value': cls.ip_tftp, 'opt_name': 'tftp-server'},
- {'opt_value': cls.ip_server, 'opt_name': 'server-ip-address'}
+ {'opt_value': ip_tftp, 'opt_name': 'tftp-server'},
+ {'opt_value': ip_server, 'opt_name': 'server-ip-address'}
]
@decorators.idempotent_id('d2c17063-3767-4a24-be4f-a23dbfa133c9')
diff --git a/tempest/api/network/test_floating_ips.py b/tempest/api/network/test_floating_ips.py
index 23614d6..1dc574b 100644
--- a/tempest/api/network/test_floating_ips.py
+++ b/tempest/api/network/test_floating_ips.py
@@ -46,6 +46,9 @@
if not test.is_extension_enabled('router', 'network'):
msg = "router extension not enabled."
raise cls.skipException(msg)
+ if not CONF.network.public_network_id:
+ msg = "The public_network_id option must be specified."
+ raise cls.skipException(msg)
@classmethod
def resource_setup(cls):
diff --git a/tempest/api/network/test_floating_ips_negative.py b/tempest/api/network/test_floating_ips_negative.py
index 9ccda05..cb29d3d 100644
--- a/tempest/api/network/test_floating_ips_negative.py
+++ b/tempest/api/network/test_floating_ips_negative.py
@@ -37,6 +37,9 @@
if not test.is_extension_enabled('router', 'network'):
msg = "router extension not enabled."
raise cls.skipException(msg)
+ if not CONF.network.public_network_id:
+ msg = "The public_network_id option must be specified."
+ raise cls.skipException(msg)
@classmethod
def resource_setup(cls):
@@ -44,9 +47,9 @@
cls.ext_net_id = CONF.network.public_network_id
# Create a network with a subnet connected to a router.
cls.network = cls.create_network()
- cls.subnet = cls.create_subnet(cls.network)
- cls.router = cls.create_router()
- cls.create_router_interface(cls.router['id'], cls.subnet['id'])
+ subnet = cls.create_subnet(cls.network)
+ router = cls.create_router()
+ cls.create_router_interface(router['id'], subnet['id'])
cls.port = cls.create_port(cls.network)
@test.attr(type=['negative'])
diff --git a/tempest/api/network/test_networks.py b/tempest/api/network/test_networks.py
index 1426798..69d4ebe 100644
--- a/tempest/api/network/test_networks.py
+++ b/tempest/api/network/test_networks.py
@@ -34,7 +34,6 @@
def resource_setup(cls):
super(BaseNetworkTestResources, cls).resource_setup()
cls.network = cls.create_network()
- cls.name = cls.network['name']
cls.subnet = cls._create_subnet_with_last_subnet_block(cls.network,
cls._ip_version)
cls._subnet_data = {6: {'gateway':
diff --git a/tempest/api/network/test_routers.py b/tempest/api/network/test_routers.py
index 524ab9e..694b86b 100644
--- a/tempest/api/network/test_routers.py
+++ b/tempest/api/network/test_routers.py
@@ -14,6 +14,7 @@
# under the License.
import netaddr
+import testtools
from tempest.api.network import base_routers as base
from tempest.common.utils import data_utils
@@ -42,6 +43,8 @@
@test.attr(type='smoke')
@decorators.idempotent_id('f64403e2-8483-4b34-8ccd-b09a87bcc68c')
+ @testtools.skipUnless(CONF.network.public_network_id,
+ 'The public_network_id option must be specified.')
def test_create_show_list_update_delete_router(self):
# Create a router
router = self._create_router(
@@ -89,6 +92,8 @@
@decorators.idempotent_id('847257cc-6afd-4154-b8fb-af49f5670ce8')
@test.requires_ext(extension='ext-gw-mode', service='network')
+ @testtools.skipUnless(CONF.network.public_network_id,
+ 'The public_network_id option must be specified.')
def test_create_router_with_default_snat_value(self):
# Create a router with default snat rule
router = self._create_router(
@@ -99,6 +104,8 @@
@decorators.idempotent_id('ea74068d-09e9-4fd7-8995-9b6a1ace920f')
@test.requires_ext(extension='ext-gw-mode', service='network')
+ @testtools.skipUnless(CONF.network.public_network_id,
+ 'The public_network_id option must be specified.')
def test_create_router_with_snat_explicit(self):
name = data_utils.rand_name('snat-router')
# Create a router enabling snat attributes
@@ -184,6 +191,8 @@
self.assertIn(subnet_id, public_subnet_ids)
@decorators.idempotent_id('6cc285d8-46bf-4f36-9b1a-783e3008ba79')
+ @testtools.skipUnless(CONF.network.public_network_id,
+ 'The public_network_id option must be specified.')
def test_update_router_set_gateway(self):
router = self._create_router()
self.routers_client.update_router(
@@ -198,6 +207,8 @@
@decorators.idempotent_id('b386c111-3b21-466d-880c-5e72b01e1a33')
@test.requires_ext(extension='ext-gw-mode', service='network')
+ @testtools.skipUnless(CONF.network.public_network_id,
+ 'The public_network_id option must be specified.')
def test_update_router_set_gateway_with_snat_explicit(self):
router = self._create_router()
self.admin_routers_client.update_router(
@@ -213,6 +224,8 @@
@decorators.idempotent_id('96536bc7-8262-4fb2-9967-5c46940fa279')
@test.requires_ext(extension='ext-gw-mode', service='network')
+ @testtools.skipUnless(CONF.network.public_network_id,
+ 'The public_network_id option must be specified.')
def test_update_router_set_gateway_without_snat(self):
router = self._create_router()
self.admin_routers_client.update_router(
@@ -227,6 +240,8 @@
self._verify_gateway_port(router['id'])
@decorators.idempotent_id('ad81b7ee-4f81-407b-a19c-17e623f763e8')
+ @testtools.skipUnless(CONF.network.public_network_id,
+ 'The public_network_id option must be specified.')
def test_update_router_unset_gateway(self):
router = self._create_router(
external_network_id=CONF.network.public_network_id)
@@ -241,6 +256,8 @@
@decorators.idempotent_id('f2faf994-97f4-410b-a831-9bc977b64374')
@test.requires_ext(extension='ext-gw-mode', service='network')
+ @testtools.skipUnless(CONF.network.public_network_id,
+ 'The public_network_id option must be specified.')
def test_update_router_reset_gateway_without_snat(self):
router = self._create_router(
external_network_id=CONF.network.public_network_id)
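
The network hunks above guard external-network tests behind CONF.network.public_network_id in two ways: a per-test testtools.skipUnless decorator and a class-wide check in skip_checks. A condensed sketch of both forms, not taken from the patch, with placeholder names and UUID::

    import testtools

    from tempest.api.network import base
    from tempest import config
    from tempest.lib import decorators

    CONF = config.CONF


    class ExampleNetworkTest(base.BaseNetworkTest):

        @classmethod
        def skip_checks(cls):
            super(ExampleNetworkTest, cls).skip_checks()
            # Class-wide form: skips every test in the class when the
            # option is unset.
            if not CONF.network.public_network_id:
                msg = "The public_network_id option must be specified."
                raise cls.skipException(msg)

        @decorators.idempotent_id('00000000-0000-0000-0000-000000000000')
        @testtools.skipUnless(CONF.network.public_network_id,
                              'The public_network_id option must be specified.')
        def test_example(self):
            # Per-test form: only this test is skipped.
            pass
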
diff --git a/tempest/api/object_storage/base.py b/tempest/api/object_storage/base.py
index e0216fd..642c1ac 100644
--- a/tempest/api/object_storage/base.py
+++ b/tempest/api/object_storage/base.py
@@ -16,8 +16,8 @@
import time
from tempest.common import custom_matchers
+from tempest.common.utils import data_utils
from tempest import config
-from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
from tempest.lib import exceptions as lib_exc
import tempest.test
diff --git a/tempest/api/object_storage/test_account_services.py b/tempest/api/object_storage/test_account_services.py
index d779d42..cf7dbe8 100644
--- a/tempest/api/object_storage/test_account_services.py
+++ b/tempest/api/object_storage/test_account_services.py
@@ -253,6 +253,18 @@
self.assertEqual(True, container.decode(
'utf-8').startswith(prefix))
+ @decorators.idempotent_id('b1811cff-d1ed-4c15-a52e-efd8de41cf34')
+ def test_list_containers_reverse_order(self):
+ # list containers in reverse order
+ _, orig_container_list = self.account_client.list_account_containers()
+
+ params = {'reverse': True}
+ resp, container_list = self.account_client.list_account_containers(
+ params=params)
+ self.assertHeaders(resp, 'Account', 'GET')
+ self.assertEqual(sorted(orig_container_list, reverse=True),
+ container_list)
+
@test.attr(type='smoke')
@decorators.idempotent_id('4894c312-6056-4587-8d6f-86ffbf861f80')
def test_list_account_metadata(self):
@@ -291,7 +303,7 @@
self.account_client.delete_account_metadata(metadata)
@decorators.idempotent_id('9f60348d-c46f-4465-ae06-d51dbd470953')
- def test_update_account_metadata_with_delete_matadata(self):
+ def test_update_account_metadata_with_delete_metadata(self):
# delete metadata from account
metadata = {'test-account-meta1': 'Meta1'}
self.account_client.create_account_metadata(metadata)
@@ -302,7 +314,7 @@
self.assertNotIn('x-account-meta-test-account-meta1', resp)
@decorators.idempotent_id('64fd53f3-adbd-4639-af54-436e4982dbfb')
- def test_update_account_metadata_with_create_matadata_key(self):
+ def test_update_account_metadata_with_create_metadata_key(self):
# if the value of metadata is not set, the metadata is not
# registered at a server
metadata = {'test-account-meta1': ''}
@@ -313,7 +325,7 @@
self.assertNotIn('x-account-meta-test-account-meta1', resp)
@decorators.idempotent_id('d4d884d3-4696-4b85-bc98-4f57c4dd2bf1')
- def test_update_account_metadata_with_delete_matadata_key(self):
+ def test_update_account_metadata_with_delete_metadata_key(self):
# Although the value of metadata is not set, the feature of
# deleting metadata is valid
metadata_1 = {'test-account-meta1': 'Meta1'}
diff --git a/tempest/api/object_storage/test_container_services.py b/tempest/api/object_storage/test_container_services.py
index 4b65584..2e617f3 100644
--- a/tempest/api/object_storage/test_container_services.py
+++ b/tempest/api/object_storage/test_container_services.py
@@ -14,7 +14,7 @@
# under the License.
from tempest.api.object_storage import base
-from tempest.lib.common.utils import data_utils
+from tempest.common.utils import data_utils
from tempest.lib import decorators
from tempest import test
@@ -166,7 +166,7 @@
container_name = self.create_container()
object_name, _ = self.create_object(container_name)
- params = {'end_marker': 'ZzzzObject1234567890'}
+ params = {'end_marker': object_name + 'zzzz'}
resp, object_list = self.container_client.list_container_contents(
container_name,
params=params)
@@ -246,7 +246,8 @@
def test_list_container_contents_with_path(self):
# get container contents list using path param
container_name = self.create_container()
- object_name = data_utils.rand_name(name='Swift/TestObject')
+ object_name = data_utils.rand_name(name='TestObject')
+ object_name = 'Swift/' + object_name
self.create_object(container_name, object_name)
params = {'path': 'Swift'}
diff --git a/tempest/api/object_storage/test_container_services_negative.py b/tempest/api/object_storage/test_container_services_negative.py
index be066ba..ebbb84e 100644
--- a/tempest/api/object_storage/test_container_services_negative.py
+++ b/tempest/api/object_storage/test_container_services_negative.py
@@ -16,8 +16,8 @@
import testtools
from tempest.api.object_storage import base
+from tempest.common.utils import data_utils
from tempest import config
-from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions
from tempest import test
diff --git a/tempest/api/object_storage/test_object_formpost_negative.py b/tempest/api/object_storage/test_object_formpost_negative.py
index 2174940..52b3978 100644
--- a/tempest/api/object_storage/test_object_formpost_negative.py
+++ b/tempest/api/object_storage/test_object_formpost_negative.py
@@ -125,6 +125,7 @@
@decorators.idempotent_id('b277257f-113c-4499-b8d1-5fead79f7360')
@test.requires_ext(extension='formpost', service='object')
+ @test.attr(type=['negative'])
def test_post_object_using_form_invalid_signature(self):
self.key = "Wrong"
body, content_type = self.get_multipart_form()
diff --git a/tempest/api/volume/admin/test_multi_backend.py b/tempest/api/volume/admin/test_multi_backend.py
index 72d71c7..c3e904a 100644
--- a/tempest/api/volume/admin/test_multi_backend.py
+++ b/tempest/api/volume/admin/test_multi_backend.py
@@ -34,19 +34,18 @@
super(VolumeMultiBackendV2Test, cls).resource_setup()
# read backend name from a list .
- cls.backend_names = set(CONF.volume.backend_names)
+ backend_names = set(CONF.volume.backend_names)
cls.name_field = cls.special_fields['name_field']
- cls.volume_type_id_list = []
cls.volume_id_list_with_prefix = []
cls.volume_id_list_without_prefix = []
# Volume/Type creation (uses volume_backend_name)
# It is not allowed to create the same backend name twice
- if len(cls.backend_names) < 2:
+ if len(backend_names) < 2:
raise cls.skipException("Requires at least two different "
"backend names")
- for backend_name in cls.backend_names:
+ for backend_name in backend_names:
# Volume/Type creation (uses backend_name)
cls._create_type_and_volume(backend_name, False)
# Volume/Type creation (uses capabilities:volume_backend_name)
@@ -63,8 +62,8 @@
extra_specs = {spec_key_with_prefix: backend_name_key}
else:
extra_specs = {spec_key_without_prefix: backend_name_key}
- cls.type = cls.create_volume_type(name=type_name,
- extra_specs=extra_specs)
+ cls.create_volume_type(name=type_name,
+ extra_specs=extra_specs)
params = {cls.name_field: vol_name, 'volume_type': type_name,
'size': CONF.volume.volume_size}
@@ -75,8 +74,8 @@
else:
cls.volume_id_list_without_prefix.append(
cls.volume['id'])
- waiters.wait_for_volume_status(cls.admin_volume_client,
- cls.volume['id'], 'available')
+ waiters.wait_for_volume_resource_status(cls.admin_volume_client,
+ cls.volume['id'], 'available')
@classmethod
def resource_cleanup(cls):
diff --git a/tempest/api/volume/admin/test_volume_quotas.py b/tempest/api/volume/admin/test_volume_quotas.py
index 5a83ae3..83fca45 100644
--- a/tempest/api/volume/admin/test_volume_quotas.py
+++ b/tempest/api/volume/admin/test_volume_quotas.py
@@ -114,7 +114,7 @@
volume_default = quota_set_default['volumes']
self.admin_quotas_client.update_quota_set(
- project_id, volumes=(int(volume_default) + 5))
+ project_id, volumes=(volume_default + 5))
self.admin_quotas_client.delete_quota_set(project_id)
quota_set_new = (self.admin_quotas_client.show_quota_set(project_id)
@@ -146,7 +146,7 @@
transfer_id, auth_key=auth_key)['transfer']
# Verify volume transferred is available
- waiters.wait_for_volume_status(
+ waiters.wait_for_volume_resource_status(
self.alt_client, volume['id'], 'available')
# List of tenants quota usage post transfer
diff --git a/tempest/api/volume/admin/test_volume_retype_with_migration.py b/tempest/api/volume/admin/test_volume_retype_with_migration.py
index dc509de..4d32fdd 100644
--- a/tempest/api/volume/admin/test_volume_retype_with_migration.py
+++ b/tempest/api/volume/admin/test_volume_retype_with_migration.py
@@ -40,16 +40,16 @@
def resource_setup(cls):
super(VolumeRetypeWithMigrationV2Test, cls).resource_setup()
# read backend name from a list.
- cls.backend_src = CONF.volume.backend_names[0]
- cls.backend_dst = CONF.volume.backend_names[1]
+ backend_src = CONF.volume.backend_names[0]
+ backend_dst = CONF.volume.backend_names[1]
- extra_specs_src = {"volume_backend_name": cls.backend_src}
- extra_specs_dst = {"volume_backend_name": cls.backend_dst}
+ extra_specs_src = {"volume_backend_name": backend_src}
+ extra_specs_dst = {"volume_backend_name": backend_dst}
- cls.src_vol_type = cls.create_volume_type(extra_specs=extra_specs_src)
+ src_vol_type = cls.create_volume_type(extra_specs=extra_specs_src)
cls.dst_vol_type = cls.create_volume_type(extra_specs=extra_specs_dst)
- cls.src_vol = cls.create_volume(volume_type=cls.src_vol_type['name'])
+ cls.src_vol = cls.create_volume(volume_type=src_vol_type['name'])
@classmethod
def resource_cleanup(cls):
diff --git a/tempest/api/volume/admin/test_volume_types.py b/tempest/api/volume/admin/test_volume_types.py
index 7938604..5d08416 100644
--- a/tempest/api/volume/admin/test_volume_types.py
+++ b/tempest/api/volume/admin/test_volume_types.py
@@ -36,7 +36,7 @@
# Create/update/get/delete volume with volume_type and extra spec.
volume_types = list()
vol_name = data_utils.rand_name(self.__class__.__name__ + '-volume')
- self.name_field = self.special_fields['name_field']
+ name_field = self.special_fields['name_field']
proto = CONF.volume.storage_protocol
vendor = CONF.volume.vendor_name
extra_specs = {"storage_protocol": proto,
@@ -46,26 +46,26 @@
vol_type = self.create_volume_type(
extra_specs=extra_specs)
volume_types.append(vol_type)
- params = {self.name_field: vol_name,
+ params = {name_field: vol_name,
'volume_type': volume_types[0]['id'],
'size': CONF.volume.volume_size}
# Create volume
volume = self.create_volume(**params)
self.assertEqual(volume_types[0]['name'], volume["volume_type"])
- self.assertEqual(volume[self.name_field], vol_name,
+ self.assertEqual(volume[name_field], vol_name,
"The created volume name is not equal "
"to the requested name")
self.assertIsNotNone(volume['id'],
"Field volume id is empty or not found.")
- waiters.wait_for_volume_status(self.volumes_client,
- volume['id'], 'available')
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'available')
# Update volume with new volume_type
self.volumes_client.retype_volume(volume['id'],
new_type=volume_types[1]['id'])
- waiters.wait_for_volume_status(self.volumes_client,
- volume['id'], 'available')
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'available')
# Get volume details and Verify
fetched_volume = self.volumes_client.show_volume(
@@ -74,7 +74,7 @@
fetched_volume['volume_type'],
'The fetched Volume type is different '
'from updated volume type')
- self.assertEqual(vol_name, fetched_volume[self.name_field],
+ self.assertEqual(vol_name, fetched_volume[name_field],
'The fetched Volume is different '
'from the created Volume')
self.assertEqual(volume['id'], fetched_volume['id'],
diff --git a/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py b/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py
index 933b6ad..5f590bc 100644
--- a/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py
+++ b/tempest/api/volume/admin/test_volume_types_extra_specs_negative.py
@@ -17,6 +17,7 @@
from tempest.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
+from tempest import test
class ExtraSpecsNegativeV2Test(base.BaseVolumeAdminTest):
@@ -24,9 +25,10 @@
@classmethod
def resource_setup(cls):
super(ExtraSpecsNegativeV2Test, cls).resource_setup()
- cls.extra_specs = {"spec1": "val1"}
- cls.volume_type = cls.create_volume_type(extra_specs=cls.extra_specs)
+ extra_specs = {"spec1": "val1"}
+ cls.volume_type = cls.create_volume_type(extra_specs=extra_specs)
+ @test.attr(type=['negative'])
@decorators.idempotent_id('08961d20-5cbb-4910-ac0f-89ad6dbb2da1')
def test_update_no_body(self):
# Should not update volume type extra specs with no body
@@ -35,6 +37,7 @@
self.admin_volume_types_client.update_volume_type_extra_specs,
self.volume_type['id'], "spec1", None)
+ @test.attr(type=['negative'])
@decorators.idempotent_id('25e5a0ee-89b3-4c53-8310-236f76c75365')
def test_update_nonexistent_extra_spec_id(self):
# Should not update volume type extra specs with nonexistent id.
@@ -45,6 +48,7 @@
self.volume_type['id'], data_utils.rand_uuid(),
extra_spec)
+ @test.attr(type=['negative'])
@decorators.idempotent_id('9bf7a657-b011-4aec-866d-81c496fbe5c8')
def test_update_none_extra_spec_id(self):
# Should not update volume type extra specs with none id.
@@ -54,6 +58,7 @@
self.admin_volume_types_client.update_volume_type_extra_specs,
self.volume_type['id'], None, extra_spec)
+ @test.attr(type=['negative'])
@decorators.idempotent_id('a77dfda2-9100-448e-9076-ed1711f4bdfc')
def test_update_multiple_extra_spec(self):
# Should not update volume type extra specs with multiple specs as
@@ -65,6 +70,7 @@
self.volume_type['id'], list(extra_spec)[0],
extra_spec)
+ @test.attr(type=['negative'])
@decorators.idempotent_id('49d5472c-a53d-4eab-a4d3-450c4db1c545')
def test_create_nonexistent_type_id(self):
# Should not create volume type extra spec for nonexistent volume
@@ -75,6 +81,7 @@
self.admin_volume_types_client.create_volume_type_extra_specs,
data_utils.rand_uuid(), extra_specs)
+ @test.attr(type=['negative'])
@decorators.idempotent_id('c821bdc8-43a4-4bf4-86c8-82f3858d5f7d')
def test_create_none_body(self):
# Should not create volume type extra spec for none POST body.
@@ -83,6 +90,7 @@
self.admin_volume_types_client.create_volume_type_extra_specs,
self.volume_type['id'], None)
+ @test.attr(type=['negative'])
@decorators.idempotent_id('bc772c71-1ed4-4716-b945-8b5ed0f15e87')
def test_create_invalid_body(self):
# Should not create volume type extra spec for invalid POST body.
@@ -91,6 +99,7 @@
self.admin_volume_types_client.create_volume_type_extra_specs,
self.volume_type['id'], extra_specs=['invalid'])
+ @test.attr(type=['negative'])
@decorators.idempotent_id('031cda8b-7d23-4246-8bf6-bbe73fd67074')
def test_delete_nonexistent_volume_type_id(self):
# Should not delete volume type extra spec for nonexistent
@@ -100,6 +109,7 @@
self.admin_volume_types_client.delete_volume_type_extra_specs,
data_utils.rand_uuid(), "spec1")
+ @test.attr(type=['negative'])
@decorators.idempotent_id('dee5cf0c-cdd6-4353-b70c-e847050d71fb')
def test_list_nonexistent_volume_type_id(self):
# Should not list volume type extra spec for nonexistent type id.
@@ -108,6 +118,7 @@
self.admin_volume_types_client.list_volume_types_extra_specs,
data_utils.rand_uuid())
+ @test.attr(type=['negative'])
@decorators.idempotent_id('9f402cbd-1838-4eb4-9554-126a6b1908c9')
def test_get_nonexistent_volume_type_id(self):
# Should not get volume type extra spec for nonexistent type id.
@@ -116,6 +127,7 @@
self.admin_volume_types_client.show_volume_type_extra_specs,
data_utils.rand_uuid(), "spec1")
+ @test.attr(type=['negative'])
@decorators.idempotent_id('c881797d-12ff-4f1a-b09d-9f6212159753')
def test_get_nonexistent_extra_spec_id(self):
# Should not get volume type extra spec for nonexistent extra spec
diff --git a/tempest/api/volume/admin/test_volume_types_negative.py b/tempest/api/volume/admin/test_volume_types_negative.py
index b278127..69e9cc0 100644
--- a/tempest/api/volume/admin/test_volume_types_negative.py
+++ b/tempest/api/volume/admin/test_volume_types_negative.py
@@ -17,19 +17,22 @@
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
+from tempest import test
class VolumeTypesNegativeV2Test(base.BaseVolumeAdminTest):
+ @test.attr(type=['negative'])
@decorators.idempotent_id('b48c98f2-e662-4885-9b71-032256906314')
def test_create_with_nonexistent_volume_type(self):
# Should not be able to create volume with nonexistent volume_type.
- self.name_field = self.special_fields['name_field']
- params = {self.name_field: data_utils.rand_uuid(),
+ name_field = self.special_fields['name_field']
+ params = {name_field: data_utils.rand_uuid(),
'volume_type': data_utils.rand_uuid()}
self.assertRaises(lib_exc.NotFound,
self.volumes_client.create_volume, **params)
+ @test.attr(type=['negative'])
@decorators.idempotent_id('878b4e57-faa2-4659-b0d1-ce740a06ae81')
def test_create_with_empty_name(self):
# Should not be able to create volume type with an empty name.
@@ -37,6 +40,7 @@
lib_exc.BadRequest,
self.admin_volume_types_client.create_volume_type, name='')
+ @test.attr(type=['negative'])
@decorators.idempotent_id('994610d6-0476-4018-a644-a2602ef5d4aa')
def test_get_nonexistent_type_id(self):
# Should not be able to get volume type with nonexistent type id.
@@ -44,6 +48,7 @@
self.admin_volume_types_client.show_volume_type,
data_utils.rand_uuid())
+ @test.attr(type=['negative'])
@decorators.idempotent_id('6b3926d2-7d73-4896-bc3d-e42dfd11a9f6')
def test_delete_nonexistent_type_id(self):
# Should not be able to delete volume type with nonexistent type id.
@@ -51,6 +56,7 @@
self.admin_volume_types_client.delete_volume_type,
data_utils.rand_uuid())
+ @test.attr(type=['negative'])
@decorators.idempotent_id('8c09f849-f225-4d78-ba87-bffd9a5e0c6f')
def test_create_volume_with_private_volume_type(self):
# Should not be able to create volume with private volume type.
diff --git a/tempest/api/volume/admin/test_volumes_backup.py b/tempest/api/volume/admin/test_volumes_backup.py
index 04d27ea..13b7384 100644
--- a/tempest/api/volume/admin/test_volumes_backup.py
+++ b/tempest/api/volume/admin/test_volumes_backup.py
@@ -94,8 +94,9 @@
self.addCleanup(self._delete_backup, new_id)
self.assertIn("id", import_backup)
self.assertEqual(new_id, import_backup['id'])
- waiters.wait_for_backup_status(self.admin_backups_client,
- import_backup['id'], 'available')
+ waiters.wait_for_volume_resource_status(self.admin_backups_client,
+ import_backup['id'],
+ 'available')
# Verify Import Backup
backups = self.admin_backups_client.list_backups(
@@ -108,14 +109,16 @@
self.addCleanup(self.admin_volume_client.delete_volume,
restore['volume_id'])
self.assertEqual(backup['id'], restore['backup_id'])
- waiters.wait_for_volume_status(self.admin_volume_client,
- restore['volume_id'], 'available')
+ waiters.wait_for_volume_resource_status(self.admin_volume_client,
+ restore['volume_id'],
+ 'available')
# Verify if restored volume is there in volume list
volumes = self.admin_volume_client.list_volumes()['volumes']
self.assertIn(restore['volume_id'], [v['id'] for v in volumes])
- waiters.wait_for_backup_status(self.admin_backups_client,
- import_backup['id'], 'available')
+ waiters.wait_for_volume_resource_status(self.admin_backups_client,
+ import_backup['id'],
+ 'available')
@decorators.idempotent_id('47a35425-a891-4e13-961c-c45deea21e94')
def test_volume_backup_reset_status(self):
@@ -131,8 +134,8 @@
# Reset backup status to error
self.admin_backups_client.reset_backup_status(backup_id=backup['id'],
status="error")
- waiters.wait_for_backup_status(self.admin_backups_client,
- backup['id'], 'error')
+ waiters.wait_for_volume_resource_status(self.admin_backups_client,
+ backup['id'], 'error')
class VolumesBackupsAdminV1Test(VolumesBackupsAdminV2Test):
diff --git a/tempest/api/volume/admin/v2/test_snapshot_manage.py b/tempest/api/volume/admin/v2/test_snapshot_manage.py
index 1114924..e8bd477 100644
--- a/tempest/api/volume/admin/v2/test_snapshot_manage.py
+++ b/tempest/api/volume/admin/v2/test_snapshot_manage.py
@@ -61,13 +61,13 @@
new_snapshot = self.admin_snapshot_manage_client.manage_snapshot(
volume_id=volume['id'],
ref={'source-name': snapshot_ref})['snapshot']
- self.addCleanup(self.delete_snapshot,
- self.admin_snapshots_client, new_snapshot['id'])
+ self.addCleanup(self.delete_snapshot, new_snapshot['id'],
+ self.admin_snapshots_client)
# Wait for the snapshot to be available after manage operation
- waiters.wait_for_snapshot_status(self.admin_snapshots_client,
- new_snapshot['id'],
- 'available')
+ waiters.wait_for_volume_resource_status(self.admin_snapshots_client,
+ new_snapshot['id'],
+ 'available')
# Verify the managed snapshot has the expected parent volume
self.assertEqual(new_snapshot['volume_id'], volume['id'])
diff --git a/tempest/api/volume/admin/v2/test_volumes_list.py b/tempest/api/volume/admin/v2/test_volumes_list.py
index b0a37fb..6bab373 100644
--- a/tempest/api/volume/admin/v2/test_volumes_list.py
+++ b/tempest/api/volume/admin/v2/test_volumes_list.py
@@ -45,8 +45,8 @@
# Create a volume in admin tenant
adm_vol = self.admin_volume_client.create_volume(
size=CONF.volume.volume_size)['volume']
- waiters.wait_for_volume_status(self.admin_volume_client,
- adm_vol['id'], 'available')
+ waiters.wait_for_volume_resource_status(self.admin_volume_client,
+ adm_vol['id'], 'available')
self.addCleanup(self.admin_volume_client.delete_volume, adm_vol['id'])
params = {'all_tenants': 1,
'project_id': self.volumes_client.tenant_id}
diff --git a/tempest/api/volume/base.py b/tempest/api/volume/base.py
index 98e050e..fd10fb3 100644
--- a/tempest/api/volume/base.py
+++ b/tempest/api/volume/base.py
@@ -131,8 +131,8 @@
volume = cls.volumes_client.create_volume(**kwargs)['volume']
cls.volumes.append(volume)
- waiters.wait_for_volume_status(cls.volumes_client, volume['id'],
- wait_until)
+ waiters.wait_for_volume_resource_status(cls.volumes_client,
+ volume['id'], wait_until)
return volume
@classmethod
@@ -145,9 +145,9 @@
snapshot = cls.snapshots_client.create_snapshot(
volume_id=volume_id, **kwargs)['snapshot']
- cls.snapshots.append(snapshot)
- waiters.wait_for_snapshot_status(cls.snapshots_client,
- snapshot['id'], 'available')
+ cls.snapshots.append(snapshot['id'])
+ waiters.wait_for_volume_resource_status(cls.snapshots_client,
+ snapshot['id'], 'available')
return snapshot
def create_backup(self, volume_id, backup_client=None, **kwargs):
@@ -158,8 +158,8 @@
backup = backup_client.create_backup(
volume_id=volume_id, **kwargs)['backup']
self.addCleanup(backup_client.delete_backup, backup['id'])
- waiters.wait_for_backup_status(backup_client, backup['id'],
- 'available')
+ waiters.wait_for_volume_resource_status(backup_client, backup['id'],
+ 'available')
return backup
# NOTE(afazekas): these create_* and clean_* could be defined
@@ -171,21 +171,24 @@
client.delete_volume(volume_id)
client.wait_for_resource_deletion(volume_id)
- @staticmethod
- def delete_snapshot(client, snapshot_id):
+ def delete_snapshot(self, snapshot_id, snapshots_client=None):
"""Delete snapshot by the given client"""
- client.delete_snapshot(snapshot_id)
- client.wait_for_resource_deletion(snapshot_id)
+ if snapshots_client is None:
+ snapshots_client = self.snapshots_client
+ snapshots_client.delete_snapshot(snapshot_id)
+ snapshots_client.wait_for_resource_deletion(snapshot_id)
+ if snapshot_id in self.snapshots:
+ self.snapshots.remove(snapshot_id)
def attach_volume(self, server_id, volume_id):
"""Attach a volume to a server"""
self.servers_client.attach_volume(
server_id, volumeId=volume_id,
device='/dev/%s' % CONF.compute.volume_device_name)
- waiters.wait_for_volume_status(self.volumes_client,
- volume_id, 'in-use')
- self.addCleanup(waiters.wait_for_volume_status, self.volumes_client,
- volume_id, 'available')
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume_id, 'in-use')
+ self.addCleanup(waiters.wait_for_volume_resource_status,
+ self.volumes_client, volume_id, 'available')
self.addCleanup(self.servers_client.detach_volume, server_id,
volume_id)
@@ -207,12 +210,12 @@
def clear_snapshots(cls):
for snapshot in cls.snapshots:
test_utils.call_and_ignore_notfound_exc(
- cls.snapshots_client.delete_snapshot, snapshot['id'])
+ cls.snapshots_client.delete_snapshot, snapshot)
for snapshot in cls.snapshots:
test_utils.call_and_ignore_notfound_exc(
cls.snapshots_client.wait_for_resource_deletion,
- snapshot['id'])
+ snapshot)
def create_server(self, **kwargs):
name = kwargs.pop(
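
The volume hunks above converge on a single waiter, waiters.wait_for_volume_resource_status, in place of the per-resource wait_for_volume_status, wait_for_snapshot_status and wait_for_backup_status helpers, and delete_snapshot becomes an instance method keyed by snapshot id. A minimal usage sketch, not part of the patch, with a placeholder class name and UUID::

    from tempest.api.volume import base
    from tempest.common import waiters
    from tempest.lib import decorators


    class ExampleVolumeTest(base.BaseVolumeTest):
        """Hypothetical test showing the consolidated waiter call."""

        @decorators.idempotent_id('00000000-0000-0000-0000-000000000000')
        def test_example(self):
            volume = self.create_volume()
            snapshot = self.create_snapshot(volume['id'])
            # The consolidated waiter takes (client, resource id, status) and
            # serves volumes, snapshots and backups alike; create_snapshot in
            # the base class already performs this same wait internally.
            waiters.wait_for_volume_resource_status(
                self.snapshots_client, snapshot['id'], 'available')
            # delete_snapshot now defaults to self.snapshots_client and
            # removes the id from the class-level tracking list.
            self.delete_snapshot(snapshot['id'])
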
diff --git a/tempest/api/volume/test_volume_transfers.py b/tempest/api/volume/test_volume_transfers.py
index 8d94cd2..9f63b14 100644
--- a/tempest/api/volume/test_volume_transfers.py
+++ b/tempest/api/volume/test_volume_transfers.py
@@ -30,7 +30,6 @@
cls.client = cls.volumes_client
cls.alt_client = cls.os_alt.volumes_client
- cls.alt_tenant_id = cls.alt_client.tenant_id
cls.adm_client = cls.os_adm.volumes_client
@decorators.idempotent_id('4d75b645-a478-48b1-97c8-503f64242f1a')
@@ -44,8 +43,8 @@
volume_id=volume['id'])['transfer']
transfer_id = transfer['id']
auth_key = transfer['auth_key']
- waiters.wait_for_volume_status(self.client,
- volume['id'], 'awaiting-transfer')
+ waiters.wait_for_volume_resource_status(
+ self.client, volume['id'], 'awaiting-transfer')
# Get a volume transfer
body = self.client.show_volume_transfer(transfer_id)['transfer']
@@ -59,8 +58,8 @@
# Accept a volume transfer by alt_tenant
body = self.alt_client.accept_volume_transfer(
transfer_id, auth_key=auth_key)['transfer']
- waiters.wait_for_volume_status(self.alt_client,
- volume['id'], 'available')
+ waiters.wait_for_volume_resource_status(self.alt_client,
+ volume['id'], 'available')
@decorators.idempotent_id('ab526943-b725-4c07-b875-8e8ef87a2c30')
def test_create_list_delete_volume_transfer(self):
@@ -72,8 +71,8 @@
body = self.client.create_volume_transfer(
volume_id=volume['id'])['transfer']
transfer_id = body['id']
- waiters.wait_for_volume_status(self.client,
- volume['id'], 'awaiting-transfer')
+ waiters.wait_for_volume_resource_status(
+ self.client, volume['id'], 'awaiting-transfer')
# List all volume transfers (looking for the one we created)
body = self.client.list_volume_transfers()['transfers']
@@ -85,7 +84,8 @@
# Delete a volume transfer
self.client.delete_volume_transfer(transfer_id)
- waiters.wait_for_volume_status(self.client, volume['id'], 'available')
+ waiters.wait_for_volume_resource_status(
+ self.client, volume['id'], 'available')
class VolumesV1TransfersTest(VolumesV2TransfersTest):
diff --git a/tempest/api/volume/test_volumes_actions.py b/tempest/api/volume/test_volumes_actions.py
index c0cc74d..0a6901c 100644
--- a/tempest/api/volume/test_volumes_actions.py
+++ b/tempest/api/volume/test_volumes_actions.py
@@ -60,11 +60,11 @@
instance_uuid=server['id'],
mountpoint='/dev/%s' %
CONF.compute.volume_device_name)
- waiters.wait_for_volume_status(self.client,
- self.volume['id'], 'in-use')
+ waiters.wait_for_volume_resource_status(self.client,
+ self.volume['id'], 'in-use')
self.client.detach_volume(self.volume['id'])
- waiters.wait_for_volume_status(self.client,
- self.volume['id'], 'available')
+ waiters.wait_for_volume_resource_status(self.client,
+ self.volume['id'], 'available')
@decorators.idempotent_id('63e21b4c-0a0c-41f6-bfc3-7c2816815599')
def test_volume_bootable(self):
@@ -91,11 +91,10 @@
instance_uuid=server['id'],
mountpoint='/dev/%s' %
CONF.compute.volume_device_name)
- waiters.wait_for_volume_status(self.client,
- self.volume['id'], 'in-use')
- self.addCleanup(waiters.wait_for_volume_status, self.client,
- self.volume['id'],
- 'available')
+ waiters.wait_for_volume_resource_status(self.client, self.volume['id'],
+ 'in-use')
+ self.addCleanup(waiters.wait_for_volume_resource_status, self.client,
+ self.volume['id'], 'available')
self.addCleanup(self.client.detach_volume, self.volume['id'])
volume = self.client.show_volume(self.volume['id'])['volume']
self.assertIn('attachments', volume)
@@ -124,8 +123,8 @@
self.image_client.delete_image,
image_id)
waiters.wait_for_image_status(self.image_client, image_id, 'active')
- waiters.wait_for_volume_status(self.client,
- self.volume['id'], 'available')
+ waiters.wait_for_volume_resource_status(self.client,
+ self.volume['id'], 'available')
@decorators.idempotent_id('92c4ef64-51b2-40c0-9f7e-4749fbaaba33')
def test_reserve_unreserve_volume(self):
diff --git a/tempest/api/volume/test_volumes_backup.py b/tempest/api/volume/test_volumes_backup.py
index 939f1ac..e664ff7 100644
--- a/tempest/api/volume/test_volumes_backup.py
+++ b/tempest/api/volume/test_volumes_backup.py
@@ -40,11 +40,11 @@
self.addCleanup(self.volumes_client.delete_volume,
restored_volume['volume_id'])
self.assertEqual(backup_id, restored_volume['backup_id'])
- waiters.wait_for_backup_status(self.backups_client,
- backup_id, 'available')
- waiters.wait_for_volume_status(self.volumes_client,
- restored_volume['volume_id'],
- 'available')
+ waiters.wait_for_volume_resource_status(self.backups_client,
+ backup_id, 'available')
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ restored_volume['volume_id'],
+ 'available')
return restored_volume
@decorators.idempotent_id('a66eb488-8ee1-47d4-8e9f-575a095728c6')
@@ -60,8 +60,8 @@
name=backup_name,
description=description)
self.assertEqual(backup_name, backup['name'])
- waiters.wait_for_volume_status(self.volumes_client,
- volume['id'], 'available')
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'available')
# Get a given backup
backup = self.backups_client.show_backup(backup['id'])['backup']
diff --git a/tempest/api/volume/test_volumes_clone.py b/tempest/api/volume/test_volumes_clone.py
index 79a1a0a..d653808 100644
--- a/tempest/api/volume/test_volumes_clone.py
+++ b/tempest/api/volume/test_volumes_clone.py
@@ -43,7 +43,7 @@
volume = self.volumes_client.show_volume(dst_vol['id'])['volume']
# Should allow
self.assertEqual(volume['source_volid'], src_vol['id'])
- self.assertEqual(int(volume['size']), src_size + 1)
+ self.assertEqual(volume['size'], src_size + 1)
@decorators.idempotent_id('cbbcd7c6-5a6c-481a-97ac-ca55ab715d16')
def test_create_from_bootable_volume(self):
diff --git a/tempest/api/volume/test_volumes_clone_negative.py b/tempest/api/volume/test_volumes_clone_negative.py
index fa827cd..5331243 100644
--- a/tempest/api/volume/test_volumes_clone_negative.py
+++ b/tempest/api/volume/test_volumes_clone_negative.py
@@ -17,6 +17,7 @@
from tempest import config
from tempest.lib import decorators
from tempest.lib import exceptions
+from tempest import test
CONF = config.CONF
@@ -29,6 +30,7 @@
if not CONF.volume_feature_enabled.clone:
raise cls.skipException("Cinder volume clones are disabled")
+ @test.attr(type=['negative'])
@decorators.idempotent_id('9adae371-a257-43a5-459a-dc7c88e66e0e')
def test_create_from_volume_decreasing_size(self):
# Creates a volume from another volume passing a size different from
diff --git a/tempest/api/volume/test_volumes_extend.py b/tempest/api/volume/test_volumes_extend.py
index 20118df..3df9b00 100644
--- a/tempest/api/volume/test_volumes_extend.py
+++ b/tempest/api/volume/test_volumes_extend.py
@@ -23,14 +23,14 @@
@decorators.idempotent_id('9a36df71-a257-43a5-9555-dc7c88e66e0e')
def test_volume_extend(self):
# Extend Volume Test.
- self.volume = self.create_volume()
- extend_size = int(self.volume['size']) + 1
- self.volumes_client.extend_volume(self.volume['id'],
+ volume = self.create_volume()
+ extend_size = volume['size'] + 1
+ self.volumes_client.extend_volume(volume['id'],
new_size=extend_size)
- waiters.wait_for_volume_status(self.volumes_client,
- self.volume['id'], 'available')
- volume = self.volumes_client.show_volume(self.volume['id'])['volume']
- self.assertEqual(int(volume['size']), extend_size)
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'available')
+ volume = self.volumes_client.show_volume(volume['id'])['volume']
+ self.assertEqual(volume['size'], extend_size)
class VolumesV1ExtendTest(VolumesV2ExtendTest):
diff --git a/tempest/api/volume/test_volumes_get.py b/tempest/api/volume/test_volumes_get.py
index d1a1c2f..a3e46a8 100644
--- a/tempest/api/volume/test_volumes_get.py
+++ b/tempest/api/volume/test_volumes_get.py
@@ -41,8 +41,8 @@
volume = self.volumes_client.create_volume(**kwargs)['volume']
self.assertIn('id', volume)
self.addCleanup(self.delete_volume, self.volumes_client, volume['id'])
- waiters.wait_for_volume_status(self.volumes_client, volume['id'],
- 'available')
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'available')
self.assertIn(name_field, volume)
self.assertEqual(volume[name_field], v_name,
"The created volume name is not equal "
@@ -106,8 +106,8 @@
self.assertIn('id', new_volume)
self.addCleanup(self.delete_volume, self.volumes_client,
new_volume['id'])
- waiters.wait_for_volume_status(self.volumes_client,
- new_volume['id'], 'available')
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ new_volume['id'], 'available')
params = {name_field: volume[name_field],
descrip_field: volume[descrip_field]}
diff --git a/tempest/api/volume/test_volumes_negative.py b/tempest/api/volume/test_volumes_negative.py
index 0a095a9..28e65ed 100644
--- a/tempest/api/volume/test_volumes_negative.py
+++ b/tempest/api/volume/test_volumes_negative.py
@@ -221,7 +221,7 @@
@decorators.idempotent_id('8f05a943-013c-4063-ac71-7baf561e82eb')
def test_volume_extend_with_nonexistent_volume_id(self):
# Extend volume size when volume is nonexistent.
- extend_size = int(self.volume['size']) + 1
+ extend_size = self.volume['size'] + 1
self.assertRaises(lib_exc.NotFound, self.volumes_client.extend_volume,
data_utils.rand_uuid(), new_size=extend_size)
@@ -229,7 +229,7 @@
@decorators.idempotent_id('aff8ba64-6d6f-4f2e-bc33-41a08ee9f115')
def test_volume_extend_without_passing_volume_id(self):
# Extend volume size when passing volume id is None.
- extend_size = int(self.volume['size']) + 1
+ extend_size = self.volume['size'] + 1
self.assertRaises(lib_exc.NotFound, self.volumes_client.extend_volume,
None, new_size=extend_size)
diff --git a/tempest/api/volume/test_volumes_snapshots.py b/tempest/api/volume/test_volumes_snapshots.py
index f1ca722..5abda5e 100644
--- a/tempest/api/volume/test_volumes_snapshots.py
+++ b/tempest/api/volume/test_volumes_snapshots.py
@@ -10,6 +10,8 @@
# License for the specific language governing permissions and limitations
# under the License.
+from testtools import matchers
+
from tempest.api.volume import base
from tempest.common.utils import data_utils
from tempest import config
@@ -34,12 +36,6 @@
cls.name_field = cls.special_fields['name_field']
cls.descrip_field = cls.special_fields['descrip_field']
- def cleanup_snapshot(self, snapshot):
- # Delete the snapshot
- self.snapshots_client.delete_snapshot(snapshot['id'])
- self.snapshots_client.wait_for_resource_deletion(snapshot['id'])
- self.snapshots.remove(snapshot)
-
@decorators.idempotent_id('b467b54c-07a4-446d-a1cf-651dedcc3ff1')
@test.services('compute')
def test_snapshot_create_with_volume_in_use(self):
@@ -52,7 +48,7 @@
snapshot = self.create_snapshot(self.volume_origin['id'],
force=True)
# Delete the snapshot
- self.cleanup_snapshot(snapshot)
+ self.delete_snapshot(snapshot['id'])
@decorators.idempotent_id('8567b54c-4455-446d-a1cf-651ddeaa3ff2')
@test.services('compute')
@@ -68,9 +64,9 @@
# Delete the snapshots. Some snapshot implementations can take
# different paths according to order they are deleted.
- self.cleanup_snapshot(snapshot1)
- self.cleanup_snapshot(snapshot3)
- self.cleanup_snapshot(snapshot2)
+ self.delete_snapshot(snapshot1['id'])
+ self.delete_snapshot(snapshot3['id'])
+ self.delete_snapshot(snapshot2['id'])
@decorators.idempotent_id('5210a1de-85a0-11e6-bb21-641c676a5d61')
@test.services('compute')
@@ -89,14 +85,18 @@
# Delete the snapshots. Some snapshot implementations can take
# different paths according to order they are deleted.
- self.cleanup_snapshot(snapshot3)
- self.cleanup_snapshot(snapshot1)
- self.cleanup_snapshot(snapshot2)
+ self.delete_snapshot(snapshot3['id'])
+ self.delete_snapshot(snapshot1['id'])
+ self.delete_snapshot(snapshot2['id'])
@decorators.idempotent_id('2a8abbe4-d871-46db-b049-c41f5af8216e')
def test_snapshot_create_get_list_update_delete(self):
- # Create a snapshot
- snapshot = self.create_snapshot(self.volume_origin['id'])
+ # Create a snapshot with metadata
+ metadata = {"snap-meta1": "value1",
+ "snap-meta2": "value2",
+ "snap-meta3": "value3"}
+ snapshot = self.create_snapshot(self.volume_origin['id'],
+ metadata=metadata)
# Get the snap and check for some of its details
snap_get = self.snapshots_client.show_snapshot(
@@ -105,6 +105,10 @@
snap_get['volume_id'],
"Referred volume origin mismatch")
+ # Verify snapshot metadata
+ self.assertThat(snap_get['metadata'].items(),
+ matchers.ContainsAll(metadata.items()))
+
# Compare also with the output from the list action
tracking_data = (snapshot['id'], snapshot[self.name_field])
snaps_list = self.snapshots_client.list_snapshots()['snapshots']
@@ -129,7 +133,7 @@
self.assertEqual(new_desc, updated_snapshot[self.descrip_field])
# Delete the snapshot
- self.cleanup_snapshot(snapshot)
+ self.delete_snapshot(snapshot['id'])
@decorators.idempotent_id('677863d1-3142-456d-b6ac-9924f667a7f4')
def test_volume_from_snapshot(self):
@@ -153,7 +157,7 @@
volume = self.volumes_client.show_volume(dst_vol['id'])['volume']
# Should allow
self.assertEqual(volume['snapshot_id'], src_snap['id'])
- self.assertEqual(int(volume['size']), src_size + 1)
+ self.assertEqual(volume['size'], src_size + 1)
class VolumesV1SnapshotTestJSON(VolumesV2SnapshotTestJSON):
diff --git a/tempest/api/volume/test_volumes_snapshots_list.py b/tempest/api/volume/test_volumes_snapshots_list.py
index ff390ea..a0eaa00 100644
--- a/tempest/api/volume/test_volumes_snapshots_list.py
+++ b/tempest/api/volume/test_volumes_snapshots_list.py
@@ -28,11 +28,11 @@
@classmethod
def resource_setup(cls):
super(VolumesV2SnapshotListTestJSON, cls).resource_setup()
- cls.volume_origin = cls.create_volume()
+ volume_origin = cls.create_volume()
cls.name_field = cls.special_fields['name_field']
# Create snapshots with params
for _ in range(2):
- cls.snapshot = cls.create_snapshot(cls.volume_origin['id'])
+ cls.snapshot = cls.create_snapshot(volume_origin['id'])
def _list_by_param_values_and_assert(self, with_detail=False, **params):
"""list or list_details with given params and validates result."""
diff --git a/tempest/api/volume/test_volumes_snapshots_negative.py b/tempest/api/volume/test_volumes_snapshots_negative.py
index 1e68848..9e44379 100644
--- a/tempest/api/volume/test_volumes_snapshots_negative.py
+++ b/tempest/api/volume/test_volumes_snapshots_negative.py
@@ -47,6 +47,7 @@
self.snapshots_client.create_snapshot,
volume_id=None, display_name=s_name)
+ @test.attr(type=['negative'])
@decorators.idempotent_id('677863d1-34f9-456d-b6ac-9924f667a7f4')
def test_volume_from_snapshot_decreasing_size(self):
# Creates a volume from a snapshot passing a size different from the source
@@ -61,6 +62,13 @@
size=src_size - 1,
snapshot_id=src_snap['id'])
+ @test.attr(type=['negative'])
+ @decorators.idempotent_id('8fd92339-e22f-4591-86b4-1e2215372a40')
+ def test_list_snapshot_invalid_param_limit(self):
+ self.assertRaises(lib_exc.BadRequest,
+ self.snapshots_client.list_snapshots,
+ limit='invalid')
+
class VolumesV1SnapshotNegativeTestJSON(VolumesV2SnapshotNegativeTestJSON):
_api_version = 1
diff --git a/tempest/api/volume/v2/test_volumes_list.py b/tempest/api/volume/v2/test_volumes_list.py
index 9b17515..d2328c8 100644
--- a/tempest/api/volume/v2/test_volumes_list.py
+++ b/tempest/api/volume/v2/test_volumes_list.py
@@ -15,6 +15,7 @@
# under the License.
import random
+
from six.moves.urllib import parse
from tempest.api.volume import base
@@ -36,13 +37,12 @@
super(VolumesV2ListTestJSON, cls).resource_setup()
# Create 3 test volumes
- cls.metadata = {'Type': 'work'}
# NOTE(zhufl): When using pre-provisioned credentials, the project
# may have volumes other than those created below.
existing_volumes = cls.volumes_client.list_volumes()['volumes']
cls.volume_id_list = [vol['id'] for vol in existing_volumes]
for _ in range(3):
- volume = cls.create_volume(metadata=cls.metadata)
+ volume = cls.create_volume()
cls.volume_id_list.append(volume['id'])
@decorators.idempotent_id('2a7064eb-b9c3-429b-b888-33928fc5edd3')
diff --git a/tempest/api/volume/v2/test_volumes_snapshots_list.py b/tempest/api/volume/v2/test_volumes_snapshots_list.py
index f389b59..d385f65 100644
--- a/tempest/api/volume/v2/test_volumes_snapshots_list.py
+++ b/tempest/api/volume/v2/test_volumes_snapshots_list.py
@@ -15,7 +15,7 @@
from tempest.api.volume import base
from tempest import config
-from tempest import test
+from tempest.lib import decorators
CONF = config.CONF
@@ -33,11 +33,10 @@
super(VolumesV2SnapshotListTestJSON, cls).resource_setup()
cls.snapshot_id_list = []
# Create a volume
- cls.volume_origin = cls.create_volume()
- cls.name_field = cls.special_fields['name_field']
+ volume_origin = cls.create_volume()
# Create 3 snapshots
for _ in range(3):
- snapshot = cls.create_snapshot(cls.volume_origin['id'])
+ snapshot = cls.create_snapshot(volume_origin['id'])
cls.snapshot_id_list.append(snapshot['id'])
def _list_snapshots_param_sort(self, sort_key, sort_dir):
@@ -56,33 +55,33 @@
self.assertEqual(sorted(sorted_list, reverse=(sort_dir == 'desc')),
sorted_list, msg)
- @test.idempotent_id('c5513ada-64c1-4d28-83b9-af3307ec1388')
+ @decorators.idempotent_id('c5513ada-64c1-4d28-83b9-af3307ec1388')
def test_snapshot_list_param_sort_id_asc(self):
self._list_snapshots_param_sort(sort_key='id', sort_dir='asc')
- @test.idempotent_id('8a7fe058-0b41-402a-8afd-2dbc5a4a718b')
+ @decorators.idempotent_id('8a7fe058-0b41-402a-8afd-2dbc5a4a718b')
def test_snapshot_list_param_sort_id_desc(self):
self._list_snapshots_param_sort(sort_key='id', sort_dir='desc')
- @test.idempotent_id('4052c3a0-2415-440a-a8cc-305a875331b0')
+ @decorators.idempotent_id('4052c3a0-2415-440a-a8cc-305a875331b0')
def test_snapshot_list_param_sort_created_at_asc(self):
self._list_snapshots_param_sort(sort_key='created_at', sort_dir='asc')
- @test.idempotent_id('dcbbe24a-f3c0-4ec8-9274-55d48db8d1cf')
+ @decorators.idempotent_id('dcbbe24a-f3c0-4ec8-9274-55d48db8d1cf')
def test_snapshot_list_param_sort_created_at_desc(self):
self._list_snapshots_param_sort(sort_key='created_at', sort_dir='desc')
- @test.idempotent_id('d58b5fed-0c37-42d3-8c5d-39014ac13c00')
+ @decorators.idempotent_id('d58b5fed-0c37-42d3-8c5d-39014ac13c00')
def test_snapshot_list_param_sort_name_asc(self):
self._list_snapshots_param_sort(sort_key='display_name',
sort_dir='asc')
- @test.idempotent_id('96ba6f4d-1f18-47e1-b4bc-76edc6c21250')
+ @decorators.idempotent_id('96ba6f4d-1f18-47e1-b4bc-76edc6c21250')
def test_snapshot_list_param_sort_name_desc(self):
self._list_snapshots_param_sort(sort_key='display_name',
sort_dir='desc')
- @test.idempotent_id('05489dde-44bc-4961-a1f5-3ce7ee7824f7')
+ @decorators.idempotent_id('05489dde-44bc-4961-a1f5-3ce7ee7824f7')
def test_snapshot_list_param_marker(self):
# The list of snapshots should end before the provided marker
params = {'marker': self.snapshot_id_list[1]}
diff --git a/tempest/api/volume/v2/test_volumes_snapshots_negative.py b/tempest/api/volume/v2/test_volumes_snapshots_negative.py
new file mode 100644
index 0000000..e5581b9
--- /dev/null
+++ b/tempest/api/volume/v2/test_volumes_snapshots_negative.py
@@ -0,0 +1,46 @@
+# Copyright 2017 Red Hat, Inc.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from tempest.api.volume import base
+from tempest.common.utils import data_utils
+from tempest import config
+from tempest.lib import decorators
+from tempest.lib import exceptions as lib_exc
+from tempest import test
+
+CONF = config.CONF
+
+
+class VolumesV2SnapshotNegativeTest(base.BaseVolumeTest):
+
+ @classmethod
+ def skip_checks(cls):
+ super(VolumesV2SnapshotNegativeTest, cls).skip_checks()
+ if not CONF.volume_feature_enabled.snapshot:
+ raise cls.skipException("Cinder volume snapshots are disabled")
+
+ @test.attr(type=['negative'])
+ @decorators.idempotent_id('27b5f37f-bf69-4e8c-986e-c44f3d6819b8')
+ def test_list_snapshots_invalid_param_sort(self):
+ self.assertRaises(lib_exc.BadRequest,
+ self.snapshots_client.list_snapshots,
+ sort_key='invalid')
+
+ @test.attr(type=['negative'])
+ @decorators.idempotent_id('b68deeda-ca79-4a32-81af-5c51179e553a')
+ def test_list_snapshots_invalid_param_marker(self):
+ self.assertRaises(lib_exc.NotFound,
+ self.snapshots_client.list_snapshots,
+ marker=data_utils.rand_uuid())
diff --git a/tempest/cmd/subunit_describe_calls.py b/tempest/cmd/subunit_describe_calls.py
index 0f868a9..8ee3055 100644
--- a/tempest/cmd/subunit_describe_calls.py
+++ b/tempest/cmd/subunit_describe_calls.py
@@ -294,7 +294,8 @@
outfile.write(json.dumps(url_parser.test_logs))
return
- for test_name, items in url_parser.test_logs.iteritems():
+ for test_name in url_parser.test_logs:
+ items = url_parser.test_logs[test_name]
sys.stdout.write('{0}\n'.format(test_name))
if not items:
sys.stdout.write('\n')
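
The subunit_describe_calls change above swaps the Python-2-only dict.iteritems()
for plain key iteration, so the command also works under Python 3. A minimal
sketch of the pattern, using an invented test_logs dict rather than real parser
output::

    # Iterating a dict by key behaves the same on Python 2 and 3;
    # iteritems() exists only on Python 2.
    test_logs = {'test_a': [1, 2], 'test_b': []}
    for test_name in test_logs:
        items = test_logs[test_name]
        print('{0}: {1} call(s)'.format(test_name, len(items)))
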
diff --git a/tempest/common/compute.py b/tempest/common/compute.py
index 01de704..99da983 100644
--- a/tempest/common/compute.py
+++ b/tempest/common/compute.py
@@ -17,10 +17,10 @@
from oslo_utils import excutils
from tempest.common import fixed_network
+from tempest.common.utils import data_utils
from tempest.common import waiters
from tempest import config
from tempest.lib.common import rest_client
-from tempest.lib.common.utils import data_utils
CONF = config.CONF
@@ -124,8 +124,9 @@
'imageRef': image_id,
'size': CONF.volume.volume_size}
volume = volumes_client.create_volume(**params)
- waiters.wait_for_volume_status(volumes_client,
- volume['volume']['id'], 'available')
+ waiters.wait_for_volume_resource_status(volumes_client,
+ volume['volume']['id'],
+ 'available')
bd_map_v2 = [{
'uuid': volume['volume']['id'],
diff --git a/tempest/common/dynamic_creds.py b/tempest/common/dynamic_creds.py
index 632a876..88fe26c 100644
--- a/tempest/common/dynamic_creds.py
+++ b/tempest/common/dynamic_creds.py
@@ -293,12 +293,12 @@
return resp_body['subnet']
def _create_router(self, router_name, tenant_id):
- external_net_id = dict(
- network_id=self.public_network_id)
- resp_body = self.routers_admin_client.create_router(
- name=router_name,
- external_gateway_info=external_net_id,
- tenant_id=tenant_id)
+ kwargs = {'name': router_name,
+ 'tenant_id': tenant_id}
+ if self.public_network_id:
+ kwargs['external_gateway_info'] = dict(
+ network_id=self.public_network_id)
+ resp_body = self.routers_admin_client.create_router(**kwargs)
return resp_body['router']
def _add_router_interface(self, router_id, subnet_id):
diff --git a/tempest/common/fixed_network.py b/tempest/common/fixed_network.py
index f50edbd..4032c90 100644
--- a/tempest/common/fixed_network.py
+++ b/tempest/common/fixed_network.py
@@ -11,6 +11,7 @@
# under the License.
import copy
+
from oslo_log import log as logging
from tempest import exceptions
diff --git a/tempest/common/utils/linux/remote_client.py b/tempest/common/utils/linux/remote_client.py
index 009812e..1487c1d 100644
--- a/tempest/common/utils/linux/remote_client.py
+++ b/tempest/common/utils/linux/remote_client.py
@@ -10,12 +10,13 @@
# License for the specific language governing permissions and limitations
# under the License.
-import netaddr
import re
-import six
import sys
import time
+import netaddr
+import six
+
from oslo_log import log as logging
from tempest import config
diff --git a/tempest/common/waiters.py b/tempest/common/waiters.py
index 865db39..3e5600c 100644
--- a/tempest/common/waiters.py
+++ b/tempest/common/waiters.py
@@ -10,7 +10,7 @@
# License for the specific language governing permissions and limitations
# under the License.
-
+import re
import time
from oslo_log import log as logging
@@ -121,7 +121,7 @@
'/'.join((server_status, str(task_state))),
time.time() - start_time)
if server_status == 'ERROR' and not ignore_error:
- raise exceptions.BuildErrorException(server_id=server_id)
+ raise lib_exc.DeleteErrorException(resource_id=server_id)
if int(time.time()) - start_time >= client.build_timeout:
raise lib_exc.TimeoutException
@@ -179,25 +179,33 @@
raise lib_exc.TimeoutException(message)
-def wait_for_volume_status(client, volume_id, status):
- """Waits for a Volume to reach a given status."""
- body = client.show_volume(volume_id)['volume']
- volume_status = body['status']
+def wait_for_volume_resource_status(client, resource_id, status):
+ """Waits for a volume resource to reach a given status.
+
+ This function is a common function for volume, snapshot and backup
+ resources. The function extracts the name of the desired resource from
+ the client class name of the resource.
+ """
+ resource_name = re.findall(r'(Volume|Snapshot|Backup)',
+ client.__class__.__name__)[0].lower()
+ show_resource = getattr(client, 'show_' + resource_name)
+ resource_status = show_resource(resource_id)[resource_name]['status']
start = int(time.time())
- while volume_status != status:
+ while resource_status != status:
time.sleep(client.build_interval)
- body = client.show_volume(volume_id)['volume']
- volume_status = body['status']
- if volume_status == 'error' and status != 'error':
- raise exceptions.VolumeBuildErrorException(volume_id=volume_id)
- if volume_status == 'error_restoring':
- raise exceptions.VolumeRestoreErrorException(volume_id=volume_id)
+ resource_status = show_resource(resource_id)[
+ '{}'.format(resource_name)]['status']
+ if resource_status == 'error' and resource_status != status:
+ raise exceptions.VolumeResourceBuildErrorException(
+ resource_name=resource_name, resource_id=resource_id)
+ if resource_name == 'volume' and resource_status == 'error_restoring':
+ raise exceptions.VolumeRestoreErrorException(volume_id=resource_id)
if int(time.time()) - start >= client.build_timeout:
- message = ('Volume %s failed to reach %s status (current %s) '
+ message = ('%s %s failed to reach %s status (current %s) '
'within the required time (%s s).' %
- (volume_id, status, volume_status,
+ (resource_name, resource_id, status, resource_status,
client.build_timeout))
raise lib_exc.TimeoutException(message)
@@ -218,48 +226,6 @@
'within the required time (%s s).' %
(volume_id, new_volume_type, current_volume_type,
client.build_timeout))
- raise exceptions.TimeoutException(message)
-
-
-def wait_for_snapshot_status(client, snapshot_id, status):
- """Waits for a Snapshot to reach a given status."""
- body = client.show_snapshot(snapshot_id)['snapshot']
- snapshot_status = body['status']
- start = int(time.time())
-
- while snapshot_status != status:
- time.sleep(client.build_interval)
- body = client.show_snapshot(snapshot_id)['snapshot']
- snapshot_status = body['status']
- if snapshot_status == 'error':
- raise exceptions.SnapshotBuildErrorException(
- snapshot_id=snapshot_id)
- if int(time.time()) - start >= client.build_timeout:
- message = ('Snapshot %s failed to reach %s status (current %s) '
- 'within the required time (%s s).' %
- (snapshot_id, status, snapshot_status,
- client.build_timeout))
- raise lib_exc.TimeoutException(message)
-
-
-def wait_for_backup_status(client, backup_id, status):
- """Waits for a Backup to reach a given status."""
- body = client.show_backup(backup_id)['backup']
- backup_status = body['status']
- start = int(time.time())
-
- while backup_status != status:
- time.sleep(client.build_interval)
- body = client.show_backup(backup_id)['backup']
- backup_status = body['status']
- if backup_status == 'error' and backup_status != status:
- raise lib_exc.VolumeBackupException(backup_id=backup_id)
-
- if int(time.time()) - start >= client.build_timeout:
- message = ('Volume backup %s failed to reach %s status '
- '(current %s) within the required time (%s s).' %
- (backup_id, status, backup_status,
- client.build_timeout))
raise lib_exc.TimeoutException(message)
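
The new wait_for_volume_resource_status waiter above replaces three
near-identical loops by deriving the resource name from the client's class name
and then calling the matching show_* method. A rough, self-contained sketch of
that extraction step, assuming client classes named VolumesClient,
SnapshotsClient and BackupsClient::

    import re

    # Same regex as in the waiter; each class name is assumed to contain
    # exactly one of the three resource words.
    for class_name in ('VolumesClient', 'SnapshotsClient', 'BackupsClient'):
        resource_name = re.findall(r'(Volume|Snapshot|Backup)',
                                   class_name)[0].lower()
        # The waiter then looks up 'show_' + resource_name on the client,
        # e.g. show_volume, show_snapshot or show_backup.
        print(resource_name)

Callers no longer pick a resource-specific waiter; the scenario manager hunks
further down switch every call site to the single
wait_for_volume_resource_status helper.
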
diff --git a/tempest/config.py b/tempest/config.py
index 213cbd7..651c32e 100644
--- a/tempest/config.py
+++ b/tempest/config.py
@@ -19,6 +19,7 @@
import os
import tempfile
+import debtcollector.removals
from oslo_concurrency import lockutils
from oslo_config import cfg
from oslo_log import log as logging
@@ -317,8 +318,7 @@
"min_microversion and max_microversion. "
"If both values are not specified, Tempest avoids tests "
"which require a microversion. Valid values are string "
- "with format 'X.Y' or string 'latest'",
- deprecated_group='compute-feature-enabled'),
+ "with format 'X.Y' or string 'latest'"),
cfg.StrOpt('max_microversion',
default=None,
help="Upper version of the test target microversion range. "
@@ -327,8 +327,7 @@
"min_microversion and max_microversion. "
"If both values are not specified, Tempest avoids tests "
"which require a microversion. Valid values are string "
- "with format 'X.Y' or string 'latest'",
- deprecated_group='compute-feature-enabled'),
+ "with format 'X.Y' or string 'latest'"),
]
compute_features_group = cfg.OptGroup(name='compute-feature-enabled',
@@ -544,23 +543,18 @@
'publicURL', 'adminURL', 'internalURL'],
help="The endpoint type to use for the network service."),
cfg.StrOpt('project_network_cidr',
- deprecated_name='tenant_network_cidr',
default="10.100.0.0/16",
help="The cidr block to allocate project ipv4 subnets from"),
cfg.IntOpt('project_network_mask_bits',
- deprecated_name='tenant_network_mask_bits',
default=28,
help="The mask bits for project ipv4 subnets"),
cfg.StrOpt('project_network_v6_cidr',
- deprecated_name='tenant_network_v6_cidr',
default="2003::/48",
help="The cidr block to allocate project ipv6 subnets from"),
cfg.IntOpt('project_network_v6_mask_bits',
- deprecated_name='tenant_network_v6_mask_bits',
default=64,
help="The mask bits for project ipv6 subnets"),
cfg.BoolOpt('project_networks_reachable',
- deprecated_name='tenant_networks_reachable',
default=False,
help="Whether project networks can be reached directly from "
"the test client. This must be set to True when the "
@@ -662,17 +656,13 @@
choices=['fixed', 'floating'],
help='Default IP type used for validation: '
'-fixed: uses the first IP belonging to the fixed network '
- '-floating: creates and uses a floating IP',
- deprecated_opts=[cfg.DeprecatedOpt('use_floatingip_for_ssh',
- group='compute')]),
+ '-floating: creates and uses a floating IP'),
cfg.StrOpt('auth_method',
default='keypair',
choices=['keypair'],
help='Default authentication method to the instance. '
'Only ssh via keypair is supported for now. '
- 'Additional methods will be handled in a separate spec.',
- deprecated_opts=[cfg.DeprecatedOpt('ssh_auth_method',
- group='compute')]),
+ 'Additional methods will be handled in a separate spec.'),
cfg.IntOpt('ip_version_for_ssh',
default=4,
help='Default IP version for ssh connections.'),
@@ -699,35 +689,25 @@
group='scenario')]),
cfg.StrOpt('image_ssh_password',
default="password",
- help="Password used to authenticate to an instance.",
- deprecated_opts=[cfg.DeprecatedOpt('image_ssh_password',
- group='compute')]),
+ help="Password used to authenticate to an instance."),
cfg.StrOpt('ssh_shell_prologue',
default="set -eu -o pipefail; PATH=$$PATH:/sbin;",
help="Shell fragments to use before executing a command "
- "when sshing to a guest.",
- deprecated_opts=[cfg.DeprecatedOpt('ssh_shell_prologue',
- group='compute')]),
+ "when sshing to a guest."),
cfg.IntOpt('ping_size',
default=56,
help="The packet size for ping packets originating "
- "from remote linux hosts",
- deprecated_opts=[cfg.DeprecatedOpt('ping_size',
- group='compute')]),
+ "from remote linux hosts"),
cfg.IntOpt('ping_count',
default=1,
help="The number of ping packets originating from remote "
- "linux hosts",
- deprecated_opts=[cfg.DeprecatedOpt('ping_count',
- group='compute')]),
+ "linux hosts"),
cfg.StrOpt('floating_ip_range',
default='10.0.0.0/29',
help='Unallocated floating IP range, which will be used to '
'test the floating IP bulk feature for CRUD operation. '
'This block must not overlap an existing floating IP '
- 'pool.',
- deprecated_opts=[cfg.DeprecatedOpt('floating_ip_range',
- group='compute')]),
+ 'pool.'),
cfg.StrOpt('network_for_ssh',
default='public',
help="Network used for SSH connections. Ignored if "
@@ -1042,32 +1022,6 @@
""")
]
-input_scenario_group = cfg.OptGroup(name="input-scenario",
- title="Filters and values for"
- " input scenarios[DEPRECATED]")
-
-
-InputScenarioGroup = [
- cfg.StrOpt('image_regex',
- default='^cirros-0.3.1-x86_64-uec$',
- help="Matching images become parameters for scenario tests",
- deprecated_for_removal=True),
- cfg.StrOpt('flavor_regex',
- default='^m1.nano$',
- help="Matching flavors become parameters for scenario tests",
- deprecated_for_removal=True),
- cfg.StrOpt('non_ssh_image_regex',
- default='^.*[Ww]in.*$',
- help="SSH verification in tests is skipped"
- "for matching images",
- deprecated_for_removal=True),
- cfg.StrOpt('ssh_user_regex',
- default="[[\"^.*[Cc]irros.*$\", \"cirros\"]]",
- help="List of user mapped to regex "
- "to matching image names.",
- deprecated_for_removal=True),
-]
-
DefaultGroup = [
cfg.StrOpt('resources_prefix',
default='tempest',
@@ -1097,7 +1051,6 @@
(scenario_group, ScenarioGroup),
(service_available_group, ServiceAvailableGroup),
(debug_group, DebugGroup),
- (input_scenario_group, InputScenarioGroup),
(None, DefaultGroup)
]
@@ -1159,7 +1112,6 @@
self.scenario = _CONF.scenario
self.service_available = _CONF.service_available
self.debug = _CONF.debug
- self.input_scenario = _CONF['input-scenario']
logging.tempest_set_log_file('tempest.log')
def __init__(self, parse_conf=True, config_path=None):
@@ -1247,6 +1199,8 @@
CONF = TempestConfigProxy()
+@debtcollector.removals.remove(
+ message='use testtools.skipUnless instead', removal_version='Queens')
def skip_unless_config(*args):
"""Decorator to raise a skip if a config opt doesn't exist or is False
@@ -1285,6 +1239,8 @@
return decorator
+@debtcollector.removals.remove(
+ message='use testtools.skipIf instead', removal_version='Queens')
def skip_if_config(*args):
"""Raise a skipException if a config exists and is True
diff --git a/tempest/exceptions.py b/tempest/exceptions.py
index 45bbc11..f48d7ac 100644
--- a/tempest/exceptions.py
+++ b/tempest/exceptions.py
@@ -37,18 +37,15 @@
message = "Image %(image_id)s failed to become ACTIVE in the allotted time"
-class VolumeBuildErrorException(exceptions.TempestException):
- message = "Volume %(volume_id)s failed to build and is in ERROR status"
+class VolumeResourceBuildErrorException(exceptions.TempestException):
+ message = ("%(resource_name)s %(resource_id)s failed to build and is in "
+ "ERROR status")
class VolumeRestoreErrorException(exceptions.TempestException):
message = "Volume %(volume_id)s failed to restore and is in ERROR status"
-class SnapshotBuildErrorException(exceptions.TempestException):
- message = "Snapshot %(snapshot_id)s failed to build and is in ERROR status"
-
-
class StackBuildErrorException(exceptions.TempestException):
message = ("Stack %(stack_identifier)s is in %(stack_status)s status "
"due to '%(stack_status_reason)s'")
diff --git a/tempest/lib/api_schema/response/compute/v2_26/servers.py b/tempest/lib/api_schema/response/compute/v2_26/servers.py
index bc5d18e..d873402 100644
--- a/tempest/lib/api_schema/response/compute/v2_26/servers.py
+++ b/tempest/lib/api_schema/response/compute/v2_26/servers.py
@@ -1,4 +1,5 @@
# Copyright 2016 IBM Corp.
+# Copyright 2017 AT&T Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@@ -45,3 +46,41 @@
# list response schema wasn't changed for v2.26 so use v2.1
list_servers = copy.deepcopy(servers21.list_servers)
+
+list_tags = {
+ 'status_code': [200],
+ 'response_body': {
+ 'type': 'object',
+ 'properties': {
+ 'tags': {
+ 'type': 'array',
+ 'items': {
+ 'type': 'string'
+ }
+ }
+ },
+ 'additionalProperties': False,
+ 'required': ['tags']
+ }
+}
+
+update_all_tags = copy.deepcopy(list_tags)
+
+delete_all_tags = {'status_code': [204]}
+
+check_tag_existence = {'status_code': [204]}
+
+update_tag = {
+ 'status_code': [201, 204],
+ 'response_header': {
+ 'type': 'object',
+ 'properties': {
+ 'location': {
+ 'type': 'string'
+ }
+ },
+ 'required': ['location']
+ }
+}
+
+delete_tag = {'status_code': [204]}
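
The tag schemas added above are ordinary JSON Schema fragments plus the
accepted status codes, which is what validate_response checks in the client
methods further down. If needed they can also be exercised directly with the
jsonschema library (assumed to be installed; the sample bodies are invented)::

    import jsonschema

    from tempest.lib.api_schema.response.compute.v2_26 import servers

    # Passes: a list of strings satisfies the 'tags' schema.
    jsonschema.validate({'tags': ['prod', 'web']},
                        servers.list_tags['response_body'])

    try:
        jsonschema.validate({'tags': [], 'extra': 1},
                            servers.list_tags['response_body'])
    except jsonschema.ValidationError:
        # additionalProperties: False rejects unexpected keys
        pass
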
diff --git a/tempest/lib/cmd/check_uuid.py b/tempest/lib/cmd/check_uuid.py
index 2fe957b..283b10f 100755
--- a/tempest/lib/cmd/check_uuid.py
+++ b/tempest/lib/cmd/check_uuid.py
@@ -27,6 +27,7 @@
import six.moves.urllib.parse as urlparse
# TODO(oomichi): Need to remove this after switching all modules to decorators
+# on all OpenStack projects because they run check-uuid on their own gates.
OLD_DECORATOR_MODULE = 'test'
DECORATOR_MODULE = 'decorators'
@@ -120,7 +121,7 @@
@staticmethod
def _get_idempotent_id(test_node):
- """Return key-value dict with all metadata from @test.idempotent_id"""
+ "Return key-value dict with metadata from @decorators.idempotent_id"
idempotent_id = None
for decorator in test_node.decorator_list:
if (hasattr(decorator, 'func') and
@@ -308,7 +309,8 @@
Returns true if untagged tests exist.
"""
def report(module_name, test_name, tests):
- error_str = "%s:%s\nmissing @test.idempotent_id('...')\n%s\n" % (
+ error_str = ("%s:%s\nmissing @decorators.idempotent_id"
+ "('...')\n%s\n") % (
tests[module_name]['source_path'],
tests[module_name]['tests'][test_name].lineno,
test_name
@@ -356,7 +358,8 @@
else:
errors = checker.report_untagged(untagged) or errors
if errors:
- sys.exit("@test.idempotent_id existence and uniqueness checks failed\n"
+ sys.exit("@decorators.idempotent_id existence and uniqueness checks "
+ "failed\n"
"Run 'tox -v -euuidgen' to automatically fix tests with\n"
"missing @test.idempotent_id decorators.")
diff --git a/tempest/lib/common/rest_client.py b/tempest/lib/common/rest_client.py
index d0e21ff..f5bff20 100644
--- a/tempest/lib/common/rest_client.py
+++ b/tempest/lib/common/rest_client.py
@@ -617,6 +617,7 @@
:raises BadRequest: If a 400 response code is received
:raises Gone: If a 410 response code is received
:raises Conflict: If a 409 response code is received
+ :raises PreconditionFailed: If a 412 response code is received
:raises OverLimit: If a 413 response code is received and over_limit is
not in the response body
:raises RateLimitExceeded: If a 413 response code is received and
@@ -775,6 +776,11 @@
resp_body = self._parse_resp(resp_body)
raise exceptions.Conflict(resp_body, resp=resp)
+ if resp.status == 412:
+ if parse_resp:
+ resp_body = self._parse_resp(resp_body)
+ raise exceptions.PreconditionFailed(resp_body, resp=resp)
+
if resp.status == 413:
if parse_resp:
resp_body = self._parse_resp(resp_body)
diff --git a/tempest/lib/common/ssh.py b/tempest/lib/common/ssh.py
index 5e65bee..657c0c1 100644
--- a/tempest/lib/common/ssh.py
+++ b/tempest/lib/common/ssh.py
@@ -111,6 +111,7 @@
except (EOFError,
socket.error, socket.timeout,
paramiko.SSHException) as e:
+ ssh.close()
if self._is_timed_out(_start_time):
LOG.exception("Failed to establish authenticated ssh"
" connection to %s@%s after %d attempts",
diff --git a/tempest/lib/common/utils/data_utils.py b/tempest/lib/common/utils/data_utils.py
index 75c2e51..642514b 100644
--- a/tempest/lib/common/utils/data_utils.py
+++ b/tempest/lib/common/utils/data_utils.py
@@ -14,12 +14,12 @@
# under the License.
import itertools
-import netaddr
import random
import string
import uuid
from debtcollector import removals
+import netaddr
from oslo_utils import netutils
from oslo_utils import uuidutils
import six.moves
diff --git a/tempest/lib/exceptions.py b/tempest/lib/exceptions.py
index 108ba70..dea3289 100644
--- a/tempest/lib/exceptions.py
+++ b/tempest/lib/exceptions.py
@@ -101,6 +101,11 @@
message = "The requested resource is no longer available"
+class PreconditionFailed(ClientRestClientException):
+ status_code = 412
+ message = "Precondition Failed"
+
+
class RateLimitExceeded(ClientRestClientException):
status_code = 413
message = "Rate limit exceeded"
@@ -259,3 +264,8 @@
class VolumeBackupException(TempestException):
message = "Volume backup %(backup_id)s failed and is in ERROR status"
+
+
+class DeleteErrorException(TempestException):
+ message = ("Resource %(resource_id)s failed to delete "
+ "and is in ERROR status")
diff --git a/tempest/lib/services/compute/servers_client.py b/tempest/lib/services/compute/servers_client.py
index 50ce32e..c167d81 100644
--- a/tempest/lib/services/compute/servers_client.py
+++ b/tempest/lib/services/compute/servers_client.py
@@ -1,5 +1,6 @@
# Copyright 2012 OpenStack Foundation
# Copyright 2013 Hewlett-Packard Development Company, L.P.
+# Copyright 2017 AT&T Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -19,6 +20,8 @@
from oslo_serialization import jsonutils as json
from six.moves.urllib import parse as urllib
+from tempest.lib.api_schema.response.compute.v2_1 import \
+ security_groups as security_groups_schema
from tempest.lib.api_schema.response.compute.v2_1 import servers as schema
from tempest.lib.api_schema.response.compute.v2_16 import servers as schemav216
from tempest.lib.api_schema.response.compute.v2_19 import servers as schemav219
@@ -30,6 +33,8 @@
class ServersClient(base_compute_client.BaseComputeClient):
+ """Service client for the resource /servers"""
+
schema_versions_info = [
{'min': None, 'max': '2.2', 'schema': schema},
{'min': '2.3', 'max': '2.8', 'schema': schemav23},
@@ -715,3 +720,105 @@
http://developer.openstack.org/api-ref-compute-v2.1.html#removeFixedIp
"""
return self.action(server_id, 'removeFixedIp', **kwargs)
+
+ def list_security_groups_by_server(self, server_id):
+ """Lists security groups for a server.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ http://developer.openstack.org/api-ref-compute-v2.1.html#listSecurityGroupsByServer
+ """
+ resp, body = self.get("servers/%s/os-security-groups" % server_id)
+ body = json.loads(body)
+ self.validate_response(security_groups_schema.list_security_groups,
+ resp, body)
+ return rest_client.ResponseBody(resp, body)
+
+ def list_tags(self, server_id):
+ """Lists all tags for a server.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://developer.openstack.org/api-ref/compute/#list-tags
+ """
+ url = 'servers/%s/tags' % server_id
+ resp, body = self.get(url)
+ body = json.loads(body)
+ schema = self.get_schema(self.schema_versions_info)
+ self.validate_response(schema.list_tags, resp, body)
+ return rest_client.ResponseBody(resp, body)
+
+ def update_all_tags(self, server_id, tags):
+ """Replaces all tags on specified server with the new set of tags.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://developer.openstack.org/api-ref/compute/#replace-tags
+
+ :param tags: List of tags to replace current server tags with.
+ """
+ url = 'servers/%s/tags' % server_id
+ put_body = {'tags': tags}
+ resp, body = self.put(url, json.dumps(put_body))
+ body = json.loads(body)
+ schema = self.get_schema(self.schema_versions_info)
+ self.validate_response(schema.update_all_tags, resp, body)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_all_tags(self, server_id):
+ """Deletes all tags from the specified server.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://developer.openstack.org/api-ref/compute/#delete-all-tags
+ """
+ url = 'servers/%s/tags' % server_id
+ resp, body = self.delete(url)
+ schema = self.get_schema(self.schema_versions_info)
+ self.validate_response(schema.delete_all_tags, resp, body)
+ return rest_client.ResponseBody(resp, body)
+
+ def check_tag_existence(self, server_id, tag):
+ """Checks tag existence on the server.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://developer.openstack.org/api-ref/compute/#check-tag-existence
+
+ :param tag: Tag to check for on the specified server.
+ """
+ url = 'servers/%s/tags/%s' % (server_id, tag)
+ resp, body = self.get(url)
+ schema = self.get_schema(self.schema_versions_info)
+ self.validate_response(schema.check_tag_existence, resp, body)
+ return rest_client.ResponseBody(resp, body)
+
+ def update_tag(self, server_id, tag):
+ """Adds a single tag to the server if server has no specified tag.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://developer.openstack.org/api-ref/compute/#add-a-single-tag
+
+ :param tag: Tag to be added to the specified server.
+ """
+ url = 'servers/%s/tags/%s' % (server_id, tag)
+ resp, body = self.put(url, None)
+ schema = self.get_schema(self.schema_versions_info)
+ self.validate_response(schema.update_tag, resp, body)
+ return rest_client.ResponseBody(resp, body)
+
+ def delete_tag(self, server_id, tag):
+ """Deletes a single tag from the specified server.
+
+ For a full list of available parameters, please refer to the official
+ API reference:
+ https://developer.openstack.org/api-ref/compute/#delete-a-single-tag
+
+ :param tag: Tag to be removed from the specified server.
+ """
+ url = 'servers/%s/tags/%s' % (server_id, tag)
+ resp, body = self.delete(url)
+ schema = self.get_schema(self.schema_versions_info)
+ self.validate_response(schema.delete_tag, resp, body)
+ return rest_client.ResponseBody(resp, body)
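
Together the new methods give full create/read/delete coverage of server tags,
which the compute API exposes from microversion 2.26 onward. A hedged usage
sketch; servers_client and server_id are placeholders for an
already-authenticated client and an existing server, not objects defined by
this change::

    # Assumes: compute microversion >= 2.26 negotiated on servers_client,
    # and server_id referring to an ACTIVE server.
    servers_client.update_all_tags(server_id, ['db', 'prod'])
    tags = servers_client.list_tags(server_id)['tags']     # ['db', 'prod']
    servers_client.update_tag(server_id, 'web')             # add one tag
    servers_client.check_tag_existence(server_id, 'web')    # 204 if present
    servers_client.delete_tag(server_id, 'db')
    servers_client.delete_all_tags(server_id)
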
diff --git a/tempest/lib/services/identity/v2/services_client.py b/tempest/lib/services/identity/v2/services_client.py
index b3f94aa..47398db 100644
--- a/tempest/lib/services/identity/v2/services_client.py
+++ b/tempest/lib/services/identity/v2/services_client.py
@@ -26,7 +26,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/identity/v2-ext/?expanded=#create-service-admin-extension
+ http://developer.openstack.org/api-ref/identity/v2-ext/#create-service-admin-extension
"""
post_body = json.dumps({'OS-KSADM:service': kwargs})
resp, body = self.post('/OS-KSADM/services', post_body)
@@ -47,7 +47,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/identity/v2-ext/?expanded=#list-services-admin-extension
+ http://developer.openstack.org/api-ref/identity/v2-ext/#list-services-admin-extension
"""
url = '/OS-KSADM/services'
if params:
diff --git a/tempest/lib/services/identity/v3/role_assignments_client.py b/tempest/lib/services/identity/v3/role_assignments_client.py
index 10de03f..a426e69 100644
--- a/tempest/lib/services/identity/v3/role_assignments_client.py
+++ b/tempest/lib/services/identity/v3/role_assignments_client.py
@@ -26,7 +26,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/identity/v3/?expanded=list-effective-role-assignments-detail
+ http://developer.openstack.org/api-ref/identity/v3/#list-role-assignments
:param effective: If True, returns the effective assignments, including
any assignments gained by virtue of group membership
diff --git a/tempest/lib/services/image/v2/namespace_tags_client.py b/tempest/lib/services/image/v2/namespace_tags_client.py
index ac8b569..a7f8c39 100644
--- a/tempest/lib/services/image/v2/namespace_tags_client.py
+++ b/tempest/lib/services/image/v2/namespace_tags_client.py
@@ -115,5 +115,11 @@
"""
url = 'metadefs/namespaces/%s/tags' % namespace
resp, _ = self.delete(url)
- self.expected_success(200, resp.status)
+
+ # NOTE(rosmaita): Bug 1656183 fixed the success response code for
+ # this call to make it consistent with the other metadefs delete
+ # calls. Accept both codes in case tempest is being run against
+ # an old Glance.
+ self.expected_success([200, 204], resp.status)
+
return rest_client.ResponseBody(resp)
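
The relaxed check above works because expected_success accepts either a single
status code or a list of acceptable codes. Conceptually it reduces to a
membership test, roughly like this simplified sketch (not the actual
rest_client implementation)::

    def expected_success(expected_code, read_code):
        # Normalize a lone int into a list, then membership-test.
        if not isinstance(expected_code, list):
            expected_code = [expected_code]
        if read_code not in expected_code:
            raise AssertionError('Unexpected status %s, wanted one of %s'
                                 % (read_code, expected_code))

    expected_success([200, 204], 204)   # new Glance: passes
    expected_success([200, 204], 200)   # old Glance: still passes
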
diff --git a/tempest/lib/services/image/v2/resource_types_client.py b/tempest/lib/services/image/v2/resource_types_client.py
index 1b6889f..13259d1 100644
--- a/tempest/lib/services/image/v2/resource_types_client.py
+++ b/tempest/lib/services/image/v2/resource_types_client.py
@@ -26,7 +26,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/image/v2/metadefs-index.html?expanded=#list-resource-types
+ http://developer.openstack.org/api-ref/image/v2/metadefs-index.html#list-resource-types
"""
url = 'metadefs/resource_types'
resp, body = self.get(url)
@@ -39,7 +39,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/image/v2/metadefs-index.html?expanded=#create-resource-type-association
+ http://developer.openstack.org/api-ref/image/v2/metadefs-index.html#create-resource-type-association
"""
url = 'metadefs/namespaces/%s/resource_types' % namespace_id
data = json.dumps(kwargs)
@@ -53,7 +53,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/image/v2/metadefs-index.html?expanded=#list-resource-type-associations
+ http://developer.openstack.org/api-ref/image/v2/metadefs-index.html#list-resource-type-associations
"""
url = 'metadefs/namespaces/%s/resource_types' % namespace_id
resp, body = self.get(url)
@@ -66,7 +66,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/image/v2/metadefs-index.html?expanded=#remove-resource-type-association
+ http://developer.openstack.org/api-ref/image/v2/metadefs-index.html#remove-resource-type-association
"""
url = 'metadefs/namespaces/%s/resource_types/%s' % (namespace_id,
resource_name)
diff --git a/tempest/lib/services/network/ports_client.py b/tempest/lib/services/network/ports_client.py
index 93138b9..daa15d7 100644
--- a/tempest/lib/services/network/ports_client.py
+++ b/tempest/lib/services/network/ports_client.py
@@ -73,7 +73,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/networking/v2/index.html?expanded=#bulk-create-ports
+ http://developer.openstack.org/api-ref/networking/v2/index.html#bulk-create-ports
"""
uri = '/ports'
return self.create_resource(uri, kwargs)
diff --git a/tempest/lib/services/volume/v2/qos_client.py b/tempest/lib/services/volume/v2/qos_client.py
index 40d4a3f..47d3914 100644
--- a/tempest/lib/services/volume/v2/qos_client.py
+++ b/tempest/lib/services/volume/v2/qos_client.py
@@ -43,9 +43,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v2/index.html
- ?expanded=create-qos-specification-detail
- #quality-of-service-qos-specifications-qos-specs
+ http://developer.openstack.org/api-ref/block-storage/v2/#create-qos-specification
"""
post_body = json.dumps({'qos_specs': kwargs})
resp, body = self.post('qos-specs', post_body)
@@ -81,9 +79,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v2/index.html
- ?expanded=set-keys-in-qos-specification-detail
- #quality-of-service-qos-specifications-qos-specs
+ http://developer.openstack.org/api-ref/block-storage/v2/#set-keys-in-qos-specification
"""
put_body = json.dumps({"qos_specs": kwargs})
resp, body = self.put('qos-specs/%s' % qos_id, put_body)
@@ -98,9 +94,7 @@
For a full list of available parameters, please refer to the official
API reference:
- http://developer.openstack.org/api-ref/block-storage/v2/index.html
- ?expanded=unset-keys-in-qos-specification-detail
- #quality-of-service-qos-specifications-qos-specs
+ http://developer.openstack.org/api-ref/block-storage/v2/#unset-keys-in-qos-specification
"""
put_body = json.dumps({'keys': keys})
resp, body = self.put('qos-specs/%s/delete_keys' % qos_id, put_body)
diff --git a/tempest/scenario/manager.py b/tempest/scenario/manager.py
index 6014c8c..e5f5f68 100644
--- a/tempest/scenario/manager.py
+++ b/tempest/scenario/manager.py
@@ -241,8 +241,8 @@
self.assertEqual(name, volume['display_name'])
else:
self.assertEqual(name, volume['name'])
- waiters.wait_for_volume_status(self.volumes_client,
- volume['id'], 'available')
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'available')
# The volume retrieved on creation has a non-up-to-date status.
# Retrieval after it becomes active ensures correct details.
volume = self.volumes_client.show_volume(volume['id'])['volume']
@@ -481,8 +481,9 @@
self.addCleanup(test_utils.call_and_ignore_notfound_exc,
self.snapshots_client.delete_snapshot,
snapshot_id)
- waiters.wait_for_snapshot_status(self.snapshots_client,
- snapshot_id, 'available')
+ waiters.wait_for_volume_resource_status(self.snapshots_client,
+ snapshot_id,
+ 'available')
image_name = snapshot_image['name']
self.assertEqual(name, image_name)
LOG.debug("Created snapshot image %s for server %s",
@@ -494,16 +495,16 @@
server['id'], volumeId=volume_to_attach['id'], device='/dev/%s'
% CONF.compute.volume_device_name)['volumeAttachment']
self.assertEqual(volume_to_attach['id'], volume['id'])
- waiters.wait_for_volume_status(self.volumes_client,
- volume['id'], 'in-use')
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'in-use')
# Return the updated volume after the attachment
return self.volumes_client.show_volume(volume['id'])['volume']
def nova_volume_detach(self, server, volume):
self.servers_client.detach_volume(server['id'], volume['id'])
- waiters.wait_for_volume_status(self.volumes_client,
- volume['id'], 'available')
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'available')
volume = self.volumes_client.show_volume(volume['id'])['volume']
self.assertEqual('available', volume['status'])
@@ -730,36 +731,6 @@
network['id'])
return network
- def _list_networks(self, *args, **kwargs):
- """List networks using admin creds """
- networks_list = self.admin_manager.networks_client.list_networks(
- *args, **kwargs)
- return networks_list['networks']
-
- def _list_subnets(self, *args, **kwargs):
- """List subnets using admin creds """
- subnets_list = self.admin_manager.subnets_client.list_subnets(
- *args, **kwargs)
- return subnets_list['subnets']
-
- def _list_routers(self, *args, **kwargs):
- """List routers using admin creds """
- routers_list = self.admin_manager.routers_client.list_routers(
- *args, **kwargs)
- return routers_list['routers']
-
- def _list_ports(self, *args, **kwargs):
- """List ports using admin creds """
- ports_list = self.admin_manager.ports_client.list_ports(
- *args, **kwargs)
- return ports_list['ports']
-
- def _list_agents(self, *args, **kwargs):
- """List agents using admin creds """
- agents_list = self.admin_manager.network_agents_client.list_agents(
- *args, **kwargs)
- return agents_list['agents']
-
def _create_subnet(self, network, subnets_client=None,
routers_client=None, namestart='subnet-smoke',
**kwargs):
@@ -778,7 +749,8 @@
:returns: True if subnet with cidr already exist in tenant
False else
"""
- cidr_in_use = self._list_subnets(tenant_id=tenant_id, cidr=cidr)
+ cidr_in_use = self.admin_manager.subnets_client.list_subnets(
+ tenant_id=tenant_id, cidr=cidr)['subnets']
return len(cidr_in_use) != 0
ip_version = kwargs.pop('ip_version', 4)
@@ -826,7 +798,8 @@
return subnet
def _get_server_port_id_and_ip4(self, server, ip_addr=None):
- ports = self._list_ports(device_id=server['id'], fixed_ip=ip_addr)
+ ports = self.admin_manager.ports_client.list_ports(
+ device_id=server['id'], fixed_ip=ip_addr)['ports']
# A port can have more than one IP address in some cases.
# If the network is dual-stack (IPv4 + IPv6), this port is associated
# with 2 subnets
@@ -855,7 +828,8 @@
return port_map[0]
def _get_network_by_name(self, network_name):
- net = self._list_networks(name=network_name)
+ net = self.admin_manager.networks_client.list_networks(
+ name=network_name)['networks']
self.assertNotEqual(len(net), 0,
"Unable to get network by name: %s" % network_name)
return net[0]
@@ -938,7 +912,7 @@
# The target login is assumed to have been configured for
# key-based authentication by cloud-init.
try:
- for net_name, ip_addresses in server['addresses'].items():
+ for ip_addresses in server['addresses'].values():
for ip_address in ip_addresses:
self.check_vm_connectivity(ip_address['addr'],
username,
@@ -952,14 +926,15 @@
def _check_remote_connectivity(self, source, dest, should_succeed=True,
nic=None):
- """check ping server via source ssh connection
+ """assert ping server via source ssh connection
+
+ Note: This is an internal method. Use check_remote_connectivity
+ instead.
:param source: RemoteClient: an ssh connection from which to ping
:param dest: an IP to ping against
:param should_succeed: boolean should ping succeed or not
:param nic: specific network interface to ping from
- :returns: boolean -- should_succeed == ping
- :returns: ping is false if ping failed
"""
def ping_remote():
try:
@@ -974,6 +949,25 @@
CONF.validation.ping_timeout,
1)
+ def check_remote_connectivity(self, source, dest, should_succeed=True,
+ nic=None):
+ """assert ping server via source ssh connection
+
+ :param source: RemoteClient: an ssh connection from which to ping
+ :param dest: an IP to ping against
+ :param should_succeed: boolean should ping succeed or not
+ :param nic: specific network interface to ping from
+ """
+ result = self._check_remote_connectivity(source, dest, should_succeed,
+ nic)
+ source_host = source.ssh_client.host
+ if should_succeed:
+ msg = "Timed out waiting for %s to become reachable from %s" \
+ % (dest, source_host)
+ else:
+ msg = "%s is reachable from %s" % (dest, source_host)
+ self.assertTrue(result, msg)
+
def _create_security_group(self, security_group_rules_client=None,
tenant_id=None,
namestart='secgroup-smoke',
diff --git a/tempest/scenario/test_aggregates_basic_ops.py b/tempest/scenario/test_aggregates_basic_ops.py
index 50fe9c8..5152472 100644
--- a/tempest/scenario/test_aggregates_basic_ops.py
+++ b/tempest/scenario/test_aggregates_basic_ops.py
@@ -36,9 +36,8 @@
def setup_clients(cls):
super(TestAggregatesBasicOps, cls).setup_clients()
# Use admin client by default
- cls.manager = cls.admin_manager
- cls.aggregates_client = cls.manager.aggregates_client
- cls.hosts_client = cls.manager.hosts_client
+ cls.aggregates_client = cls.admin_manager.aggregates_client
+ cls.hosts_client = cls.admin_manager.hosts_client
def _create_aggregate(self, **kwargs):
aggregate = (self.aggregates_client.create_aggregate(**kwargs)
@@ -83,7 +82,7 @@
aggregate = self.aggregates_client.set_metadata(aggregate['id'],
metadata=meta)
- for key, value in meta.items():
+ for key in meta.keys():
self.assertEqual(meta[key],
aggregate['aggregate']['metadata'][key])
diff --git a/tempest/scenario/test_minimum_basic.py b/tempest/scenario/test_minimum_basic.py
index 738ed61..5fee801 100644
--- a/tempest/scenario/test_minimum_basic.py
+++ b/tempest/scenario/test_minimum_basic.py
@@ -13,6 +13,8 @@
# License for the specific language governing permissions and limitations
# under the License.
+import testtools
+
from tempest.common import custom_matchers
from tempest.common import waiters
from tempest import config
@@ -92,13 +94,15 @@
raise exceptions.TimeoutException(msg)
def _get_floating_ip_in_server_addresses(self, floating_ip, server):
- for network_name, addresses in server['addresses'].items():
+ for addresses in server['addresses'].values():
for address in addresses:
if (address['OS-EXT-IPS:type'] == 'floating' and
address['addr'] == floating_ip['ip']):
return address
@decorators.idempotent_id('bdbb5441-9204-419d-a225-b4fdbfb1a1a8')
+ @testtools.skipUnless(CONF.network.public_network_id,
+ 'The public_network_id option must be specified.')
@test.services('compute', 'volume', 'image', 'network')
def test_minimum_basic_scenario(self):
image = self.glance_image_create()
diff --git a/tempest/scenario/test_network_advanced_server_ops.py b/tempest/scenario/test_network_advanced_server_ops.py
index f8e7742..1196659 100644
--- a/tempest/scenario/test_network_advanced_server_ops.py
+++ b/tempest/scenario/test_network_advanced_server_ops.py
@@ -39,7 +39,6 @@
def setup_clients(cls):
super(TestNetworkAdvancedServerOps, cls).setup_clients()
cls.admin_servers_client = cls.os_adm.servers_client
- cls.admin_hosts_client = cls.os_adm.hosts_client
@classmethod
def skip_checks(cls):
diff --git a/tempest/scenario/test_network_basic_ops.py b/tempest/scenario/test_network_basic_ops.py
index e6a5f51..51b59c9 100644
--- a/tempest/scenario/test_network_basic_ops.py
+++ b/tempest/scenario/test_network_basic_ops.py
@@ -109,13 +109,13 @@
self.check_networks()
self.ports = []
- self.port_id = None
+ port_id = None
if boot_with_port:
# create a port on the network and boot with that
- self.port_id = self._create_port(self.network['id'])['id']
- self.ports.append({'port': self.port_id})
+ port_id = self._create_port(self.network['id'])['id']
+ self.ports.append({'port': port_id})
- server = self._create_server(self.network, self.port_id)
+ server = self._create_server(self.network, port_id)
self._check_tenant_network_connectivity()
floating_ip = self.create_floating_ip(server)
@@ -127,23 +127,23 @@
via checking the result of list_[networks,routers,subnets]
"""
- seen_nets = self._list_networks()
- seen_names = [n['name'] for n in seen_nets]
- seen_ids = [n['id'] for n in seen_nets]
+ seen_nets = self.admin_manager.networks_client.list_networks()
+ seen_names = [n['name'] for n in seen_nets['networks']]
+ seen_ids = [n['id'] for n in seen_nets['networks']]
self.assertIn(self.network['name'], seen_names)
self.assertIn(self.network['id'], seen_ids)
if self.subnet:
- seen_subnets = self._list_subnets()
- seen_net_ids = [n['network_id'] for n in seen_subnets]
- seen_subnet_ids = [n['id'] for n in seen_subnets]
+ seen_subnets = self.admin_manager.subnets_client.list_subnets()
+ seen_net_ids = [n['network_id'] for n in seen_subnets['subnets']]
+ seen_subnet_ids = [n['id'] for n in seen_subnets['subnets']]
self.assertIn(self.network['id'], seen_net_ids)
self.assertIn(self.subnet['id'], seen_subnet_ids)
if self.router:
- seen_routers = self._list_routers()
- seen_router_ids = [n['id'] for n in seen_routers]
- seen_router_names = [n['name'] for n in seen_routers]
+ seen_routers = self.admin_manager.routers_client.list_routers()
+ seen_router_ids = [n['id'] for n in seen_routers['routers']]
+ seen_router_names = [n['name'] for n in seen_routers['routers']]
self.assertIn(self.router['name'],
seen_router_names)
self.assertIn(self.router['id'],
@@ -240,7 +240,8 @@
ip_address, private_key=private_key)
old_nic_list = self._get_server_nics(ssh_client)
# get a port from a list of one item
- port_list = self._list_ports(device_id=server['id'])
+ port_list = self.admin_manager.ports_client.list_ports(
+ device_id=server['id'])['ports']
self.assertEqual(1, len(port_list))
old_port = port_list[0]
interface = self.interface_client.create_interface(
@@ -253,9 +254,12 @@
server['id'], interface['port_id'])
def check_ports():
- self.new_port_list = [port for port in
- self._list_ports(device_id=server['id'])
- if port['id'] != old_port['id']]
+ self.new_port_list = [
+ port for port in
+ self.admin_manager.ports_client.list_ports(
+ device_id=server['id'])['ports']
+ if port['id'] != old_port['id']
+ ]
return len(self.new_port_list) == 1
if not test_utils.call_until_true(
@@ -301,10 +305,13 @@
floating_ip, server = self.floating_ip_tuple
# get internal ports' ips:
# get all network ports in the new network
- internal_ips = (p['fixed_ips'][0]['ip_address'] for p in
- self._list_ports(tenant_id=server['tenant_id'],
- network_id=network['id'])
- if p['device_owner'].startswith('network'))
+ internal_ips = (
+ p['fixed_ips'][0]['ip_address'] for p in
+ self.admin_manager.ports_client.list_ports(
+ tenant_id=server['tenant_id'],
+ network_id=network['id'])['ports']
+ if p['device_owner'].startswith('network')
+ )
self._check_server_connectivity(floating_ip,
internal_ips,
@@ -320,8 +327,11 @@
# We ping the external IP from the instance using its floating IP
# which is always IPv4, so we must only test connectivity to
# external IPv4 IPs if the external network is dualstack.
- v4_subnets = [s for s in self._list_subnets(
- network_id=CONF.network.public_network_id) if s['ip_version'] == 4]
+ v4_subnets = [
+ s for s in self.admin_manager.subnets_client.list_subnets(
+ network_id=CONF.network.public_network_id)['subnets']
+ if s['ip_version'] == 4
+ ]
self.assertEqual(1, len(v4_subnets),
"Found %d IPv4 subnets" % len(v4_subnets))
@@ -337,20 +347,8 @@
ip_address, private_key=private_key)
for remote_ip in address_list:
- if should_connect:
- msg = ("Timed out waiting for %s to become "
- "reachable") % remote_ip
- else:
- msg = "ip address %s is reachable" % remote_ip
- try:
- self.assertTrue(self._check_remote_connectivity
- (ssh_source, remote_ip, should_connect),
- msg)
- except Exception:
- LOG.exception("Unable to access {dest} via ssh to "
- "floating-ip {src}".format(dest=remote_ip,
- src=floating_ip))
- raise
+ self.check_remote_connectivity(ssh_source, remote_ip,
+ should_connect)
@test.attr(type='smoke')
@decorators.idempotent_id('f323b3ba-82f8-4db7-8ea6-6a895869ec49')
@@ -624,7 +622,8 @@
self._setup_network_and_servers()
floating_ip, server = self.floating_ip_tuple
server_id = server['id']
- port_id = self._list_ports(device_id=server_id)[0]['id']
+ port_id = self.admin_manager.ports_client.list_ports(
+ device_id=server_id)['ports'][0]['id']
server_pip = server['addresses'][self.network['name']][0]['addr']
server2 = self._create_server(self.network)
@@ -637,21 +636,21 @@
self.check_public_network_connectivity(
should_connect=True, msg="before updating "
"admin_state_up of instance port to False")
- self._check_remote_connectivity(ssh_client, dest=server_pip,
- should_succeed=True)
+ self.check_remote_connectivity(ssh_client, dest=server_pip,
+ should_succeed=True)
self.ports_client.update_port(port_id, admin_state_up=False)
self.check_public_network_connectivity(
should_connect=False, msg="after updating "
"admin_state_up of instance port to False",
should_check_floating_ip_status=False)
- self._check_remote_connectivity(ssh_client, dest=server_pip,
- should_succeed=False)
+ self.check_remote_connectivity(ssh_client, dest=server_pip,
+ should_succeed=False)
self.ports_client.update_port(port_id, admin_state_up=True)
self.check_public_network_connectivity(
should_connect=True, msg="after updating "
"admin_state_up of instance port to True")
- self._check_remote_connectivity(ssh_client, dest=server_pip,
- should_succeed=True)
+ self.check_remote_connectivity(ssh_client, dest=server_pip,
+ should_succeed=True)
@decorators.idempotent_id('759462e1-8535-46b0-ab3a-33aa45c55aaa')
@test.services('compute', 'network')
@@ -677,8 +676,8 @@
'Server should have been created from a '
'pre-existing port.')
# Assert the port is bound to the server.
- port_list = self._list_ports(device_id=server['id'],
- network_id=self.network['id'])
+ port_list = self.admin_manager.ports_client.list_ports(
+ device_id=server['id'], network_id=self.network['id'])['ports']
self.assertEqual(1, len(port_list),
'There should only be one port created for '
'server %s.' % server['id'])
@@ -696,8 +695,8 @@
# Boot another server with the same port to make sure nothing was
# left around that could cause issues.
server = self._create_server(self.network, port['id'])
- port_list = self._list_ports(device_id=server['id'],
- network_id=self.network['id'])
+ port_list = self.admin_manager.ports_client.list_ports(
+ device_id=server['id'], network_id=self.network['id'])['ports']
self.assertEqual(1, len(port_list),
'There should only be one port created for '
'server %s.' % server['id'])
@@ -727,9 +726,11 @@
unschedule_router = (self.admin_manager.network_agents_client.
delete_router_from_l3_agent)
- agent_list_alive = set(a["id"] for a in
- self._list_agents(agent_type="L3 agent") if
- a["alive"] is True)
+ agent_list_alive = set(
+ a["id"] for a in
+ self.admin_manager.network_agents_client.list_agents(
+ agent_type="L3 agent")['agents'] if a["alive"] is True
+ )
self._setup_network_and_servers()
# NOTE(kevinbenton): we have to use the admin credentials to check
@@ -811,8 +812,8 @@
self._create_new_network()
self._hotplug_server()
fip, server = self.floating_ip_tuple
- new_ports = self._list_ports(device_id=server["id"],
- network_id=self.new_net["id"])
+ new_ports = self.admin_manager.ports_client.list_ports(
+ device_id=server["id"], network_id=self.new_net["id"])['ports']
spoof_port = new_ports[0]
private_key = self._get_server_key(server)
ssh_client = self.get_remote_client(fip['floating_ip_address'],
@@ -820,15 +821,15 @@
spoof_nic = ssh_client.get_nic_name_by_mac(spoof_port["mac_address"])
peer = self._create_server(self.new_net)
peer_address = peer['addresses'][self.new_net['name']][0]['addr']
- self._check_remote_connectivity(ssh_client, dest=peer_address,
- nic=spoof_nic, should_succeed=True)
+ self.check_remote_connectivity(ssh_client, dest=peer_address,
+ nic=spoof_nic, should_succeed=True)
ssh_client.set_mac_address(spoof_nic, spoof_mac)
new_mac = ssh_client.get_mac_address(nic=spoof_nic)
self.assertEqual(spoof_mac, new_mac)
- self._check_remote_connectivity(ssh_client, dest=peer_address,
- nic=spoof_nic, should_succeed=False)
+ self.check_remote_connectivity(ssh_client, dest=peer_address,
+ nic=spoof_nic, should_succeed=False)
self.ports_client.update_port(spoof_port["id"],
port_security_enabled=False,
security_groups=[])
- self._check_remote_connectivity(ssh_client, dest=peer_address,
- nic=spoof_nic, should_succeed=True)
+ self.check_remote_connectivity(ssh_client, dest=peer_address,
+ nic=spoof_nic, should_succeed=True)
diff --git a/tempest/scenario/test_network_v6.py b/tempest/scenario/test_network_v6.py
index 2d6ea75..fcf395d 100644
--- a/tempest/scenario/test_network_v6.py
+++ b/tempest/scenario/test_network_v6.py
@@ -110,7 +110,7 @@
@staticmethod
def define_server_ips(srv):
ips = {'4': None, '6': []}
- for net_name, nics in srv['addresses'].items():
+ for nics in srv['addresses'].values():
for nic in nics:
if nic['version'] == 6:
ips['6'].append(nic['addr'])
@@ -143,9 +143,11 @@
@param ssh: RemoteClient ssh instance to server
@param sid: server uuid
"""
- ports = [p["mac_address"] for p in
- self._list_ports(device_id=sid,
- network_id=self.network_v6['id'])]
+ ports = [
+ p["mac_address"] for p in
+ self.admin_manager.ports_client.list_ports(
+ device_id=sid, network_id=self.network_v6['id'])['ports']
+ ]
self.assertEqual(1, len(ports),
message=("Multiple IPv6 ports found on network %s. "
"ports: %s")
@@ -189,25 +191,18 @@
self.assertTrue(test_utils.call_until_true(srv2_v6_addr_assigned,
CONF.validation.ping_timeout, 1))
- self._check_connectivity(sshv4_1, ips_from_api_2['4'])
- self._check_connectivity(sshv4_2, ips_from_api_1['4'])
+ self.check_remote_connectivity(sshv4_1, ips_from_api_2['4'])
+ self.check_remote_connectivity(sshv4_2, ips_from_api_1['4'])
for i in range(n_subnets6):
- self._check_connectivity(sshv4_1,
- ips_from_api_2['6'][i])
- self._check_connectivity(sshv4_1,
- self.subnets_v6[i]['gateway_ip'])
- self._check_connectivity(sshv4_2,
- ips_from_api_1['6'][i])
- self._check_connectivity(sshv4_2,
- self.subnets_v6[i]['gateway_ip'])
-
- def _check_connectivity(self, source, dest):
- self.assertTrue(
- self._check_remote_connectivity(source, dest),
- "Timed out waiting for %s to become reachable from %s" %
- (dest, source.ssh_client.host)
- )
+ self.check_remote_connectivity(sshv4_1,
+ ips_from_api_2['6'][i])
+ self.check_remote_connectivity(sshv4_1,
+ self.subnets_v6[i]['gateway_ip'])
+ self.check_remote_connectivity(sshv4_2,
+ ips_from_api_1['6'][i])
+ self.check_remote_connectivity(sshv4_2,
+ self.subnets_v6[i]['gateway_ip'])
@test.attr(type='slow')
@decorators.idempotent_id('2c92df61-29f0-4eaa-bee3-7c65bef62a43')
diff --git a/tempest/scenario/test_security_groups_basic_ops.py b/tempest/scenario/test_security_groups_basic_ops.py
index fda407c..3d383f7 100644
--- a/tempest/scenario/test_security_groups_basic_ops.py
+++ b/tempest/scenario/test_security_groups_basic_ops.py
@@ -220,30 +220,36 @@
# Checks that we see the newly created network/subnet/router via
# checking the result of list_[networks,routers,subnets]
# Check that (router, subnet) couple exist in port_list
- seen_nets = self._list_networks()
- seen_names = [n['name'] for n in seen_nets]
- seen_ids = [n['id'] for n in seen_nets]
+ seen_nets = self.admin_manager.networks_client.list_networks()
+ seen_names = [n['name'] for n in seen_nets['networks']]
+ seen_ids = [n['id'] for n in seen_nets['networks']]
self.assertIn(tenant.network['name'], seen_names)
self.assertIn(tenant.network['id'], seen_ids)
- seen_subnets = [(n['id'], n['cidr'], n['network_id'])
- for n in self._list_subnets()]
+ seen_subnets = [
+ (n['id'], n['cidr'], n['network_id']) for n in
+ self.admin_manager.subnets_client.list_subnets()['subnets']
+ ]
mysubnet = (tenant.subnet['id'], tenant.subnet['cidr'],
tenant.network['id'])
self.assertIn(mysubnet, seen_subnets)
- seen_routers = self._list_routers()
- seen_router_ids = [n['id'] for n in seen_routers]
- seen_router_names = [n['name'] for n in seen_routers]
+ seen_routers = self.admin_manager.routers_client.list_routers()
+ seen_router_ids = [n['id'] for n in seen_routers['routers']]
+ seen_router_names = [n['name'] for n in seen_routers['routers']]
self.assertIn(tenant.router['name'], seen_router_names)
self.assertIn(tenant.router['id'], seen_router_ids)
myport = (tenant.router['id'], tenant.subnet['id'])
- router_ports = [(i['device_id'], i['fixed_ips'][0]['subnet_id']) for i
- in self._list_ports()
- if net_info.is_router_interface_port(i)]
+ router_ports = [
+ (i['device_id'], f['subnet_id'])
+ for i in self.admin_manager.ports_client.list_ports(
+ device_id=tenant.router['id'])['ports']
+ if net_info.is_router_interface_port(i)
+ for f in i['fixed_ips']
+ ]
self.assertIn(myport, router_ports)
@@ -362,20 +368,12 @@
access_point_ssh, private_key=private_key)
return access_point_ssh
- def _check_connectivity(self, access_point, ip, should_succeed=True):
- if should_succeed:
- msg = "Timed out waiting for %s to become reachable" % ip
- else:
- msg = "%s is reachable" % ip
- self.assertTrue(self._check_remote_connectivity(access_point, ip,
- should_succeed), msg)
-
def _test_in_tenant_block(self, tenant):
access_point_ssh = self._connect_to_access_point(tenant)
for server in tenant.servers:
- self._check_connectivity(access_point=access_point_ssh,
- ip=self._get_server_ip(server),
- should_succeed=False)
+ self.check_remote_connectivity(source=access_point_ssh,
+ dest=self._get_server_ip(server),
+ should_succeed=False)
def _test_in_tenant_allow(self, tenant):
ruleset = dict(
@@ -390,8 +388,8 @@
)
access_point_ssh = self._connect_to_access_point(tenant)
for server in tenant.servers:
- self._check_connectivity(access_point=access_point_ssh,
- ip=self._get_server_ip(server))
+ self.check_remote_connectivity(source=access_point_ssh,
+ dest=self._get_server_ip(server))
def _test_cross_tenant_block(self, source_tenant, dest_tenant):
# if public router isn't defined, then dest_tenant access is via
@@ -399,8 +397,8 @@
access_point_ssh = self._connect_to_access_point(source_tenant)
ip = self._get_server_ip(dest_tenant.access_point,
floating=self.floating_ip_access)
- self._check_connectivity(access_point=access_point_ssh, ip=ip,
- should_succeed=False)
+ self.check_remote_connectivity(source=access_point_ssh, dest=ip,
+ should_succeed=False)
def _test_cross_tenant_allow(self, source_tenant, dest_tenant):
"""check for each direction:
@@ -421,7 +419,7 @@
access_point_ssh = self._connect_to_access_point(source_tenant)
ip = self._get_server_ip(dest_tenant.access_point,
floating=self.floating_ip_access)
- self._check_connectivity(access_point_ssh, ip)
+ self.check_remote_connectivity(access_point_ssh, ip)
# test that reverse traffic is still blocked
self._test_cross_tenant_block(dest_tenant, source_tenant)
@@ -438,7 +436,7 @@
access_point_ssh_2 = self._connect_to_access_point(dest_tenant)
ip = self._get_server_ip(source_tenant.access_point,
floating=self.floating_ip_access)
- self._check_connectivity(access_point_ssh_2, ip)
+ self.check_remote_connectivity(access_point_ssh_2, ip)
def _verify_mac_addr(self, tenant):
"""Verify that VM has the same ip, mac as listed in port"""
@@ -448,7 +446,8 @@
mac_addr = mac_addr.strip().lower()
# Get the fixed_ips and mac_address fields of all ports. Select
# only those two columns to reduce the size of the response.
- port_list = self._list_ports(fields=['fixed_ips', 'mac_address'])
+ port_list = self.admin_manager.ports_client.list_ports(
+ fields=['fixed_ips', 'mac_address'])['ports']
port_detail_list = [
(port['fixed_ips'][0]['subnet_id'],
port['fixed_ips'][0]['ip_address'],
@@ -530,18 +529,19 @@
# Check connectivity failure with default security group
try:
access_point_ssh = self._connect_to_access_point(new_tenant)
- self._check_connectivity(access_point=access_point_ssh,
- ip=self._get_server_ip(server),
- should_succeed=False)
+ self.check_remote_connectivity(source=access_point_ssh,
+ dest=self._get_server_ip(server),
+ should_succeed=False)
server_id = server['id']
- port_id = self._list_ports(device_id=server_id)[0]['id']
+ port_id = self.admin_manager.ports_client.list_ports(
+ device_id=server_id)['ports'][0]['id']
# update port with new security group and check connectivity
self.ports_client.update_port(port_id, security_groups=[
new_tenant.security_groups['new_sg']['id']])
- self._check_connectivity(
- access_point=access_point_ssh,
- ip=self._get_server_ip(server))
+ self.check_remote_connectivity(
+ source=access_point_ssh,
+ dest=self._get_server_ip(server))
except Exception:
for tenant in self.tenants.values():
self._log_console_output(servers=tenant.servers)
@@ -596,23 +596,24 @@
access_point_ssh = self._connect_to_access_point(new_tenant)
server_id = server['id']
- port_id = self._list_ports(device_id=server_id)[0]['id']
+ port_id = self.admin_manager.ports_client.list_ports(
+ device_id=server_id)['ports'][0]['id']
# Flip the port's port security and check connectivity
try:
self.ports_client.update_port(port_id,
port_security_enabled=True,
security_groups=[])
- self._check_connectivity(access_point=access_point_ssh,
- ip=self._get_server_ip(server),
- should_succeed=False)
+ self.check_remote_connectivity(source=access_point_ssh,
+ dest=self._get_server_ip(server),
+ should_succeed=False)
self.ports_client.update_port(port_id,
port_security_enabled=False,
security_groups=[])
- self._check_connectivity(
- access_point=access_point_ssh,
- ip=self._get_server_ip(server))
+ self.check_remote_connectivity(
+ source=access_point_ssh,
+ dest=self._get_server_ip(server))
except Exception:
for tenant in self.tenants.values():
self._log_console_output(servers=tenant.servers)
@@ -640,7 +641,8 @@
sec_groups = []
server = self._create_server(name, tenant, sec_groups)
server_id = server['id']
- ports = self._list_ports(device_id=server_id)
+ ports = self.admin_manager.ports_client.list_ports(
+ device_id=server_id)['ports']
self.assertEqual(1, len(ports))
for port in ports:
self.assertEmpty(port['security_groups'],
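
Throughout this file the old _list_networks/_list_subnets/_list_routers/
_list_ports helpers give way to direct calls on the admin service clients.
Those clients return the raw response body, a dict keyed by the collection
name, so callers index into 'networks', 'subnets', 'routers' or 'ports' and
pass filters as keyword arguments. A rough usage sketch (variable names are
illustrative only):

    # List the ports bound to one server via the admin ports client; the
    # device_id kwarg becomes a query filter and the body is {'ports': [...]}.
    ports = self.admin_manager.ports_client.list_ports(
        device_id=server['id'])['ports']
    subnet_ids = [ip['subnet_id'] for p in ports for ip in p['fixed_ips']]
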
diff --git a/tempest/scenario/test_server_advanced_ops.py b/tempest/scenario/test_server_advanced_ops.py
index 4d9e59c..ec839cd 100644
--- a/tempest/scenario/test_server_advanced_ops.py
+++ b/tempest/scenario/test_server_advanced_ops.py
@@ -75,31 +75,15 @@
@test.services('compute')
def test_server_sequence_suspend_resume(self):
# We create an instance for use in this test
- instance = self.create_server()
- instance_id = instance['id']
- LOG.debug("Suspending instance %s. Current status: %s",
- instance_id, instance['status'])
- self.servers_client.suspend_server(instance_id)
- waiters.wait_for_server_status(self.servers_client, instance_id,
- 'SUSPENDED')
- fetched_instance = (self.servers_client.show_server(instance_id)
- ['server'])
- LOG.debug("Resuming instance %s. Current status: %s",
- instance_id, fetched_instance['status'])
- self.servers_client.resume_server(instance_id)
- waiters.wait_for_server_status(self.servers_client, instance_id,
- 'ACTIVE')
- fetched_instance = (self.servers_client.show_server(instance_id)
- ['server'])
- LOG.debug("Suspending instance %s. Current status: %s",
- instance_id, fetched_instance['status'])
- self.servers_client.suspend_server(instance_id)
- waiters.wait_for_server_status(self.servers_client, instance_id,
- 'SUSPENDED')
- fetched_instance = (self.servers_client.show_server(instance_id)
- ['server'])
- LOG.debug("Resuming instance %s. Current status: %s",
- instance_id, fetched_instance['status'])
- self.servers_client.resume_server(instance_id)
- waiters.wait_for_server_status(self.servers_client, instance_id,
- 'ACTIVE')
+ instance_id = self.create_server()['id']
+
+ for _ in range(2):
+ LOG.debug("Suspending instance %s", instance_id)
+ self.servers_client.suspend_server(instance_id)
+ waiters.wait_for_server_status(self.servers_client, instance_id,
+ 'SUSPENDED')
+
+ LOG.debug("Resuming instance %s", instance_id)
+ self.servers_client.resume_server(instance_id)
+ waiters.wait_for_server_status(self.servers_client, instance_id,
+ 'ACTIVE')
diff --git a/tempest/scenario/test_server_basic_ops.py b/tempest/scenario/test_server_basic_ops.py
index 8a3b70d..ddbaf5a 100644
--- a/tempest/scenario/test_server_basic_ops.py
+++ b/tempest/scenario/test_server_basic_ops.py
@@ -45,8 +45,6 @@
def setUp(self):
super(TestServerBasicOps, self).setUp()
- self.image_ref = CONF.compute.image_ref
- self.flavor_ref = CONF.compute.flavor_ref
self.run_ssh = CONF.validation.run_validation
self.ssh_user = CONF.validation.image_ssh_user
@@ -133,8 +131,6 @@
security_group = self._create_security_group()
self.md = {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}
self.instance = self.create_server(
- image_id=self.image_ref,
- flavor=self.flavor_ref,
key_name=keypair['name'],
security_groups=[{'name': security_group['name']}],
config_drive=CONF.compute_feature_enabled.config_drive,
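
Dropping the explicit image_id and flavor arguments relies on the scenario
manager's create_server falling back to CONF.compute.image_ref and
CONF.compute.flavor_ref when they are omitted, which is why the setUp
attributes go away as well. A simplified sketch of that assumed fallback
(the real helper accepts many more keyword arguments, and _boot_server is a
hypothetical stand-in for the actual boot call):

    from tempest import config

    CONF = config.CONF

    def create_server(self, image_id=None, flavor=None, **kwargs):
        # Fall back to the configured defaults when the caller does not pin
        # a specific image or flavor.
        image_id = image_id or CONF.compute.image_ref
        flavor = flavor or CONF.compute.flavor_ref
        return self._boot_server(image_id=image_id, flavor=flavor, **kwargs)
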
diff --git a/tempest/scenario/test_server_multinode.py b/tempest/scenario/test_server_multinode.py
index 9cc89a4..db91a21 100644
--- a/tempest/scenario/test_server_multinode.py
+++ b/tempest/scenario/test_server_multinode.py
@@ -77,6 +77,8 @@
inst = self.create_server(
availability_zone='%(zone)s:%(host_name)s' % host)
server = self.servers_client.show_server(inst['id'])['server']
+ # ensure server is located on the requested host
+ self.assertEqual(host['host_name'], server['OS-EXT-SRV-ATTR:host'])
servers.append(server)
# make sure we really have the number of servers we think we should
diff --git a/tempest/scenario/test_shelve_instance.py b/tempest/scenario/test_shelve_instance.py
index e950766..75cef88 100644
--- a/tempest/scenario/test_shelve_instance.py
+++ b/tempest/scenario/test_shelve_instance.py
@@ -13,6 +13,8 @@
# License for the specific language governing permissions and limitations
# under the License.
+import testtools
+
from tempest.common import compute
from tempest.common import waiters
from tempest import config
@@ -55,7 +57,6 @@
security_groups = [{'name': security_group['name']}]
server = self.create_server(
- image_id=CONF.compute.image_ref,
key_name=keypair['name'],
security_groups=security_groups,
volume_backed=boot_from_volume)
@@ -74,11 +75,15 @@
self.assertEqual(timestamp, timestamp2)
@decorators.idempotent_id('1164e700-0af0-4a4c-8792-35909a88743c')
+ @testtools.skipUnless(CONF.network.public_network_id,
+ 'The public_network_id option must be specified.')
@test.services('compute', 'network', 'image')
def test_shelve_instance(self):
self._create_server_then_shelve_and_unshelve()
@decorators.idempotent_id('c1b6318c-b9da-490b-9c67-9339b627271f')
+ @testtools.skipUnless(CONF.network.public_network_id,
+ 'The public_network_id option must be specified.')
@test.services('compute', 'volume', 'network', 'image')
def test_shelve_volume_backed_instance(self):
self._create_server_then_shelve_and_unshelve(boot_from_volume=True)
diff --git a/tempest/scenario/test_snapshot_pattern.py b/tempest/scenario/test_snapshot_pattern.py
index 8197d52..6dedd1d 100644
--- a/tempest/scenario/test_snapshot_pattern.py
+++ b/tempest/scenario/test_snapshot_pattern.py
@@ -13,6 +13,8 @@
# License for the specific language governing permissions and limitations
# under the License.
+import testtools
+
from tempest import config
from tempest.lib import decorators
from tempest.scenario import manager
@@ -39,6 +41,8 @@
raise cls.skipException("Snapshotting is not available.")
@decorators.idempotent_id('608e604b-1d63-4a82-8e3e-91bc665c90b4')
+ @testtools.skipUnless(CONF.network.public_network_id,
+ 'The public_network_id option must be specified.')
@test.services('compute', 'network', 'image')
def test_snapshot_pattern(self):
# prepare for booting an instance
@@ -47,7 +51,6 @@
# boot an instance and create a timestamp file in it
server = self.create_server(
- image_id=CONF.compute.image_ref,
key_name=keypair['name'],
security_groups=[{'name': security_group['name']}])
diff --git a/tempest/scenario/test_stamp_pattern.py b/tempest/scenario/test_stamp_pattern.py
index 0c25664..716c0bf 100644
--- a/tempest/scenario/test_stamp_pattern.py
+++ b/tempest/scenario/test_stamp_pattern.py
@@ -13,8 +13,6 @@
# License for the specific language governing permissions and limitations
# under the License.
-import time
-
from oslo_log import log as logging
import testtools
@@ -63,21 +61,17 @@
snapshot_name = data_utils.rand_name('scenario-snapshot')
snapshot = self.snapshots_client.create_snapshot(
volume_id=volume['id'], display_name=snapshot_name)['snapshot']
-
- def cleaner():
- self.snapshots_client.delete_snapshot(snapshot['id'])
- try:
- while self.snapshots_client.show_snapshot(
- snapshot['id'])['snapshot']:
- time.sleep(1)
- except lib_exc.NotFound:
- pass
- self.addCleanup(cleaner)
- waiters.wait_for_volume_status(self.volumes_client,
- volume['id'], 'available')
- waiters.wait_for_snapshot_status(self.snapshots_client,
- snapshot['id'], 'available')
- self.assertEqual(snapshot_name, snapshot['display_name'])
+ self.addCleanup(self.snapshots_client.wait_for_resource_deletion,
+ snapshot['id'])
+ self.addCleanup(self.snapshots_client.delete_snapshot, snapshot['id'])
+ waiters.wait_for_volume_resource_status(self.volumes_client,
+ volume['id'], 'available')
+ waiters.wait_for_volume_resource_status(self.snapshots_client,
+ snapshot['id'], 'available')
+ if 'display_name' in snapshot:
+ self.assertEqual(snapshot_name, snapshot['display_name'])
+ else:
+ self.assertEqual(snapshot_name, snapshot['name'])
return snapshot
def _wait_for_volume_available_on_the_system(self, ip_address,
@@ -94,10 +88,12 @@
CONF.compute.build_interval):
raise lib_exc.TimeoutException
- @decorators.skip_because(bug="1205344")
+ @decorators.skip_because(bug="1664793")
@decorators.idempotent_id('10fd234a-515c-41e5-b092-8323060598c5')
@testtools.skipUnless(CONF.compute_feature_enabled.snapshot,
'Snapshotting is not available.')
+ @testtools.skipUnless(CONF.network.public_network_id,
+ 'The public_network_id option must be specified.')
@test.services('compute', 'network', 'volume', 'image')
def test_stamp_pattern(self):
# prepare for booting an instance
@@ -107,9 +103,8 @@
# boot an instance and create a timestamp file in it
volume = self.create_volume()
server = self.create_server(
- image_id=CONF.compute.image_ref,
key_name=keypair['name'],
- security_groups=security_group)
+ security_groups=[{'name': security_group['name']}])
# create and add floating IP to server1
ip_for_server = self.get_server_ip(server)
@@ -136,7 +131,7 @@
server_from_snapshot = self.create_server(
image_id=snapshot_image['id'],
key_name=keypair['name'],
- security_groups=security_group)
+ security_groups=[{'name': security_group['name']}])
# create and add floating IP to server_from_snapshot
ip_for_snapshot = self.get_server_ip(server_from_snapshot)
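
The snapshot teardown in this file now leans on addCleanup instead of a
hand-rolled polling loop. Cleanups run in last-in, first-out order, so
registering wait_for_resource_deletion before delete_snapshot means the
delete fires first at teardown and the earlier-registered wait then blocks
until the snapshot is really gone:

    # LIFO cleanup order at test exit:
    #   1. snapshots_client.delete_snapshot(snapshot['id'])  (registered last)
    #   2. snapshots_client.wait_for_resource_deletion(snapshot['id'])
    self.addCleanup(self.snapshots_client.wait_for_resource_deletion,
                    snapshot['id'])
    self.addCleanup(self.snapshots_client.delete_snapshot, snapshot['id'])
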
diff --git a/tempest/scenario/test_volume_boot_pattern.py b/tempest/scenario/test_volume_boot_pattern.py
index 5254082..b72dae9 100644
--- a/tempest/scenario/test_volume_boot_pattern.py
+++ b/tempest/scenario/test_volume_boot_pattern.py
@@ -11,6 +11,7 @@
# under the License.
from oslo_log import log as logging
+import testtools
from tempest.common.utils import data_utils
from tempest.common import waiters
@@ -42,16 +43,13 @@
return self.create_volume(name=vol_name, imageRef=img_uuid)
def _get_bdm(self, source_id, source_type, delete_on_termination=False):
- # NOTE(gfidente): the syntax for block_device_mapping is
- # dev_name=id:type:size:delete_on_terminate
- # where type needs to be "snap" if the server is booted
- # from a snapshot, size instead can be safely left empty
-
- bd_map = [{
- 'device_name': 'vda',
- '{}_id'.format(source_type): source_id,
- 'delete_on_termination': str(int(delete_on_termination))}]
- return {'block_device_mapping': bd_map}
+ bd_map_v2 = [{
+ 'uuid': source_id,
+ 'source_type': source_type,
+ 'destination_type': 'volume',
+ 'boot_index': 0,
+ 'delete_on_termination': delete_on_termination}]
+ return {'block_device_mapping_v2': bd_map_v2}
def _boot_instance_from_resource(self, source_id,
source_type,
@@ -81,8 +79,8 @@
self.addCleanup(
self.snapshots_client.wait_for_resource_deletion, snap['id'])
self.addCleanup(self.snapshots_client.delete_snapshot, snap['id'])
- waiters.wait_for_snapshot_status(self.snapshots_client,
- snap['id'], 'available')
+ waiters.wait_for_volume_resource_status(self.snapshots_client,
+ snap['id'], 'available')
# NOTE(e0ne): Cinder API v2 uses name instead of display_name
if 'display_name' in snap:
@@ -98,6 +96,8 @@
@decorators.idempotent_id('557cd2c2-4eb8-4dce-98be-f86765ff311b')
@test.attr(type='smoke')
+ @testtools.skipUnless(CONF.network.public_network_id,
+ 'The public_network_id option must be specified.')
@test.services('compute', 'volume', 'image')
def test_volume_boot_pattern(self):
@@ -233,14 +233,3 @@
# delete instance
self._delete_server(instance)
-
-
-class TestVolumeBootPatternV2(TestVolumeBootPattern):
- def _get_bdm(self, source_id, source_type, delete_on_termination=False):
- bd_map_v2 = [{
- 'uuid': source_id,
- 'source_type': source_type,
- 'destination_type': 'volume',
- 'boot_index': 0,
- 'delete_on_termination': delete_on_termination}]
- return {'block_device_mapping_v2': bd_map_v2}
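
With the legacy block_device_mapping syntax gone, _get_bdm always builds the
v2 form, which makes the TestVolumeBootPatternV2 subclass redundant and is
why it is deleted above. A hedged sketch of how such a mapping is fed to a
server boot (the create_server keyword arguments here are an assumption
about the scenario helper, and passing an empty image_id to signal
boot-from-volume is likewise assumed rather than shown in this hunk):

    # Boot from an existing volume using a block_device_mapping_v2 entry;
    # boot_index 0 marks it as the boot device.
    bdm = [{'uuid': volume['id'],
            'source_type': 'volume',
            'destination_type': 'volume',
            'boot_index': 0,
            'delete_on_termination': False}]
    server = self.create_server(image_id='',
                                block_device_mapping_v2=bdm)
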
diff --git a/tempest/services/object_storage/account_client.py b/tempest/services/object_storage/account_client.py
index 9932b4a..4859e75 100644
--- a/tempest/services/object_storage/account_client.py
+++ b/tempest/services/object_storage/account_client.py
@@ -128,6 +128,13 @@
than the specified marker.
DEFAULT: No Marker
+ prefix=[string value Y]
+ Given string value Y, return object names starting with that prefix
+
+ reverse=[boolean value Z]
+ Reverse the result order based on the boolean value Z
+ DEFAULT: False
+
format=[string value, either 'json' or 'xml']
Specify either json or xml to return the respective serialized
response.
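
The account client docstring now also covers the prefix and reverse query
parameters for container listings. A hedged usage sketch, assuming the
client's existing list_account_containers(params=...) call style and its
usual (resp, body) return convention:

    # Only containers whose names start with 'backup-', in reverse name
    # order, serialized as JSON.
    resp, body = account_client.list_account_containers(
        params={'prefix': 'backup-', 'reverse': 'true', 'format': 'json'})
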
diff --git a/tempest/test.py b/tempest/test.py
index 039afa1..970e97c 100644
--- a/tempest/test.py
+++ b/tempest/test.py
@@ -31,7 +31,6 @@
from tempest import config
from tempest import exceptions
from tempest.lib.common import cred_client
-from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
@@ -39,7 +38,11 @@
CONF = config.CONF
-idempotent_id = decorators.idempotent_id
+# TODO(oomichi): This test.idempotent_id should be removed after all projects
+# switch to use decorators.idempotent_id.
+idempotent_id = debtcollector.moves.moved_function(
+ decorators.idempotent_id, 'idempotent_id', __name__,
+ version='Mitaka', removal_version='?')
def attr(**kwargs):
@@ -644,8 +647,3 @@
def assertNotEmpty(self, list, msg=None):
self.assertGreater(len(list), 0, msg)
-
-
-call_until_true = debtcollector.moves.moved_function(
- test_utils.call_until_true, 'call_until_true', __name__,
- version='Newton', removal_version='Ocata')
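
tempest.test.idempotent_id stays importable but is now only a deprecation
shim: debtcollector.moves.moved_function calls through to the real
decorators.idempotent_id while warning about the moved location, and the
call_until_true alias that was retired the same way in Newton is removed
outright. A small sketch of the assumed shim behaviour for a caller (the
UUID is a made-up example):

    import warnings

    from tempest import test

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")

        @test.idempotent_id('12345678-1234-5678-1234-567812345678')
        def some_test(self):
            pass

    # The shim still applies the real decorator but flags the old location.
    assert any(issubclass(w.category, DeprecationWarning) for w in caught)
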
diff --git a/tempest/tests/api/compute/test_base.py b/tempest/tests/api/compute/test_base.py
index a1da343..6345728 100644
--- a/tempest/tests/api/compute/test_base.py
+++ b/tempest/tests/api/compute/test_base.py
@@ -48,10 +48,14 @@
@mock.patch.multiple(compute_base.BaseV2ComputeTest,
compute_images_client=mock.DEFAULT,
+ servers_client=mock.DEFAULT,
images=[], create=True)
@mock.patch.object(waiters, 'wait_for_image_status')
+ @mock.patch.object(waiters, 'wait_for_server_status')
def test_create_image_from_server_wait_until_active(self,
+ wait_for_server_status,
wait_for_image_status,
+ servers_client,
compute_images_client):
"""Tests create_image_from_server with wait_until='ACTIVE' kwarg."""
# setup mocks
@@ -67,6 +71,35 @@
# make our assertions
wait_for_image_status.assert_called_once_with(
compute_images_client, image_id, 'ACTIVE')
+ wait_for_server_status.assert_called_once_with(
+ servers_client, mock.sentinel.server_id, 'ACTIVE')
+ compute_images_client.show_image.assert_called_once_with(image_id)
+
+ @mock.patch.multiple(compute_base.BaseV2ComputeTest,
+ compute_images_client=mock.DEFAULT,
+ servers_client=mock.DEFAULT,
+ images=[], create=True)
+ @mock.patch.object(waiters, 'wait_for_image_status')
+ @mock.patch.object(waiters, 'wait_for_server_status')
+ def test_create_image_from_server_wait_until_active_no_server_wait(
+ self, wait_for_server_status, wait_for_image_status,
+ servers_client, compute_images_client):
+ """Tests create_image_from_server with wait_until='ACTIVE', no server wait."""
+ # setup mocks
+ image_id = uuidutils.generate_uuid()
+ fake_image = mock.Mock(response={'location': image_id})
+ compute_images_client.create_image.return_value = fake_image
+ compute_images_client.show_image.return_value = (
+ {'image': fake_image})
+ # call the utility method
+ image = compute_base.BaseV2ComputeTest.create_image_from_server(
+ mock.sentinel.server_id, wait_until='ACTIVE',
+ wait_for_server=False)
+ self.assertEqual(fake_image, image)
+ # make our assertions
+ wait_for_image_status.assert_called_once_with(
+ compute_images_client, image_id, 'ACTIVE')
+ self.assertEqual(0, wait_for_server_status.call_count)
compute_images_client.show_image.assert_called_once_with(image_id)
@mock.patch.multiple(compute_base.BaseV2ComputeTest,
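
The second test pins down the new wait_for_server flag: with
wait_for_server=False the image is still waited on, but the extra wait for
the server to return to ACTIVE is skipped. The branch being exercised
presumably looks something like the following (a sketch of the assumed
helper internals, not the actual base-class code):

    # Inside create_image_from_server(cls, server_id, wait_until=None,
    #                                 wait_for_server=True, **kwargs):
    if wait_until is not None:
        waiters.wait_for_image_status(cls.compute_images_client,
                                      image_id, wait_until)
        if wait_until == 'ACTIVE' and wait_for_server:
            waiters.wait_for_server_status(cls.servers_client,
                                           server_id, 'ACTIVE')
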
diff --git a/tempest/tests/cmd/test_subunit_describe_calls.py b/tempest/tests/cmd/test_subunit_describe_calls.py
index 1c24c37..5f3d770 100644
--- a/tempest/tests/cmd/test_subunit_describe_calls.py
+++ b/tempest/tests/cmd/test_subunit_describe_calls.py
@@ -33,6 +33,16 @@
p.communicate()
self.assertEqual(0, p.returncode)
+ def test_return_code_no_output(self):
+ subunit_file = os.path.join(
+ os.path.dirname(os.path.abspath(__file__)),
+ 'sample_streams/calls.subunit')
+ p = subprocess.Popen([
+ 'subunit-describe-calls', '-s', subunit_file],
+ stdin=subprocess.PIPE)
+ p.communicate()
+ self.assertEqual(0, p.returncode)
+
def test_parse(self):
subunit_file = os.path.join(
os.path.dirname(os.path.abspath(__file__)),
diff --git a/tempest/tests/common/test_preprov_creds.py b/tempest/tests/common/test_preprov_creds.py
index 1c9982c..2fd375d 100644
--- a/tempest/tests/common/test_preprov_creds.py
+++ b/tempest/tests/common/test_preprov_creds.py
@@ -14,14 +14,15 @@
import hashlib
import os
-import testtools
+import shutil
import mock
+import six
+import testtools
+
from oslo_concurrency.fixture import lockutils as lockutils_fixtures
from oslo_config import cfg
from oslotest import mockpatch
-import shutil
-import six
from tempest.common import preprov_creds
from tempest import config
diff --git a/tempest/tests/common/test_waiters.py b/tempest/tests/common/test_waiters.py
index 46f9526..c2f622c 100644
--- a/tempest/tests/common/test_waiters.py
+++ b/tempest/tests/common/test_waiters.py
@@ -66,7 +66,7 @@
client.show_volume = mock_show
volume_id = '7532b91e-aa0a-4e06-b3e5-20c0c5ee1caa'
self.assertRaises(exceptions.VolumeRestoreErrorException,
- waiters.wait_for_volume_status,
+ waiters.wait_for_volume_resource_status,
client, volume_id, 'available')
mock_show.assert_has_calls([mock.call(volume_id),
mock.call(volume_id)])
diff --git a/tempest/tests/common/utils/linux/test_remote_client.py b/tempest/tests/common/utils/linux/test_remote_client.py
index e4f4c04..5be6229 100644
--- a/tempest/tests/common/utils/linux/test_remote_client.py
+++ b/tempest/tests/common/utils/linux/test_remote_client.py
@@ -12,9 +12,9 @@
# License for the specific language governing permissions and limitations
# under the License.
-import fixtures
import time
+import fixtures
from oslo_config import cfg
from oslotest import mockpatch
diff --git a/tempest/tests/lib/cli/test_execute.py b/tempest/tests/lib/cli/test_execute.py
index aaeb6f4..0130454 100644
--- a/tempest/tests/lib/cli/test_execute.py
+++ b/tempest/tests/lib/cli/test_execute.py
@@ -11,9 +11,10 @@
# License for the specific language governing permissions and limitations
# under the License.
-import mock
import subprocess
+import mock
+
from tempest.lib.cli import base as cli_base
from tempest.lib import exceptions
from tempest.tests import base
diff --git a/tempest/tests/lib/services/compute/test_servers_client.py b/tempest/tests/lib/services/compute/test_servers_client.py
index adfaaf2..8d391c1 100644
--- a/tempest/tests/lib/services/compute/test_servers_client.py
+++ b/tempest/tests/lib/services/compute/test_servers_client.py
@@ -1,4 +1,5 @@
# Copyright 2015 IBM Corp.
+# Copyright 2017 AT&T Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@@ -14,6 +15,9 @@
import copy
+import mock
+
+from tempest.lib.services.compute import base_compute_client
from tempest.lib.services.compute import servers_client
from tempest.tests.lib import fake_auth_provider
from tempest.tests.lib.services import base
@@ -172,12 +176,23 @@
"traceback": "fake-trace-back"
}
+ FAKE_SECURITY_GROUPS = [{
+ "description": "default",
+ "id": "3fb26eb3-581b-4420-9963-b0879a026506",
+ "name": "default",
+ "rules": [],
+ "tenant_id": "openstack"
+ }]
+
FAKE_INSTANCE_WITH_EVENTS = copy.deepcopy(FAKE_INSTANCE_ACTIONS)
FAKE_INSTANCE_WITH_EVENTS['events'] = [FAKE_INSTANCE_ACTION_EVENTS]
FAKE_REBUILD_SERVER = copy.deepcopy(FAKE_SERVER_GET)
FAKE_REBUILD_SERVER['server']['adminPass'] = 'fake-admin-pass'
+ FAKE_TAGS = ["foo", "bar"]
+ REPLACE_FAKE_TAGS = ["baz", "qux"]
+
server_id = FAKE_SERVER_GET['server']['id']
network_id = 'a6b0875b-6b5d-4a5a-81eb-0c3aa62e5fdb'
@@ -186,6 +201,7 @@
fake_auth = fake_auth_provider.FakeAuthProvider()
self.client = servers_client.ServersClient(
fake_auth, 'compute', 'regionOne')
+ self.addCleanup(mock.patch.stopall)
def test_list_servers_with_str_body(self):
self._test_list_servers()
@@ -1009,3 +1025,127 @@
server_id=self.server_id,
type='fake-console-type'
)
+
+ def test_list_security_groups_by_server_with_str_body(self):
+ self._test_list_security_groups_by_server()
+
+ def test_list_security_groups_by_server_with_bytes_body(self):
+ self._test_list_security_groups_by_server(True)
+
+ def _test_list_security_groups_by_server(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.list_security_groups_by_server,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ {'security_groups': self.FAKE_SECURITY_GROUPS},
+ server_id=self.server_id,
+ )
+
+ @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+ new_callable=mock.PropertyMock(return_value='2.26'))
+ def test_list_tags_str_body(self, _):
+ self._test_list_tags()
+
+ @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+ new_callable=mock.PropertyMock(return_value='2.26'))
+ def test_list_tags_byte_body(self, _):
+ self._test_list_tags(bytes_body=True)
+
+ def _test_list_tags(self, bytes_body=False):
+ expected = {"tags": self.FAKE_TAGS}
+ self.check_service_client_function(
+ self.client.list_tags,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ expected,
+ server_id=self.server_id,
+ to_utf=bytes_body)
+
+ @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+ new_callable=mock.PropertyMock(return_value='2.26'))
+ def test_update_all_tags_str_body(self, _):
+ self._test_update_all_tags()
+
+ @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+ new_callable=mock.PropertyMock(return_value='2.26'))
+ def test_update_all_tags_byte_body(self, _):
+ self._test_update_all_tags(bytes_body=True)
+
+ def _test_update_all_tags(self, bytes_body=False):
+ expected = {"tags": self.REPLACE_FAKE_TAGS}
+ self.check_service_client_function(
+ self.client.update_all_tags,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ expected,
+ server_id=self.server_id,
+ tags=self.REPLACE_FAKE_TAGS,
+ to_utf=bytes_body)
+
+ @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+ new_callable=mock.PropertyMock(return_value='2.26'))
+ def test_delete_all_tags(self, _):
+ self.check_service_client_function(
+ self.client.delete_all_tags,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ server_id=self.server_id,
+ status=204)
+
+ @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+ new_callable=mock.PropertyMock(return_value='2.26'))
+ def test_check_tag_existence_str_body(self, _):
+ self._test_check_tag_existence()
+
+ @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+ new_callable=mock.PropertyMock(return_value='2.26'))
+ def test_check_tag_existence_byte_body(self, _):
+ self._test_check_tag_existence(bytes_body=True)
+
+ def _test_check_tag_existence(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.check_tag_existence,
+ 'tempest.lib.common.rest_client.RestClient.get',
+ {},
+ server_id=self.server_id,
+ tag=self.FAKE_TAGS[0],
+ status=204,
+ to_utf=bytes_body)
+
+ @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+ new_callable=mock.PropertyMock(return_value='2.26'))
+ def test_update_tag_str_body(self, _):
+ self._test_update_tag()
+
+ @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+ new_callable=mock.PropertyMock(return_value='2.26'))
+ def test_update_tag_byte_body(self, _):
+ self._test_update_tag(bytes_body=True)
+
+ def _test_update_tag(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.update_tag,
+ 'tempest.lib.common.rest_client.RestClient.put',
+ {},
+ server_id=self.server_id,
+ tag=self.FAKE_TAGS[0],
+ status=201,
+ headers={'location': 'fake_location'},
+ to_utf=bytes_body)
+
+ @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+ new_callable=mock.PropertyMock(return_value='2.26'))
+ def test_delete_tag_str_body(self, _):
+ self._test_delete_tag()
+
+ @mock.patch.object(base_compute_client, 'COMPUTE_MICROVERSION',
+ new_callable=mock.PropertyMock(return_value='2.26'))
+ def test_delete_tag_byte_body(self, _):
+ self._test_delete_tag(bytes_body=True)
+
+ def _test_delete_tag(self, bytes_body=False):
+ self.check_service_client_function(
+ self.client.delete_tag,
+ 'tempest.lib.common.rest_client.RestClient.delete',
+ {},
+ server_id=self.server_id,
+ tag=self.FAKE_TAGS[0],
+ status=204,
+ to_utf=bytes_body)
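
The new tag tests patch base_compute_client.COMPUTE_MICROVERSION to '2.26'
because the server tags API only exists from that compute microversion on;
the client is assumed to read that module-level value when it builds the
version header for each request. A hedged sketch of driving the same calls
directly (auth_provider and server_id are placeholders, and the keyword
names follow the tests above):

    from tempest.lib.services.compute import base_compute_client
    from tempest.lib.services.compute import servers_client

    client = servers_client.ServersClient(auth_provider, 'compute',
                                          'regionOne')
    base_compute_client.COMPUTE_MICROVERSION = '2.26'  # tags need >= 2.26
    tags = client.list_tags(server_id=server_id)['tags']
    client.update_all_tags(server_id=server_id, tags=['foo', 'bar'])
    client.check_tag_existence(server_id=server_id, tag='foo')  # 204 if set
    client.delete_tag(server_id=server_id, tag='foo')
    client.delete_all_tags(server_id=server_id)
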
diff --git a/tempest/tests/lib/services/compute/test_versions_client.py b/tempest/tests/lib/services/compute/test_versions_client.py
index 06ecdc3..255a0a3 100644
--- a/tempest/tests/lib/services/compute/test_versions_client.py
+++ b/tempest/tests/lib/services/compute/test_versions_client.py
@@ -13,6 +13,7 @@
# under the License.
import copy
+
from oslotest import mockpatch
from tempest.lib.services.compute import versions_client
diff --git a/tempest/tests/lib/services/test_clients.py b/tempest/tests/lib/services/test_clients.py
index 5db932c..a3b390e 100644
--- a/tempest/tests/lib/services/test_clients.py
+++ b/tempest/tests/lib/services/test_clients.py
@@ -12,10 +12,11 @@
# License for the specific language governing permissions and limitations under
# the License.
+import types
+
import fixtures
import mock
import testtools
-import types
from tempest.lib import auth
from tempest.lib import exceptions
diff --git a/tempest/tests/lib/test_auth.py b/tempest/tests/lib/test_auth.py
index 2f975d2..ac13a13 100644
--- a/tempest/tests/lib/test_auth.py
+++ b/tempest/tests/lib/test_auth.py
@@ -15,9 +15,9 @@
import copy
import datetime
-import testtools
from oslotest import mockpatch
+import testtools
from tempest.lib import auth
from tempest.lib import exceptions
diff --git a/tempest/tests/lib/test_rest_client.py b/tempest/tests/lib/test_rest_client.py
index e6cf047..4a83631 100644
--- a/tempest/tests/lib/test_rest_client.py
+++ b/tempest/tests/lib/test_rest_client.py
@@ -337,6 +337,10 @@
def test_response_410(self):
self._test_error_checker(exceptions.Gone, self.set_data("410"))
+ def test_response_412(self):
+ self._test_error_checker(exceptions.PreconditionFailed,
+ self.set_data("412"))
+
def test_response_413(self):
self._test_error_checker(exceptions.OverLimit, self.set_data("413"))
@@ -460,7 +464,7 @@
def test_response_bigger_than_400(self):
# Any response code, that bigger than 400, and not in
- # (401, 403, 404, 409, 413, 422, 500, 501)
+ # (401, 403, 404, 409, 412, 413, 422, 500, 501)
self._test_error_checker(exceptions.UnexpectedResponseCode,
self.set_data("402"))
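
HTTP 412 now maps to its own PreconditionFailed exception instead of falling
through to UnexpectedResponseCode, and the comment on the catch-all test is
updated to match. The corresponding branch in the rest client's error
checker is assumed to follow the pattern of the neighbouring status codes,
roughly:

    # Sketch of the assumed mapping inside RestClient's error checker.
    if resp.status == 412:
        raise exceptions.PreconditionFailed(resp_body, resp=resp)
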
diff --git a/tempest/tests/test_list_tests.py b/tempest/tests/test_list_tests.py
index 38d4c5c..a238879 100644
--- a/tempest/tests/test_list_tests.py
+++ b/tempest/tests/test_list_tests.py
@@ -14,9 +14,10 @@
import os
import re
-import six
import subprocess
+import six
+
from tempest.tests import base
diff --git a/tempest/tests/test_wrappers.py b/tempest/tests/test_wrappers.py
deleted file mode 100644
index a4ef699..0000000
--- a/tempest/tests/test_wrappers.py
+++ /dev/null
@@ -1,88 +0,0 @@
-# Copyright 2013 IBM Corp.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import os
-import shutil
-import subprocess
-import tempfile
-
-import six
-
-from tempest.tests import base
-
-DEVNULL = open(os.devnull, 'wb')
-
-
-class TestWrappers(base.TestCase):
- def setUp(self):
- super(TestWrappers, self).setUp()
- # Setup test dirs
- self.directory = tempfile.mkdtemp(prefix='tempest-unit')
- self.addCleanup(shutil.rmtree, self.directory)
- self.test_dir = os.path.join(self.directory, 'tests')
- os.mkdir(self.test_dir)
- # Setup Test files
- self.testr_conf_file = os.path.join(self.directory, '.testr.conf')
- self.setup_cfg_file = os.path.join(self.directory, 'setup.cfg')
- self.passing_file = os.path.join(self.test_dir, 'test_passing.py')
- self.failing_file = os.path.join(self.test_dir, 'test_failing.py')
- self.init_file = os.path.join(self.test_dir, '__init__.py')
- self.setup_py = os.path.join(self.directory, 'setup.py')
- shutil.copy('tempest/tests/files/testr-conf', self.testr_conf_file)
- shutil.copy('tempest/tests/files/passing-tests', self.passing_file)
- shutil.copy('tempest/tests/files/failing-tests', self.failing_file)
- shutil.copy('setup.py', self.setup_py)
- shutil.copy('tempest/tests/files/setup.cfg', self.setup_cfg_file)
- shutil.copy('tempest/tests/files/__init__.py', self.init_file)
- # copy over the pretty_tox scripts
- shutil.copy('tools/pretty_tox.sh',
- os.path.join(self.directory, 'pretty_tox.sh'))
- shutil.copy('tools/pretty_tox_serial.sh',
- os.path.join(self.directory, 'pretty_tox_serial.sh'))
-
- self.stdout = six.StringIO()
- self.stderr = six.StringIO()
- # Change directory, run wrapper and check result
- self.addCleanup(os.chdir, os.path.abspath(os.curdir))
- os.chdir(self.directory)
-
- def assertRunExit(self, cmd, expected):
- p = subprocess.Popen(
- "bash %s" % cmd, shell=True,
- stdout=subprocess.PIPE, stderr=subprocess.PIPE)
- out, err = p.communicate()
-
- self.assertEqual(
- p.returncode, expected,
- "Stdout: %s; Stderr: %s" % (out, err))
-
- def test_pretty_tox(self):
- # Git init is required for the pbr testr command. pbr requires a git
- # version or an sdist to work. so make the test directory a git repo
- # too.
- subprocess.call(['git', 'init'], stderr=DEVNULL)
- self.assertRunExit('pretty_tox.sh passing', 0)
-
- def test_pretty_tox_fails(self):
- # Git init is required for the pbr testr command. pbr requires a git
- # version or an sdist to work. so make the test directory a git repo
- # too.
- subprocess.call(['git', 'init'], stderr=DEVNULL)
- self.assertRunExit('pretty_tox.sh', 1)
-
- def test_pretty_tox_serial(self):
- self.assertRunExit('pretty_tox_serial.sh passing', 0)
-
- def test_pretty_tox_serial_fails(self):
- self.assertRunExit('pretty_tox_serial.sh', 1)
diff --git a/test-requirements.txt b/test-requirements.txt
index 475fb16..936d5aa 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -1,11 +1,12 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
-hacking<0.13,>=0.12.0 # Apache-2.0
+hacking>=0.12.0,!=0.13.0,<0.14 # Apache-2.0
# needed for doc build
-sphinx!=1.3b1,<1.4,>=1.2.1 # BSD
+sphinx>=1.5.1 # BSD
oslosphinx>=4.7.0 # Apache-2.0
reno>=1.8.0 # Apache-2.0
mock>=2.0 # BSD
coverage>=4.0 # Apache-2.0
oslotest>=1.10.0 # Apache-2.0
+flake8-import-order==0.11 # LGPLv3
diff --git a/tools/check_logs.py b/tools/check_logs.py
index caad85c..f82b387 100755
--- a/tools/check_logs.py
+++ b/tools/check_logs.py
@@ -19,10 +19,10 @@
import gzip
import os
import re
-import six
-import six.moves.urllib.request as urlreq
import sys
+import six
+import six.moves.urllib.request as urlreq
import yaml
diff --git a/tools/find_stack_traces.py b/tools/find_stack_traces.py
index f2da27a..2ba8b16 100755
--- a/tools/find_stack_traces.py
+++ b/tools/find_stack_traces.py
@@ -18,9 +18,10 @@
import gzip
import pprint
import re
+import sys
+
import six
import six.moves.urllib.request as urlreq
-import sys
pp = pprint.PrettyPrinter()
diff --git a/tools/generate-tempest-plugins-list.py b/tools/generate-tempest-plugins-list.py
index 03e838e..acb29af 100644
--- a/tools/generate-tempest-plugins-list.py
+++ b/tools/generate-tempest-plugins-list.py
@@ -25,6 +25,7 @@
import json
import re
+
import requests
url = 'https://review.openstack.org/projects/'
diff --git a/tools/pretty_tox.sh b/tools/pretty_tox.sh
deleted file mode 100755
index 0b83b91..0000000
--- a/tools/pretty_tox.sh
+++ /dev/null
@@ -1,14 +0,0 @@
-#!/usr/bin/env bash
-
-echo "WARNING: This script is deprecated and will be removed in the near future. Please migrate to tempest run or another method of launching a test runner"
-
-set -o pipefail
-
-TESTRARGS=$1
-python setup.py testr --testr-args="--subunit $TESTRARGS" | subunit-trace --no-failure-debug -f
-retval=$?
-# NOTE(mtreinish) The pipe above would eat the slowest display from pbr's testr
-# wrapper so just manually print the slowest tests.
-echo -e "\nSlowest Tests:\n"
-testr slowest
-exit $retval
diff --git a/tools/pretty_tox_serial.sh b/tools/pretty_tox_serial.sh
deleted file mode 100755
index 1f8204e..0000000
--- a/tools/pretty_tox_serial.sh
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/usr/bin/env bash
-
-echo "WARNING: This script is deprecated and will be removed in the near future. Please migrate to tempest run or another method of launching a test runner"
-
-set -o pipefail
-
-TESTRARGS=$@
-
-if [ ! -d .testrepository ]; then
- testr init
-fi
-testr run --subunit $TESTRARGS | subunit-trace -f -n
-retval=$?
-testr slowest
-
-exit $retval
diff --git a/tox.ini b/tox.ini
index 46823d8..d8d390e 100644
--- a/tox.ini
+++ b/tox.ini
@@ -147,6 +147,7 @@
show-source = True
exclude = .git,.venv,.tox,dist,doc,*egg
enable-extensions = H106,H203,H904
+import-order-style = pep8
[testenv:releasenotes]
commands = sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html
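
The new flake8-import-order check with import-order-style = pep8 is what
drives the import reshuffles earlier in this change: standard-library
imports first, then third-party packages, then project imports, each group
ordered alphabetically and separated by a blank line. For example:

    # stdlib
    import copy
    import subprocess

    # third party
    import mock
    import six

    # project
    from tempest.lib import decorators
    from tempest.tests import base
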